Organizing SQL tables and named queries efficiently

No idea either about transaction groups, but varchar columns have a fixed number of characters for their data and append space characters to the start if the data doesn’t match the character length. Use nvarchar if you don’t want prepended spaces and have variable length strings

Changing data types is easy if they’re getting bigger. Going smaller is an issue as data may be truncated
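A toy illustration of the truncation hazard, simulating in Python what a VARCHAR(50) to VARCHAR(10) change can do (the helper name is made up; real platforms either truncate silently or raise an error depending on settings):

```python
# Hypothetical sketch: simulating what narrowing a column can do to data.
# A real database either truncates silently or raises an error, depending
# on the platform and its settings.

def narrow_varchar(value: str, new_length: int) -> str:
    """Simulate shrinking a VARCHAR column: data past the new length is lost."""
    return value[:new_length]

original = "HIGH PRESSURE FAULT ON PUMP 3"   # fits in VARCHAR(50)
narrowed = narrow_varchar(original, 10)      # after ALTER to VARCHAR(10)
print(narrowed)  # "HIGH PRESS" -- the rest of the description is gone
```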

No, that would be char columns. The "var" means variable length.
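A quick sketch of the distinction, assuming typical SQL semantics (the helper names are made up; note the padding on CHAR is *trailing* spaces, not leading):

```python
# Illustrative only: how CHAR(n) vs VARCHAR(n) treat short strings on most
# platforms. CHAR pads with trailing spaces out to the declared length;
# VARCHAR stores the string as-is.

def store_char(value: str, n: int) -> str:
    """CHAR(n): fixed width, right-padded with spaces."""
    return value.ljust(n)

def store_varchar(value: str, n: int) -> str:
    """VARCHAR(n): variable width, stored as-is (up to n characters)."""
    return value[:n]

print(repr(store_char("pump", 8)))     # 'pump    ' -- always 8 characters
print(repr(store_varchar("pump", 8)))  # 'pump'     -- length 4
```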


Yes, int2 is equivalent to smallint.

:grimacing: oops!

Sounds like nvarchar is the way to go over varchar though.

Meh. In non-braindead DB platforms, varchar can use UTF-8, so it's really a question of optimizations. Sometimes sending less data and converting it beats sending more data already converted. I generally stick with varchar.
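The "less data to transfer" point is easy to see from encoded sizes. Assuming nvarchar is stored as UTF-16 (as on SQL Server) and the varchar column uses UTF-8, ASCII text is roughly half the size:

```python
# Rough illustration of the storage/transfer tradeoff: NVARCHAR is typically
# UTF-16 (2 bytes per ASCII character), while a VARCHAR column on a platform
# that supports UTF-8 stores ASCII text in 1 byte per character.

text = "CONVEYOR JAM AT STATION 4"

utf8_bytes = len(text.encode("utf-8"))       # 25 bytes
utf16_bytes = len(text.encode("utf-16-le"))  # 50 bytes

print(utf8_bytes, utf16_bytes)  # ASCII text is half the size in UTF-8
```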


What Phil said.

While SO is a great source of info, I find a number of their database answers are demonstrably wrong. Always take them with a grain of salt and a healthy dose of the docs.


I wanted to say thanks for this post

I thought SO answers were vetted more, didn’t know

is a much better database-focused Stack Exchange, FWIW.


Lists aren't better than tables for getting fault descriptions into my queries, correct?

Going to need a bit more context… Where are your fault descriptions stored currently?

I had the descriptions in an Excel sheet, a list, and now also in a table.
I am using a JOIN for now.
I think I need to read more about how to get data from outside my table into my query when I perform the binding on a Perspective page, so I can either ask a better question about optimal usage or figure it out myself.
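For reference, the usual pattern for "getting data outside my table into my query" is a lookup table joined at query time, rather than copying descriptions into the event rows. A sketch with SQLite and made-up table/column names:

```python
# Hedged sketch (SQLite, hypothetical table/column names): fault descriptions
# live in a lookup table and are joined into the event query.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE fault_lookup (fault_code INTEGER PRIMARY KEY, description TEXT);
    CREATE TABLE machine_events (t_stamp TEXT, fault_code INTEGER);
    INSERT INTO fault_lookup VALUES (1, 'Jam'), (2, 'Low air pressure');
    INSERT INTO machine_events VALUES ('2024-01-01 08:00', 2);
""")

rows = con.execute("""
    SELECT e.t_stamp, l.description
    FROM machine_events e
    JOIN fault_lookup l ON l.fault_code = e.fault_code
""").fetchall()
print(rows)  # [('2024-01-01 08:00', 'Low air pressure')]
```

Updating a description then only touches one lookup row, not every event.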

I have named queries for each machine's stops and each machine's rejects.
These are events recorded in transaction groups.
I have them combined in a table.

I was trying to figure out how I will get the outfeed or infeed, then I realized I can collect those when I have the events for stops and rejects.

So now I have this one big table that is very efficient:
rejects, stops, infeed, and outfeed.
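A minimal sketch of what such a combined table might look like, with hypothetical names (SQLite here just for illustration): one row per recorded sample, summarized per machine in a single query.

```python
# Hypothetical sketch of the combined-table idea: reject/stop/infeed/outfeed
# counts in one table, aggregated per machine with one GROUP BY query.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE machine_counts (
        t_stamp  TEXT,
        machine  TEXT,
        rejects  INTEGER,
        stops    INTEGER,
        infeed   INTEGER,
        outfeed  INTEGER
    );
    INSERT INTO machine_counts VALUES
        ('2024-01-01 08:00', 'Line1', 2, 1, 100, 95),
        ('2024-01-01 09:00', 'Line1', 0, 0, 110, 108);
""")

totals = con.execute("""
    SELECT machine, SUM(rejects), SUM(stops), SUM(infeed), SUM(outfeed)
    FROM machine_counts
    GROUP BY machine
""").fetchall()
print(totals)  # [('Line1', 2, 1, 210, 203)]
```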


I am not sure I am setting up the transaction group correctly.

I ran ALTER TABLE and set the infeed/outfeed columns to bigint.
I made tags that work like positive counters, with datatype bigint.

In my transaction group, I think I have to use Int8.

Is this okay?

These queries are taking over 15 seconds.
I am definitely doing something wrong.

Check out the execution plan of the query. Put the query into the database workbench and prepend something like EXPLAIN; look up how to view the execution plan for your flavor of database. This will help you figure out where you need to add an index, or where the query may be doing something repetitive.
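A minimal sketch of that workflow in SQLite, where the keyword is EXPLAIN QUERY PLAN (the exact syntax and output differ per database; table and index names here are made up):

```python
# Sketch: compare the plan for the same query before and after adding an
# index. Without the index the plan is a full table SCAN; with it, a SEARCH
# using the index.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (t_stamp TEXT, machine TEXT, stops INTEGER)")

query = "SELECT * FROM events WHERE machine = 'Line1'"

plan = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan)  # SCAN -- every row is examined

con.execute("CREATE INDEX idx_events_machine ON events (machine)")
plan = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan)  # SEARCH ... USING INDEX idx_events_machine -- much cheaper
```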


I think I might need to run the queries once a week or once a day, and store the results as a dataset for the various pages.

Is there a rule of thumb, or a chart, for good query speeds?
I want to know if I am asking too much of the query to be so fast, or collecting too many data points.
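A sketch of that "run once a day, store the result" approach, with hypothetical names (SQLite for illustration): a scheduled job rebuilds a small summary table, and the page bindings query the summary instead of the raw data.

```python
# Hedged sketch: cache an expensive aggregate in a summary table on a
# schedule, so page loads only run a cheap SELECT.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE raw_events (day TEXT, machine TEXT, rejects INTEGER);
    CREATE TABLE daily_summary (day TEXT, machine TEXT, total_rejects INTEGER);
    INSERT INTO raw_events VALUES
        ('2024-01-01', 'Line1', 3),
        ('2024-01-01', 'Line1', 5);
""")

def refresh_summary(con):
    """Scheduled job: rebuild the summary from the expensive aggregate."""
    con.execute("DELETE FROM daily_summary")
    con.execute("""
        INSERT INTO daily_summary
        SELECT day, machine, SUM(rejects) FROM raw_events GROUP BY day, machine
    """)

refresh_summary(con)
# The page binding now runs this cheap query instead of the heavy one:
rows = con.execute("SELECT * FROM daily_summary").fetchall()
print(rows)  # [('2024-01-01', 'Line1', 8)]
```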

If it is something you only need to run once a day/week, then 10-15 seconds isn't really problematic imo, but you still may want to look at the EXPLAIN of the query to see whether the way it executes will grow exponentially as time goes on, i.e. whether it's a very poor execution plan.

The only rule of thumb I would offer is that the more often the data has to refresh, the quicker the query should be. If it needs to refresh once a minute, then a query that takes over a minute is not going to work. But if it only needs to refresh once a week, even a 5-minute query is not really an issue.

It will grow; I am collecting data points.

It will be linear growth.

Right now the query refreshes whenever someone views the page.

I tend to agree; however, I generally err on the side of less network traffic being better, and don't make the database work harder than it needs to.

The general rule of thumb that I use is that data should update at half the required refresh interval. This goes for queries and data collection. If I care about what a data point is every hour, then I put it on a 30-minute update.

For historical data, if I need to ensure that I have data, then this is how I do it, otherwise I set it to On Change.
