Historical logging with a legacy project

Great news! I have approval to upgrade to the Ignition platform, but I have to wait until IT upgrades their server (hopefully this month). At that point I get the old one for all “my” stuff. I like what I have tested so far and am looking forward to it. My biggest question right now is what to do about the historical logging.

Here is the current setup:

We have 8 production lines with several machines per line. I am using a separate database for each line, and each machine has its own table. In FSQL I am using several historical groups to log the data. Inside the tables there is a column for each tag in the group.

Remember the Click to Graph goodie? Well, I copied the table structures and modified the script to use the underlying tables (now modified). The end result is that whenever I add groups/tables to FSQL, a stored procedure reads the column names of all tables in all databases and compares them with the pens in the pens table. This presents me with a list of tags that I can then add as new pens to the system or ignore. I also have screens that allow a user to create and save their own pen groups. This way I do not have to modify the project just to add PLC logging points. This strategy works because I mainly log the data for future analysis.
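
The heart of that stored procedure is a comparison along these lines (a simplified sketch, not the real thing; the "Pens" table, its Name column, and the "Line3"/"Graphing" datasource names are stand-ins for my real ones):

[code]# Simplified sketch of the pen comparison (MSSQL). Table, column, and
# datasource names here are illustrative stand-ins.
columns = system.db.runQuery("""
    SELECT TABLE_NAME, COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE COLUMN_NAME NOT IN ('t_stamp', 'OrderNumber', 'OrderLineNumber')
""", "Line3")

pens = system.db.runQuery("SELECT Name FROM Pens", "Graphing")
penNames = [row[0] for row in pens]

# Any logged column that is not already a pen is a candidate to add.
candidates = [(row[0], row[1]) for row in columns if row[1] not in penNames][/code]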

Now I realize that I will have to make major changes if I move to the new historical logging feature, but that is the price for improved performance.

The biggest problem is that I do not know how to carry over the next feature.

When a process engineer is analyzing the data (in the easy chart component) for a new product, he clicks on a point when the process was stable and then clicks a “Generate Report” button. The button's script reads the x-axis timestamp and passes it to a report window using the reporting plugin. The report window uses the timestamp to query all the tables for values at that time and then prints a report.
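
In rough terms, the two ends of that look like this (window, parameter, table, and datasource names are placeholders, and I am assuming the chart's x-trace selection lands in its Selected X Value property):

[code]# Button script (sketch): hand the selected x-axis timestamp to the report
# window. "ReportWindow" and the "reportTime" parameter are placeholder names.
ts = event.source.parent.getComponent("Easy Chart").selectedXValue
system.nav.openWindow("ReportWindow", {"reportTime": ts})

# In the report window (sketch): pull the closest record at or before that
# timestamp from one of the machine tables.
row = system.db.runPrepQuery(
    "SELECT TOP 1 * FROM L3Log_Winder WHERE t_stamp <= ? ORDER BY t_stamp DESC",
    [ts], "Line3")[/code]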

My question is this:
Is there a way to use the new historical feature with what I am currently doing? Specifically, how would I query the new tables for data?

I have not played with this feature yet because our existing server is “getting full” and IT does not want me to experiment until they upgrade. As a result I am just speculating on how to do it.

What are your thoughts on this topic?

[quote="TimE"]My question is this:
Is there a way to use the new historical feature with what I am currently doing? Specifically, how would I query the new tables for data?[/quote]
Sure, you can just use the system.tag.queryTagHistory function.
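
For example, something like this pulls two hours of history for a couple of tags into a dataset you can chart or walk in a script (the tag paths are made up; see the scripting docs for the full parameter list):

[code]from java.util import Calendar

# Build a two-hour window ending now.
cal = Calendar.getInstance()
end = cal.getTime()
cal.add(Calendar.HOUR, -2)
start = cal.getTime()

# Tag paths are illustrative; use your own historical tag paths.
data = system.tag.queryTagHistory(
    paths=["Line3/Winder/Speed", "Line3/Winder/Tension"],
    startDate=start, endDate=end,
    returnSize=500, aggregationMode="Average")[/code]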

I'm not convinced you should upgrade. If it ain't broke, don't fix it, right? Are you experiencing performance problems with the current setup?

Yes. For example, I just opened a chart with one pen looking at the last two hours and it took 40 seconds to load the pen data into the chart.

Table Size: 56 columns by 4,717,271 rows
Date range of 8/11/2008 to now
One record every ten seconds

Here is where our ignorance really shines! Neither IT nor I know that much about indexes.

When FSQL stores the data, it also stores the order information. Since I sometimes query by order information, IT added that to the indexes as well. Here are the indexes set for this table:

[code]L3Log_Winder_t_stamp_ndx
t_stamp Ascending

OrderLineNumber
OrderLineNumber Ascending

OrderNumber
OrderNumber Ascending

PK__L3Log_Winder__4D94879B
L3Log_Winder_ndx Ascending
t_stamp Ascending
OrderNumber Ascending
OrderLineNumber Ascending

RollSequence
RollSequence Ascending
[/code]

If I search by order number, the same amount of data usually comes back within 1 or 2 seconds.

I did some testing and comparing with some other tables and noticed that they are not all indexed the same. Here is what I am planning to do.

The order information comes from a separate database populated by IT. I read this and put it into a list for the user to select from. Since the start and stop times are already there, I do not need the order information. Therefore, I removed all references to the order number from my queries and will change the indexes in the morning. (I went to the IT manager and have approval.) :thumb_left:
Here is what the indexes will look like. I have a different table that is indexed similarly to this and it is working very fast.

[code]L3Log_Winder_t_stamp_ndx
t_stamp Ascending (Clustered)

PK__L3Log_Winder__4D94879B
L3Log_Winder_ndx Ascending (Unique)
[/code]
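
Spelled out as DDL, the plan is roughly this (I will probably just paste the SQL into Management Studio, but here it is as a script for the record; "Line3" is the datasource for that line's database):

[code]# Rough DDL for the new layout, once the old indexes are gone.
system.db.runUpdateQuery(
    "CREATE CLUSTERED INDEX L3Log_Winder_t_stamp_ndx "
    "ON L3Log_Winder (t_stamp ASC)", "Line3")
system.db.runUpdateQuery(
    "CREATE UNIQUE INDEX PK__L3Log_Winder__4D94879B "
    "ON L3Log_Winder (L3Log_Winder_ndx ASC)", "Line3")[/code]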

If this works then I will still plan the upgrade but keep the structure the same.

By the way… What effect does the “create clustered” check box have on the above scenario, since only one column is used?

Yes, I think that’s the way to go. The multiple indexes in your previous post, especially the 4-way index, made me think that you might actually be doing more harm than good.

Clustered indexes affect how data is physically stored: the “clustered” part means that related data is stored more closely together, which can help reduce disk I/O when querying related data. Because the clustered index affects how the data is stored, a table can only have one of these indexes. From what I’ve read, using it on the timestamp of a table that you query for ranges of dates is a good idea.
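
For example, a typical chart pull like the one below is exactly where a clustered timestamp index pays off, since the rows in the requested range end up physically adjacent on disk (column and datasource names are just for illustration):

[code]from java.util import Calendar

# Last two hours; with t_stamp clustered, the rows in this range sit next
# to each other on disk, so this becomes a short sequential read.
cal = Calendar.getInstance()
end = cal.getTime()
cal.add(Calendar.HOUR, -2)
start = cal.getTime()

# Column and datasource names are illustrative.
data = system.db.runPrepQuery(
    "SELECT t_stamp, Speed, Tension FROM L3Log_Winder "
    "WHERE t_stamp BETWEEN ? AND ? ORDER BY t_stamp",
    [start, end], "Line3")[/code]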

Hope this helps a bit,

Yes, this helps. I am having difficulty modifying the 4-way index, though. IT is still working on it. I am about to shut down the gateway and try it again.

It took a little figuring out, but we are done. We could not create the t_stamp column as unique! Get this… there were a few hundred duplicates in the t_stamp column! They were all on November 1, some in 2008 and some in 2009, and all around 1:00 AM. I was screwed by Congress again: this was caused by Daylight Saving Time! Looking through the tables I saw:
[ul]1:59:20
1:59:30
1:59:40
1:59:50
1:00:00[/ul]
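
For anyone else who runs into this, a query along these lines turns the duplicates up (the datasource name is ours; substitute your own):

[code]# Find every t_stamp that appears more than once (these were our DST hour).
dupes = system.db.runQuery("""
    SELECT t_stamp, COUNT(*) AS n
    FROM L3Log_Winder
    GROUP BY t_stamp
    HAVING COUNT(*) > 1
    ORDER BY t_stamp
""", "Line3")[/code]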

Since I am basing all of my searching on time, I will now have some of this data overlapping by one hour.

Our managers are OK with this considering the performance increase is HUGE. I am now seeing performance in my charts like others here have posted. I went from 40 seconds to less than a second!

Thanks for the help. I might even get a raise after this. :laughing:

Haha, that’s great. Making dramatic performance improvements always feels good.

The only additional comment I wanted to make is that it’s rarely a good idea to make a unique index on timestamps, because unlike an actual ID integer, a timestamp is really not guaranteed to be unique. Your DST example is a good one. A standard index is better.

Since t_stamp is a clustered index, do I even need the other index? It was created by default by MSSQL when the table was created. Since the _ndx column is an identity column, won’t it be unique anyway?

In order to create t_stamp as a clustered index, I have to remove the existing PK constraint and then drop the …_ndx index. It would be faster if I do not need to re-create the unique index.
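
For the remaining tables, the sequence I have in mind looks like this (a sketch using the names from above; the commented-out last step is the one I am hoping to skip, and its index name is made up):

[code]# The order that matters: the PK constraint owns the existing index, so it
# has to go before the clustered index can be built on t_stamp.
db = "Line3"
system.db.runUpdateQuery(
    "ALTER TABLE L3Log_Winder DROP CONSTRAINT PK__L3Log_Winder__4D94879B", db)
system.db.runUpdateQuery(
    "CREATE CLUSTERED INDEX L3Log_Winder_t_stamp_ndx "
    "ON L3Log_Winder (t_stamp ASC)", db)
# The step in question: re-creating the unique index on the identity column.
# system.db.runUpdateQuery(
#     "CREATE UNIQUE INDEX L3Log_Winder_ndx_unique "
#     "ON L3Log_Winder (L3Log_Winder_ndx ASC)", db)[/code]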

I don’t quite get what you’re asking… if things are as you outlined above, with a clustered t_stamp and the primary key on the _ndx column, everything should be fine.

If you’re asking whether you need the primary key column, I’d say that it’s probably a good idea, because in cases where the timestamp overlaps, you can look to it to see the exact order in which things were inserted.

Regards,