Tag History Values Won't Return

I set up a single historical tag, but history queries don’t seem to return values after the start of October. The values are currently being written into the database, but when an EasyChart is bound to the historical tag, it only returns values up to 10/1/2010 11:59:59 pm and the rest of the chart is not populated. I also tried a Tag History binding on a table. As soon as I specify an end date in October (> 10/1/2010 5:00 am), no values are returned and the query times out. When both the start and end times are in October, the queries also time out.

While trying to configure the historian, I did change the partition units from month to week, and I know I deleted the tag history and then added it again in the Designer. I’m not exactly sure what the sequence was, but I suspect the partition tracking may have gone a little wacky.

I configured this tag during the last week of September, and I have three tables:
sqlt_data_1_2010_10
sqlt_data_1_2010_09
sqlt_data_1_858

The last table is currently being populated. Any ideas what might have caused this or how to recover?

If it’s easy enough for you to export the tables from the database, sending us the “sqlth_*” tables and those partitions would probably be the best thing to do. If you can export and zip them up, email them to “support AT inductiveautomation.com” or upload them to inductiveautomation.com/upload.

Otherwise, what are the values in “sqlth_partitions”, and how many rows do you have in “sqlth_sce”?
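If it helps, queries along these lines will show both. The column names here are assumed from the standard historian schema, so adjust if yours differ:

```sql
-- Partition tracking: one row per data partition the historian knows about.
SELECT pname, drvid, start_time, end_time, blocksize, flavor
FROM sqlth_partitions
ORDER BY start_time;

-- Row count of the scan class execution table.
SELECT COUNT(*) AS sce_rows
FROM sqlth_sce;
```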

Regards,

The files have been emailed. Thanks.

Thanks.

What version of Ignition is this? My initial observation indicates that it’s caused by a problem that I’m fairly certain we’ve fixed, but I want to be sure about that.

Regards,

The database is SQL Server 2005 (9.0.4035), and it’s Ignition 7.1.6 (b5739).

Do you happen to know when you upgraded to 7.1.6? Also, what are your store and forward settings like?

Basically, what’s going on is that something strange is happening with the “scan class execution” record, with many rows being inserted unnecessarily. This is causing problems when calculating quality during querying, and the values are coming back as bad quality, so they don’t get shown on the chart.
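As a rough illustration (assuming the standard sce columns of scid, start_time, end_time, and rate), a query like this shows how many execution records exist per scan class; a healthy system only has a handful:

```sql
-- Count execution records per scan class and rate.
-- Normally there are only a few rows here; thousands points to the problem above.
SELECT scid, rate, COUNT(*) AS record_count,
       MIN(start_time) AS first_start,
       MAX(end_time)   AS last_end
FROM sqlth_sce
GROUP BY scid, rate
ORDER BY record_count DESC;
```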

I’m trying to figure out how this could happen, but it’s going to take a bit longer.

Regards,

So, off the bat, there’s something not quite right about the sce table. HOWEVER, I just noticed that all of the data in the October partition has a data quality of 900, which is “demo expired”.

This is the biggest thing affecting your October data. For some reason, it’s being stored as expired. Is the system licensed? If you go to “Modules”, does it show them all as licensed? It’s most likely coming from the device driver, but it could be a different part as well.
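If you want to confirm this on your end, a quality breakdown over the October partition will show it. The dataintegrity column is the stored quality code, assuming the standard partition layout, and 192 is the normal “good” value:

```sql
-- Count rows by stored quality code in the October partition.
-- 192 = good quality, 900 = "demo expired".
SELECT dataintegrity, COUNT(*) AS row_count
FROM sqlt_data_1_2010_10
GROUP BY dataintegrity
ORDER BY row_count DESC;
```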

The problem with the “sce” table is leading to much slower query times, so I’m still going to try to track that down.

Regards,

This gateway is licensed for the Vision and SQL Bridge modules, plus the OPC, OPC-UA, and Allen-Bradley drivers, which are the ones in use.

HOWEVER, I had another unlicensed gateway running for part of the time that was set up against the same SQLTags database. I don’t believe this was the case after ~10/3/2010 10:00 pm EST. So I guess the quality would bounce back and forth? This was where the project was initiated and really our first licensed Ignition test project, so I never gave it that much thought.

The queries were slow.

I see you’ve specifically changed the system name to “IgnitionBTDev02”. Would the other system have been set to the same name?

I’m trying hard to reason out what in the world could be happening, but nothing is quite lining up. It’s possible that two systems with the same system name and a 1-second historical scan class might help explain it.

The strangest data in the SCE table does seem to stop in the evening of 10/3. From 7:13 pm (presumably being shown to me in PST) until 11:32 am on 10/5 it works correctly. Then there are 6 or so entries between 11:32 and 11:34 that I wouldn’t expect, and then it works correctly again up until 6 am on 10/6.

Sorry, I somehow missed your previous post asking when I upgraded and what the store and forward settings were. I think I upgraded to this version at the beginning of September, but I’m not exactly sure of the date. The store and forward settings are all default.

IgnitionBTDEV02 is the gateway name that was running on the other PC as well. The other PC’s machine name is actually BTDEV02, and when I set up Ignition there, I thought it would be a good idea at the time to change the name. I guess I didn’t translate those thoughts when I moved the gateway to run on the licensed machine.

The 10/3 time seems like it matches up with that machine shutting down. It was actually running in a virtual machine whose host had an issue. I just checked the system log, and the last unexpected shutdown was on 10/3 at 10:13:06 pm EST. I restarted it on 10/6 at 2:29:49 pm EST, so I don’t believe it could have been actively causing issues on 10/5 or until that time on 10/6.

Ok. Well, having two gateways with the same system name logging SQLTags History could certainly create a lot of confusion for the system. For the record, it’s completely supported to have multiple systems logging to the same database; just make sure they have different names.

In your situation, I would almost suggest just wiping it all out and starting the test over. If you don’t quite feel like doing that, I’ve attached a condensed form of the “sce” table that only has 14 rows instead of the 26k in your current table. I’m not sure this is what was causing the slowdown, but it was the only thing I found that was far from normal in your data. So, you could just delete what is currently in the “sqlth_sce” table and load this.

Going forward, with only one system logging, keep an eye on this SCE table. It shouldn’t really grow, and the “end_time” of the most recent row should get updated as long as the system is running. The anomaly I mentioned on the 5th had to do with rows getting inserted when I wouldn’t expect them to be, but at least they seemed like valid entries. If this continues to happen, I’d like to try to find out why.
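A quick check like this is enough to watch it (timestamps in these tables are millisecond epoch values):

```sql
-- The row count should stay small, and the latest end_time should keep
-- advancing while the gateway is running.
SELECT COUNT(*)      AS total_rows,
       MAX(end_time) AS latest_end_time
FROM sqlth_sce;
```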

If the queries are still slow, try going to Console > Levels in the gateway and searching for “History.SQLTags” (no quotes). Change the level to “debug”. After running a few queries, look at the console/log. You can send the wrapper.log file to us if you want, but you’ll probably be able to see where the time is being taken. For example, you’ll see “Query finished in x seconds”. Before that, you’ll see information about loading tag information, scan class information, and seed values. Let me know if you find anything interesting.

Regards,
sqlth_sce_fixed.csv (535 Bytes)

I think I will just wipe it out and move forward. I believe another contributing issue causing confusion was that the licensed system had the historian data partitions set to Week, which I changed when I set up the history tag, while the other had it set to the default of Month. It looks like a new week table was created (sqlt_data_1_864) last night at around 11:30, and it was returning data from the creation time forward. However, this morning I started the unlicensed system briefly to make sure the settings were the same, and I believe it updated the end_time reference in sqlth_partitions and created a new entry for sqlt_data_1_2010_10.

So I have a couple questions:
Can I just delete all of the sqlth_* tables and the partition tables to start over?

If I had renamed the gateway system name to something different, would that have differentiated everything, or would interference still happen? Is this true for both History SQLTags and realtime SQLTags? In order for two different gateways to operate against the same realtime SQLTags database, the SQLTags driver name should also be changed, right?

Thank you for your help!

Yes, you can just delete all of those tables. I would suggest shutting down the gateway before you delete them, so it can just recreate them on startup without any problems caused by in-memory cached values, etc.
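For reference, something along these lines will clear everything out. The bookkeeping table names are assumed from the standard historian schema, and you would drop whatever sqlt_data_* partitions actually exist in your database:

```sql
-- With the gateway stopped, drop the historian bookkeeping tables;
-- they are recreated automatically when the gateway starts back up.
DROP TABLE sqlth_partitions;
DROP TABLE sqlth_sce;
DROP TABLE sqlth_scinfo;
DROP TABLE sqlth_te;
DROP TABLE sqlth_drv;

-- ...and each data partition, for example:
DROP TABLE sqlt_data_1_2010_09;
DROP TABLE sqlt_data_1_2010_10;
DROP TABLE sqlt_data_1_858;
DROP TABLE sqlt_data_1_864;
```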

If each system had had a unique “system name”, there wouldn’t have been any interference. Instead, you would have seen two entries in the “sqlth_drv” table, and you would have had two sets of data partitions (and corresponding entries in the “sqlth_partitions” table). The data would be stored separately for each driver, with their own respective scan class ids, etc.

On the query side, you would be able to pull data from either driver, or combine the data freely. The only difference is that tags from the other driver would get an additional component in the tag path: the system name. The potential full path of a historical SQLTag is “[Provider/System]Path\To\Tag”.
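If you’re curious which system each stored tag belongs to, a join through the scan class and driver tables will show it (join columns assumed from the standard schema):

```sql
-- List each historical tag path with the system (driver) it was logged by.
SELECT d.name AS system_name, t.tagpath
FROM sqlth_te t
JOIN sqlth_scinfo s ON t.scid = s.id
JOIN sqlth_drv d ON s.drvid = d.id
ORDER BY d.name, t.tagpath;
```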

All of this can be a tad confusing, especially when combined with realtime SQLTags. Realtime SQLTags can be either internal or external. If internal, the driver is the system they’re running on. If external, each tag has a “driver name” associated with it. This driver name is specified in the configuration of the driver and ISN’T the system name. This was required because you can technically define multiple drivers per Ignition system (though I’m not aware of anyone who’s done that).

Anyhow, all of that is probably unnecessarily confusing info, but I figured since you asked… :slight_smile:

Basically, it comes down to this: if you have multiple Ignition gateways connected to a single database, you’ll want the system names to be different (which they are by default; each Ignition gets a randomized name), and if you’re using external SQLTags, you’ll want the driver name in the “external driving provider” to be different for each.

Regards,

Your explanations help a lot. Thanks!