Tag History Issue When Upgrading to 7.6.3

Hey guys,

I upgraded our servers from 7.6.2 to 7.6.3, and now some, but not all, of my tags won’t show past history on trends; they just started over from the upgrade point.

[attachment=1]TagHistory1.png[/attachment]

I looked in the sqlth_te table and noticed that there are two entries with null values in the retired column for most tags. As a test, I tried deleting the latest entry for one tag, but I still can’t view its past history.
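For reference, a query along these lines should list the tags that have more than one unretired entry (I’m going off the column names I can see in my table, so adjust if yours differ):

[code]
-- List tags in sqlth_te that have more than one "live" row (retired IS NULL)
SELECT tagpath, COUNT(*) AS live_rows
FROM sqlth_te
WHERE retired IS NULL
GROUP BY tagpath
HAVING COUNT(*) > 1
ORDER BY tagpath;
[/code]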

[attachment=0]TagHistory2.png[/attachment]

Is there a way to force the tag history system to be able to view all of the past history data? It’s all there; the tag history query system just won’t find it for some reason. Thanks!

Hi,

Please, please don’t delete anything else from that table! That’s the only thing linking the stored data to the tag path. If you’ve only deleted one row, we can reconnect it, but more than that and you’ll have no way of knowing which data belongs to which tag!

Can you call in tomorrow so we can take a look? It’s very possible that for some reason it just thinks the data is bad quality now and is not displaying it. We can try to track down why that is.

Regards,

Hey Colby,

I’m out of the plant for the rest of the week, but I’ll give you a call next week when I’m back. Thanks!

Hi,

If you have access to the system, you could try looking through the suggestions in this document, and see how everything looks:
[attachment=0]TroubleshootingGuideTagHistory.pdf[/attachment]

In particular, you might try the following: in the client, go to Help > Diagnostics and click “Clear Tag History Cache”. Then, in the gateway, turn the History.SQLTags logger to “Trace” (this will turn everything on), go back to the graph, and refresh once (just change the date). Turn the logger back off, and then post or email the wrapper.log file to us so we can see if anything is apparent. My guess is that we’ll see all the values written by the queryresultwriter logger, but they’ll be stale.

Regards,

Hey Colby,

I looked through the document, but I don’t see anything that stands out to me as a problem. I ran through the steps to log everything, and I’ll attach the log file to this post. I trimmed it to the time when I turned the logging to Trace (should be around 12:18:00).

There were 25 open clients at the time, so I don’t know how much junk is in there from the other clients, but you can look for any tags that have HACCP in the tag path.

[attachment=0]wrapper.log[/attachment]

Oh man, so close… that log contains about the first 50% of the query. It cuts off before the actual value loading and result writing.

There definitely is a lot going on, which might contribute to the slowness of the queries, but for the query in question, for example, reading the initial values for the 7 tags took over 30 seconds. That’s really pretty rough. What database are you using?

Anyhow, you can try it again and let it run for longer, or we can wait until we can look at it interactively. Since next week is the community conference, if you want an answer and possible fix for it soon, we should probably try the logs one more time.

Regards,

I tried it a few more times, but I don’t know if it’s going to be any different. With the logging set to Trace, all of the queries seem to take forever, and the client actually gets disconnected from the gateway after 20-30 seconds. The last time, I turned off everything but the query result writer.

When the level is not set to Trace, everything is very quick, so I don’t think it’s a database issue. Everything is virtualized, connected physically over gigabit Ethernet, running MS SQL 2005. Maybe it has something to do with me being connected via VPN instead of being at the plant, but again, when the logging is not set to Trace it’s very fast (less than half a second to update).

Maybe there is a timeout value that I need to adjust when doing the logging?

Oh well, see if it’s any different in the included file. If not, no one has complained yet, so it can probably wait. I would just like to be able to access that year’s worth of data eventually.

[attachment=0]wrapper.log[/attachment]

Hi,

While that log is basically a mish-mash of information, I did manage to see a couple of things. There is one example of a query that covered the early morning of 9/7, where data was loaded but written out as “stale”. This is most likely caused by some issue with the “sqlth_sce” table.

If you want, you can follow the steps in that guide under “Resetting the sqlth_sce Table” and see if that helps. Note that with SQL Server there is no unix_timestamp() function, so you’ll have to use something like “cast(DATEDIFF(SECOND, {d '1970-01-01'}, CURRENT_TIMESTAMP) as bigint)*1000” instead.
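For example, if the guide has you stamp the current time into the table, the substitution would look roughly like this. This is just a sketch: I’m assuming the table’s start_time/end_time columns hold epoch milliseconds, so double-check the column names against your own sqlth_sce table and the guide before running anything:

[code]
-- SQL Server has no unix_timestamp(); this expression returns the current time as epoch milliseconds.
-- (The {d '1970-01-01'} form above is the ODBC/JDBC escape for the same date literal.)
SELECT cast(DATEDIFF(SECOND, '1970-01-01', CURRENT_TIMESTAMP) as bigint) * 1000;

-- Sketch only: wherever the guide calls for unix_timestamp()*1000, drop the expression in, e.g.
-- closing out any rows that were left open (assumes an end_time column in milliseconds):
UPDATE sqlth_sce
SET end_time = cast(DATEDIFF(SECOND, '1970-01-01', CURRENT_TIMESTAMP) as bigint) * 1000
WHERE end_time IS NULL;
[/code]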

Yes, the logging can really slow things down. The only thing that threw me off, though, were some long query messages, which I don’t think should be related to that… but maybe. In this particular log, some of the queries failed with bad quality because the database faulted, presumably because it hit the max connection limit (probably still at the default of 8?). If none of this is a problem normally, then don’t worry about it.

Regards,

I deleted all of the rows in the sqlth_sce table, set all of the start_date values to 1980-something, and now everything is working. Thanks, Colby!

Great, glad to hear it. The SCE idea is nice, but tends to cause too many issues like this. I think we’re going to make it an option to turn off soon, or maybe we could just make the graph show the data, but with some other indication that it might not be reliable.

Regards,

Yeah, I like those ideas. Just weird that it happened this time I updated the servers. I’ve updated them many times and never had that problem. Oh well, you learn something every day. Thanks again, you guys are the best!

Colby:

Can you repost a link to this guide or provide a link to the knowledgebase article?

TroubleshootingGuideTagHistory.pdf, specifically the section on ‘Resetting the sqlth_sce Table’.

Thanks!