When a new alarm arrives, I lose a point of history

I've run into a strange problem and I can't see any reason for it.
When I historize a SQLTag, everything goes well; then a new alarm arrives and I lose one point of history.
I don't know where to look …

The analog variable is historized in red, and the blue variable is one with an alarm.
Whenever there is an alarm on the Gateway, I lose a history point.

I tried changing the history scan class, but in every case I still lose a history point.

Thank you.

It looks like your historical data is going to bad quality, which won’t be plotted on an Easy Chart. It’s interesting that the alarm seems to correlate with the bad quality.

What are the specifications of your Ignition Gateway system (number of cores, RAM, etc…)?

What version of Ignition are you running?

What type of event does the alert represent?

After running various tests, everything works well with only a few SQLTags, so I think the problem is my computer’s performance.
In short, I have 200 analog variables to historize every 500 ms, plus 2,000 alarms.
My alarms are boolean SQLTags that trigger on change (to trace the changes). There is no connection between the analog variables and the alarms on my chart.
It seems that when the Gateway handles an alarm, it blocks one history cycle. With a 1000 ms history scan class I have the same problem: I lose one point whenever a new alarm arrives.

Ignition v7.5.5 beta
4 cores
Ubuntu 12.04 LTS
I’ll look into the rest tomorrow.

I tried creating a historical transaction group for the 200 analog variables on a 500 ms cycle. It works well, except that the t_stamp field in the MySQL table (a DATETIME field) does not store milliseconds. Is it possible to get them?

Thank you for your help. :prayer:

The only real way to store milliseconds in MySQL is to use a bigint column and store the value as unix time in milliseconds. It’s the same method we use for SQLTags history. I hope that helps clear up your question.
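As a rough illustration (not Ignition’s actual code), converting between a bigint unix-time value in milliseconds and a readable timestamp can be sketched in Python like this; the sample t_stamp value is made up:

```python
from datetime import datetime, timezone

def millis_to_datetime(t_stamp_ms: int) -> datetime:
    """Interpret a bigint unix-time value (milliseconds) as a UTC datetime."""
    sec, ms = divmod(t_stamp_ms, 1000)
    return datetime.fromtimestamp(sec, tz=timezone.utc).replace(microsecond=ms * 1000)

def datetime_to_millis(dt: datetime) -> int:
    """Convert a UTC datetime back to unix-time milliseconds."""
    return int(dt.timestamp()) * 1000 + dt.microsecond // 1000

# Hypothetical t_stamp value as it might appear in the history table.
ts = 1357084800123
dt = millis_to_datetime(ts)
print(dt)  # 2013-01-02 00:00:00.123000+00:00
assert datetime_to_millis(dt) == ts  # round-trip preserves the milliseconds
```

Integer division is used on purpose so the millisecond part survives the round trip exactly, which a plain float conversion would not guarantee.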

As far as your logging vs alerting problem, it is possible that it is performance related. Could you post up a thread dump from the gateway control utility?

So I understand that in a historical transaction group I cannot get the milliseconds, since the “t_stamp” field is a DATETIME?

To confirm that my problem does not come from the acquisition, I took a screenshot:

  • The green rectangle represents the standard 500 ms SQLTag history
  • The blue rectangle represents the result of my 500 ms transaction group history
    Both representations are of the same variable.
    I can see the value changes in my transaction group history, but not in the SQLTag history.
    I have also attached the thread dump.

Ubuntu 12.04 LTS
Intel® Core™ i5-2510E CPU @ 2.50GHz × 4

Gateway 7.5.5-beta1 (b1208) | 32-bit

Thank you.

thread-dump003.txt (91.1 KB)

From the thread dump, I can see that some tags are executing queries to get their values. From the fact that it shows up in the dump, it either means that the queries are taking some time, or there are many of them.

I would guess that you are using the same scan class for both tag execution and history. The long execution time for the query-based tags is causing the scan class execution to take a long time, which in turn makes the data “stale” in the historical database (and causes values to be missed).

I would recommend the following: separate scan classes for history and execution, and a separate execution scan class for the expression tags.

Going further, it may be possible to improve the tag query performance with better indexes or a different where clause, but that’s a different issue. There should be a way to get the OPC tags to store history with no gaps.
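To make the indexing suggestion concrete, here is a minimal sketch using Python’s built-in sqlite3 as a stand-in for MySQL; the table and index names are hypothetical, but the idea is the same: an index matching the WHERE clause lets the query-based tags do an index search instead of a full table scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tag_values (tagid INTEGER, t_stamp INTEGER, value REAL)")
# Hypothetical composite index covering the columns the tag query filters on.
conn.execute("CREATE INDEX idx_tag_stamp ON tag_values (tagid, t_stamp)")

# Ask the planner how it would run a typical "latest value" tag query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT value FROM tag_values WHERE tagid = ? ORDER BY t_stamp DESC LIMIT 1",
    (1,),
).fetchall()
print(plan)  # the plan references idx_tag_stamp rather than scanning the table
```

The same principle applies in MySQL: check the query with EXPLAIN and add an index on the filtered/sorted columns if the plan shows a full scan.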

The machine specs are more than fine, it should easily handle this.


Thank you Colby, it works very well now.
You’re doing a great job on the forum; congratulations to the whole team …