Unknown tag writing

Hello everyone. A week ago I ran into a bad bug:
I created a memory tag, and in a timer script I write this:

HasAlarm = system.db.runScalarQuery("SELECT COUNT(*) FROM alert_log where sent = 0")
system.tag.write("gsm_sms/Count", HasAlarm)

This is a 3-second fixed-delay timer script with dedicated threading. It works correctly, but after some time something goes wrong.
The count in the database is 0, and the script writes 0 to the tag too. But the tag changes itself to another value, sometimes 15 (like today), sometimes something else.

I stopped the script and tried to change the value myself. The value goes to 0 and then returns to 15. I don't have any other scripts or bindings writing to this tag. I tried deleting the tag and creating a new one. For a few hours it was fine, but then the error happened again. I tried renaming the tag and changing its name in my script. After a few hours the error started again.

Sorry for my English. Please help.

Look at this video for a better understanding:
sendspace.com/file/a5hajb

You aren’t writing anything to the tag with that code.
Use this:

system.tag.write("gsm_sms/Count", HasAlarm)

[quote="jpark"]You aren't writing anything to the tag with that code.
Use this:

system.tag.write("gsm_sms/Count", HasAlarm)[/quote]

It is writing. That was just a mistake in the forum post. Look at the video.

Hello
From your video, that timer script is running every 3 seconds.
Inside that timer script, you have several time.sleep(x) calls that wait for a total of at least 7 seconds.
It then updates the alert_log table with Sent = 1.
But if the timer is set to run every 3 seconds and there is a 7-second delay inside that script, you are going to have some timing issues.
What’s the purpose of the time.sleep inside that script?
You may want to use system.tag.writeAll instead.
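
For instance, the per-message writes could be batched into a single call. This is just a sketch, assuming the gsm_sms tag paths and the row2/Text variables from the script in your video:

[code]# A sketch: batch the individual writes into one writeAll call.
# Paths and values are assumed from the script shown in the video.
paths = ["gsm_sms/sDestNumber", "gsm_sms/sMsg2Send", "gsm_sms/iSMSFlag"]
values = [row2[0], Text, 2]
system.tag.writeAll(paths, values)[/code]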

[quote="jpark"]Hello
From your video, that timer script is running every 3 seconds.
Inside that timer script, you have several time.sleep(x) calls that wait for a total of at least 7 seconds.
It then updates the alert_log table with Sent = 1.
But if the timer is set to run every 3 seconds and there is a 7-second delay inside that script, you are going to have some timing issues.
What's the purpose of the time.sleep inside that script?
You may want to use system.tag.writeAll instead.[/quote]

It doesn't matter. I stopped the script later, but I still can't change the tag.

I watched your video a number of times; however, I am not fully convinced that something isn't writing to the Count2 tag from somewhere else. You can do a Find/Replace and search for 'system.tag.write(Change2'. Please search the entire project and all gateway and timer scripts. Also, place the same SELECT query on a query tag and see if it has the same issue.

As I said, I created this tag only a day ago. Even when I rename it, or delete it and create a new tag with a unique name, the bug starts again after a few hours.

Hi,

Is this system redundant? I’m trying to guess at any potential cause, and writes to static tags are, in fact, shared between redundant systems, so there might be something there.

The other question I have is: when you rename the tag, do you see errors in the gateway console about attempting to write to the incorrect address? This might give a clue as to where the write is coming from. Or, you might also try to edit the tag and make it read only when this problem is occurring, in order to see what kind of errors are raised.

I suppose it might also be helpful to print the HasAlarm value from the script, so we can see if the incorrect number is actually somehow coming from the query. Print writes only to the wrapper log file, but you can also use system.util.getLogger, which will write to the console.
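
For example (a minimal sketch; the logger name here is just an illustration):

[code]# Log the query result so a stray value can be traced in the console.
logger = system.util.getLogger("gsm_sms")
HasAlarm = system.db.runScalarQuery("SELECT COUNT(*) FROM alert_log where sent = 0")
logger.info("HasAlarm = %s" % HasAlarm)
system.tag.write("gsm_sms/Count", HasAlarm)[/code]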

Final question: Does this only seem to start after you do something else, like save the project?

Regards,

The bug goes away when I change the script to this:

[code]import time

# Count the unsent alarms and publish the count to the memory tag.
HasAlarm = system.db.runScalarQuery("SELECT COUNT(*) FROM alert_log where sent = 0")
system.tag.write("gsm_sms/Count", HasAlarm)

# Fetch the oldest unsent alarm.
AlarmText = system.db.runQuery("SELECT displaypath, STATE_NAME, ACTIVE_TIMESTAMP FROM alert_log where sent = 0 order by alert_log_ndx asc limit 1")
system.tag.write("gsm_sms/1_len", len(AlarmText))
if len(AlarmText) != 0:
    table = system.db.runQuery("SELECT TelNumb FROM Users WHERE TelNumb IS TRUE")
    for row in AlarmText:
        Text = "%s, %s" % (row[0], row[1])
        Time = "%s" % (row[2])
        # Send the alarm text to every subscribed phone number.
        for row2 in table:
            time.sleep(1.0)
            system.tag.write("gsm_sms/sDestNumber", row2[0])
            system.tag.write("gsm_sms/sMsg2Send", Text)
            time.sleep(2.0)
            system.tag.write("gsm_sms/iSMSFlag", 2)
            time.sleep(2.0)
            sPath = "gsm_sms"
            system.tag.write(sPath + '/iSMSFlag', 3)
            sN = row2[0]
            sM = Text
            time.sleep(2.0)
            iRes = app.gsm_io.ProcessTask(sN, sM, sPath)
            system.tag.write(sPath + '/iSMSFlag', iRes)
        # Mark the alarm as sent once all numbers have been processed.
        system.db.runUpdateQuery("UPDATE alert_log SET Sent = 1 WHERE ACTIVE_TIMESTAMP = '%s'" % (Time))[/code]

Now that I am not using HasAlarm for the "!= 0" comparison in the script, the tag writes fine. This script now works correctly.
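
As an aside, building the UPDATE with string formatting will break if the timestamp ever contains a quote character. A safer variant, sketched here with Ignition's prepared-statement call, would be:

[code]# A sketch: pass the timestamp as a parameter instead of splicing
# it into the SQL string.
system.db.runPrepUpdate("UPDATE alert_log SET Sent = 1 WHERE ACTIVE_TIMESTAMP = ?", [Time])[/code]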

But we have other, larger scripts which take values from electricity meters. We have a dialog going with Gregory Simpson and want to show him another, similar, but more important problem.

Today the issue happened again, and I see that this problem occurs with any memory tag that is written to by a script.

In the logs I saw this:

[code]14:50:03 SQLTags.TagProviders.Provider[default].TagStore Error storing tag values.

simpleorm.utils.SException$Jdbc: Executing UPDATE SQLTagProp SET DoubleVal = ? WHERE TagId = ? AND Name = ? AND DoubleVal = ? AND Path = ? for [TagPropertyRecord 164809, Value, Dirty0]
at simpleorm.sessionjdbc.SSessionJdbcHelper.flushExecuteUpdate(SSessionJdbcHelper.java:409)
at simpleorm.sessionjdbc.SSessionJdbcHelper.flush(SSessionJdbcHelper.java:376)
at simpleorm.sessionjdbc.SSessionJdbc.flush(SSessionJdbc.java:425)
at simpleorm.sessionjdbc.SSessionJdbc.flush(SSessionJdbc.java:410)
at simpleorm.sessionjdbc.SSessionJdbc.commit(SSessionJdbc.java:344)
at com.inductiveautomation.ignition.gateway.sqltags.tagproviders.internal.InternalTagStore.internalStoreTagValues(InternalTagStore.java:1252)
at com.inductiveautomation.ignition.gateway.sqltags.tagproviders.internal.InternalTagStore.storeTagValues(InternalTagStore.java:1140)
at com.inductiveautomation.ignition.gateway.sqltags.providers.AbstractStoreBasedTagProvider.tagValuesChanged(AbstractStoreBasedTagProvider.java:2029)
at com.inductiveautomation.ignition.gateway.sqltags.scanclasses.SimpleExecutableScanClass$ScanClassTagEvaluationContext.processAndReset(SimpleExecutableScanClass.java:1138)
at com.inductiveautomation.ignition.gateway.sqltags.scanclasses.SimpleExecutableScanClass.run(SimpleExecutableScanClass.java:872)
at com.inductiveautomation.ignition.common.execution.impl.BasicExecutionEngine$TrackedTask.run(BasicExecutionEngine.java:573)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.sql.SQLTransactionRollbackException: transaction rollback: serialization failure
at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.fetchResult(Unknown Source)
at org.hsqldb.jdbc.JDBCPreparedStatement.executeUpdate(Unknown Source)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:102)
at simpleorm.sessionjdbc.SSessionJdbcHelper.flushExecuteUpdate(SSessionJdbcHelper.java:407)
… 17 more
Caused by: org.hsqldb.HsqlException: transaction rollback: serialization failure
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.Session.executeCompiledStatement(Unknown Source)
at org.hsqldb.Session.execute(Unknown Source)
… 21 more[/code]

I can't upload the logs. Your forum returns "413 Request Entity Too Large", but the file is only 10 MB.

sendspace.com/file/kdsmam - logs

The problem is still not solved.