FSQL writes to database counter

Can you answer me this: does FSQL have a diagnostic log file somewhere that can be viewed that would give information on what FSQL is doing with the database? I.e., how many writes it has done, whether it has sent the command to delete records, whether it got overloaded and chose not to write records, etc. I know the PMI has error logs, but am wondering if there is a similar log in FSQL.

Thanks and have a great day.

You can accomplish this with the auditing features built into your SQL database system (RDBMS). Maintain a separate database username for FactorySQL so its activity is easy to track.

Right, the database should provide a way to audit activity. FactorySQL itself does not have a place to see that activity, though I wouldn’t disagree that it might be a good idea.
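To make the auditing idea concrete, here is a minimal sketch of trigger-based auditing using Python’s built-in SQLite. A production RDBMS (MySQL, SQL Server, etc.) has richer built-in audit facilities, often filterable by login name, which is why a dedicated FactorySQL username helps; the table and column names below are hypothetical, not anything FactorySQL creates.

```python
import sqlite3

# Hypothetical data table plus an audit table; triggers record every
# write and delete, which is the kind of trail the question asks about.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE group_data (t_stamp TEXT, tag_value REAL);
CREATE TABLE audit_log (action TEXT, at TEXT DEFAULT CURRENT_TIMESTAMP);
CREATE TRIGGER audit_insert AFTER INSERT ON group_data
BEGIN
    INSERT INTO audit_log (action) VALUES ('INSERT group_data');
END;
CREATE TRIGGER audit_delete AFTER DELETE ON group_data
BEGIN
    INSERT INTO audit_log (action) VALUES ('DELETE group_data');
END;
""")

conn.execute("INSERT INTO group_data VALUES ('2024-01-01 00:00:00', 1.5)")
conn.execute("DELETE FROM group_data")
rows = conn.execute("SELECT action FROM audit_log").fetchall()
print(rows)  # one audit row per insert and per delete
```

With a dedicated username, the equivalent server-side audit log can be filtered to show only what FactorySQL did, separate from other applications.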

As for the other point, you ask whether it “got overloaded and chose not to write records”. This situation isn’t so much an event as a state. First, I should probably mention a bit about how FactorySQL executes: each group executes, and then gets rescheduled after it finishes. So if a group is set to execute every second but takes 3 seconds to run, it’s effectively going to run every 3 seconds. FactorySQL didn’t choose not to execute the other 2 expected executions; they just were never scheduled.
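The reschedule-after-finish model above boils down to a simple rule, sketched here with illustrative numbers (this is a description of the behavior, not FactorySQL internals):

```python
def effective_period(configured_rate_s, exec_duration_s):
    # The next run is scheduled only after the current one finishes,
    # so the effective period is whichever is longer: the configured
    # rate or the execution duration.
    return max(configured_rate_s, exec_duration_s)

print(effective_period(1.0, 3.0))  # 1 s rate, 3 s duration -> runs every 3 s
print(effective_period(1.0, 0.2))  # fast groups keep their configured rate
```

So the two “missed” executions in the 1 s / 3 s case were never skipped; there was simply no point at which they got scheduled.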

Now, these types of situations are easier to monitor in FactorySQL, as there are a few ways this is tracked. First, on the “Status” tab of each group there is information about the last “Execution Duration”: how long the last execution took. This can be useful if the group doesn’t have a trigger, or is triggered but checks infrequently, because when a group executes but has nothing to do (because the trigger isn’t active), it takes almost no time.

For further information, you can look at the “System Status” screen under the Connection menu, in the “Statistics” panel. At the top there is information about the avg. group exec cost, which is the average of all the group execution durations. The normal value should be <50ms, though with a lot of groups, or depending on the situation, <150ms isn’t horrible. More than that and something might be up.
The avg. exec delay is how long groups wait to execute; it should be around 15ms or so.
Group efficiency doesn’t really matter much. The Exec queue length, however, is the number of groups that are due to run but are waiting, and it should definitely be 0. The system can run fine even if it’s >0, but that would indicate something of an overload. On a system with a lot of groups, if things line up there can be a big burst of activity and the queue length can shoot up momentarily. So if it’s >0, hit refresh a few times and see if it’s consistent.
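FactorySQL only shows these statistics in its UI, but the rules of thumb above can be sketched as a little health check. The thresholds come from the text; the delay threshold and the sample values are my own assumptions for illustration:

```python
def check_stats(avg_exec_cost_ms, avg_exec_delay_ms, exec_queue_length):
    """Apply the rule-of-thumb thresholds from the System Status discussion."""
    warnings = []
    if avg_exec_cost_ms > 150:
        warnings.append("avg group exec cost >150ms: something might be up")
    if avg_exec_delay_ms > 50:  # assumption: well above the ~15ms norm
        warnings.append("groups waiting unusually long to execute")
    if exec_queue_length > 0:
        warnings.append("exec queue nonzero: refresh a few times; "
                        "a consistently nonzero value suggests overload")
    return warnings

print(check_stats(40, 15, 0))   # healthy system: no warnings
print(check_stats(200, 80, 5))  # overloaded system: three warnings
```

The queue-length check mirrors the advice above: a momentary spike is fine, so only a value that stays above zero across refreshes is worth worrying about.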

Finally, under “Statistic Monitors”, right below the information above, you can drill down to a particular group and check statistics for writes to the database, writes to OPC, etc., which might help give an idea of how long things are taking.

Hope this gives you some more info to work with.


Just want to say that this info was a great help. I can now prove that we can log a lot of values very fast (relatively) with cheap hardware!

I’m glad it helped you. On the topic of logging values quickly, I have the following comments:

  1. Usually quick data logging uses a trigger. If it’s a one-shot trigger, the group can be executed very quickly if necessary (~50ms), because it does very little work if the trigger isn’t active.

  2. If it’s not a one-shot trigger, and instead the group logs for the duration of the time that the trigger is high, with the default settings I wouldn’t try to execute the group faster than about every 250ms. You could try to go below that, down to maybe 100ms, but it may not be reliable. The reason is that, generally speaking, the insert queries take little time (4-15ms, depending on driver and connection method), but other things can interfere with that, such as some other program taking a lot of CPU, the database doing some sort of maintenance, etc.

  3. If you need to log very quick bursts of data (similar to #2, but you need to insert every 50ms reliably, though only for a few seconds at a time), you could look into Buffered Writing. It’s an advanced option on the standard group (to see it, you have to enable advanced options under Settings->Frontend Settings). It causes the records to be written into a buffer before going into the database. This means that the group can run virtually as quickly as it wants, and the database will write as it can. Obviously, the DB needs to catch up at some point, so that’s why I say “bursts of data”.

Note: buffered writing is always enabled for historical groups (the actual type of group, not a “standard group” storing history).

Also note: buffered writing can be a good idea for normal logging as well, as long as it isn’t too fast (basically following the principles outlined above). It’s really beneficial when you’re logging to a remote database whose connection might go down. The reason is that if the connection drops, it can sometimes take the database system a while to detect (sometimes 30 seconds or more). During this time a normal group would be blocked and unable to write to the DB. When the failure is detected, it fails over to the data cache, but you will have missed the samples for that time period. With buffered writing, the group always executes in very little time, and if there is a temporary delay with the database, the queue just grows. When it switches over to the data cache, the queue gets written there.
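The decoupling that buffered writing gives you can be sketched as a producer/consumer pair: the group thread appends records to an in-memory queue instantly, while a separate writer thread drains it to the database as fast as the DB allows. The timings and the fake `db` list below are purely illustrative, not FactorySQL’s actual implementation:

```python
import queue
import threading
import time

buf = queue.Queue()   # the write buffer
db = []               # stand-in for the database table

def group_execution(n_records):
    # The group only enqueues; it returns immediately and is never
    # blocked by a slow or stalled database connection.
    for i in range(n_records):
        buf.put(("sample", i))

def db_writer():
    # Drains the buffer at whatever pace the database can sustain.
    while True:
        record = buf.get()
        if record is None:      # sentinel: shut down the writer
            break
        time.sleep(0.001)       # pretend the INSERT takes ~1ms
        db.append(record)

writer = threading.Thread(target=db_writer)
writer.start()
group_execution(100)            # a quick burst of 100 records
buf.put(None)
writer.join()
print(len(db))  # all 100 records eventually land in the database
```

If the “database” stalls, the queue simply grows instead of samples being dropped, which is exactly the failover behavior described above.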

And then yes, you can use those statistics to actually monitor the situation :laughing: (I know, lots of info you didn’t ask for, but hey, I think it’s useful)