We had an application using a client as an andon board to display whether a line/station has been down for a length of time. Using the alert_log table and a few alarm summary components does a slick job. The only minor issue was that the cleared_timestamp column was not always getting an entry.
This was worked around by looking at the OPC downtime tag. If the machine was running but the alert didn't clear, we'd go ahead and clear it via a transaction group. Here's a sample query used to determine if there are any uncleared rows for a specific line:
SELECT * FROM alert_log
WHERE cleared_timestamp IS NULL
AND system='SQLTags.DS Station DT'
AND Path LIKE '3428/%'
ORDER BY State_Name
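The clearing step the transaction group performs amounts to a single UPDATE; a minimal sketch, reusing the same conditions as the query above (the system and Path values are from my setup, so substitute your own):

```sql
-- Stamp a cleared time onto any rows the alert system left open
-- for this line, once the OPC downtime tag shows it running again.
UPDATE alert_log
SET cleared_timestamp = CURRENT_TIMESTAMP
WHERE cleared_timestamp IS NULL
  AND system = 'SQLTags.DS Station DT'
  AND Path LIKE '3428/%';
```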
This killed my overall performance: CPU usage climbed into the 90%+ range -- on a quad-core system, no less -- triggers were missed for lack of free CPU cycles, and all sorts of other undocumented features showed up.
Using LIKE in the WHERE clause makes for a very slow query because there are no indices on the System or Path columns, so every lookup is a full table scan. Attempts to create indices fail because the data types are set to VARCHAR(255), which runs into key-length truncation issues.
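One alternative that avoids touching the column definitions at all is a MySQL prefix index, which indexes only the first N characters of each column and stays under the key-length limit; a LIKE with a fixed prefix (such as '3428/%') can still use it. A sketch, with the prefix lengths chosen arbitrarily:

```sql
-- Index only the leading characters; adjust the prefix lengths
-- to cover the longest values you actually filter on.
CREATE INDEX idx_alert_system ON alert_log (system(100));
CREATE INDEX idx_alert_path   ON alert_log (Path(100));
```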
Since my entries tend to be fairly short, I set my columns to VARCHAR(120) and was able to index with no issues. Actually, I think you can go up to 250 and be all right. IIRC, InnoDB allocates up to four bytes per character for the index key, so shortening the column size makes sense.
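The fix above boils down to two statements. A sketch against the stock alert_log table, using the 120-character size that worked for me (MODIFY restates only the type here, so re-add any NULL/default attributes your schema has):

```sql
-- Shrink the columns so their index keys fit, then index them.
ALTER TABLE alert_log
  MODIFY system VARCHAR(120),
  MODIFY Path   VARCHAR(120);
CREATE INDEX idx_alert_system_path ON alert_log (system, Path);
```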
Now things are back to what passes for normal around here.