Alarm journal slow query

I have a project on Ignition 7.9.8 that has been running for about 3 years. It has a screen that shows an alarm journal, programmed so the user can choose the start and end dates (by default it queries the last day).

The user can also filter with a text field. It worked without any problem, but this year the journal's queries to the database take too long (about a minute and a half) and interrupt the database connection for a few seconds. Even a query of just 1 day of alarm data no longer works properly; Ignition loses its connection to the database. Before, it could display even a month. The journal was configured to delete alarm records each year, but the alarm priorities were never configured: everything was created as “low”, and the alarm journal stores everything in a single table.

What can I do to prevent Ignition from losing the database connection to a timeout? Or should I delete the data from my alarm event tables?

Does the alarm_events table have an index? Is the index ‘healthy’? How many rows are actually in alarm_events and alarm_events_data?
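
You can check the counts with plain SQL (table names here are Ignition's default journal schema):

-- Row counts for the journal table and its associated-data table:
SELECT COUNT(*) FROM alarm_events;
SELECT COUNT(*) FROM alarm_events_data;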

@PGriffith By default Ignition seems to create 3 default indexes on the alarm events. Is there any reason all columns aren’t indexed by default?

Also, what would prevent alarm events from being broken into 1 month tables like tag history so old tables could simply be dropped? We don’t use pruning or delete statements because they tend to be super slow and can block other data from being written.

Thanks,

Nick

The tables have an id column for the index. I have 2,800,000 rows in each table. Should I delete some rows?

If you are doing a query based on date/time you will also need an index on the columns you are searching on to speed that up. Might be worth seeing what other columns have indexes.
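
To illustrate (a sketch, assuming MSSQL and Ignition's default column names), the journal's date filter is essentially a range predicate on eventtime, and without an index on that column every query scans the whole table:

-- Roughly the shape of the journal's date-range query; with no index on
-- eventtime this is a full table scan:
SELECT * FROM alarm_events
WHERE eventtime >= '2019-01-01' AND eventtime < '2019-01-02';

-- An index on the searched column lets the engine seek to the range instead
-- (the index name is illustrative):
CREATE NONCLUSTERED INDEX ix_alarm_events_eventtime ON alarm_events (eventtime);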


I have tags that should not be saved in the alarm journal because they are process alarms that are continually changing. What I will do is configure this type of alarm as Diagnostic so they are not saved in the history. I will also erase most of the data in the tables, so that from now on only the alarms that really matter remain.
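
If you delete manually, doing it in small batches keeps lock times short. A minimal sketch for MSSQL, assuming alarm_events_data references alarm_events by its id column; the cutoff date and batch size are illustrative:

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    -- Remove the child property rows first so no orphans are left behind
    DELETE TOP (10000) d
    FROM alarm_events_data d
    INNER JOIN alarm_events e ON d.id = e.id
    WHERE e.eventtime < '2018-01-01';
    SET @rows = @@ROWCOUNT;
END;

SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM alarm_events
    WHERE eventtime < '2018-01-01';
    SET @rows = @@ROWCOUNT;
END;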

Us refactoring the system?
In some ways, the simplicity of the alarm journal system is an advantage; you can easily write queries against it yourself, or insert your own records should the need arise. In contrast, the tag historian's architecture is literally impossible to write a single SQL query against.
It wouldn't be impossible to change things now, or to add some way to 'opt in' to a fragmented structure, but it would be low on our priority list. Many database engines have special support for fragmented time-series data that allows them to break up the data in a way that's transparent to consumers; depending on your RDBMS, that may be worth looking into.

May be worth adding an index on the timestamp? 2.8 million rows isn't that many, but could be too many for a resource-poor database instance.

@PGriffith we are looking into taking advantage of DB-side functionality to handle partitioning. The main problem we have with alarms is size control, which leads to issues such as those mentioned in this post. We are averse to issuing DELETE statements because they cause blocking, and pruning issues a DELETE statement.
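
For anyone exploring the same route, here is a minimal sketch of monthly partitioning in MSSQL (all names are illustrative, and eventtime is assumed to be a datetime column; an existing table also has to be rebuilt onto the scheme, with details depending on your keys):

-- Boundary values split eventtime into monthly ranges (extend as months accrue):
CREATE PARTITION FUNCTION pf_alarm_monthly (datetime)
AS RANGE RIGHT FOR VALUES ('2019-01-01', '2019-02-01', '2019-03-01');

-- Map every partition to a filegroup (all to PRIMARY here for simplicity):
CREATE PARTITION SCHEME ps_alarm_monthly
AS PARTITION pf_alarm_monthly ALL TO ([PRIMARY]);

-- Once alarm_events lives on the scheme, SQL Server 2016+ can drop the oldest
-- month without issuing a DELETE, so nothing is blocked:
TRUNCATE TABLE alarm_events WITH (PARTITIONS (1));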

@MoisesZV here are a few things that may be helpful to you. Bear in mind these are all for MSSQL, so the syntax may be slightly different in your case.

First, you can see which indexes already exist with this statement:

USE Ignition_Data;
SELECT 
     TableName = t.name,
     IndexName = ind.name,
     IndexId = ind.index_id,
     ColumnId = ic.index_column_id,
     ColumnName = col.name
FROM 
     sys.indexes ind 
INNER JOIN 
     sys.index_columns ic ON  ind.object_id = ic.object_id and ind.index_id = ic.index_id 
INNER JOIN 
     sys.columns col ON ic.object_id = col.object_id and ic.column_id = col.column_id 
INNER JOIN 
     sys.tables t ON ind.object_id = t.object_id 
WHERE 
     ind.is_primary_key = 0 
     AND ind.is_unique = 0 
     AND ind.is_unique_constraint = 0 
     AND t.is_ms_shipped = 0 
     AND t.name = 'alarm_events'
ORDER BY 
     t.name, ind.name, ind.index_id, ic.is_included_column, ic.key_ordinal;

On a table that is auto-created by Ignition, we can see that there are 3 indexes that are created by default on eventtime, eventid, and eventtype.

What we have observed helps a little with speed is to add indexes as follows, noting that we start from index 4 since indexes 1 through 3 already exist:

USE Ignition_Data;

CREATE NONCLUSTERED INDEX alarm_events_ix4 ON alarm_events (source);
CREATE NONCLUSTERED INDEX alarm_events_ix5 ON alarm_events (displaypath);
CREATE NONCLUSTERED INDEX alarm_events_ix6 ON alarm_events (priority);
CREATE NONCLUSTERED INDEX alarm_events_ix7 ON alarm_events (eventflags);
CREATE NONCLUSTERED INDEX alarm_events_ix8 ON alarm_events ([eventtime], [eventtype]);
CREATE NONCLUSTERED INDEX alarm_events_ix9 ON alarm_events ([eventid], [eventtype]);

For reference, the point where we start to see issues with query speed (how long it takes to populate the Alarm Journal Table) is when we get to several hundred million rows.

In addition to indexing, which I think will help, you should also take the chance to review whether the alarms you are recording are actually useful. We did this exercise recently and found that the majority of our alarms were just for conveyor-full status, which was not useful to operations. So we turned those off, and now the DB grows much more slowly.
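
A quick way to spot the noisy sources (MSSQL syntax, default journal column names):

-- The sources with the most journal rows are the first candidates to stop recording:
SELECT TOP 20 source, COUNT(*) AS event_count
FROM alarm_events
GROUP BY source
ORDER BY event_count DESC;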

Nick
