Out of curiosity, is there any chance they had an error with their database connection around the time of the issue?
If the database causes the group to error out, someone would have to manually restart it. If this is critical data that you don’t want to lose, I would look at other options for collecting it to make sure you always have it. But if your database connection goes down you will run into issues either way.
With the alarm I mentioned before, you could add the data points into the email so they can be manually entered later. I know that normally wouldn’t be preferred, but if it’s emailed to you then you always have it, unless something goes wrong with email too.
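A rough sketch of what formatting the data points into the notification body could look like (the function and point names here are just illustrative, not from your project):

```python
def format_email_body(timestamp, points):
    """Build a plain-text body listing each data point so the values
    can be manually entered into the database later."""
    lines = ["Transaction group insert failed at %s" % timestamp,
             "Data points (enter these manually):"]
    for name, value in points:
        lines.append("  %s = %s" % (name, value))
    return "\n".join(lines)
```

In the pipeline script, a body like this would then be handed to system.net.sendEmail() along with your SMTP profile settings.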
You could also add a dataset memory tag that the data can be written to if there is a failure. In your alarm pipeline you could have a script read the dataset and the values you want to record, then append them to the dataset. You could then provide a way to review the saved data, insert it into your database, and clear the dataset for future use.
You could even build a named query that does the same insert the transaction group is set up to do. In your alarm script you can attempt to run that named query and then check the value returned. When doing an update with system.db.runNamedQuery(), it returns an int representing how many rows were affected. If your database connection is down it may error out, so I would wrap it in a try/except and have the backup be writing the data to the dataset mentioned above or sending it out in an email.
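Something like this is what I have in mind. I’m passing the named-query call in as a function so the sketch runs anywhere; in the real pipeline script it would be system.db.runNamedQuery() with your query path and parameters, and on_failure would be whichever backup you pick (the dataset append or the email):

```python
def record_with_fallback(run_insert, params, on_failure):
    """Try the DB insert first; if it errors out or reports no rows
    affected, hand the data to the backup method instead.
    run_insert stands in for the system.db.runNamedQuery() call, which
    returns the number of rows affected for an update-type query."""
    try:
        rows_affected = run_insert(params)
    except Exception:
        on_failure(params)      # connection down or query errored out
        return False
    if rows_affected != 1:
        on_failure(params)      # query ran but nothing was inserted
        return False
    return True
```

The return value lets the rest of the pipeline script know whether the backup path was taken, e.g. to decide whether the email still needs to go out.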
I know these aren’t what you would prefer to do, but I don’t see any way to monitor the transaction group either, and even if you could monitor it I haven’t noticed a way to restart the transaction group through a script if it fails. With that being the case, I’d plan for a backup method in case things go wrong if it’s critical data. With any of the ones I mentioned, you would also want to clear the alarm memory tag at the end so it will trigger the next time the data should be sent out.