Can someone provide guidelines to decide which approach to follow in which case?
Each one serves a different purpose; it depends on what you want to do!
There’s no general rule. Tell us what you are trying to do. But these links might help:
I am new, but as a general rule for myself:
Data Collection from Machines
I find that the work of setting up transaction groups enforces a uniform, standard approach that results in clean, organized data. It has been great for data collected automatically from machines.
I saw someone using Python scripting for this instead, and they were extremely picky about which tags on which machines, which I thought got complex quickly. Their data is very accurate, but in doing so I think they avoided a standard method that would have forced/encouraged standards in the PLC programs on the machines.
Any manual entry data, interface buttons, tricky customization
When I have people doing manual entry, I like the `system.db.runPrepUpdate` function in Python scripting. This is the case even if most of the data is automated.
This has also been useful for deleting rows from a subset of a table: if a user clicks on a row and then presses delete, the query is easy to run as a script.
In some cases, more complex things are easier to do with scripting. For me it is a tradeoff, though, between standardizing the PLCs and writing custom scripting. Some table customization, like coloring cells, is also done in scripting.
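To sketch the delete-on-click pattern above: the key is a parameterized query, which in Ignition would be a `system.db.runPrepUpdate` call. Since that API only exists inside an Ignition gateway, this runnable sketch uses Python's stdlib `sqlite3` as a stand-in; the table and column names are made up for illustration.

```python
import sqlite3

def delete_selected_row(conn, row_id):
    # The "?" placeholder keeps the query safe from SQL injection,
    # the same style as runPrepUpdate's argument list, e.g.:
    # system.db.runPrepUpdate("DELETE FROM readings WHERE id = ?", [row_id])
    cur = conn.execute("DELETE FROM readings WHERE id = ?", (row_id,))
    conn.commit()
    return cur.rowcount  # number of rows removed

# Demo with an in-memory database standing in for the SQL gateway connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO readings (id, value) VALUES (?, ?)",
                 [(1, 10.5), (2, 11.2), (3, 9.8)])
conn.commit()

deleted = delete_selected_row(conn, 2)   # pretend the user clicked row id 2
remaining = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
```

In a real window you would read the selected row's id from the table component's selection property and pass it in; the query itself stays this simple.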
Ad Hoc Trends, Power Chart, Exchange visualization
I have many historized tags specifically for use with the data visualization resources on the Exchange.
Historizing gives me a uniform tabulation of the data, which works great with these premade resources.
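To illustrate why that uniform tabulation matters: historian queries come back as rows with a timestamp column plus one column per tag, so any charting resource can consume them generically. This is a plain-Python illustration with made-up tag names and values, not Ignition's actual API (there it would be a `system.tag.queryTagHistory` call).

```python
# Simulated rows in the uniform shape a historian query returns:
# one t_stamp column plus a value column per historized tag.
rows = [
    {"t_stamp": 0,   "Line1/Temp": 71.2, "Line1/Pressure": 30.1},
    {"t_stamp": 60,  "Line1/Temp": 71.9, "Line1/Pressure": 30.0},
    {"t_stamp": 120, "Line1/Temp": 72.4, "Line1/Pressure": 29.8},
]

def series_for(rows, pen):
    # Because every row has the same columns, extracting an x/y series
    # for any pen is the same one-liner -- no per-tag special cases.
    return [(r["t_stamp"], r[pen]) for r in rows]

temps = series_for(rows, "Line1/Temp")
```

That sameness is what lets premade resources like Power Chart work on any historized tag without custom glue code.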