What triggers the queries to run now? Someone pressing a button? If so, add some expression logic that disables the button during certain hours.
If the queries run on a schedule via a gateway timer or similar, add an extra if condition that checks the time, so they only run during the interval you deem OK.
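As a sketch, that time gate can be a small helper the timer script calls before running the queries. The window bounds and function name here are my own assumptions; in an actual Ignition gateway script you'd get the hour from `system.date.getHour24(system.date.now())` rather than `datetime`.

```python
from datetime import datetime

def in_allowed_window(hour, start=8, end=17):
    """Return True if 'hour' (0-23) falls inside the allowed run window.
    Handles windows that wrap past midnight, e.g. start=22, end=5."""
    if start <= end:
        return start <= hour < end
    return hour >= start or hour < end

# In a gateway timer script you would guard the heavy query call, e.g.:
# if in_allowed_window(system.date.getHour24(system.date.now())):
#     run_the_heavy_queries()
print(in_allowed_window(datetime(2024, 1, 1, 12).hour))  # a noon run is inside the default window
```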
You might also try optimizing the queries with appropriate indexes; then this is less of a problem in the first place, if that's possible.
When the page is opened, the queries run once; if the user changes a parameter, they run again.
I do have a few realtime queries, though, that I am not trying to turn off.
As for indexing, the few queries I am worried about track events that happen so fast that I don't think adding indexes is reasonable. I have streamlined some of them; others are older and I still need to go speed them up.
So these are bindings that run queries, i.e. a named query binding or SQL binding? If that's the case, there's no way to turn a SQL binding on/off, and you would have to do this via scripting.
However, if these are named queries, you could just add a long cache time; then they will only re-query the database when that cache time is up, or if you manually ran system.db.clearNamedQueryCache() for them in between.
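For the manual-clear path, a minimal sketch (the project name and query paths below are made up; `system.db.clearNamedQueryCache` is the real Ignition call, and passing the scripting `system` object in as a parameter just makes the helper testable outside the gateway):

```python
def clear_query_caches(system, project, query_paths):
    """Clear the named query cache for each path so the next binding
    evaluation hits the database instead of stale cached results."""
    for path in query_paths:
        system.db.clearNamedQueryCache(project, path)

# Hypothetical usage in a gateway script (paths are examples only):
# clear_query_caches(system, "MyProject", ["reports/lineEvents", "reports/downtime"])
```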
I will say, though, this is a bit concerning. Users shouldn't be able to influence data collection, at least not just by opening a page, imo. The gateway/server level is what should be doing all your data collection, via transaction groups/historian/tag change events etc., and then any Vision/Perspective client only gets a view of what's been collected via select queries, and can only influence it via some explicit action like a button press, not just from navigation. Or maybe I am misunderstanding your issue here.
Among my tables, I have a few unindexed tables with very frequent inserts that I need to split into a lean fast-insert table and an indexed long-term table.
I should do that instead of working on a band-aid for the interim. I've put it off long enough.
In order to keep a rolling, indexed long-term table, I schedule a script that does this:
- create a table for the current month that includes the past months of data
- index the table
- drop the table that was created 2 months ago, keeping a backup in case the current one is not working
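The steps above could be sketched as SQL generated from the run date. The table names, the `t_stamp` column, and the generic syntax are all assumptions (and the backup step is omitted); in Ignition the statements would be executed via something like `system.db.runUpdateQuery`:

```python
from datetime import date

def rollover_statements(run_date, base="events"):
    """Build the three statements for the monthly rollover: create this
    month's long-term table from the fast-insert table, index it, and
    drop the table created two months ago."""
    ym = lambda d: "%s_%04d%02d" % (base, d.year, d.month)
    # step back two months to find the drop target, handling year wrap
    m = run_date.month - 2
    old = date(run_date.year - 1, m + 12, 1) if m < 1 else date(run_date.year, m, 1)
    cur = ym(run_date)
    return [
        "CREATE TABLE %s AS SELECT * FROM %s_fast WHERE t_stamp < NOW()" % (cur, base),
        "CREATE INDEX idx_%s_tstamp ON %s (t_stamp)" % (cur, cur),
        "DROP TABLE IF EXISTS %s" % ym(old),
    ]
```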
Is this a good way?
Or what is the best practice?
I put it off because it is a little intimidating. I think I've asked about it a few different ways, but it still intimidates me.
Took a bit to get used to calling queries in scripts.
Get a better DB. There shouldn't be anything "too fast" to index on the fly. If $$ is a problem, then PostgreSQL's BRIN indices are particularly well suited to fast ingest of timestamped data.
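For reference, a BRIN index on a timestamp column is a single statement. The table and column names here are hypothetical, and `pages_per_range` is optional tuning; shown as a DDL string since in Ignition you'd run it through a script or a DB tool:

```python
# PostgreSQL-specific DDL; 'events' and 't_stamp' are made-up names.
# A BRIN index stores per-block-range summaries, so it stays tiny and
# cheap to maintain even under fast, append-only timestamped inserts.
BRIN_DDL = (
    "CREATE INDEX IF NOT EXISTS idx_events_tstamp_brin "
    "ON events USING brin (t_stamp) WITH (pages_per_range = 32)"
)
```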
Is the tag historian indexed?
Honestly, I don't know.
I collect historian data at about once every 6 seconds for most things, and only things like speed do I try to capture faster. So I am not sure my usage of the historian is a good measure.
It took 4 minutes for the index to be created; I had thought it would take that long on each insert.
Though that table did have 100 million records instead of just 1.5 million.
Yea, I think you're worried for nothing regarding the index. As Phil said, it's not even one per second. Only if we're talking about constantly having multiple INSERTs every second would I think about not adding an index.
Nah, even then I'd have an index. You need the index to access large quantities of data. What is the point of inserting large quantities of data if you cannot access it?