I have multiple gateways on the same network that will be running the same project with the same (sub) set of tags (master has all tag providers, others have only one each). Some of these tags have history configured. There's one central gateway that has the historical database.
I was thinking it would be easiest to have all of the different gateways just connected to that database over the network for viewing. Only one of the gateways (the one with access to all of the tags) needs to actually log the data, the rest can just view.
However, since all the gateways are running the same project with the same tags, the database connection name would be the same. If multiple gateways have duplicate tags logging to this historical database, what happens? Just a ton of extra data? I was thinking I could conditionally enable/disable history on the tags based on which gateway they're on.
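For what it's worth, the conditional part of that idea is simple to sketch. This is a pure-Python sketch with made-up gateway names, not an actual Ignition script; on a real gateway the decision could run in a gateway startup script and the property write would go through `system.tag.configure`:

```python
# Sketch: decide whether THIS gateway should log history for its tags,
# so only one gateway ever writes rows to the shared history database.
# Gateway names here are hypothetical examples.

CENTRAL_GATEWAY = "central"  # the one gateway designated to log

def history_enabled(gateway_name, central_gateway=CENTRAL_GATEWAY):
    """Return True only on the gateway designated to log history;
    every other gateway just views the data."""
    return gateway_name == central_gateway

# Only the central gateway logs; the sub-gateways just view.
print(history_enabled("central"))  # True
print(history_enabled("line1"))    # False
```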
Right, I saw that on a previous post, but I don't really need to take up twice the storage space to log the same data twice (one from the central gateway and one from the sub-gateways).
We came across a similar situation and ended up with a hub-and-spoke architecture: site gateways and one corporate gateway. You can add the database connection to each site gateway and log the data from each site gateway to the database. Then on the corporate gateway, add those tags as remote tag providers over the gateway network connections. You can then see the data from the corporate gateway without writing it to the database twice. Hope this helps!
Don't log from the central gateway. Configure the remote tag providers in the central gateway to divert history queries directly to the database, using the remote gateway names to disambiguate.
That's the other option I'm considering, but then you're depending on each of the sub-gateways to be up and running (more points of failure) for the master to function as intended.
@pturmel (wish we could reply to multiple comments at once)
Imagine, if you will, that one of the remote tag provider gateways dies. With that setup, you now have NO control of that local area. However, if you bring all tags into the central gateway independently of the sub-gateways, every sub-gateway could be down and you could still control everything from the center.
If the central gateway has history queries diverting to the DB, those will still work when that remote gateway is down.
When the remote gateway is down, you don't have anything to record, so no loss there. And the remote gateway can do store and forward if it is a connectivity issue.
Or is the central gateway making duplicate connections to the PLCs? (Ewww, if so.)
Sounds like you are trying to avoid proper deployment of redundancy at the remote sites. No good solutions for making a central gateway the "backup" for multiple other gateways. That way lies madness (and misery).
For now, until the central gateway is upgraded and all of the remote gateways are up and running, there would be 2 connections to each PLC - one from the central and one from the remote.
My original plan was remote tag providers, but the way it's running now (temporarily, maybe) made me think about this other option to do it in a way that the central always has control, independent of the remote gateways.
Is it really that big of a deal?
It's not really even a backup, they're just independent gateways in the end. The central gateway is the only one with a redundant gateway planned.
Yes. Don't do it. The gateway physically closest (network-wise) to the PLCs should be in control, with a redundant backup if applicable. That gateway (or pair) does all logging and supervisory control.
Central gateways have no business running production lines, and generally need no redundancy. (They might need load balancing, though.)
I understand that that's best practice and will try to sway our customer, but in the end it's up to them - they want control of everything at the center.
This was more a question about the multiple connections to the PLCs.
Yes, it's a big deal, especially with latency-sensitive PLC protocols. Or complex PLCs. It can be a struggle to get one gateway performing well in many cases--doubling the comms load at the PLC can be a disaster. Don't do it.
Got it. (and I'm guessing none of this changes if the remote gateways are edge installs)
Ok, so back around to historical and trending.
If the remote gateways and the central gateway all need access to trending, should they all log to and view that same central database using Sync Services with store-and-forward, with the trends shown at each remote gateway limited based on the tag provider that created the entry?
How would the remote gateway behave if connection to that central database was lost? Would a limited amount of historical data still be available locally from the Sync Services?
I guess that answers my original question then. Since the edge gateways can't connect directly to the database on the network they'll have to show trends from the internal database and then the central gateway will be in charge of logging historical data for long-term use.
edit: are you sure it can't run redundant? The pricing selector allows redundancy to be selected for an Edge panel, and the Edge status page has a slot for redundancy, along with a redundancy settings page.
Last thing then, I'll need to find a clever way to automatically assign the Storage Provider for tags with history to either "MySQL" at the central or "Edge Historian" at the remote gateways.
Too bad the expression binding on the Storage Provider doesn't allow bindings to tags...
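Since the Storage Provider binding can't reference tags, one workaround is to set the property from a script instead. A rough sketch, with a placeholder central/edge detection rule (adapt it to however your gateways are actually named); the provider names "MySQL" and "Edge Historian" are the ones from this thread:

```python
# Sketch only: pick the history storage provider for this gateway.
# The name-based detection is a placeholder assumption, not a rule
# from Ignition itself.

def storage_provider(gateway_name, central_gateway="central"):
    """Central gateway logs to the SQL database; edge gateways fall
    back to their internal historian."""
    if gateway_name == central_gateway:
        return "MySQL"
    return "Edge Historian"

provider = storage_provider("edge-line1")
print(provider)  # Edge Historian

# In an Ignition gateway startup script you could then push this onto
# the tags, e.g. (untested; "m" merges so other properties are kept):
#   system.tag.configure("[default]", [{
#       "name": "SomeTag",
#       "historyProvider": provider,
#   }], "m")
```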
I'm running 8.1.27. No mention of this addition in any of the 8.1.0-8.1.27 release notes.
For alarms, I found a decent solution for selecting the alarm pipeline for all (UDT) alarms within a provider.
Use a top-level string expression tag in the provider to hold the default alarm pipeline. This tag has one disabled alarm configured on it with the desired alarm pipeline for all tags in the provider.
Now, all I have to do is set the alarm pipeline on that one tag and it propagates to the rest of the UDT alarms. Any individual tag alarms can bind directly to the tag.
I was hoping to do something similar with a binding to the History Provider property, but I can't access tags in the binding.
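Abstractly, the pattern is one source-of-truth tag that every alarm's pipeline resolves from. A pure-Python sketch with made-up names, just to show the fan-out (not Ignition's actual binding mechanics):

```python
# One top-level "tag" holds the default pipeline name; every alarm
# that binds to it resolves the same value, unless it overrides it.

default_pipeline = {"value": "SiteNotifications"}  # the top-level string tag

def resolve_pipeline(alarm_override=None):
    """An alarm uses its own pipeline if set, else the provider default."""
    return alarm_override or default_pipeline["value"]

print(resolve_pipeline())            # SiteNotifications
print(resolve_pipeline("Critical"))  # Critical
```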
Have you actually tested that? (Changing the tag to see if the UDTs with pipeline default actually change.) I would not expect it to work.
Because...
Bindings for tag properties are not like any other bindings--they cannot subscribe to arbitrary resources outside of the parameters supplied by a UDT. The latter only work because altering a parameter at runtime restarts the whole UDT, and it is the restart that makes tag property bindings re-evaluate.