To whom this may concern,
I have a customer with a specific configuration, and I need supporting documentation stating whether the setup will work or not. The customer has 4 plants, each in a different state, with a centralized database in yet another state. I need to design for the worst-case scenario, so I chose the machine that has the most tags for constant polling and recipe tags. Each machine exchanges 60 KB of data between the SQL server and the PLC. With 13 machines per site, that's 3.12 MB of data to transfer across the network if the customer sends all their recipes down to all the machines simultaneously. On top of that, there will be a total of 31 KB of data that will be constantly polled.
I need documentation stating how the transaction groups work, whether this will saturate the network or not, and how much time it would take to send 3.5 MB of data. Currently, when I trigger recipe transaction groups, the data seems to be sent down in parallel. Is there a setup that will help limit the amount of bandwidth being used, or a better setup altogether? The customer will not install a separate server locally at each site, and I only have a 4 Mb/s network to work with. Is there some way to queue the transaction groups? If someone could help me with this it would be greatly appreciated.
To whom this may concern,
- I follow the math for 3.12 MB of data if you want to write all recipes at once (4 plants x 13 machines x 60 KB).
- Where does the 31 KB of constantly polled data come from? Is this "realtime" status? Do you have any "historical" data that you're constantly logging?
- 3.5 MB of data will take a theoretical 7 seconds to send over a dedicated 4 Mb/s link (3.5 MB x 8 b/B, divided by the 4 Mb/s link). In practice it will take longer.
- How long does it take now to trigger a recipe? Are you using “Block Groups”?
- There is no built-in way to restrict Ignition's bandwidth that I'm aware of. Your network guys can implement QoS, or may be able to limit the bandwidth of the network tunnel that an application uses. Your TCP/IP network is inherently "parallel".
- You could devise a scheme that prevents Ignition from triggering a batch transfer while another is in progress. This should be unnecessary with the amount of data you're dealing with and a 4 Mb/s connection.
- Are you planning on using Vision clients over the WAN link?
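To make the back-of-envelope numbers above concrete, here's the theoretical (best-case) transfer-time math as a tiny script. It ignores protocol overhead and competing traffic, which is why real transfers will take longer:

```python
def transfer_seconds(megabytes, link_mbps):
    """Theoretical seconds to move `megabytes` of payload over a `link_mbps` link."""
    bits = megabytes * 8          # MB -> Mb (8 bits per byte)
    return bits / link_mbps

# All recipes at once: 4 plants x 13 machines x 60 KB = 3.12 MB payload.
payload_mb = 4 * 13 * 60 / 1000.0

# Rounding the payload up to 3.5 MB over a dedicated 4 Mb/s link:
print(transfer_seconds(3.5, 4))   # 7.0 seconds, best case
```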
I would utilize Ignition's hub-and-spoke layout for this. Gotta jet (swimming lessons), but I can chime in when I get back, or I'm sure someone else can explain this.
- The 31 KB of constantly polled data is the system overview screen, i.e. "realtime" status. If a machine is running a recipe, the data is stored both historically and in a report table; I use those as historical tags for the realtime plots on the Easy Chart.
- We measured how long the transfer took on my side over a gigabit line, using Wireshark and Task Manager. Converting the values I got on the gigabit line to the 4 Mb/s line, we calculated it would have taken a lot longer than 7 seconds. Also, the line is not dedicated solely to us; it will carry other company network traffic as well.
- The recipe is triggered right away, and yes, I am using block transaction groups. For one machine it takes less than a second, but if I trigger more than one machine simultaneously, it increases to 2-3 seconds to transfer 3 machines' worth of data. I have at most 3 machines on site that I can work with right now; sometimes I'll have 1, 2, or none, depending on how many machines we sell.
- I'll look into QoS and limiting bandwidth by application with the IT guys at the company.
- I need to devise a way to limit traffic as much as possible so the network isn't saturated, so some buffering scheme would be optimal.
- Yes, Vision clients will be used across all 4 states. I do not know how many will be running at one time. Will more clients in use mean more network bandwidth usage?
What is the hub-and-spoke layout, and how do I implement it?
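Regarding a buffering scheme: there's no built-in queue for transaction groups, but a gateway script could gate transfers so only one group fires at a time. Below is a minimal sketch in plain Python (Ignition scripting is Jython, so the same shape applies). `send_recipes_serially` and `trigger_recipe` are hypothetical names, not Ignition API calls; in a real project, `trigger_recipe` would write the group's trigger tag and wait on a handshake/"done" bit before returning:

```python
from collections import deque

def send_recipes_serially(machines, trigger_recipe):
    """Trigger one machine's recipe transfer at a time, in FIFO order.

    `trigger_recipe(machine)` is assumed to block until that machine's
    transaction group reports completion (e.g. via a handshake tag),
    so only one transfer is on the wire at any moment.
    """
    queue = deque(machines)
    completed = []
    while queue:
        machine = queue.popleft()
        trigger_recipe(machine)   # blocks until this machine's transfer finishes
        completed.append(machine)
    return completed
```

This trades total batch time for a flatter bandwidth profile: instead of 13 parallel 60 KB bursts, the link sees 13 short sequential ones.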
The bottom line is that a 4 Mb/s link should be more than enough, unless users are free to do file downloads/uploads or other bandwidth-gobbling activities over the circuit.
The Hub and Spoke model would utilize Ignition gateways at each site running the SQL Bridge (spokes), with a centralized Ignition gateway running Vision and the SQL bridge. This model provides the following advantages:
A. Recipes are stored locally (no recipe WAN traffic and no dependence on the link). I would still devise a less frequent central recipe backup/restore in case a local Ignition gateway or DB becomes unavailable.
B. Historical data is “cached” locally if the WAN link drops.
Going to a wide area (meshed) architecture gives you the above, plus:
C. Vision client + project downloads are always local
D. You have unlimited flexibility in where historical data and recipes are stored and controlled from. Any facility can be set up to directly control, or "retarget" to, any other. You can still use a "master" or "central" site.
Your projects will be very similar with all 3 architectures. I’d recommend testing your options with trial (free) installations.
Timing a large PLC write on a gigabit line won't scale neatly to a 4 Mb/s circuit. Try a 100 Mb/s connection; I'd estimate that you'll accomplish it in a similar time. The 60 KB of "actual data" will translate to more data over the wire, but should still be easy on the network.
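A toy model shows why gigabit timings don't extrapolate linearly: on fast links, fixed per-transfer overhead (PLC handshakes, protocol round trips) dominates, while serialization time only matters on slow links. The 0.5 s overhead here is an assumed illustrative number, not a measurement:

```python
def transfer_time(payload_kb, link_mbps, fixed_overhead_s=0.5):
    """Fixed overhead plus serialization time for one transfer."""
    serialization = payload_kb * 8 / 1000.0 / link_mbps   # KB -> Mb, then Mb / (Mb/s)
    return fixed_overhead_s + serialization

# One machine's 60 KB recipe at gigabit, 100 Mb/s, and 4 Mb/s:
for mbps in (1000, 100, 4):
    print(mbps, round(transfer_time(60, mbps), 3))
```

Under this model the gigabit and 100 Mb/s times are nearly identical (overhead dominates), and even at 4 Mb/s the 60 KB payload only adds about 0.12 s of serialization time.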
Other bandwidth considerations that you’ll have are: Vision client downloads, Vision project downloads, and large historical reads (like viewing graphs over huge date spans). These can be mitigated with local gateways.
- Dedicated bandwidth or QoS priority would go a long way for batch performance. You really need to determine what your "high priority" traffic is. QoS, in this case, should be more focused on prioritization and less on "shaping". It doesn't matter if the whole pipe is being used, as long as the traffic that matters is making it through.
- Are you updating your Vision project frequently? Do you expect to have new computers loading Vision clients often? Will you be upgrading your Vision gateway's software version often? Every "yes" here means more traffic. A local Vision gateway keeps all of this on the LAN segment.
Again, my best advice is to give it a shot and test the performance on your system.
This seems to preclude the hub-and-spoke design. What is the problem here? Does the customer not want extra physical boxes? We use VMware to great effect to overcome this. You can easily put an Ignition server and client on the same box; to keep things separate, you can virtualize one or both of them.
I'm sure a small DIN-rail-mounted x86 computer would suffice.