Tags and Folders

I think this was answered here before, but I can't seem to locate the topic, so I thought I would ask again and either be redirected to the proper topic or have it answered here.

Right now I have FSQL set up with a folder representing each machine; under that folder are the other folders that contain the tags and such for each function I am monitoring. So here are a few questions pertaining to tags and folders.

Let's say you have two folders, each reporting back to a different database for that particular function. If you duplicate a tag in the two folders, will that tag need to be read twice (once for each folder), or will it know that the tag was recently read and won't need to re-read it?

I think the following is the one question that was asked already, but is there a way to make a folder not read unless a certain event occurs? I would like to have a folder that only runs once or twice a day, so the tags in that folder would not be actively polled throughout the day; they would only be read when I want that data. Can something like this be done?

Along the lines of this program, if there are a couple of tags that are shared between two or more tables in a database, would it be best to:

a: Bring it in as a kind of virtual tag, then have each of the folders retrieve the information from the virtual tag.

b: Put the tag in each folder and don't worry about it.

c: Find a way to make these two folders into one folder.

Which would be the best and recommended approach? I am trying to minimize wire usage (I think that is the correct term). Any thoughts would be appreciated. Thank you.

In my OPC software, which is KepServer, there is an error bit in the system tags that tells me when a machine is properly connected and reporting back. Would this be the best bit to monitor to see if a machine is online or loses connection? Or is there a better way to do this?

  1. Having the same OPC item in several FSQL groups isn’t a problem. There are several layers of abstraction between the read request and polling the PLC. In fact, most OPC servers keep track of this sort of thing and act as a cache. FactorySQL works similarly.

  2. A FactorySQL group that is set to use a trigger will only evaluate the items after the trigger is evaluated AND the condition is true. You can use an Action Item that is a DB query that triggers at certain times. Similarly, you could really slow the group down so that it only runs every so many hours. Are you also saying that you want the user to be able to force a data update?

  3. There are several ways of sharing data that vary depending on what you’re trying to accomplish. I’ll comment on each of your ideas.
    a. Virtual tag - I think this would refer to using a field in the SQL database without mapping an OPC tag. This is an “Action Item” in FactorySQL talk. There are many applications where this makes sense.
    b. Put the tag in each folder and don’t worry - hard to say yes or no to this. In many cases you’re perfectly fine doing that.
    c. Make two folders into one - That really depends on how well your data fits a common template. Your only constraint is that a group writes entirely to one database table. Nothing stops you from using one folder with two groups, one per machine, each writing to its own table. Better yet (in many cases), have two groups that write to different rows in the same table for both machines.
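For what it's worth, here is a minimal sketch of option (c) with two groups writing rows to one shared table. The `production` table and its columns are hypothetical illustrations, not an actual FactorySQL schema:

```python
import sqlite3

# Hypothetical "two groups, one table" layout: each group logs a row for its
# machine, distinguished by a machine column, instead of one table per machine.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE production (machine TEXT, t_stamp TEXT, part_count INTEGER)"
)

def log_group(machine, t_stamp, part_count):
    """Simulates one group insert for one machine."""
    conn.execute(
        "INSERT INTO production VALUES (?, ?, ?)", (machine, t_stamp, part_count)
    )

log_group("Machine1", "2008-01-01 13:00:00", 42)
log_group("Machine2", "2008-01-01 13:00:00", 37)

rows = conn.execute(
    "SELECT machine, part_count FROM production ORDER BY machine"
).fetchall()
print(rows)  # [('Machine1', 42), ('Machine2', 37)]
```

Reporting then becomes a simple `WHERE machine = ?` filter rather than a query per table.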

It’s pretty hard to generalize how to be efficient without the discussion getting more specific. A few things to consider:

  1. What’s your bottleneck when discussing “wire usage”? Is it expensive whenever you’re up (dialup) or constantly slow (satellite/leased line)? Is network traffic to/from the SQL database expensive?
  2. Can you consolidate your traffic? Using block data groups to do array reads can make a huge difference, for example.

This is typically a matter of taking a big step back and looking at your architecture. Does it make sense for the computer to talk to one PLC or to a PLC at each machine? At what points are communications slow, and at what points fast? How does the system behave in terms of fault tolerance?

You can monitor the Kepware bit for status. You might consider using Alerting to send an email (text message) and log it to the database. You can also monitor the FSQL status table.

Mark,
I just noticed your post on plctalk about synchronizing PLC clocks. I’m not sure if this post relates to that or not. I responded to it.

I did make a post on PLC.net about the clock, and in some respect this does pertain to that post. Let's first get some of the formalities out of the way: 1. This is the first time I have done any kind of project like this. 2. I have not had a lot of formal training on this type of application, which is why I ask a lot of questions.

So now here is the scenario. We currently have 4 machines wired up, and this is what we are using for our testing, designing, and debugging steps. When this project takes off we will have up to 100 machines. From what I can understand, Ethernet is a parallel type of scheme. I also know how to get into the diagnostics and such of KepServer, thanks in most part to David Garris from KepWare. If what I see and can translate from my limited knowledge is correct, it takes about .5 to .6 seconds to read through the 248 tags I currently have, although not all are active at the same time. What I have done is looked at the diagnostics, gone from one TX to the next TX that contained the same data string, and taken the difference of the times. So with that in mind, and the fact that I am told the scheme is parallel, how will 100 machines differ from what we are seeing when we are only running 4?
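The TX-to-TX measurement described above can be sketched in a few lines. The diagnostic entries below are made up for illustration (the hex string is shaped like a Modbus read request, not actual KepServer diagnostics output):

```python
from datetime import datetime

# Made-up diagnostic log: (timestamp, direction, hex payload). The poll cycle
# is estimated from two TX entries carrying the identical request.
log = [
    ("12:00:00.100", "TX", "01 03 01 78 00 37"),
    ("12:00:00.250", "RX", "01 03 6E ..."),
    ("12:00:00.700", "TX", "01 03 01 78 00 37"),  # same request repeats
]

fmt = "%H:%M:%S.%f"
tx_times = [datetime.strptime(t, fmt) for t, d, p in log
            if d == "TX" and p == "01 03 01 78 00 37"]
scan_period = (tx_times[1] - tx_times[0]).total_seconds()
print(scan_period)  # 0.6
```

Note that this measures the interval between polls, not how long one read actually takes; that distinction comes up later in the thread.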

I am told that running 100 vs. 4 will not cause a significant increase, which by that statement means that an increase will occur. So how can one be sure that this increase will not cause problems as the system grows?

As for the time issue, I know how I can do what I want, but it would add more tags to my usage, and if that is the case it would also possibly affect my scan time, or at least I think it will. Also, if I am going to sync up the clocks on the PLCs with the system clock, how can I take this slight lag time into account so that each and every clock is set identically?

I ask this because our data processing staff is building a VB program to take the data, which I am putting there via the IA software, and they will run this application every hour on the hour. So anything from, say, 1:00:00 to 1:59:59 will be in the hourly production report. If my clocks are off by a few seconds, this might affect their application. Also, since a lot of our production equipment is fast paced, this time might make a slight difference in the employees' performance. These are all things I have to take into consideration.
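The hourly windowing described here, and why a few seconds of clock skew matters, can be sketched with an in-memory table (the table and column names are hypothetical):

```python
import sqlite3

# Sketch of hourly-report windowing: a record stamped a couple of seconds
# late by a skewed PLC clock falls into the next hour's report.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hourly (t_stamp TEXT, parts INTEGER)")
conn.executemany("INSERT INTO hourly VALUES (?, ?)", [
    ("2008-01-01 01:59:59", 10),  # belongs to the 1:00 hour
    ("2008-01-01 02:00:01", 10),  # same kind of event, clock 2 s fast
])

one_oclock = conn.execute(
    "SELECT COALESCE(SUM(parts), 0) FROM hourly "
    "WHERE t_stamp >= '2008-01-01 01:00:00' AND t_stamp < '2008-01-01 02:00:00'"
).fetchone()[0]
print(one_oclock)  # 10 -- the second record slipped into the 2:00 report
```

The totals across all hours still balance out; skew only shifts counts across the report boundary.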

Now onto what you said.

You said that if you have a group which uses a triggered event, that group will only be active when the trigger presents a true statement. Does this mean that even though your software has not requested any data from these tags, the OPC server (KepServer) is still monitoring them? Or does KepServer only pick out the information from these tags when it has been told to do so by your software?

As for blocks, yes, I am doing blocks; however, I am not quite clear on them. It has always been my training and knowledge that when I write a program in a PLC, I group together the things that are going to be I/O. So in my PLC program, all registers that report back to the software are in sequential order (R376 - R430); some are strings, some are just DINTs, and they are grouped in that respect as well. Likewise, I did the same with the %I inputs, %Q outputs, and %T temp bits. In the KepWare program I have the block data groups turned on. Now, I have asked KepWare if it makes a difference between using this block data groups check box and using an array. They seem, to me at least, to say that it would make no difference in scan time because both do the same thing; one is just easier to work with, which is why it was added.

I just want to make sure I understand as much as possible about how things work and interact with each other. That way, should something arise, from dear ole MURPHY'S LAW, I can work around it and not get my butt bit off.

Thanks again for responding so promptly. Have a great day.

[quote=“nathan”]1. Having the same OPC item in several FSQL groups isn’t a problem. There are several layers of abstraction between the read request and polling the PLC. In fact, most OPC servers keep track of this sort of thing and act as a cache. FactorySQL works similarly.

  2. A FactorySQL group that is set to use a trigger will only evaluate the items after the trigger is evaluated AND the condition is true. You can use an Action Item that is a DB query that triggers at certain times. Similarly, you could really slow the group down so that it only runs every so many hours. Are you also saying that you want the user to be able to force a data update?

  3. There are several ways of sharing data that vary depending on what you’re trying to accomplish. I’ll comment on each of your ideas.
    a. Virtual tag - I think this would refer to using a field in the SQL database without mapping an OPC tag. This is an “Action Item” in FactorySQL talk. There are many applications where this makes sense.
    b. Put the tag in each folder and don’t worry - hard to say yes or no to this. In many cases you’re perfectly fine doing that.
    c. Make two folders into one - That really depends on how well your data fits a common template. Your only constraint is that a group writes entirely to one database table. Nothing stops you from using one folder with two groups, one per machine, each writing to its own table. Better yet (in many cases), have two groups that write to different rows in the same table for both machines.

It’s pretty hard to generalize how to be efficient without the discussion getting more specific. A few things to consider:

  1. What’s your bottleneck when discussing “wire usage”? Is it expensive whenever you’re up (dialup) or constantly slow (satellite/leased line)? Is network traffic to/from the SQL database expensive?
  2. Can you consolidate your traffic? Using block data groups to do array reads can make a huge difference, for example.

This is typically a matter of taking a big step back and looking at your architecture. Does it make sense for the computer to talk to one PLC or to a PLC at each machine? At what points are communications slow, and at what points fast? How does the system behave in terms of fault tolerance?

You can monitor the Kepware bit for status. You might consider using Alerting to send an email (text message) and log it to the database. You can also monitor the FSQL status table.[/quote]

Hi, a few observations after reading this thread:

FactorySQL does not directly read data from your PLC. It subscribes to tags from the OPC server. It is up to the OPC server how to read the tags from the PLC. Don't worry about putting the same tag in two different groups; you aren't incurring a double read from the PLC because of how OPC subscription works.

Please explain further how you’ve arrived at your ‘time per scan’ number. Don’t confuse a polling scheme with time to do actual work. My point is - KepWare is probably polling the PLC every .6 seconds - it isn’t taking .6 seconds to do the actual work of reading the tags. There is a huge conceptual difference here as it pertains to scalability.
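Carl's point about a poll cycle vs. time actually spent on the wire can be put in toy numbers. Only the 0.6 s figure comes from this thread; the byte count and link speed below are assumptions for the sake of the arithmetic:

```python
# Toy numbers illustrating the difference between a poll cycle and actual
# time on the wire. The byte count and link speed are assumed, not measured.
poll_period = 0.6            # seconds between scans, as measured above
bytes_per_scan = 300         # rough guess for a few hundred mixed-type tags
wire_rate = 10_000_000 / 8   # 10 Mbit/s Ethernet, in bytes per second

time_on_wire = bytes_per_scan / wire_rate  # seconds the traffic occupies
utilization = time_on_wire / poll_period   # fraction of the cycle used
print(f"{utilization:.4%}")  # 0.0400% -- the wire is idle almost all the time
```

Even multiplying that by 100 machines leaves the network nearly idle; the PLC's response time, not bandwidth, dominates the 0.6 s.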

I wouldn’t worry about the scalability of Ethernet wire traffic going from reading a few hundred tags from 4 PLCs to reading 100 PLCs unless you’re polling unbelievably fast. Ever watched any streaming video on the web? You probably put a much larger strain on the wire doing that than polling PLCs. Don’t worry - modern networking technology has progressed much faster than manufacturing communication technology.

Hope this clears some things up,

Well, as I mentioned, I am very new at this technology; this is the first time I have tried to accomplish a task of this magnitude and complexity, so the information I have is very limited. I did send a copy of the diagnostics to KepWare, where I was helped toward a little knowledge about it, so I might not be naming things correctly given my limited understanding at this point. But the way KepWare made it sound was: you start KepServer, then you start the diagnostics, then you start the client. From the time you start the client, the first thing that happens is the client attempts to read all the tags to check validity and such. If you watch the diagnostics during this time, take the time from a TX (with its hex parameters) to a TX with identical hex parameters, and subtract, that will give you an idea of how long it takes to get the data from the PLC, through the server software, and to the client. Or at least that is what I understood; being very new to this, I probably misunderstood. So that is where I got the .5 to .6 from. And since it is TX to TX with some RX in the middle, I naturally assumed that was the scan time.

I do know from what I have read that there are a lot of factors involved in scan time: things such as whether the tags are grouped together, whether you read as an array, what type of tag you are watching (string, DINT, INT, Boolean, etc.), the scan time of the PLC, and the type of connection you are using to gather the data. Am I correct so far?

Here is what we are using: a Horner XLE102 PLC with the Ethernet adapter on it. Now, the Ethernet is not true Ethernet; it is actually serially encapsulated Modbus TCP/IP, or something similar to that name. Their web site is heapg.com. If this is not the information you were looking for, let me know and I will do my best to describe it in better detail.

[quote=“Carl.Gould”]Hi, a few observations after reading this thread:

FactorySQL does not directly read data from your PLC. It subscribes to tags from the OPC server. It is up to the OPC server how to read the tags from the PLC. Don't worry about putting the same tag in two different groups; you aren't incurring a double read from the PLC because of how OPC subscription works.

Please explain further how you’ve arrived at your ‘time per scan’ number. Don’t confuse a polling scheme with time to do actual work. My point is - KepWare is probably polling the PLC every .6 seconds - it isn’t taking .6 seconds to do the actual work of reading the tags. There is a huge conceptual difference here as it pertains to scalability.

I wouldn’t worry about the scalability of Ethernet wire traffic going from reading a few hundred tags from 4 PLCs to reading 100 PLCs unless you’re polling unbelievably fast. Ever watched any streaming video on the web? You probably put a much larger strain on the wire doing that than polling PLCs. Don’t worry - modern networking technology has progressed much faster than manufacturing communication technology.

Hope this clears some things up,[/quote]

[quote=“mrtweaver”]I do know from what I have read that there are a lot of factors involved in scan time: things such as whether the tags are grouped together, whether you read as an array, what type of tag you are watching (string, DINT, INT, Boolean, etc.), the scan time of the PLC, and the type of connection you are using to gather the data. Am I correct so far?

Here is what we are using: a Horner XLE102 PLC with the Ethernet adapter on it. Now, the Ethernet is not true Ethernet; it is actually serially encapsulated Modbus TCP/IP, or something similar to that name. Their web site is heapg.com. If this is not the information you were looking for, let me know and I will do my best to describe it in better detail.
[/quote]

This all sounds legit. My point is - however long it is taking to read all the tags - the bottleneck is the PLC, not the network. Your question was about wire traffic. Ethernet is parallel in the sense that your PLC’s slow response isn’t blocking other traffic, so I wouldn’t worry about wire traffic.

Here is a thought I had while pondering what I have read and learned thus far from the forums and from your tech support. As you may have gathered from reading this post, we will eventually have 100 machines online. So here is a scenario that I thought might be good. We generally do not run all 100 machines at one time; usually we run 50 or fewer. I also have a logic bit that tells me when they want to run a particular machine center. And there is an enable bit in KepServer's system tags: if I turn this bit off, it disables that device so it would not be polled. What if I tie my logic bit to this enable, so that the only machines being polled and read from are those that are enabled? Does this sound like a plausible idea?
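The enable-bit idea sketches out simply; the dictionary below stands in for KepServer's per-device enable tags, which this illustration does not actually touch:

```python
# Sketch of the enable-bit idea: only poll devices whose run-request bit is
# set. The dict is a stand-in for per-device enable tags in the OPC server.
machines = {
    "Machine1": {"enabled": True},
    "Machine2": {"enabled": False},
    "Machine3": {"enabled": True},
}

def scan_once(machines):
    """Return the machines that would actually be polled this cycle."""
    return [name for name, m in machines.items() if m["enabled"]]

polled = scan_once(machines)
print(polled)  # ['Machine1', 'Machine3']
```

With 50 of 100 machines idle, roughly half the polling traffic disappears before any other tuning.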

[quote=“Carl.Gould”][quote=“mrtweaver”]I do know from what I have read that there are a lot of factors involved in scan time: things such as whether the tags are grouped together, whether you read as an array, what type of tag you are watching (string, DINT, INT, Boolean, etc.), the scan time of the PLC, and the type of connection you are using to gather the data. Am I correct so far?

Here is what we are using: a Horner XLE102 PLC with the Ethernet adapter on it. Now, the Ethernet is not true Ethernet; it is actually serially encapsulated Modbus TCP/IP, or something similar to that name. Their web site is heapg.com. If this is not the information you were looking for, let me know and I will do my best to describe it in better detail.
[/quote]

This all sounds legit. My point is - however long it is taking to read all the tags - the bottleneck is the PLC, not the network. Your question was about wire traffic. Ethernet is parallel in the sense that your PLC’s slow response isn’t blocking other traffic, so I wouldn’t worry about wire traffic.[/quote]

Sure, that's fine. You would also want to use that enable bit as the trigger for your FactorySQL groups.

I would be remiss however if I did not bring this up: In computer science there is an old adage,

[quote=“Donald Knuth”]Premature optimization is the root of all evil.[/quote]

Or, put another way,

[quote=“Michael A. Jackson”]The First Rule of Program Optimization: Don’t do it.
The Second Rule of Program Optimization (for experts only!): Don’t do it yet.[/quote]

I didn’t make these up - see en.wikipedia.org/wiki/Optimizati … er_science

The point is: you're prematurely optimizing what you suspect will be a bottleneck (wire traffic). However, in reality, as you scale a system up, more often than not it is something you didn't suspect that becomes the bottleneck. Moral of the story: take steps to optimize only when you actually know you have something to optimize.

Hope this helps,

Carl is dead on about optimization, which really has been the focus of this topic. Optimization refers to modifying a system or system behavior and often involves trade-offs; it is best done to resolve a known problem. There are certainly such things as good techniques and a scalable architecture, both of which are important considerations when planning a large system like the one you've mentioned. Your questions have been good and are focusing more and more on the important factors. I can give you a rough idea of where you might scale with our model, since that seems to be the underlying concern. Your performance often has more to do with design/configuration than hardware.

SQL Database - Frequent reads/writes of numerous tags from multiple sources can cause a lot of activity on the database. Many commercial solutions exist, like high availability, that can provide performance and redundancy. Our model uses a “scale out” approach (like amazon.com) so that you can seamlessly add databases on the backend. This helps you grow as you add lots of historical data, points, etc. Consult with SQL database vendors/admins/experts to scale performance as high as you need.

PLC I/O - The PLC/OPC Server combination has some nominal data rate that the device can read/write. This comes down to PLC and OPC vendors. You will eventually need to add/upgrade hardware as you grow.

Processing for FSQL/FPMI - Both FSQL and FPMI support high availability with a similar “scale out” model. You can add servers to either.

Network traffic - May be applicable for older (non-Ethernet) standards. Bandwidth is rarely the issue, although latency may be.

Nathan,

From my reading of the manuals, it appears to me that FSQL and FPMI are different in their approach to multiple nodes. As I understand it, an FSQL installation with multiple nodes has one node doing the work and the others ready to step in in case of failure; adding nodes will only increase the redundancy, not the performance. On the other hand, an FPMI installation with multiple nodes shares the load between all the nodes, giving both redundancy and scale-out.

Is this correct?

Yep, that's right. Having FactorySQL do load-balanced clustering would involve intelligent partitioning of your project, a delicate problem that we may tackle in the future.

FactoryPMI load balancing is much simpler since each client’s requests are independent of the other clients.

It should be noted that a human can fairly easily partition a FactorySQL project across two FactorySQL installations for manual scale-out if necessary.

Hope this helps,