In one of our projects we have:
- A new Ignition system for storing records
- The old system doing the same.
We have set up a transaction group to store the data in the same manner as the existing old system does.
There are 2 tags in the PLC, one to request each SCADA system to store the data.
Both work the same way:
0 = idle, 1 = SCADA to store data, 2 = SCADA has stored the data.
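The handshake described above can be sketched roughly like this (the function and tag names are illustrative only, not actual Ignition or PLC APIs):

```python
# Hypothetical sketch of the 0/1/2 handshake. The PLC raises the trigger
# to 1; the SCADA stores the record and answers by writing 2.

IDLE, STORE_REQUEST, STORED = 0, 1, 2

def scada_poll(trigger, store_record):
    """One polling cycle on the SCADA side: store on 1, acknowledge with 2."""
    if trigger == STORE_REQUEST:
        store_record()          # e.g. the transaction group inserting a row
        return STORED           # handshake back to the PLC
    return trigger              # leave 0 or 2 untouched

stored = []
trigger = STORE_REQUEST
trigger = scada_poll(trigger, lambda: stored.append("row"))
# After one cycle the record is stored and the tag reads 2 (STORED).
```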
Every so often, at random intervals, the new Ignition system misses a record while the old system does store the record.
There is no error in the Ignition system log to say the transaction group failed.
Ignition is communicating with the S7 PLC via Kepware.
Any suggestions on this?
How are the groups setup? What version of Ignition do you have?
To be clear, you have two Ignitions set to look at the exact same address, in the same plc, and the group writes a handshake to the item to say that data has been stored? If so, could it simply be that one of the machines writes the row and the handshake before the other ever sees the 1? Or do you actually have separate systems/tags set up so they’re each doing their own thing?
And by “old system”, you mean FactorySQL, or Ignition on a different machine?
By old system I mean WinCC on a separate PC with its own trigger, which is /Report1/ControlPC2.
The new system is an Ignition redundancy setup with two separate servers, version 7.1.x.
Yes, each system has its own trigger tags in the PLC.
The group is a standard group with the trigger being /Report1/ControlPC1 on the Ignition side.
This only happens once in a blue moon; however, it is a problem, as these transactions are then used for accounting.
How is the triggering on the group set up? How quickly is the group set to run? The PLC sets the bit from 0->1, and expects Ignition to set it to 2, but does it change it on its own after some time if nothing happens? After setting the value to 2, what’s the quickest that the bit might go back to 1 again?
If easier, just right click on the group, export to xml, and post it here. Otherwise, what I’m really interested in is 1) whether “only execute group once while trigger is active” is selected and 2) the rate of the group vs. the rate at which values change.
Won't be onsite till next week to post an XML.
The group is set to run every 1000ms.
If Ignition does not set the tag to 2, there is a 60 second delay in the PLC before it goes back to 0.
If the tag goes from 1 to 2 and a new record is made available by the PLC, it could go back to 1 within 10 PLC scan cycles, i.e. 100ms.
The group is set to “only execute group once while trigger is active”; this is to stop duplicate values in the database.
Some more description on the group.
There are 2 train loading bays that deliver peat.
Each train delivery is recorded with total tonnage etc.
It may be the case that both deliveries arrive at the same time.
This is taken care of in the PLC: bay 1 takes precedence to send its values to Ignition.
Once bay 1 has sent its values, bay 2 sends its values (this is the 100ms delay between both transactions).
I could upload the project here if I knew how.
Something like this is what I was looking for. Take a look at your missed values vs. the values that were recorded with WinCC - would they fit this description? That is, did the missed row come in <1sec after the previous row?
This is an issue that comes up from time to time with the “only execute once…” option. The problem is, the group must see the trigger value come back from the PLC low before it will allow the group to run again. So, even though you’re writing a different value to that tag, unless the group runs and sees that value (or 0), it won’t run again.
This situation comes up enough that we know we should do something to make it easier to work around. But, given the nature of things, there are few sure fire solutions. One possibility would be to simply run the group more quickly, like 100ms. If you didn’t want to subscribe all the values at that rate, you could set the OPC mode to “Read” on options, so that only the trigger was subscribed, and the other values were manually read when triggered.
Anyhow, try to confirm that this hypothesis is valid, and we’ll go from there.
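The failure mode above can be simulated in a few lines (this is only an illustration of the polling logic, not Ignition's actual implementation): if the trigger dips to 2 and is set back to 1 between two 1000ms polls, the group never observes it low and never re-arms.

```python
# "Only execute once while trigger is active", polled at a fixed rate:
# the group fires on the first 1 it sees, then must see a non-1 value
# before it is allowed to fire again.

def run_group(samples):
    """samples: trigger value observed at each poll. Returns rows stored."""
    rows = 0
    armed = True                    # group may fire only after seeing non-1
    for value in samples:
        if value == 1 and armed:
            rows += 1               # group executes, writes handshake
            armed = False           # blocked until trigger is seen low
        elif value != 1:
            armed = True            # trigger observed low -> re-arm
    return rows

# Two deliveries with a visible gap between triggers: both stored.
assert run_group([0, 1, 1, 0, 1, 0]) == 2
# Two deliveries, but the ~100ms dip to 2 falls between 1000ms polls,
# so every poll sees 1: only one row is stored, the second is missed.
assert run_group([0, 1, 1, 1, 1, 0]) == 1
```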
What I have done to shed some light is trend the 2 transaction group trigger tags for both PCs.
Both tags change at approximately the same time.
It seems to be a problem with the transaction group itself; even if I use the settings that Colby has mentioned, I have the same problem.
However, I have found a new problem with the transaction groups.
There are several Float4 type tags in the transaction group. It seems that the transaction group rounds the values to the nearest 10 before placing them into the database.
I.e. the value coming from the PLC to the transaction group is 278563.9,
and the value stored in the database is 278560.
It only seems to do this on larger numbers; e.g. another field has numbers in the range 0.0 to 1200.0, and they work correctly.
It does this in all transaction groups.
What I've tried then is to manually enter a record with correct Float4 values into the database, which worked.
Seems like there are some issues to be worked out here.
I am now thinking of moving away from the transaction group and using a gateway script to store the values from the PLC into the database using the runQuery() command.
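A gateway-script replacement might look roughly like the sketch below. The tag and database calls are stubbed here so the logic runs standalone; inside Ignition you would swap in the tag-read/write and database functions for your version (the tag paths, table and column names are only assumptions based on this thread). Note that an insert would normally use a parameterised update function rather than runQuery().

```python
# Hypothetical gateway-timer script: store a row when the trigger is 1,
# then write 2 back as the handshake. All system.* calls are stubbed.

tags = {"[default]Report1/ControlPC1": 1,       # trigger raised by the PLC
        "[default]Report1/Tonnage": 278563.9}   # example process value
db_rows = []

def read_tag(path):                 # stand-in for an Ignition tag read
    return tags[path]

def write_tag(path, value):         # stand-in for an Ignition tag write
    tags[path] = value

def run_prep_update(query, args):   # stand-in for a parameterised insert
    db_rows.append((query, tuple(args)))

def on_timer():
    """Run on a fast gateway timer; store a record when the trigger is 1."""
    if read_tag("[default]Report1/ControlPC1") == 1:
        tonnage = read_tag("[default]Report1/Tonnage")
        run_prep_update("INSERT INTO report (Cum_Weight_After) VALUES (?)",
                        [tonnage])
        write_tag("[default]Report1/ControlPC1", 2)   # handshake back to PLC

on_timer()
```

Running the timer faster than the PLC can re-raise the trigger would avoid the missed-handshake window discussed earlier in the thread.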
Also, I upgraded the system to 7.2.3 while I was on site. I noticed that the redundancy has now changed to a master/slave setup rather than the clustering type; why is that? I preferred the older clustering mode - is that gone altogether?
It might be possible that you’re running into something similar to what was described in this thread. Try changing your column types to double instead of float (in the database, not necessarily in Ignition - or if you change in Ignition, you’ll still need to change the database by hand).
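The underlying limitation is that a single-precision float carries only about 7 significant decimal digits, so a six-digit tonnage value has almost no digits left for the fractional part. A quick round-trip through IEEE-754 single precision shows the effect (the exact value your database displays may differ, but the digit budget is the same):

```python
import struct

def as_float32(x):
    """Round-trip a Python double through IEEE-754 single precision."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

value = 278563.9
stored = as_float32(value)
# At this magnitude float32 values are spaced 1/32 apart, so 278563.9
# cannot be held exactly; the nearest representable value is 278563.90625.
assert stored != value
assert abs(stored - value) < 0.05     # error within one float32 step

# A value in the 0-1200 range has plenty of digits to spare,
# so the error is far below anything a report would notice:
assert abs(as_float32(1199.5) - 1199.5) < 1e-3
```

A DOUBLE column (or an Ignition double) sidesteps this entirely, which matches the fix that worked later in the thread.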
Did you observe any pattern of when the rows were missed? That is, does it seem to happen in cases like I described where the second value comes very quickly after the first?
Yes, it’s gone. You will be recorded as the only person to ever compliment that system. Not that it didn’t work, but it was focused on solving problems that our customers weren’t really having (load balancing), while not handling well the cases they were trying to solve (faster failover, better history handling). Also, it was generally very hard to troubleshoot when not working.
If you have any particular points that you miss and would like to see re-evaluated, please feel free to post them over on the feature request board.
One other thing I just thought of to check: are the items in the group SQLTag reference, or OPC Items? That is, if you double click on one of them, do you see the SQLTag path, or do you get a window that shows you the opc server and opc path?
I am using SQLTags, as I had problems with the groups not working with OPC tags.
There is no pattern to when a record is missed.
The time between records has always been longer than 1 minute. I have now increased the group execution time from 1 second to 5 seconds. All the SQLTags have an update rate of 1 second.
Will have to wait and see what happens.
Back to the new Redundancy mode as of version 7.2.3:
- I don’t like how it doesn’t let you use the designer when the master is down and the slave has become the temporary master.
- That there is no clustering mode with load balancing.
- Since the new change in Ignition, is a complete replication of the database still required, or does Ignition now take care of this? I’m sure it wouldn’t be hard to get the master to write the same items to both DBs automatically.
- A better manual for configuring the redundancy mode.
I apologise if I put the above in the wrong section.
Such as? I think we need to get to the root of the problems before we can start building on top of them. I would definitely NOT recommend using SQLTags references for a set up like this- then you’re introducing double polling with the scan class below the group. And since using OPC items in the group is the normal way groups are set up (well, I shouldn’t use terms like “normal”, but let’s say- “most common”), I don’t think it’s a general problem, and would be very interested in tracking down what was going on.
You said that you have it set to “only execute once” to prevent duplicate records. This is understandable, as even though you’re writing at the end of an execution, you might not see the value on the next cycle (especially when using SQLTags). There are a few strategies, though, that you can try:
- First, switch back to OPC items. This will put the group directly in control of the subscription rate, and will be easier to troubleshoot in terms of timing.
- On the “options” tab, set the OPC subscription rate to 1/3 of the group rate. That way, when you write the handshake, you should definitely see it back in the group by the next execution.
- Turn off the “only execute once” option. If you get multiple rows, something is going wrong with your handshake, or the comm is really slow.
- To reduce the chance of multiples, you can enable the “only evaluate when values have changed” option, and then change the watch list to “Custom” and select the tags you expect to change (including the trigger tag). This is in addition to the normal trigger. This way, it only goes on to evaluate the trigger condition if one of your selected tags has changed since the last execution. This is similar to the “only execute once” option - but that is only checking the trigger. This way, if the trigger stays at 1 for 2 cycles but the other values change, it will log (this is what would happen if you wrote 2 and the PLC immediately set it back to 1 because it had more data, for example).
I would suggest setting up a test like this pointed to a different test table, and observing the results. Let me know how it goes. If things aren’t working right, perhaps we could set up a time to get on GoToMeeting to look at it (something that works with the time difference).
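The difference between the last two strategies can be illustrated with a small simulation (again, only a model of the polling logic, not Ignition itself): with a custom change-watch, the group still logs when the trigger stays at 1 across polls but a watched value has changed - exactly the bay-1/bay-2 back-to-back case.

```python
# Compare "only execute once while trigger is active" against a custom
# change-watch that fires whenever the trigger is 1 AND a watched value
# changed since the last poll.

def run_group(samples, watch_values=None):
    """samples: trigger per poll. watch_values: watched tag value per poll."""
    rows, armed, last_watch = 0, True, object()
    for i, trigger in enumerate(samples):
        if watch_values is not None:
            changed = watch_values[i] != last_watch
            last_watch = watch_values[i]
            if trigger == 1 and changed:
                rows += 1            # change-watch: fires on new data
        else:
            if trigger == 1 and armed:
                rows += 1            # plain "only execute once" behaviour
                armed = False
            elif trigger != 1:
                armed = True
    return rows

triggers = [0, 1, 1, 0]              # trigger seen at 1 on two polls in a row
tonnage  = [0, 410.5, 623.2, 623.2]  # but the tonnage changed in between

assert run_group(triggers) == 1              # second record missed
assert run_group(triggers, tonnage) == 2     # change-watch catches it
```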
Are you suggesting it is always better to use OPC Items rather than SQL Tags in transaction groups? My initial thoughts were as long as there are no timing issues, and if you were already using the same SQL Tags in Vision, that it would be more efficient to re-use SQL Tags for transaction groups.
We are in the process of converting our RSView projects to Vision, and upgrading our Factory SQL to SQL Bridge. Much of the same data we’ll use in Vision will also be used in SQL Bridge for data collection. Until I read this post, I was going to use SQL Tags wherever possible in our transaction groups. If the SQL Tags scan rate was too slow for a particular tag, I would instead then use an OPC Item for that tag only. Is this a good approach, or should we use all OPC Items?
The only real “problem” is that, as you’re aware, SQLTags are executed by their scan classes, and the groups run on a separate timer.
On the flip side, there isn’t exactly a real performance boost to switching things over to SQLTags. You’ll reduce the number of tags subscribed, but the OPC server will coalesce the multiple subscriptions for a particular device address, so it’s not a big deal.
Furthermore, if you are importing from FactorySQL, it will import them as OPC Items. Going through and switching all of these over to SQLTags would probably be a lot of work for not much gain.
But to answer your question, it’s not really “bad” to use SQLTags in groups.
The server setup is a redundancy setup, with full MySQL replication and auto recovery.
It talks to a Siemens S7-317 software PLC redundancy via Kepware.
As discussed in a previous post, I have to check which PLC is the master and swap a third group to whichever is the master on the fly by changing the OPC (Kepware) IP address access.
If I’m switching back to OPC tags, which of the following do I use:
the Kepware OPC reference,
or the Ignition OPC reference?
Kepware’s polling rate is 1000ms for all tags, except for the redundancy tags, which are 300ms so a PLC redundancy swapover can take place without all the other tags becoming stale.
In Kepware I do see in the log that Ignition keeps renewing its connection; this doesn’t seem to be the problem though.
Since my last change on Monday 07 March there have been no records missed.
I think the problem was that the sampling rate of the SQLTags was 1 second and the execution rate of the group was also 1 second.
Going back to college days, I remember that the sampling rate needs to be at least twice the frequency of the signal being recorded in order to get a true replication.
With this in mind, I hope that changing the execution time to 5 seconds will solve this issue.
Never thought maths/physics would come back to haunt me in HMI design.
Time will tell.
Since the tags are coming from kepware, if you were switching to direct OPC items, you would use the items under kepware in the OPC browser, not under “Ignition”.
You shouldn’t see connections very often. It could be that something is causing the connection to re-establish itself unnecessarily - which could lead to problems with data loss, as the transaction groups won’t trigger if the trigger quality is bad.
Try going into the OPC server connection configuration in Ignition and turning off the “use keepalives” option. We’ve seen this cause false fault detections before. After turning it off, see if it reduces the number of connection attempts from Ignition.
Seems like the issue is resolved:
If using SQLTags in transaction groups, the scan rate of the SQLTags has to be at least twice as fast as the scan rate (execution rate) of the transaction group.
The floating point error was resolved by using doubles in the MySQL database, using the command:
ALTER TABLE report MODIFY Cum_Weight_After DOUBLE;
Great, thanks for the followup!
It seems that this problem has recurred.
I will try to convert all the groups back to OPC items in the next couple of days and see if that fixes it.
The issue is still outstanding here.
I switched all tags to be OPC Items.
I noticed that I could only run the transaction group at 1 second intervals, as the tags in the Kepware server are at 1 second intervals.
If I set a different time on the transaction group, it will not read from nor write to the OPC items.
It now sometimes creates duplicate records and still sometimes misses a record - approximately 1 a day.
In total there are only about 30-40 records a day across all transaction groups (5 of them), so it’s not like the system is exceptionally busy.
Really need help here.