3000 tags recipe download to PLC - best method

Hi,

I’ve been fighting with this for a while now, and I believe you might have some good advice on the best way to solve the issue.

At the moment in my DB I have 100 recipes x 100 steps x about 30 setpoints.
The columns in the DB are like [recipeNumber, stepNumber, sp1, sp2, sp3…sp30]

In my script I use a select query to get a single recipe dataset (100 steps x 30 setpoints).
I then run a for loop (the index is the step number) and write to the PLC with system.tag.writeAll(paths, values), where paths is something like PLCtag[rowNumber] and values holds the results of the for loop over the dataset.
This means I’m calling system.tag.writeAll(paths, values) 100 times, and it causes some issues with the GUI and the tag quality.

It was suggested that I use asynchronous threads, but I’m not sure it’s a good idea to start 100 separate threads while keeping the existing for loop, and I don’t have much of an idea what the code could look like.

Any idea / example?

Thank you!

Instead of calling writeAll() 100 times, simply concatenate the tagpath and values lists in your loop, then call it once after the loop.
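Roughly, as a sketch (the tag path format and the plain-Python stand-in for the recipe dataset are assumptions based on the description above):

```python
# Build ONE combined paths/values pair across all steps, then write once.
# 'steps' is a plain-Python stand-in for the recipe dataset:
# a list of (stepNumber, [sp1, ..., sp30]) tuples.
def build_write_lists(steps, path_fmt="[default]Recipe/Step%d/sp%d"):
    paths = []
    values = []
    for step_number, setpoints in steps:
        for i in range(len(setpoints)):
            # Hypothetical path layout; adjust to your real tag structure.
            paths.append(path_fmt % (step_number, i + 1))
            values.append(setpoints[i])
    return paths, values

# Inside Ignition (7.9 API) you would then issue a single call:
# paths, values = build_write_lists(steps)
# system.tag.writeAll(paths, values)
```

One call lets the tag system and driver batch the work instead of queueing 100 separate write requests.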

1 Like

Hi pturmel,

thank you for your reply!
It took you a minute to solve my weekly Python question :smiley:

I’ve just tried your version, but the tag quality still goes bad after, say, 10 seconds. I believe it’s the PLC driver getting too busy?

I think I should divide the paths/values into 3-4 parts and write the tags asynchronously in different threads at different times.
Should I use system.util.invokeAsynchronous?

Thank you

Yes, that's quite possible. Consider extracting all of the OPC Item Paths from the tags and using system.opc.writeValues() instead. And yes, run it from an asynchronous task, as that is a blocking operation. Using this will avoid the timeouts for tag writes. Do inspect the returned list of qualities to ensure everything was written successfully. The asynchronous task needs to use system.util.invokeLater to report results back to your UI. See this topic for more info:
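A rough sketch of that pattern, with the Ignition-specific calls injected as parameters so the quality-check logic stands on its own (the OPC server name and the wiring shown in the comments are assumptions):

```python
def write_and_check(item_paths, values, write_values, report_done):
    """Run this inside ONE background thread, e.g.:
        system.util.invokeAsynchronous(lambda: write_and_check(...))

    write_values: wrapper around the blocking OPC write, e.g.
        lambda p, v: system.opc.writeValues("Ignition OPC-UA Server", p, v)
    report_done:  must hop back to the GUI thread, e.g. by calling
        system.util.invokeLater(...) internally before touching components.
    """
    qualities = write_values(item_paths, values)
    # Inspect the returned qualities: collect any path that did not write cleanly.
    bad = [item_paths[i] for i in range(len(qualities))
           if not str(qualities[i]).lower().startswith("good")]
    report_done(len(item_paths), bad)
    return bad
```

Injecting the write and report functions keeps the GUI-thread rule explicit: the blocking write happens in the background task, and only the report callback crosses back to Swing.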

1 Like

Hi pturmel,

thank you for the suggestions.

I used the system.tag.readAll() function to get the OPC paths from the tags defined in my script (I hope it makes sense to do this), and then used system.opc.writeValues(), launched from an asynchronous task. But at the end of the write, about 20-25 seconds later, I still get that red blinking in the GUI, and the comms for that PLC go bad for about a second.

If I reduce the number of tags and download only 50-60% of my list, it does work fine.
I’m now wondering if there is a way to download 50% of the tags first, then wait and download the remaining tags, since the operator won’t be using the second group of tags for a long time.

EDIT: even with 50-60%, sometimes it doesn’t work well; could I be missing something else?

Do you have any other suggestions?

Thank you

If it can’t go any faster, then display a progress bar for the operator. Write 10% at a time, update progress, sleep for a few seconds, then do another 10%.

It seems your device is simply incapable of faster transfer.

How are you connected to the PLC? Over the PLC’s integrated port, or do you have an S7 CP (communication processor)?
OK, I’m assuming that you have a Siemens PLC…?

Unclear. The "DB" in the OP is a database, I believe.

Yup, you’re (probably) right…
When I see DB in capital letters somewhere, my brain sees :innocent: a DB block from a PLC…

Hi all,

Yes, I have a Siemens S7-314, and I’m connected to the CPU’s integrated Ethernet port using Ignition’s Siemens driver.
And yes, it’s always confusing using the DB acronym :grin:.

I like the idea of the 10% and the progress bar.
Should it again be 10 asynchronous tasks, with the sleep() function between them?

I’ll also try creating a copy of the data block in another Siemens PLC, to prove whether the issue is related to my script or to the Siemens PLC not being able to process all the data.

I never had a need to transfer 3000 tags at once to the PLC… :worried:
But you can try playing with the ‘Scan cycle load from communication [%]’ setting in the hardware configuration for the PLC:
By default, it’s at 20% (communication can take up to 20% of the cycle time).
You can go up to 50% if the scan cycle is not important. Or more…

1 Like

No, just one. Looping through the percentages. Sleeping a bit each time through the loop. (Threads launched by system.util.invokeAsynchronous() can safely sleep, but not interact with the GUI.) Be sure you follow the guidance in the linked topic above.
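One way that single-task loop could look, as a sketch only, with the Ignition calls injected as parameters so the slicing-and-sleep pattern stands alone (the number of parts, pause length, and progress hookup are all assumptions):

```python
import time

def staged_download(paths, values, write_fn, progress_fn, parts=10, pause=2.0):
    """Launch ONCE via system.util.invokeAsynchronous(lambda: staged_download(...)).

    write_fn:    e.g. lambda p, v: system.tag.writeAll(p, v)
    progress_fn: must use system.util.invokeLater internally to touch the GUI,
                 e.g. updating a progress bar with the percent complete.
    """
    n = len(paths)
    for k in range(parts):
        # Slice the k-th block (integer bounds avoid losing the remainder).
        lo = k * n // parts
        hi = (k + 1) * n // parts
        write_fn(paths[lo:hi], values[lo:hi])
        progress_fn(100 * hi // n)       # e.g. 10, 20, ... 100
        if hi < n:
            time.sleep(pause)            # safe here: this is not the GUI thread
```

Sleeping in this background thread gives the driver time to drain each block before the next one arrives, while the progress callback keeps the operator informed.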

1 Like

Thank you all,

Quick update from this morning tests.

I changed the Scan cycle load on the PLC from 20% to 50%, but it didn’t change the result much.
I divided the list into 3 big blocks and used the sleep function (I went up to 7 seconds between writes), but I still get the issue sometimes.

I’ve now noticed that not all PLC tags are affected; other memory areas I’m using don’t go into red/bad status, so I’m starting to think it could be something else.
All 3000 tags I’m using to download the recipe are stored in a single big UDT (with 100 sub-UDTs, one per step).
Could it be that the UDT performance is somehow affected?
I’m still on 7.9, but I recently installed an 8.1 server to test the migration, so I might give it a go on that version too.

In the meantime I’ll test a different PLC and try dividing the download into 10 blocks as suggested.

Did you consider transaction groups? Assuming you are writing to arrays in the PLC, a block transaction group should be considered. You could easily break this out into multiple TGs and use the handshake to validate that the PLC properly received all the data.

If you aren’t writing all this to an array in the PLC, fix that first.

RecipeUDT.recipeNumber[0..100]
RecipeUDT.stepNumber[0..100]
RecipeUDT.sp1[0..100]
...
RecipeUDT.sp30[0..100]

1 Like

Are the elements of the UDT array you are writing just the recipe settings? Or are operational and recipe elements mixed together in the UDT? If the latter, Ignition won’t be able to optimize the writes into large blocks of bytes. You need to be writing every element of the UDT (and, ideally, consecutive instances of the UDT in the array) in order for Ignition to make one big, efficient write.

1 Like

I’ve thought about this, but I would keep it as a last resort, since my UDT configuration is not like the one you mentioned, but more like

RecipeUDT.Step[0..100].stepNumber
RecipeUDT.Step[0..100].sp1
...
RecipeUDT.Step[0..100].sp30

and I believe I would have to rework the data structure a bit?

The elements of the UDT are just the recipe settings.
I actually had an internal memory with the step name, but I tried removing it just to check whether it was the real cause.
I still had the same issue at the end of the write.

Then I tried the version 8 test server… and it looks like the problem doesn’t exist anymore.
The writing is also faster, and so is the GUI response.

Maybe the UDT performance / drivers have been improved?

If this is the case, I guess I just have to speed up the migration, even though it wasn’t planned yet…

I don't really keep up with Siemens stuff, but I wouldn't be surprised.

v7.9 is EOL this June, so you should be planning anyway.

1 Like

Keep in mind you can configure patterns (I don’t recall whether this looks the same in 7.9).

This may or may not be the most efficient approach given the UDT structure; however, you could always write to a buffer array in the PLC to optimize the Ignition data transfer, then, once the TG handshake confirms a valid transfer, use PLC logic to copy the buffer array to the appropriate recipe UDT members.

1 Like

Hi all,

Unfortunately I have to reopen this.

At the time (2022) I did manage to call system.tag.writeAll() 100 times, and the write to the PLC completed in about 3 seconds.
Last week I migrated the project to version 8 on a new server in the same cluster (so ideally same speed, same network interface, etc.), sure that this was working, but when I ran the script it took about 2-3 minutes to write all the tags.

I then tried system.tag.writeAsync(), using the callback parameter to print a timestamp, and I can see Ignition writing 7 blocks of variables roughly every 10 seconds (I hope this makes sense).
I also tried launching writeAsync() with a single list of 3000 variables/addresses; it takes the same amount of time, but I get a read timeout warning on top of it.
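A sketch of the kind of timing instrumentation described above, with the write function injected so the logic is testable outside Ignition (the chunk size is hypothetical; in 8.1, the writeAsync callback receives the list of quality results for that write):

```python
import time

def timed_chunked_write(paths, values, write_async, chunk=500, log=None):
    """Issue writes in chunks and record when each chunk's callback fires.

    write_async: e.g. lambda p, v, cb: system.tag.writeAsync(p, v, cb)
    The log (start index, timestamp, qualities) shows how the driver
    spaces out the blocks over time.
    """
    log = log if log is not None else []
    def make_callback(start):
        def callback(qualities):
            log.append((start, time.time(), list(qualities)))
        return callback
    for start in range(0, len(paths), chunk):
        write_async(paths[start:start + chunk],
                    values[start:start + chunk],
                    make_callback(start))
    return log
```

Comparing the timestamps between entries makes it easy to see whether the slowdown is per-block driver latency or something queuing ahead of the writes.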

I'm pretty sure the v8 version of the script was running faster at the time (see my comments above), so I wonder if I'm missing something obvious in how I imported the project to the new server?
Using v8.1.35 for my test.

Thanks