SQL Bridge and Block Groups

I have a block group transferring data from a PLC to a database table. Each item is a column in the db and an array in the PLC (part of a UDT) 20 elements long. The table holds related data in groups of twenty rows, with a column PartIndex (ranging from 0-19) differentiating them. Another column, LineIndex, identifies each block of 20 rows. I set up the block transfer to use PartIndex as the ‘custom index column’, and under Table Action I set a custom update/select clause of: LineIndex = 99. I would have expected this to transfer the 20 elements from the PLC to the db and fit them into the 20 rows that have LineIndex = 99. However, what it is actually doing is putting blocks of 20 elements into every ‘group’ of twenty rows (i.e. the 20 rows with a LineIndex of 1, the 20 rows with a LineIndex of 2, etc.), for a total of 320 rows written. What’s going on here and how do I correct it? Thanks for any help.

I’ve attached the xml export of what I’m doing.
BlockGroup.xml (5.12 KB)

It seems to me that PartIndex is not the correct index column. Just try unchecking that ‘Use custom index column’ setting. You should also give each block a different name: the first one should be Active and the second one Remaining. I don’t have a copy of the database table to really test it out, so if it still doesn’t work, can you back up that table and send it to us?

I would think PartIndex should be the correct index column: each group has a LineIndex and 20 part indexes (a compound primary key). Perhaps I’m understanding this incorrectly. If I uncheck ‘Use custom index column’, it creates another column but fills it up with NULLs and transfers no data. It looks like it’s ignoring the WHERE statement under the ‘Table Action’ section. I’m attaching a csv copy of that table.
ProductionHoldTable.csv (12.8 KB)

Still not having any success with this. Can anyone offer a suggestion?


When it comes to writing, transaction groups really expect the index column to be unique. The WHERE clause gets used for reading, including for reading the values of the index. In other words, the group operates like this:

  1. The group does its thing, and figures out what rows to write.
  2. It executes a query like the following, to get the index ids:
    SELECT block_ndx FROM table WHERE {custom where clause} ORDER BY block_ndx ASC
  3. The group runs an update query for each row, using the index obtained in step 2:
    UPDATE table SET column=?, etc WHERE block_ndx=?

So you see, in this case, the group is using your WHERE clause to select the “index”, but it’s getting back 0-19. In the write, it simply uses those values, which ends up writing to every line.
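The two-step behavior above can be reproduced in miniature. This is only a sketch using Python’s sqlite3 with table and column names assumed from the attached CSV (the real group issues the equivalent queries over JDBC), but it shows why a 20-element write touches 40 rows when two LineIndex groups exist:

```python
import sqlite3

# Assumed schema: two groups of 20 rows, LineIndex 1 and 2, PartIndex 0-19 in each.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE productionholds (lineindex INT, partindex INT, value INT)")
for line in (1, 2):
    for part in range(20):
        cur.execute("INSERT INTO productionholds VALUES (?, ?, 0)", (line, part))

# Step 2: the group reads the index values through the custom WHERE clause...
cur.execute(
    "SELECT partindex FROM productionholds WHERE lineindex = 1 ORDER BY partindex ASC"
)
indexes = [row[0] for row in cur.fetchall()]  # 0..19

# Step 3: ...but each per-row UPDATE filters on the index column alone, so it
# matches one row in *every* LineIndex group, not just the one in the WHERE clause.
total = 0
for ndx, value in zip(indexes, range(100, 120)):
    cur.execute("UPDATE productionholds SET value = ? WHERE partindex = ?", (value, ndx))
    total += cur.rowcount

print(total)  # 40 rows written for a 20-element block
```

With 16 LineIndex groups in the table, the same 20 updates would touch 320 rows, matching what you saw.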

Travis’s suggestion was on the right track: he was implying that by not using a custom index column, the group would create a column with a unique value and use that as the index. Unfortunately, while that’s true when it creates a new table, on an existing table it can’t generate unique values, so you end up with a column filled with nulls.

So, the very easiest thing to do would be the following: create a unique index column.

  1. Uncheck “Use custom index column”. You said you already did this, so I believe your table should already have a column made called “productionholds_ndx”.
  2. From the query browser, run this query:
update productionholds set productionholds_ndx=(20*(lineindex-1)+partindex)
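As a sanity check, here’s a quick sketch (hypothetical group count of 16, matching the 320 rows from the question) showing that the expression 20*(lineindex-1)+partindex assigns a distinct index to every (LineIndex, PartIndex) pair, so each per-row UPDATE can hit exactly one row:

```python
# Map each computed index back to its (lineindex, partindex) pair and
# verify no two rows ever collide on the same index value.
ndx = {}
for lineindex in range(1, 17):      # 16 groups of 20 rows -> 320 rows
    for partindex in range(20):
        key = 20 * (lineindex - 1) + partindex
        assert key not in ndx       # unique: no collisions across groups
        ndx[key] = (lineindex, partindex)

print(len(ndx))  # 320 distinct index values, one per row
```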

Since the group expects the index to be unique, this is currently the only way. I suppose an argument could be made that this requirement should be relaxed, with the rest of the WHERE clause used for writing, but generally having a unique row id isn’t too big of an issue.


Thanks, having a unique column index did the trick. Thanks for the explanation too.