Best practices for managing a TSQL inventory system?

I have a Microsoft SQL table Materials which contains columns MaterialName, MaterialDesc, and ActualInventory.

There is a non-zero chance two Perspective clients could check out a material at the same time. When making my SQL queries, is SERIALIZABLE the correct isolation level for this?

RequestedInventory = 5

datasource = "TSQL"
isolationLevel = system.db.SERIALIZABLE
timeout = 60000
txNumber = system.db.beginNamedQueryTransaction(datasource, isolationLevel, timeout)

try:
    params = {'Material': 'xyz'}

    # Get current inventory in system.
    ActualInventory = system.db.runNamedQuery("Materials/getActualInventory", params, txNumber)

    if ActualInventory >= RequestedInventory:
        params['Quantity'] = RequestedInventory
        x = system.db.runNamedQuery("Materials/SubValues", params, txNumber)
        if x > 0:
            # Rows updated is 1, so the checkout succeeded.
            system.db.commitTransaction(txNumber)
        else:
            system.db.rollbackTransaction(txNumber)
    else:
        # Not enough inventory to complete the transaction.
        system.db.rollbackTransaction(txNumber)
finally:
    system.db.closeTransaction(txNumber)


One issue that might appear is other clients trying to read the data while this transaction is going on. In that case would REPEATABLE_READ be a better isolation level?

REPEATABLE_READ - This isolation level prevents other transactions from modifying or inserting data that the current transaction is reading.

And then my last issue is if I have multiple Perspective clients open I want to alert other sessions that an update has occurred. Each client has a table on it that needs to see the most up-to-date values. The data on each table is currently tied to a named query reading Materials with polling turned off. Will these named queries update automatically or do I need to send a message to other clients?
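On the last question: a named query binding with polling turned off will not re-run on its own when another session changes the table, so the other sessions do need to be told. A sketch of one common approach, using Ignition's session messaging (the handler name "inventoryChanged" and the project name are made up here, so this only illustrates the shape, not a definitive implementation):

```python
# After a successful checkout, broadcast to all Perspective sessions.
# "inventoryChanged" is a hypothetical session message handler name.
system.util.sendMessage(
    project="MyProject",            # assumed project name
    messageHandler="inventoryChanged",
    payload={"material": "xyz"},
    scope="S",                      # "S" delivers to Perspective sessions
)

# Inside each session's "inventoryChanged" handler (or a component script
# it forwards to), refresh the binding that feeds the table, e.g.:
# self.refreshBinding("props.data")
```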

Interesting problem. Definitely more a database question than an Ignition-specific one, but I'll give my two cents anyway.

Do you have a table that has a row per item? I.e., if you have 5 quantity of 'xyz', there is a table with 5 rows with materialDesc 'xyz'. I would probably do things this way, and maybe add a column like checkedOutBy which is NULL if no one has checked it out, or the id of the user who checked it out. I would then calculate what is available at the time of asking by counting how many NULLs there are (items not checked out by anyone).

When you have an aggregate value like ActualInventory, I am much more partial to having some underlying table with the raw data that I calculate the aggregate value from, rather than having an aggregate value standing completely on its own. If it gets out of sync and there is no underlying table that produces it, you can be left scratching your head about why it got out of sync at all. But if your aggregate value is calculated from an underlying dataset, you can always inspect it and see what is going on: oh, some user said they checked out ALL the materials, and that's why it says 0 available, even though that may not really be the case in reality.
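A minimal sketch of that row-per-item idea, using plain Python lists in place of a real table (the column names are the ones from this thread; nothing here is Ignition- or SQL-specific):

```python
# Row-per-item design: availability is derived by counting rows whose
# checkedOutBy is None (i.e. NULL), never stored as its own column.
# The data shapes below are illustrative, not a real schema.

items = [
    {"materialDesc": "xyz", "checkedOutBy": None},
    {"materialDesc": "xyz", "checkedOutBy": None},
    {"materialDesc": "xyz", "checkedOutBy": "user7"},
    {"materialDesc": "abc", "checkedOutBy": None},
]

def available(items, material):
    """Count items of the given material not checked out by anyone."""
    return sum(
        1 for row in items
        if row["materialDesc"] == material and row["checkedOutBy"] is None
    )

print(available(items, "xyz"))  # 2: one of the three 'xyz' rows is checked out
```

If the derived count ever looks wrong, the raw rows are right there to inspect, which is the whole appeal of this layout.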

You'll definitely still want to use transactions, and I think SERIALIZABLE should be fine. IIRC (but someone please correct me if I am wrong) serializable transactions act as if they are in a queue. If two Perspective users try to do it at the same time, whoever's request hits the database first has all the queries in their transaction run to completion, and then the second transaction goes from start to finish. This can create bottlenecks, but if you only have a handful of users I wouldn't worry about it. At least this is the case for SQL Server last I checked; you may want to confirm with whatever flavor of db you are using. I have to imagine it's similar though.

Some discussion on inventory system designs - about whether to have a field that keeps track of quantityAvailable or a table that keeps track of the items themselves


The problem might be slightly simpler. I have a single row for 'xyz' and it lists the ActualInventory as 5.


Right and the discussion I linked does talk about that method, and for some use cases that is probably fine - what I described may be excessive for your system. I am just partial to it after being burned before with aggregate columns standing on their own getting out of sync - though I also wasn't using transactions back then!

I would still recommend using transactions, and SERIALIZABLE is fine for this; only change it if you start running into performance issues. But if you don't have many people using it concurrently, I think you should be fine.


but if you don't have many people using it concurrently, I think you should be fine.

Famous last words. Thanks for the link though. Very informative.

both are valid, depending on the circumstances. The former is best when the following conditions hold:

* the number of items to sum is relatively small

* there are few or no exceptional cases to consider (returns, adjustments, et al)

* the inventory item quantity is not needed very often

on the other hand, if you have a large number of items, several exceptional cases, and frequent access, it will be more efficient to maintain the item quantity

also note that if your system has discrepancies then it has bugs which should be tracked down and eliminated

i have done systems both ways, and both ways can work just fine - as long as you don't ignore the bugs!

I know the number of users will probably be low in this system but I think their inventory is going to get large and they will request the inventory many times (total number of select queries).

Hahah. Fair enough. I have a system with 60 users that uses SERIALIZABLE transactions in SQL Server all the time, often concurrently, and while there may be a smallish hiccup, generally there's not much more than a half-second delay. I know 60 users is not necessarily a lot, but for Ignition systems it's the largest sample I can give you.

I remember seeing some benchmark test elsewhere that ran thousands of inserts/updates inside of and outside of a transaction, and they ran faster in a transaction, though the link escapes me now.

To keep inventories accurate with concurrent users, serialized transactions will be needed. A SELECT statement can confirm the current inventory level and an UPDATE statement can adjust the level accordingly within the same serialized transaction, which eliminates the possibility of an interim read or overwrite from another user throwing the values off.
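The check-then-update hazard can be illustrated outside SQL with a plain Python analogy, where a lock plays the role of the serialized transaction (the names and data here are illustrative only, not Ignition or T-SQL APIs):

```python
import threading

inventory = {"xyz": 5}
lock = threading.Lock()  # stands in for the serialized transaction

def checkout(material, quantity):
    """SELECT-then-UPDATE as one atomic unit: no other thread can read
    or write the value between the check and the decrement."""
    with lock:
        if inventory[material] >= quantity:   # the SELECT
            inventory[material] -= quantity   # the UPDATE
            return True
        return False                          # not enough inventory

# Three concurrent checkouts of 2 each against a stock of 5.
threads = [threading.Thread(target=checkout, args=("xyz", 2)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()

print(inventory["xyz"])  # 1: two checkouts succeed, the third is refused
```

Without the lock, two threads could both see 5 available and both decrement, which is exactly the interleaving a serialized transaction prevents.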


Wouldn't REPEATABLE_READ achieve the same thing but allow other clients to read the current value? My fear is SERIALIZABLE will not allow other clients to read data while one of the clients is adding / subtracting inventory.

You're talking an extra few milliseconds; I would not sweat that. Your use case is not too complicated. I personally am of the belief you should start with the strictest transaction/isolation level possible and only downgrade if you see an actual need to.

One other thing you may want to do. I don't know what your x = system.db.runNamedQuery("Materials/SubValues", params, txNumber) query does, but based on the name I assume it is something like UPDATE someTable SET ActualInventory = ActualInventory - :Quantity WHERE Material = :Material?

If so, that is fine, but I would also recommend adding a trigger on your table that throws a SQL error if the new ActualInventory value is < 0, just to prevent someone taking out more than what is there.

It's good to have it in your script like ActualInventory >= RequestedInventory, but any data integrity constraints really should also be at the database level as well, to prevent unexpected database states from occurring. Given you're using transactions, the script check should be sufficient, but I am a stickler for putting data integrity rules in the database; the buck stops there.


Yes, you understand the situation perfectly and I think triggers are a great idea.

I just implemented that now.

ALTER TABLE Materials
ADD CONSTRAINT CHK_ActualInventory_NonNegativeValue
CHECK (ActualInventory >= 0);

Thanks for the advice.


What about using stored procedures to conduct the transaction processing instead of with Ignition's interface?

You can always technically do that with any db operation, but my opinion is that when you can do what you need to do with a series of JDBC-compliant statements (INSERT/UPDATE/DELETE/SELECT), it's better to put that logic in Ignition itself so you can see the control flow along with the rest of your application. This is personal preference, but for two equally complicated scripts, I find jython easier to read than SQL. I only resort to using stored procedures when what I want to do cannot be done through JDBC in a statement or series of statements.

But you're right insofar as you could do this with an sp, but I don't necessarily see the advantage of it either.


LOL, I find it easier to read SQL statements!

As I said, personal preference - you're not the first to say that! lol

I'm thinking the advantage would be transaction processing time/timing for multiple concurrent transactions?

And, if you have nested sprocs/functions within the main sproc, all within one transaction... I'm not a DB/server expert yet, but to me it seems logical that this processing would be faster. And, what if the connection is broken between Ignition and the DB while this transaction is mid-process? How would that affect it if the transaction processing was Ignition side or DB side? IDK

I'm really interested in this topic as I have an inventory system working in Perspective at this time, though it is still kind of rough. We have multiple bin locations for any one part, so we have to manage user checkouts in several parts rooms.

The way I solved this, years ago, was using a bit data type in the DB table that would be flagged false when a user checks something out, and true when the parts manager cleared the check-outs and verified the actual quantity left. I also gave the users the ability (which we try to require of them) to tell us how many are left after they take something. All in all, it comes down to: do they check out what they take; do they count what's left; and do they return it to the bin without saying anything? This is mainly the issue with having an open parts room...


You can create and revert/commit transactions from jython in Ignition. You're probably right that, all else being equal, it's probably faster because the database is doing it all internally versus Ignition opening a transaction, feeding a series of statements, then reverting or committing. Maybe at a certain size the speed gain is worth it, but for this particular system and the one I use there is no real gain.

And, what if the connection is broken between Ignition and the DB while this transaction is mid-process?

What if the database host computer turns off mid stored procedure?

Having said that, you piqued my curiosity. I do believe the Ignition gateway closes any lingering transactions and gives you a warning about it in the gateway logs.

I just did a little test and it seems to revert it before closing. Doing

CREATE TABLE testTable (
    t int
);
And running

tx = system.db.beginTransaction('tab',1,10000)
system.db.runUpdateQuery("INSERT INTO testTable (t) VALUES (1)", tx=tx)

without ever committing/closing, the insert is rolled back: you get the warning in your logs and the record never appears.

@PGriffith just wondering if you can confirm this? For lingering transactions that were opened but never closed, the gateway reverts them before closing?


Yes, in the event of a timeout rollbackTransaction() is called before the transaction is explicitly closed.
