Script help, preventing redundant rows, overwriting with prepupdate

Sorry, I don’t get it. If the last row is always correct (an assumption, based on what you intend to do), then why would updating a single row for the shift not work?

3 Likes

In addition, the method I show above expects that you will sometimes get errant user data (i.e. a duplicate row) and gives you space to handle that exception.

import java.lang.Exception

noErrors = True
try:
    system.db.runPrepUpdate("INSERT INTO blah", [value1, ...])
except Exception, e:
    noErrors = False
    # Catch and log any Python error that may occur
    errorMessage = 'python error'
except java.lang.Exception, e:
    if "Duplicate entry" in str(e.cause):
        # Gather any necessary data, then run your UPDATE statement here
        system.db.runPrepUpdate("UPDATE blah SET ....", [])  # chance to update the row instead of inserting a new one
        # Note - noErrors stays True here because this type of error is expected and handled
    else:
        noErrors = False
        # Unexpected db error - log it and set an appropriate message to show the user
        errorMessage = "Ruh roh"
if noErrors:
    system.gui.messageBox("Success!")
else:
    system.gui.messageBox(errorMessage)

Something along these lines.
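Outside Ignition, the same try/except flow can be sketched with plain sqlite3 (the `shift_entry` table and its columns are invented for illustration); note the duplicate detection only works because a UNIQUE constraint exists on the table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE shift_entry (
        machine_id INTEGER,
        shift_no   INTEGER,
        comments   TEXT,
        UNIQUE (machine_id, shift_no)  -- makes a duplicate insert raise an error
    )
""")

def save_entry(machine_id, shift_no, comments):
    """Insert a row; on a duplicate-key error, update the existing row instead."""
    try:
        conn.execute(
            "INSERT INTO shift_entry (machine_id, shift_no, comments) VALUES (?, ?, ?)",
            (machine_id, shift_no, comments),
        )
    except sqlite3.IntegrityError:
        # Expected error for a duplicate -- overwrite the earlier entry
        conn.execute(
            "UPDATE shift_entry SET comments = ? WHERE machine_id = ? AND shift_no = ?",
            (comments, machine_id, shift_no),
        )
    conn.commit()

save_entry(1, 42, "first pass")
save_entry(1, 42, "corrected")   # duplicate -> routed to the UPDATE branch
print(conn.execute("SELECT comments FROM shift_entry").fetchall())  # [('corrected',)]
```

The second call never creates a second row; the last write wins, with no cleanup pass needed.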

I know you said this is for perspective, so you probably want to do a system.perspective.navigate to some generic popup view you have with the message to show the user.

3 Likes

Someone put in data on the wrong machine like 30 minutes ago.

If I were updating a row then data would have been lost.

Yes, but you also have assumed that someone will come in and write another, presumably correct, row at the end of the shift.

I’m still trying to wrap my head around this. Sorry, Zachary.

Is each entry a running total for the shift?

Josborn,

The world can turn upside down mid-shift:
operators move around, ingredients run out, products switch, and so on.
They can use entry and delete buttons to produce a correct row for each machine without automatically losing any data until the end of the shift.

At the end of the shift, old redundant entries are cleared out.

Pros:
They get control of their data during their shift, to accommodate errors and changes manually.

I can be certain I get one row per machine per shift, and my reports are robust.
If there is an error, it happened before that script ran.

Clarification:
I think there are many possible implementations, and this is one robust option with some advantages.
This is all how I think it will be done after that error I mentioned; it is not set in stone.


Jordan,

I was getting redundant rows from the manual entry side.
comments, recipe info, intended products, material selections
I am going to use a script at the end of shift to eliminate redundant old data.

The numbers side comes from transactions: uniform, no changes to collect, no redundancies.

Obviously you take whichever approach you are comfortable with. Either way, everything revolves around assuming the last entry is correct. The cons are that the last entry may not be correct, and that you have to delete at all.

yah, I agree

the con to me is that despite being robust, it could still be wrong on manual entry

I don’t have a way to automate their comments or selections yet

IMO, the only way you can guarantee accurate input is to take the human out of the equation =/

but then I would miss their experienced insights

I think I will use ROW_NUMBER() for this.
It assigns unique numbers even where RANK() would tie rows at the same value.
Then I don’t have to do something like TOP 1 with RANK().
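As a runnable sketch of that idea (sqlite3 standing in for the real database, with a made-up `shift_entry` table; window functions require SQLite 3.25+), ROW_NUMBER() guarantees exactly one "latest" row per machine and shift even when timestamps could tie:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE shift_entry (machine_id INTEGER, shift_no INTEGER, comments TEXT, entered_at INTEGER)"
)
conn.executemany(
    "INSERT INTO shift_entry VALUES (?, ?, ?, ?)",
    [
        (1, 42, "early guess", 100),
        (1, 42, "final numbers", 200),  # later entry for the same machine/shift
        (2, 42, "only entry", 150),
    ],
)

# ROW_NUMBER() assigns a unique rank within each (machine, shift) group,
# so filtering on rn = 1 always returns exactly one row per group.
latest = conn.execute("""
    SELECT machine_id, shift_no, comments FROM (
        SELECT *, ROW_NUMBER() OVER (
                      PARTITION BY machine_id, shift_no
                      ORDER BY entered_at DESC) AS rn
        FROM shift_entry
    ) WHERE rn = 1
    ORDER BY machine_id
""").fetchall()
print(latest)  # [(1, 42, 'final numbers'), (2, 42, 'only entry')]
```

The end-of-shift cleanup would then be the inverse: delete every row where `rn > 1`.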


I think one reason I like this way is that I tend to be revisionist, constantly revising to get slightly better results. So when I imagine being the person entering the data, I create a new revision for each event during the shift that might impact some X amount of dollars relative to the production.
If I am typing in a 10-digit number each time, I want to keep that old copy for a bit to make sure my number is right. Transposition errors are so bad and so common. I usually type the letter ‘e’ faster than any other key on the keyboard for some reason.
Though I can also see a benefit in the other approach, where every entry does an update instead of an insert. It has different pros and cons, and is less revisionist-friendly.
You’d never have a point in time with more than the one row per machine.

1 Like

I’m kinda surprised you aren’t just inserting new rows every time. Doesn’t production want to allocate partial credit to the correct people, even if replaced mid-shift? It seems you are trading one class of errors for a different class of errors.

4 Likes

I'm lazy, I don't do more work than I need to.

If duplicate data is okay, then allow it.

If it isn't, don't go back later and delete it; prevent it. A UNIQUE INDEX together with an insert-or-update ("if exists, update") query form will do the work for you.
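A minimal sketch of that combination, using sqlite3 and an invented `shift_entry` table: SQLite spells the insert-or-update as `ON CONFLICT ... DO UPDATE` (3.24+), while MySQL uses `INSERT ... ON DUPLICATE KEY UPDATE`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE shift_entry (
        machine_id INTEGER,
        shift_no   INTEGER,
        comments   TEXT,
        UNIQUE (machine_id, shift_no)  -- the constraint that makes the upsert possible
    )
""")

# Upsert: insert a new row, or overwrite the existing one on a key collision.
upsert = """
    INSERT INTO shift_entry (machine_id, shift_no, comments)
    VALUES (?, ?, ?)
    ON CONFLICT (machine_id, shift_no) DO UPDATE SET comments = excluded.comments
"""
conn.execute(upsert, (1, 42, "first entry"))
conn.execute(upsert, (1, 42, "revised entry"))  # no duplicate row is ever created
conn.commit()
print(conn.execute("SELECT machine_id, shift_no, comments FROM shift_entry").fetchall())
# [(1, 42, 'revised entry')]
```

No try/except is needed at all here; the database resolves the conflict itself, and the table can never hold a duplicate.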

I have a hard time ever imagining a case where duplicate data is a good thing. Rather than programming a process to go back at a later point and delete it, just never allow it in the first place.

You may be a revisionist and that's all good from the perspective of how you program, but when it comes to data entry, IMO, it should be forced to be correct from the beginning independent of where that data comes from.

If a user is typing in a data point and it is important that it is correct, you should not rely on the operator for validation. It is just as easy to transpose a number while reading as it is while typing.

I understand that some data points are not able to be validated (e.g. 47.14 can be just as valid as 74.14). Once the operator submits, that data point should be treated as valid. If you need to allow the operator the ability to correct an incorrect entry, then an 'edit' process should be put in place which updates the row.

2 Likes

Perhaps a better way to handle this, if you don’t want to update the last row but do want to make sure new inserts meet some criteria, is to change what I wrote slightly:

    if "Duplicate entry" in str(e.cause):
        noErrors = False
        errorMessage = "Cannot insert record, as a record with that data already exists"

Then this would force the user to re-enter the data until they entered something valid (according to the constraint you set).

I agree with @lrose 's opinion. Duplicate data usually just causes more headaches than it solves. Obviously there are cases for allowing it, but if I don’t need to allow it, I won’t. Not allowing it helps keep the database in a state that I expect.

pturmel,
I am inserting every time right now.
I need one row per machine per shift for reports, queries, totals
I don’t want to trade errors.


lrose,
Switching the number entry to a dropdown

I don’t think I can use a unique index. I need data every shift, for every machine.

I thought maybe to call a delete of entries for a given machine in the past 8 hours when hitting the submit button.
Like, in the submit button's press script:
1. Collect the parameters.
2. Run a delete with those parameters for the last 8 hours, based on machine and shift.
3. Do an insert with the parameters.
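Those steps can be sketched like this (sqlite3 and a made-up schema for illustration). Wrapping the delete and insert in one transaction at least guarantees the old row is not lost if the insert fails, though it does nothing about the wrong-machine problem described below:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shift_entry (machine_id INTEGER, comments TEXT, entered_at REAL)")

EIGHT_HOURS = 8 * 3600

def submit(machine_id, comments, now=None):
    """Replace any entry for this machine from the last 8 hours, then insert."""
    now = time.time() if now is None else now
    with conn:  # one transaction: the delete rolls back if the insert fails
        conn.execute(
            "DELETE FROM shift_entry WHERE machine_id = ? AND entered_at > ?",
            (machine_id, now - EIGHT_HOURS),
        )
        conn.execute(
            "INSERT INTO shift_entry (machine_id, comments, entered_at) VALUES (?, ?, ?)",
            (machine_id, comments, now),
        )

submit(1, "first attempt", now=1000.0)
submit(1, "correction", now=2000.0)  # deletes the first attempt, then inserts
print(conn.execute("SELECT comments FROM shift_entry").fetchall())  # [('correction',)]
```

The downside is exactly the scenario that follows: if the operator picks the wrong machine, the delete silently destroys that machine's good data.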

Right before I did this though, someone put data from like machine blue on machine red.
This would have wiped out data on machine red.
So that is when I thought cleaning up after is better.
Though my conviction on cleaning up after is wavering today.
Last night I thought a lot about the idea of using the insert or update.


bkarabinchak
I don’t know how to detect the duplicate entry. If I understand correctly, that string only appears if a unique index is in place?
I don’t think I can use a unique index; I need a row per machine per shift.

So then do that. What have you tried? You mentioned this is a work in progress, but browsing this post I don't see much work posted.

1 Like

You can use what is called a composite (multi-column) UNIQUE index. Effectively, a combination of columns is used to ensure the row is unique. In your case, perhaps Machine ID and Shift No. would be used, since you're looking for one row for each machine per shift. Creating this constraint on the table forces the DB to reject any insert which does not meet it, returning an error.

@bkarabinchak.psi has already shown how to handle that error.

2 Likes

While I don’t understand the goal here, it sounds like he wants the last one to win, so I don’t think unique would work.

The last one does win.

Script tries to insert a new row, insert is rejected because of the constraint.

Constraint violation error is caught and an Update is run instead, and now the row is updated with the Last data.

(Insert New Data, Delete Old Data) = Update Old Data with New Data

1 Like

An additional benefit too is then when you are trying to find the last one later on in other queries - you don’t have to do any WHERE/GROUP BY/ORDER BY stuff to “get” the last record of the shift as there is only the one record and that you know must necessarily be the one with all the up to date information.

1 Like

Gotcha, sorry I didn't read close enough, I was distracted by some of the posts. :slight_smile:

2 Likes