Easy Chart and t_stamp problem

First of all, my name’s Chris and I work for the Michelin Tire company in Ardmore, Oklahoma. I’m new to the forum but have been working on implementing KEP, FSQL, FPMI, and MySQL into our data acquisition system.

First things first: the machines we use make or mix rubber for the plant. A single “batch” of rubber takes anywhere from 1 1/2 to 3 1/2 minutes to mix. During that time, we have a “BATCH START” trigger in the PLC5. Once triggered, we capture that moment’s data and also start capturing data every 5 seconds. At the end of the batch there is an “END OF BATCH” trigger that captures the last moment of data. By data I mean mixer speed, KWH, step #, current recipe, etc. When the “End of Batch” fires and the last moment of data is stored, a “PC Upload request” bit is set.

In FSQL, under Project, I have made a new folder for batch data and within that folder created a “Block group”. I have all the tags that the PLC updates every 5 seconds within this group. It is triggered off of the “PC Upload request”. As you can already guess, my TIME STAMP for, say, ONE batch of rubber is the same for EVERY 5-second record within that batch.

So, here is the real question: using the chart in FPMI, how do I graph this data if all the t_stamps are the same? I also capture BATCH TIME, which does change and gives me a better idea of WHEN the information in EACH row of the SQL database was captured, but if I try to use that for my x-axis, it doesn’t work. I would assume that’s because the slider at the bottom of the chart is built for DATETIME, not BIGINT.

Any suggestions, criticism, witty remarks, and questions are welcome LOL.

Thanks in advance.
-Chris

Chris - welcome!

If I understand your description properly, you are trying to log data every 5 seconds during each short (1.5-3.5 minute) batch process, then view that data in an Easy Chart in FactoryPMI.

Here’s how I would go about it in FactorySQL:

  1. Create a “standard group”. Name it something like Batch History.
  2. Drag all the OPC items into the group that you wish to log, plus your batchStart trigger. Name the items in such a way that makes sense to you.
  3. Double click the batchStart trigger and set it as Read Only. This means the trigger value itself won’t be logged to the database.
  4. The group settings in the Action Tab should be “OPC->DB” and “Insert New Record” with an update interval of 5 seconds.
  5. In the Trigger tab, check Execute this group on a trigger, select batchStart as your trigger, and make sure Only execute group once when trigger is active and reset trigger after execution are NOT selected. The Trigger Condition should probably be set to is > 0.

In FactoryPMI the Easy Chart configuration should be straightforward.

That should be it. Your group will run every update interval (5 seconds) as long as the trigger condition is true.

I would consider a unique identifier (batchNum) to reference the batch number in a PLC register. One approach would be to use a separate table where each row represents a batch. I would set up a similar FactorySQL group to store batch specific data as columns. The group would trigger on a new batch using the same batchStart trigger, writing a single new row, then I’d use an Action Item and a handshake to read batchNum from the database and write it back to the PLC. batchNum is an Autoincrementing Primary Key in this new table, guaranteeing a unique value for each batch. If you maintain a unique batch numbering system in the PLC then use this instead. Did that make sense?
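That autoincrement/handshake idea can be sketched in a few lines. This is a minimal stand-in using Python’s sqlite3 in place of MySQL; the table and column names are hypothetical, and writing the number back to the PLC is just the returned value here:

```python
import sqlite3

# Stand-in for the MySQL batch table; batchNum is the autoincrementing
# primary key that guarantees a unique value per batch.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE batch ("
    "  batchNum INTEGER PRIMARY KEY AUTOINCREMENT,"
    "  recipe TEXT,"
    "  t_stamp TEXT DEFAULT CURRENT_TIMESTAMP)"
)

def start_batch(recipe):
    """Insert one row per batch and return the generated batchNum
    (the value an Action Item would hand back to the PLC)."""
    cur = conn.execute("INSERT INTO batch (recipe) VALUES (?)", (recipe,))
    conn.commit()
    return cur.lastrowid

b1 = start_batch("SBR-1500")
b2 = start_batch("SBR-1500")
print(b1, b2)  # two distinct, increasing batch numbers
```

The point of the autoincrementing key is that the database, not the PLC, is the single source of uniqueness, so two batches can never collide on a number.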

The PLC is using an INTEGER file (N) for its capture. EACH data point, say batch time, is given a length of sixty in the PLC (using an index feature to move each capture to the next “N” address).

i.e. Batch Time is stored to N47:240 for a length of 60
Batch Temperature to N47:300 for a length of 60
etc…

So once the trigger is TRUE for the UPLOAD, it grabs all data from N47:0 to N47:779.

Just to clarify what I’m looking at, here is an Excel file with two BATCHES captured.
As you will see, the t_stamps for batch 50’s data are all the same; then it changes for batch 51 to the NEXT captured t_stamp.
fivesecdata.xls (34 KB)

Is each of the 60 “rows” a short (5/60 = .0833 sec) snapshot in time? Looks like it. Or do they represent different data points?

You probably are correct using block transfer groups (array reads, basically) for transferring so much data so quickly. I think regular groups max out in the 50-100 ms range and the PLC would do a better job caching the data.

You could certainly use a “Classic Chart” for your analysis. I wonder if you could process the data (in an SQL query) to offset the timestamp of the row based on the position as/after reading in data. This would be a great Colby or Carl question.


Not sure what you mean by short. LOL

Take batch time, for instance. When the batch starts, it looks at N9:0 and stores how much time has elapsed in that BRIEF amount of time (usually 0 or .1) into N47:240; 5 seconds elapse and it again looks at the SAME data point (N9:0) and stores it into N47:241, and so on. When it gets the “END of batch” trigger it then stores the value from N9:0 into WHICHever N47:xxx is next in line. In the Excel file, on batch 50, that would be N47:273. Almost every point of data we look at in this “five sec batch data” works the same way. So if you look at the spreadsheet, after the LAST record of batch time for batch 50, the remaining values are all “0”.
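Those trailing zeros can be illustrated with a rough Python stand-in for the PLC logic (the sample values are made up; the 60-word block starts zeroed and samples fill it from the front, so everything after the last end-of-batch sample stays 0):

```python
# Sketch of the 5-second capture into the N47 block (addresses per the
# batch-time example: N47:240 for a length of 60).
BLOCK_LEN = 60

def capture_batch(samples):
    """samples: elapsed batch-time values read from N9:0 every 5 s."""
    n47 = [0] * BLOCK_LEN          # N47:240..N47:299, zero-initialized
    for i, value in enumerate(samples[:BLOCK_LEN]):
        n47[i] = value             # indexed move to the next N address
    return n47

# A ~2.8 minute batch: 34 samples, like batch 50 in the spreadsheet
# (the last one lands at index 33, i.e. N47:273).
block = capture_batch(list(range(0, 170, 5)))
print(block[:3], block[33], block[34:37])  # filled values, last sample, zeros
```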

Maybe its because I haven’t had my coffee yet, but I’m lost. Give us a call, that will probably be a quicker way to get the question answered.

Nathan was correct when he stated:

[quote="nathan"]You probably are correct using block transfer groups (array reads, basically) for transferring so much data so quickly. I think regular groups max out in the 50-100 ms range and the PLC would do a better job caching the data.

You could certainly use a “Classic Chart” for your analysis. I wonder if you could process the data (in an SQL query) to offset the timestamp of the row based on the position as/after reading in data. This would be a great Colby or Carl question.[/quote]

We have a MASSIVE system and communication is KEY. I cannot afford to bog down the system with MULTIPLE 5-second requests. Is there any way of “sneaking a peek” at the PLC without CAUSING a bottleneck in the communication? And if the CURRENT way we get 5-second data is the best, then is there a way to MANIPULATE the t_stamp in MySQL? Or is there a way to display the data on a chart or report using the “Batch Time” as the x-axis?

If you block transfer the data in using a block group, then you’ll get all of your 5-second data as rows in the table with the same timestamp, the same blockid, and different row ids, right? I think you can use the row id to manipulate the timestamps in MySQL. Logically, you want the following manipulation for each timestamp:

timestamp - ((# of rows - rowid)*5sec)

This manipulation (which you can do in a SQL Query or View) will let you use block transfers, and still treat the data as if it had 5-second separated timestamps.
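That manipulation can be sanity-checked in plain Python before writing the SQL. A sketch, assuming a 60-row block with 5-second samples and row ids running 1 to 60 (the last row keeps the original t_stamp):

```python
from datetime import datetime, timedelta

ROWS_PER_BLOCK = 60                   # rows sharing one array-read t_stamp
SAMPLE_PERIOD = timedelta(seconds=5)  # time between 5-second samples

def derived_timestamp(t_stamp, rowid, rows=ROWS_PER_BLOCK):
    """timestamp - ((# of rows - rowid) * 5 sec): spread the shared
    block timestamp back out into per-row timestamps."""
    return t_stamp - (rows - rowid) * SAMPLE_PERIOD

read_time = datetime(2007, 3, 1, 10, 30, 0)
print(derived_timestamp(read_time, 60))  # last sample: the read time itself
print(derived_timestamp(read_time, 1))   # first sample: 295 s earlier
```

The same arithmetic, moved into a SQL query or view, is what lets the chart treat block-transferred rows as if they had real 5-second-apart timestamps.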

If I READ only, let’s say, 50 OPC tags in KEPserver, and I’m ONLY reading them in FSQL (never writing to the PLC) and storing to MySQL, should there be ANY issues with the PLC5, as far as a bottleneck or any other communication issues?

I have an idea on how I’d accomplish this - I’ll get it to you tomorrow.

Thanks Nathan. Much appreciated. I’ll be looking forward to your post.

My idea is to use your PLC registers like a queue for FactorySQL. The PLC will prompt the “array read” and “data massaging” groups to run. After the PC read, the PLC clears that data, and (figuratively) puts the new data in the old position. On the DB side, FSQL gets the data into a format that the EasyChart can use. Here’s a conceptual sketch:

  1. You would add a delta column to your array that maintains cumulative elapsed time in ms since the last batch read. The first row will always have a zero in this column. The second may be 8ms, the third 17ms, the fourth 26ms, etc.

  2. Suppose we choose to read 50 records at a time with our FSQL block transfer group. You would either use another array, or have a much larger one in the PLC. Let’s use the example of actually having 200 elements in our array.

As soon as the PLC has logged 50 elements it notes the delta[50] time (the relative time to log 50 values). It then energizes the trigger, prompting FSQL to do a batch read. The PLC continues to log values until FactorySQL resets the trigger or sets a handshake bit.

Suppose the PLC has logged to the 72nd row when FSQL indicates successful receipt. The PLC then moves those 22 rows to the first position, subtracts delta[50] from their delta times, and continues logging at the 23rd position.

There may be a more efficient way of implementing this with pointers or indirect addressing, but get it working first.

  3. Add a second datetime column, derivedTStamp, to your table. It should be indexed, allowed to be NULL, and default to NULL. At the end of each batch (use a separate trigger), a different FSQL group will run a query that calculates this value. In MySQL it’ll look something like this (MySQL’s INTERVAL has no millisecond unit, so multiply up to microseconds):
UPDATE table SET derivedTStamp = DATE_SUB(t_stamp, INTERVAL delta*1000 MICROSECOND) WHERE derivedTStamp IS NULL

Keep in mind that FactorySQL will set the t_stamp value to the current datetime of the array read, which is your 50th value. The delta value is that logging offset.

  4. Create the FSQL group like how I described above. The triggering is a little different. Instead of running every 5 seconds while the trigger is true, it should run once when the trigger is true then reset the trigger (or set a handshake bit) - the PLC is running the timing. Also, make your array as “long” as is reasonably possible and performs well. This will take some tinkering.

It’s a little tough to wrap your mind around, but I think it’ll work well. Does the approach make sense?
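A rough Python sketch of the queue approach, using the hypothetical sizes from the example (200-element array, 50-row reads, delta in milliseconds; the real PLC side is ladder logic, of course):

```python
from datetime import datetime, timedelta

ARRAY_LEN = 200   # hypothetical PLC array size
READ_SIZE = 50    # rows FSQL reads per trigger

def after_successful_read(values, deltas, logged):
    """PLC side: after FSQL confirms receipt of the first 50 rows, move
    the unread rows to the front and rebase their delta times by
    subtracting delta[50] (the elapsed time to log 50 values)."""
    base = deltas[READ_SIZE - 1]
    remaining = logged - READ_SIZE        # e.g. 72 - 50 = 22 unread rows
    for i in range(remaining):
        values[i] = values[READ_SIZE + i]
        deltas[i] = deltas[READ_SIZE + i] - base
    return remaining                      # next logging position (0-based)

def derived_tstamp(t_stamp, delta_ms):
    """DB side: what the derivedTStamp update computes per row, offsetting
    the array-read timestamp by that row's delta in milliseconds."""
    return t_stamp - timedelta(milliseconds=delta_ms)

# The PLC has logged to the 72nd row when FSQL confirms the read:
values = list(range(200))
deltas = [i * 9 for i in range(200)]      # ~9 ms between samples
pos = after_successful_read(values, deltas, 72)
print(pos, values[0], deltas[0])          # 22 50 9

read_time = datetime(2007, 3, 1, 10, 30, 0)
print(derived_tstamp(read_time, 441))     # 441 ms before the read
```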

Why the funkiness:

  1. To start with you want to log data every ~10ms - this must be stored in the PLC and transferred in batches. Standard data logging is out.

  2. Eliminate empty records. Those create meaningless points on the graph. Periodic reads of the entire array create zero values (look at your spreadsheet).

  3. Log data as needed instead of on a periodic basis. You could use the PLC to go faster/slower based on what’s going on with the process.

Carl - when you posted:

[quote=“Carl.Gould”]If you block transfer the data in using a block group, then you’ll get all of your 5-second data as rows in the table with the same timestamp, the same blockid, and different row ids, right? I think you can use the row id to manipulate the timestamps in MySQL. Logically, you want the following manipulation for each timestamp:

timestamp - ((# of rows - rowid)*5sec)

This manipulation (which you can do in a SQL Query or View) will let you use block transfers, and still treat the data as if it had 5-second separated timestamps.[/quote]

How would I use that statement? Let’s say I have all my data in a table. Within that table, every batch has 60 rows. I’m using a block group in FSQL, so I have a blockid. All the timestamps are the same for a given blockid across its 60 rows. Now, I do a query in FPMI on a Table component.

SO, here’s the question: I want to graph the data. How do I manipulate the t_stamp and create another dataset, or whatever YOU have in mind, to achieve the trend?

I meant to use your database’s built-in date manipulation (date arithmetic) functions directly in the SQL query that creates the DataSet that drives the chart. For example, for MySQL the function you could use is DATE_SUB to manipulate the timestamp, subtracting 5 seconds for each block, thus getting logically accurate date information.
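Here is a runnable sketch of that query, using sqlite3 as a stand-in for MySQL (the equivalent MySQL DATE_SUB expression is shown in a comment). The table and column names are hypothetical, and the row-index column is named rowid_ to avoid colliding with SQLite’s built-in rowid:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fivesecdata (blockid INT, rowid_ INT,"
             " t_stamp TEXT, batch_time INT)")
# One block: 60 rows all sharing the array-read timestamp.
conn.executemany(
    "INSERT INTO fivesecdata VALUES (50, ?, '2007-03-01 10:30:00', ?)",
    [(i, (i - 1) * 5) for i in range(1, 61)])

# SQLite stand-in for MySQL's:
#   DATE_SUB(t_stamp, INTERVAL (60 - rowid) * 5 SECOND)
rows = conn.execute(
    "SELECT datetime(t_stamp, '-' || ((60 - rowid_) * 5) || ' seconds'),"
    "       batch_time"
    " FROM fivesecdata WHERE blockid = 50 ORDER BY rowid_").fetchall()
print(rows[0])   # first sample, pushed back 295 s
print(rows[-1])  # last sample keeps the read timestamp
```

Feeding the chart from a query like this (or a view wrapping it) means the stored t_stamp never has to change; the dates are derived on the way out.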

Oh- one more thing, and this might have been covered, I forget: why are you using block groups? If you used a normal historical group, this would be way easier.