Migrating FactoryTalk Historian data to Ignition

That is a lot of data. Do they not have compression configured for those points? If they're OK with compression, you might be able to configure it and then reprocess the archives before exporting the data. I'm not 100% sure reprocessing archives would actually run compression on data that's already archived, though; I can probably find out.

I think a custom AF SDK application is probably a dead end for FactoryTalk.

So I'm continuing with exporting data from OSI PI into text files using the piconfig command. Each archive file in SMT is 1024 MB and covers roughly 15-16 days of data, but when I run piconfig to export all the data, the output file is 7.4 GB for 30 days of data.

Occasionally I am seeing this error about event collection exceeding the maximum allowed. What specifically is that referring to?

If I remember correctly, this is related to the ArcMaxCollect tuning parameter, which limits the number of events a single archive call can return. If you have access to SMT, you should be able to view and change the parameter. If your server's resources are going to be a limit, you may need to just make smaller queries. Also, some tuning parameters work hand in hand with others. I was never an expert on all of the tuning parameters, but you can take a look around and see if any of them sound relevant; if they do, and your machine has available memory/CPU, crank them up a bit. Feel free to ask me about them. I'm happy to share what I can remember, and I think most of them are documented somewhere.

One benefit of PI's proprietary binary archive format is efficiency. Unfortunately, it comes at the cost of easy raw data access. If your plan is to export a lot of these files and later import them into something, consider zipping them up; you might be able to get them closer to 2 GB in size that way. If you're immediately processing the export, there may not be much value in zipping them up.
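
If you go that route, here is a minimal sketch of compressing an export with Python's standard gzip module; the file path is just an example, and it streams in chunks rather than reading the whole file into memory:

import gzip
import shutil

# Compress a piconfig export: values_14-oct-2021.txt -> values_14-oct-2021.txt.gz
# (the path is hypothetical; substitute your own export file)
src = r"C:\values_14-oct-2021.txt"
with open(src, "rb") as f_in, gzip.open(src + ".gz", "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)  # copies in chunks, keeping memory use low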

Out of curiosity, how long does it take for that export to finish?

I'm not sure if the FactoryTalk historian comes with any of the PowerShell tools for PI System, but if it does, that may be something to consider.

It takes about 30 minutes or so to export that data. The plan is to eventually import all of this into Ignition in the new environment as the customer is migrating from FTView to Ignition but needs to retain at least a decade of historical data.

Well, if that rate is consistent (30 minutes to export half a month of data) for the entire decade, it sounds like it will take roughly 5 days to export everything: about 240 half-month chunks at half an hour each is 120 hours.

Sounds slow, but I’ve seen migrations take longer. Good luck, and let me know if you have any more questions!

The problem is that the export file for that period is way too large to handle. I'm using scripts to open the file and manipulate it to create an output file that will be ready for importing into Ignition. It turns out all those errors were missing data, so in reality 24 hours of data creates a file that is about 590 MB. I'm trying to come up with a batch file I can run that will create an export file for each 24-hour period, so I don't have to modify the piconfig script, run the command, modify the piconfig script, run the command…and so on. 590 MB is still a large file for the manipulation script, but it'll work.

If you haven't considered it, it may be helpful to process the files by streaming the rows instead of reading the entire file into memory in your manipulation script. Most CSV readers can do this, I think.
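
For example, a minimal sketch in Python, assuming a simple tag,time,value layout and made-up file paths:

import csv

# Stream a large piconfig export row by row instead of loading it all at once.
# Paths and the tag,time,value column layout are assumptions; adjust to match
# the @ostr directives in your piconfig script.
with open(r"C:\values_14-oct-2021.txt", newline="") as src, \
        open(r"C:\import_ready.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        if len(row) < 3:
            continue  # skip blank or malformed lines
        tag, ts, value = row[0], row[1], row[2]
        # transform each row here (e.g., map tag -> tagid, time -> Unix ms)
        writer.writerow([tag, ts, value])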

Yeah, you might be right. I was thinking it would load into memory.
To export the data, I came up with two batch files. The first one, called "run.bat", runs this:

call create.bat 14-oct-2021 15-oct-2021
call create.bat 15-oct-2021 16-oct-2021

The second one runs this:

:: create.bat - builds the piconfig input file for the date range %1 to %2
@echo off
cls
echo @tabl piarc > input.txt
echo @ostr tag,time,value >> input.txt
echo @ostr ... >> input.txt
echo @istr tag,starttime,endtime,count >> input.txt
echo @output C:\values_%1.txt >> input.txt
echo @echo off >> input.txt
echo @maxerr 10000 >> input.txt
echo Tag1,%1,%2,6000000 >> input.txt
echo Tag2,%1,%2,6000000 >> input.txt
echo Tag3,%1,%2,6000000 >> input.txt

…all the remaining tags

and then

"C:\Program Files\Rockwell Software\FactoryTalk Historian\Server\adm\piconfig.exe" < c:\input.txt

So I can use something like Excel to auto-populate a huge set of start and end dates, drop that into the "run.bat" file, and it will create a file for each day. But before I get too carried away, I'll try streaming a 7 GB file and see what happens.
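
If Excel gets tedious, here is a minimal sketch that generates the run.bat lines in Python instead; the date range is a placeholder, and the dd-mon-yyyy format follows the example above:

from datetime import date, timedelta

# Write one "call create.bat <start> <end>" line per day into run.bat.
# Start/end dates below are placeholders; adjust to the actual archive range.
start = date(2011, 10, 14)
end = date(2021, 10, 14)
with open("run.bat", "w") as f:
    d = start
    while d < end:
        nxt = d + timedelta(days=1)
        f.write("call create.bat %s %s\n"
                % (d.strftime("%d-%b-%Y").lower(), nxt.strftime("%d-%b-%Y").lower()))
        d = nxt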

While I'm grabbing all this data, I want to focus on the import file format. We'll be using Postgres, and when I look at the historical database, it seems there is a new table for each month. Then, instead of a tag name, it uses a tagid, so I need to import the tagid, value, and timestamp (Unix milliseconds), correct?

And where do I find the correlation between tagname and tagid? If I export all the tags from designer, there is no unique id associated with each tag.

EDIT: looks like the table "sqlth_te" contains the relationship between id and tag.
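
For anyone following along, a minimal sketch of building that tag-to-id map from Postgres with psycopg2; the connection details are placeholders, and the id/tagpath/retired columns are what I see in sqlth_te (verify against your own schema):

import psycopg2

# Build a tagpath -> tagid lookup from Ignition's sqlth_te table.
# Connection parameters are placeholders; retired is NULL for active tags.
conn = psycopg2.connect(host="localhost", dbname="ignition",
                        user="user", password="pass")
with conn, conn.cursor() as cur:
    cur.execute("SELECT id, tagpath FROM sqlth_te WHERE retired IS NULL")
    tag_ids = {tagpath: tag_id for tag_id, tagpath in cur.fetchall()}

print(tag_ids.get("folder/tag1"))  # hypothetical tag path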

FYI:

Despite the extra overhead, I would try to use system.tag.storeTagHistory to migrate data into Ignition when adding history to existing tags, especially if you are able to connect to the other database and read data directly into Ignition.

A disclaimer with the above method is that you are not currently able to override the gateway setting when storing data from a remote gateway. (Hmmm…) If you are trying to add history for tags of an existing tag provider, ensure you run this function on the same gateway where the tag provider is local.
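
As a rough sketch of what that call looks like in a gateway script (provider names, tag paths, values, and timestamps here are all placeholders; the function takes parallel lists):

# Jython sketch for a gateway script; provider names and paths are placeholders.
from java.util import Date

paths = ["[default]folder/tag1", "[default]folder/tag1"]
values = [12.5, 13.1]
# Timestamps are java.util.Date objects; Date(millis) takes Unix milliseconds.
timestamps = [Date(1634169600000), Date(1634169660000)]

# Passing None for qualities stores the values with good quality.
system.tag.storeTagHistory("history_db", "default", paths, values, None, timestamps)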

Thanks. This looks like it gives great detail on the table structure, but it doesn't quite answer my question; in fact, it adds some confusion. That first section talks about the core table that is created and says sqlt_data_X_X. Here you can see that the tables created in my environment (also 8.1) appear to include the month as well, but all the tags being collected are within that one table; they do not have separate tables.

EDIT: and I guess this is why 👇

Yes, the _X_X varies with the partitioning selected, if any.

1 Like