Tag history string length

Using version 8.1.2
I started with internal tag history which worked great with long JSON strings.
I added a push to remote server using MariaDB connection with regular tag historian db.
I updated the table definition to varchar(10000) but am still getting this error:
java.sql.BatchUpdateException: Data truncation: Data too long for column 'stringvalue' at row 1

I ran into this same issue with a MySQL database and figured out a workaround. It's not super clean, but it seems to be working. Dragging this thread back from the dead to leave a solution for anyone else who comes across it.

If I change the stringValue column datatype from VARCHAR(255) to VARCHAR(1000) the records are able to be stored (both saving new entries and Store and Forward 'Retry' in the web interface). It's interesting that didn't work for you. I'm using 8.1.19, so maybe there is a change there. I was getting the record too long error in the StoreAndForward status page on the Gateway web interface.
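For anyone wanting to apply the same one-time fix by hand, it's a single ALTER per partition table (MariaDB/MySQL syntax; the table name here is just an example, substitute your own partitions from sqlth_partitions):

```sql
ALTER TABLE sqlt_data_1_2022_05 MODIFY COLUMN stringvalue VARCHAR(1000);
```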

While updating the column datatype worked, when a new partition table is created the problem comes back up. Tech Support suggested making a new database Driver/Translator and setting the default string type to varchar(1000). I tried this but got a bunch of errors immediately. I didn't dig too deep, since I had already figured out another work around.

It feels really 'hacky' but seems to be working well. There is also a lot of room left to clean it up/make it more robust, but it met our immediate need.

I setup a Gateway tag change event on the historian StoreAndForward Quarantined tag ([System]Gateway/StoreAndForward/historianName/Quarantined). If there are records quarantined, it checks the MySQL data types, updates any as needed, and then triggers a StoreAndForward retry. Currently lots of hard coded values, but it should be pretty straightforward to make it more generalized.

"""
Check the historian database table data types, the default stringvalue type is too short.
Increase the DB field size and then attempt to store the data again

Called by a Gateway tag change event on the StoreAndForward Quarantined tag.
Must be run in Gateway scope.
"""

#import Java classes, needed for the transfer retry at the end
from com.inductiveautomation.ignition.gateway import IgnitionGateway
from java.lang import Long 

#TODO this is not robust, the database name is hardcoded and both Ignition DB name and MySQL schema must be entered.
#set Ignition database name for the historian, and the associated MySQL schema name
historianDBName = "HistorianDBName"
historianSchema = "historianSchemaName"

#set new stringValue varchar length
newLength = 1000

logger = system.util.getLogger("HistorianDBUpdate")

#check if there are quarantined records.
#assumes these are because of column datatype errors; needs error handling in case this doesn't fix it.
if newValue.getValue() > 0:
	
	logger.info("{} has {} quarantined records. Attempting to fix.".format(historianDBName, newValue.getValue()))
	
	#get partition table names
	query = "SELECT pname FROM sqlth_partitions;"
	tableNames = system.db.runPrepQuery(query, [], historianDBName)
	
	if len(tableNames) > 0:
		#check the datatype of the stringvalue column in each table
		for table in tableNames:
			#get stringValue datatype length
			query = "SELECT CHARACTER_MAXIMUM_LENGTH from INFORMATION_SCHEMA.columns where COLUMN_NAME = 'stringvalue' and table_schema = ? and table_name = ?;"
			varLength = system.db.runPrepQuery(query, [historianSchema, table[0]], historianDBName)
			
			#update datatype length if needed
			if varLength[0][0] < newLength:
				logger.info("Updating table {} in {}".format(table[0], historianDBName))
				#MySQL can't bind a parameter inside a DDL type definition, so build the length into the statement
				query = "ALTER TABLE `" + table[0] + "` MODIFY COLUMN `stringvalue` VARCHAR(" + str(newLength) + ");"
				system.db.runPrepUpdate(query, [], historianDBName, skipAudit = True)
		
		#trigger store and forward retry after datatypes have been updated
		logger.info("Triggering StoreAndForward retry")
		gateway = IgnitionGateway.get()	
		histMngr = gateway.getHistoryManager()
			
		stores = histMngr.getStores(None)	 #none = getAll
		for store in stores:
			qdataInfo = histMngr.getQuarantinedDataInfo(store)
			ids = [Long(x.getId()) for x in qdataInfo]		
			if ids:
				histMngr.unquarantineData(store,ids)

Big gaps are that there is no error checking that updating the datatype fixed the problem (the script should check the quarantine tag value after running and make sure it's zero), and if there is more than one quarantined value there is a good chance that this script will trigger itself during the StoreAndForward retry.
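One way to close that self-triggering gap is a small cooldown guard around the fix attempt. This is a sketch, not part of the original script; `should_attempt_fix` is a hypothetical helper you would call at the top of the tag change event instead of the bare count check:

```python
import time

# Hypothetical module-level state; in Ignition this could live in a
# project library script so it persists between tag change events.
_last_attempt = {"ts": 0.0}

def should_attempt_fix(quarantine_count, cooldown_s=300, now=None):
    """Return True only when there are quarantined records and no fix
    was attempted within the cooldown window, so the StoreAndForward
    retry can't immediately re-trigger the script."""
    now = time.time() if now is None else now
    if quarantine_count <= 0:
        return False
    if now - _last_attempt["ts"] < cooldown_s:
        return False
    _last_attempt["ts"] = now
    return True
```

The event script would then open with `if should_attempt_fix(newValue.getValue()):` and skip the whole fix/retry pass while a recent attempt is still settling.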


Stopping by to :+1: this thread for helping convince me this was possible, and also to report that things seem to be able to go much bigger if you really need. 99.99% of our tags are reasonably sized (<255 bytes), but we have a half dozen or so infrequently updated tags that can get up to 20k or 30k in size. (We're arguing whether these tags should still be tags any more, but moving them elsewhere would require putting oddball exceptions in a bunch of scripts, and we'd still have to capture their history somewhere else, so we're sticking with tags for now...)

In any case, I pushed our stringvalue columns up to be MEDIUMTEXT in MariaDB and have successfully stored and queried values up to 100Kbytes as a proof of concept.
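To confirm the conversion actually took on every partition table, the same INFORMATION_SCHEMA.COLUMNS table the script below relies on can be queried directly (MariaDB/MySQL):

```sql
SELECT TABLE_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = DATABASE() AND COLUMN_NAME = 'stringvalue';
```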

I did our history partition table conversion a bit more proactively, since altering a table from VARCHAR to MEDIUMTEXT gets slower the more rows it has to work over and I wanted to convert the tables as soon as possible (before there was a quarantine event). We run the script below roughly daily to notice new tables and alter any it finds. I did the first batch of alterations by hand directly on the SQL console, just to monitor for oddness and avoid Designer timeouts, but followed the same flow as the code below.

This code will only work on MariaDB and maybe MySQL since the INFORMATION_SCHEMA.COLUMNS table is specific to them:

def update_history_schema(ignition_db_name):
    logger = system.util.getLogger('update_history_schema')

    # Get the list of all SQL tables in this database with a column named "stringvalue" of type "varchar" that are known history partition tables
    query = "SELECT DISTINCT(TABLE_NAME) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = DATABASE() AND COLUMN_NAME = 'stringvalue' AND DATA_TYPE = 'varchar' AND TABLE_NAME IN (SELECT DISTINCT(pname) FROM sqlth_partitions)"
    query_result = system.db.runPrepQuery(query, (), ignition_db_name)
    tables_to_configure = query_result.getColumnAsList(0)

    # Sanity print!
    logger.info(u'Checking: IGNITION_DB={} TABLES_TO_CONFIG={}'.format(ignition_db_name, len(tables_to_configure)))

    # Reconfigure those tables!
    for pname in tables_to_configure:
        logger.warn(u'Altering: {}'.format(pname))
        query = "ALTER TABLE {} MODIFY stringvalue MEDIUMTEXT DEFAULT NULL".format(pname)
        system.db.runPrepUpdate(query, (), ignition_db_name)
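Since the partition table name gets interpolated straight into the DDL, a small validation helper is cheap insurance. This is an add-on sketch, not part of the original script, and `build_alter_statement` is a hypothetical name:

```python
def build_alter_statement(pname):
    """Build the MEDIUMTEXT conversion DDL for one partition table.

    pname comes from sqlth_partitions, so it should already be safe,
    but rejecting anything that isn't a plain identifier and backtick-
    quoting the name guards against surprises."""
    if not pname or not pname.replace("_", "").isalnum():
        raise ValueError("unexpected partition table name: %r" % pname)
    return "ALTER TABLE `%s` MODIFY stringvalue MEDIUMTEXT DEFAULT NULL" % pname
```

The loop body above would then call `system.db.runPrepUpdate(build_alter_statement(pname), (), ignition_db_name)`.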

EDIT: Big OUCH on the conversion time for those tables before we adjusted the time partitions!

MariaDB [ignitiondb]> ALTER TABLE sqlt_data_1_2022_05 MODIFY stringvalue MEDIUMTEXT DEFAULT NULL;
Query OK, 69776404 rows affected (14 min 56.855 sec)   
Records: 69776404  Duplicates: 0  Warnings: 0

This did run without affecting production since nothing was trying to insert/query the table too much, but I am going to be avoiding the conversion on the current/busiest table until just after the new partition is created...