Understanding Tag Historian Timestamps in PostgreSQL

@dave.fogle has produced an article titled Understanding Tag Historian Timestamps that is helpful for understanding the timestamps returned when querying tag history tables directly and how to convert them to string-format timestamps.

However, that article does not provide a specific example for PostgreSQL using the to_timestamp() function. As I have had to rediscover this a few times now, I thought I’d make a post so I can find it faster next time (and it may be useful to others).

PostgreSQL examples for converting Ignition’s variant of Unix epoch time to string-format timestamps:
SELECT to_timestamp(t_stamp/1000) AS ts, * FROM sqlt_data_...
SELECT to_timestamp(t_stamp/1000) AT TIME ZONE 'Canada/Mountain' AS ts, * FROM sqlt_data_...

Usually I forget that I need to divide by 1000. Standard Unix epoch time is the number of seconds since January 1, 1970 00:00:00 UTC. The Ignition variant uses milliseconds instead of seconds, which is why all the examples start by dividing the returned t_stamp value by 1000 before using the built-in Unix epoch conversion functions (where available). Note that since t_stamp is an integer column, t_stamp/1000 is integer division in PostgreSQL and truncates the sub-second portion; divide by 1000.0 instead if you want to preserve the milliseconds.
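For sanity-checking the same arithmetic outside the database, here is a quick Python 3 sketch of the millisecond convention (the example t_stamp value is just an illustrative number):

```python
from datetime import datetime, timezone

def from_ignition_ts(t_stamp_ms):
    """Convert Ignition's millisecond epoch t_stamp to a UTC datetime."""
    return datetime.fromtimestamp(t_stamp_ms / 1000.0, tz=timezone.utc)

def to_ignition_ts(dt):
    """Convert a timezone-aware datetime back to Ignition's millisecond epoch."""
    return int(dt.timestamp() * 1000)

ts = 1659398777000  # example t_stamp value, in milliseconds
dt = from_ignition_ts(ts)
print(dt)                  # 2022-08-02 00:06:17+00:00
print(to_ignition_ts(dt))  # round-trips back to 1659398777000
```

Forgetting the divide (or multiply) by 1000 shifts the result by a factor of a thousand, which is usually obvious because the dates land decades away from where you expect.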

@dave.fogle please consider adding this to the next version of your article. Please also consider adding examples for converting from string timestamps to Ignition’s variant of Unix Epoch time (milliseconds since January 1st, 1970 at UTC).

Here are PostgreSQL examples (untested) for converting string-format timestamps to milliseconds since January 1st, 1970. Note the multiplication by 1000 to switch from seconds to milliseconds. Some caution is also needed to confirm you are handling the timezone conversion properly.

  • SELECT EXTRACT(EPOCH FROM TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40.12-08')*1000;
  • SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '2001-02-16 20:38:40.12-08')*1000;
  • SELECT EXTRACT(EPOCH FROM TIMESTAMP '2001-02-16 20:38:40.12')*1000;
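The reverse conversion can be checked outside the database too. A minimal Python 3 sketch (standard library only); the timedelta division avoids float rounding, and the naive-string branch highlights the timezone caution mentioned above:

```python
from datetime import datetime, timezone, timedelta

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def string_to_ignition_ts(s):
    """Parse an ISO-8601 timestamp string and return milliseconds since
    1970-01-01 00:00:00 UTC (Ignition's t_stamp convention)."""
    dt = datetime.fromisoformat(s)
    if dt.tzinfo is None:
        # A naive string is ambiguous -- assuming UTC here; confirm what
        # your database session's time zone actually does with it.
        dt = dt.replace(tzinfo=timezone.utc)
    # timedelta // timedelta is exact integer division (no float epoch math)
    return (dt - EPOCH) // timedelta(milliseconds=1)

print(string_to_ignition_ts('2001-02-16 20:38:40.120-08:00'))  # 982384720120
print(string_to_ignition_ts('1970-01-01 00:00:00'))            # 0
```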

Here is an example of a query for pulling tag history from a PostgreSQL database, with a specific tagpath pattern and time range:

SELECT
  to_timestamp(t_stamp/1000) AT TIME ZONE 'UTC' as UTC
, to_timestamp(t_stamp/1000) AT TIME ZONE 'US/Central' as US_Central
, A.t_stamp
, A.tagid
, A.floatvalue
, A.dataintegrity
--, A.* 
FROM sqlt_data_1_20220729 A
WHERE tagid in (select id from sqlth_te where tagpath like '%power/%pm-04-04%/energy')
AND t_stamp BETWEEN (EXTRACT(EPOCH FROM TIMESTAMP '2022-08-02 00:00:00.000')*1000) AND (EXTRACT(EPOCH FROM TIMESTAMP '2022-08-03 00:00:00.000')*1000)

Results (using Database Query Browser):

Just in case you aren’t aware, you can just use a system function to get tag history, which is probably quite a bit easier than what you are doing.


Thanks, I’m aware of those Ignition functions, but I had a need (desire) to examine tag history directly while investigating and troubleshooting apparent oddities returned by the built-in system.tag.queryTagHistory and system.tag.queryTagCalculations system functions and the charting components.

It seems built-in Ignition features (charts etc.) default to returning a lot of interpolated data, and it’s not always apparent how to reduce the results to as-stored values only (or at least it is easy to overlook).

I’m a little untrusting by nature so it is comforting to be able to examine the source data directly. That is easy enough to do by using the Database Query Browser in designer (if one is familiar with SQL) except for the interpretation of the atypical timestamps (the topic of this thread).

I’m not actually suggesting that direct SQL queries for tag history be used for features exposed to the end user (I would avoid that), but I think it is valuable knowledge for investigation and troubleshooting.

Note that my example does not dynamically handle tag history spread across multiple tag history partition tables. I’d be interested if someone has developed a parameterized example of how to achieve that with PostgreSQL, so I could just feed in a date range and tag path and get the results.
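As a starting point, here is a sketch (untested against a live historian) of spanning a date range across multiple partition tables by building one UNION ALL query. Table and column names follow the default Ignition historian schema; the sqlth_partitions lookup (pname, start_time, end_time) is an assumption — verify the column names against your database, and note the %(...)s placeholders are psycopg2-style bind parameters:

```python
# Assumption: sqlth_partitions maps each sqlt_data_* table (pname) to the
# epoch-millisecond range it covers (start_time / end_time).
PARTITION_SQL = """\
SELECT pname FROM sqlth_partitions
WHERE start_time <= %(end_ms)s AND end_time >= %(start_ms)s
"""

def build_history_query(partition_names):
    """Build a single query over the given partition tables for one
    tagpath LIKE pattern and t_stamp range (bound at execution time)."""
    subquery = (
        "SELECT to_timestamp(t_stamp/1000.0) AT TIME ZONE 'UTC' AS ts_utc, "
        "t_stamp, tagid, floatvalue, dataintegrity FROM {} "
        "WHERE tagid IN (SELECT id FROM sqlth_te WHERE tagpath LIKE %(tagpath)s) "
        "AND t_stamp BETWEEN %(start_ms)s AND %(end_ms)s"
    )
    return "\nUNION ALL\n".join(subquery.format(p) for p in partition_names)

# The second table name here is hypothetical, for illustration only.
sql = build_history_query(['sqlt_data_1_20220729', 'sqlt_data_1_20220828'])
print(sql)
```

Run PARTITION_SQL first with your start/end milliseconds, feed the returned table names to build_history_query, then execute the combined query with the same parameters (append an ORDER BY t_stamp if ordering matters across partitions).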

For comparison, here is an example showing how to query and review similar data using the Script Console (Designer) and the system.tag.queryTagHistory function.

myTagPaths = [ '[default]Station Power/Station 4/PM-04-04/Energy' ]
myColumnNames =  [ 'PM-04-04' ]
myEndDateTime = system.date.parse("2022-08-03 00:00:00", "yyyy-MM-dd HH:mm:ss")
myStartDateTime = system.date.addHours(myEndDateTime, -24)

print myEndDateTime
print myStartDateTime

def printDataSet(datasetIn):
	# Get Row and Column counts
	nrows = datasetIn.getRowCount()
	ncols = datasetIn.getColumnCount()
	# Get max length of row labels
	rowLen = len(max(['row']+[unicode(i) for i in range(nrows)], key=len))
	# Get column names
	colNames = datasetIn.getColumnNames()
	# initialize lists to process columns
	headerList = []
	colData = []
	# Process columns
	for i, col in zip(range(ncols), colNames):
		# Get column as list, converting all elements to unicode strings
		colIn = ([unicode(element) for element in list(datasetIn.getColumnAsList(i))])
		# Get max length of the column, including the column name
		maxLen = len(max([col]+colIn, key=len))
		# Append data as left justified strings.
		# ljust() will automatically pad to the max string length
		colData.append([element.ljust(maxLen) for element in colIn])
		# Append column name, also using the max string length
		headerList.append(col.ljust(maxLen))
	# Concatenate the header string and print it.
	headerString= 'row'.ljust(rowLen) + ' | ' + ' | '.join(headerList)
	print headerString
	# Print a bunch of dashes the same length as the header string
	print'-' * len(headerString)
	# Print the rows
	for row in enumerate(zip(*colData)):
		print unicode(row[0]).ljust(rowLen) + ' | ' + ' | '.join(row[1])
data = system.tag.queryTagHistory(
	paths=myTagPaths
	, startDate=myStartDateTime
	, endDate=myEndDateTime
#	, returnSize=0
#	, aggregationMode="LastValue"
#	, returnFormat
	, columnNames=myColumnNames
#	, intervalHours
#	, intervalMinutes
#	, rangeHours
#	, rangeMinutes
#	, aggregationModes
#	, includeBoundingValues=1
#	, validateSCExec
	, noInterpolation=1 # prevent the return of rows that are interpolated
#	, ignoreBadQuality
	, timeout=10000
#	, intervalSeconds
#	, rangeSeconds
	)
printDataSet(data)



Wed Aug 03 00:00:00 UTC 2022
Tue Aug 02 00:00:00 UTC 2022
row | t_stamp                      | PM-04-04     
0   | Tue Aug 02 00:06:17 UTC 2022 | 17.8490009308
1   | Tue Aug 02 01:06:23 UTC 2022 | 18.2940006256
2   | Tue Aug 02 02:06:29 UTC 2022 | 18.7299995422
3   | Tue Aug 02 03:06:33 UTC 2022 | 19.1550006866
4   | Tue Aug 02 04:06:37 UTC 2022 | 19.577999115 
5   | Tue Aug 02 05:06:43 UTC 2022 | 19.9960002899
6   | Tue Aug 02 06:06:49 UTC 2022 | 20.4130001068
7   | Tue Aug 02 07:06:55 UTC 2022 | 20.8269996643
8   | Tue Aug 02 08:07:00 UTC 2022 | 21.2390003204
9   | Tue Aug 02 09:07:06 UTC 2022 | 21.6520004272
10  | Tue Aug 02 10:07:13 UTC 2022 | 22.0620002747

To me this actually seems quite a bit less versatile than querying the database directly, but it does have the advantage of transparently handling the history data partitioned across database tables.

There is probably a way to use localized time in the query and show localized time in the returned data, but it is not immediately apparent how.
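For the localized-time question, one option is to convert the returned t_stamp values after the fact. This sketch uses Python 3's standard zoneinfo module (the Designer's Script Console runs Jython/Python 2, so this illustrates the idea rather than being drop-in Script Console code):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

def localize_ts(t_stamp_ms, tz_name='US/Central'):
    """Render an Ignition millisecond t_stamp in a named time zone."""
    dt_utc = datetime.fromtimestamp(t_stamp_ms / 1000.0, tz=timezone.utc)
    return dt_utc.astimezone(ZoneInfo(tz_name))

# 1659398777000 ms is Tue Aug 02 00:06:17 UTC 2022 (the first row above),
# which is 19:06:17 the previous evening in US/Central (CDT, UTC-5)
print(localize_ts(1659398777000))  # 2022-08-01 19:06:17-05:00
```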

The printDataSet function was lifted from a forum post by @JordanCClark, (thank you).