Understanding Tag Historian Timestamps in PostgreSQL

@dave.fogle has produced an article titled Understanding Tag Historian Timestamps, which is helpful for understanding the timestamps returned when querying tag history tables directly and how to convert them to string-format timestamps.

However, that article does not provide a specific example for PostgreSQL using the to_timestamp() function. As I have had to rediscover this a few times now, I thought I’d make a post so I can find it faster next time (and it may be useful to others).

PostgreSQL examples for converting Ignition’s variant of Unix Epoch time to string-format timestamps:
SELECT to_timestamp(t_stamp/1000) AS ts, * FROM sqlt_data_...
SELECT to_timestamp(t_stamp/1000) AT TIME ZONE 'Canada/Mountain' AS ts, * FROM sqlt_data_...

Usually I forget that I need to divide by 1000. Standard Unix Epoch time is the number of seconds since January 1, 1970 00:00:00 UTC. The Ignition variant uses milliseconds instead of seconds, which is why all the examples start by dividing the returned t_stamp value by 1000 before using the built-in Unix Epoch conversion functions (where available).
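
For example, feeding an illustrative raw t_stamp value through the conversion (1659312377000 ms corresponds to 2022-08-02 00:06:17 UTC; the value is made up for illustration):

SELECT to_timestamp(1659312377000 / 1000.0);
-- 2022-08-02 00:06:17+00 (shown here with a UTC session time zone)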

@dave.fogle, please consider adding this to the next version of your article. Please also consider adding examples for converting from string timestamps to Ignition’s variant of Unix Epoch time (milliseconds since January 1st, 1970, UTC).

Here are PostgreSQL examples (untested) for converting string-format timestamps to milliseconds since January 1st, 1970. Note the multiplication by 1000 to switch from seconds to milliseconds. There is also some caution needed to confirm you are handling timezone conversion properly.

  • SELECT EXTRACT(EPOCH FROM TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40.12-08')*1000;
  • SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '2001-02-16 20:38:40.12-08')*1000;
  • SELECT EXTRACT(EPOCH FROM TIMESTAMP '2001-02-16 20:38:40.12')*1000;
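
As a quick sanity check (also untested), the two directions should round-trip back to the same Ignition-style millisecond value:

SELECT EXTRACT(EPOCH FROM to_timestamp(1659312377000 / 1000.0)) * 1000;
-- 1659312377000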

Here is an example of a query for pulling tag history from a PostgreSQL database, with a specific tag path pattern and time range.

SELECT
  to_timestamp(t_stamp/1000.0) AT TIME ZONE 'UTC' AS UTC
, to_timestamp(t_stamp/1000.0) AT TIME ZONE 'US/Central' AS US_Central
, A.t_stamp
, A.tagid
, A.floatvalue
, A.dataintegrity
--, A.*
FROM sqlt_data_1_20220729 A
WHERE tagid IN (SELECT id FROM sqlth_te WHERE tagpath LIKE '%power/%pm-04-04%/energy')
AND t_stamp BETWEEN (EXTRACT(EPOCH FROM TIMESTAMP '2022-08-02 00:00:00.000')*1000) AND (EXTRACT(EPOCH FROM TIMESTAMP '2022-08-03 00:00:00.000')*1000)
  • Update 2023-11-10:
    • Changed to_timestamp(t_stamp/1000) to to_timestamp(t_stamp/1000.0).

Results (using Database Query Browser):
[screenshot of query results]

Just in case you aren’t aware, you can use a system function to get tag history, which is probably quite a bit easier than what you are doing.
https://docs.inductiveautomation.com/display/DOC81/system.tag.queryTagHistory


Thanks, I’m aware of those Ignition functions, but I had a need (desire) to examine tag history directly, to investigate and troubleshoot apparent oddities in what the built-in system.tag.queryTagHistory and system.tag.queryTagCalculations system functions and charting components were returning.

It seems built-in Ignition features (charts, etc.) default to returning a lot of interpolated data, and it’s not always apparent how to restrict the results to as-stored values only (or at least it is easy to overlook).

I’m a little untrusting by nature, so it is comforting to be able to examine the source data directly. That is easy enough to do using the Database Query Browser in the Designer (if one is familiar with SQL), except for the interpretation of the atypical timestamps (the topic of this thread).

I’m not actually suggesting that direct SQL queries for tag history be used for features exposed to the end user (I would avoid that), but I think it is valuable knowledge for investigation/troubleshooting.

Note that my example does not dynamically handle tag history that spans multiple tag history partition tables. I’d be interested if someone has developed a parameterized example of how to achieve that with PostgreSQL, so I could just feed in a date range and tag path and get the results; an untested sketch of one possible approach follows.
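
Here is that rough, untested sketch in PL/pgSQL. It assumes the standard historian catalog table sqlth_partitions (with pname, start_time, and end_time columns, the times being epoch milliseconds); the function name tag_history_range, its signature, and the returned columns are all invented for illustration:

CREATE OR REPLACE FUNCTION tag_history_range(
    p_tagpath  text,    -- LIKE pattern matched against sqlth_te.tagpath
    p_start_ms bigint,  -- range start, epoch milliseconds
    p_end_ms   bigint)  -- range end, epoch milliseconds
RETURNS TABLE (ts timestamptz, tagpath text, floatvalue double precision,
               dataintegrity integer, t_stamp bigint)
LANGUAGE plpgsql AS $$
DECLARE
    part record;
BEGIN
    -- Loop over every partition table whose time span overlaps the requested range.
    FOR part IN
        SELECT pname FROM sqlth_partitions
        WHERE start_time <= p_end_ms AND end_time >= p_start_ms
    LOOP
        RETURN QUERY EXECUTE format(
            'SELECT to_timestamp(d.t_stamp/1000.0), t.tagpath,
                    d.floatvalue, d.dataintegrity, d.t_stamp
             FROM %I d
             JOIN sqlth_te t ON t.id = d.tagid
             WHERE t.tagpath LIKE $1
               AND d.t_stamp BETWEEN $2 AND $3', part.pname)
        USING p_tagpath, p_start_ms, p_end_ms;
    END LOOP;
END $$;

-- Example call:
-- SELECT * FROM tag_history_range('%power/%pm-04-04%/energy',
--     (EXTRACT(EPOCH FROM TIMESTAMP '2022-08-02 00:00:00')*1000)::bigint,
--     (EXTRACT(EPOCH FROM TIMESTAMP '2022-08-03 00:00:00')*1000)::bigint);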

For comparison, here is an example showing how to query and review similar data using the Script Console (Designer) and the system.tag.queryTagHistory function.


myTagPaths = [ '[default]Station Power/Station 4/PM-04-04/Energy' ]
myColumnNames =  [ 'PM-04-04' ]
myEndDateTime = system.date.parse("2022-08-03 00:00:00", "yyyy-MM-dd HH:mm:ss")
myStartDateTime = system.date.addHours(myEndDateTime, -24)

print myEndDateTime
print myStartDateTime

#======================================================================
#======================================================================
def printDataSet(datasetIn):
	# Get Row and Column counts
	nrows = datasetIn.getRowCount()
	ncols = datasetIn.getColumnCount()
	
	# Get max length of row labels
	rowLen = len(max(['row']+[unicode(i) for i in range(nrows)], key=len))
	# Get column names
	colNames = datasetIn.getColumnNames()
	# Initialize lists used while processing columns
	headerList = []
	colData = []
	# Process columns
	for i, col in zip(range(ncols), colNames):
		# Get column as list, converting all elements to unicode strings
		colIn = ([unicode(element) for element in list(datasetIn.getColumnAsList(i))])
		# Get max length of the column, including the column name
		maxLen = len(max([col]+colIn, key=len))
		# Append data as left justified strings.
		# ljust() will automatically pad to the max string length
		colData.append([element.ljust(maxLen) for element in colIn])
		# Append column name, also using the max string length
		headerList.append(col.ljust(maxLen))
		
	# Concatenate the header string and print it.
	headerString= 'row'.ljust(rowLen) + ' | ' + ' | '.join(headerList)
	print headerString
	# Print a bunch of dashes the same length as the header string
	print '-' * len(headerString)
	
	# Print the rows
	for row in enumerate(zip(*colData)):
		print unicode(row[0]).ljust(rowLen) + ' | ' + ' | '.join(row[1])
		
#======================================================================
#======================================================================
data = system.tag.queryTagHistory(
	  paths=myTagPaths
	, startDate=myStartDateTime
	, endDate=myEndDateTime
#	, returnSize = 0
#	, aggregationMode="LastValue"
#	, returnFormat
	, columnNames=myColumnNames
#	, intervalHours
#	, intervalMinutes
#	, rangeHours
#	, rangeMinutes
#	, aggregationModes
#	, includeBoundingValues=1
#	, validateSCExec
	, noInterpolation=1 # prevent the return of rows that are interpolated
#	, ignoreBadQuality
	, timeout = 10000
#	, intervalSeconds
#	, rangeSeconds
)

printDataSet(data)

Results:

>>> 
Wed Aug 03 00:00:00 UTC 2022
Tue Aug 02 00:00:00 UTC 2022
row | t_stamp                      | PM-04-04     
--------------------------------------------------
0   | Tue Aug 02 00:06:17 UTC 2022 | 17.8490009308
1   | Tue Aug 02 01:06:23 UTC 2022 | 18.2940006256
2   | Tue Aug 02 02:06:29 UTC 2022 | 18.7299995422
3   | Tue Aug 02 03:06:33 UTC 2022 | 19.1550006866
4   | Tue Aug 02 04:06:37 UTC 2022 | 19.577999115 
5   | Tue Aug 02 05:06:43 UTC 2022 | 19.9960002899
6   | Tue Aug 02 06:06:49 UTC 2022 | 20.4130001068
7   | Tue Aug 02 07:06:55 UTC 2022 | 20.8269996643
8   | Tue Aug 02 08:07:00 UTC 2022 | 21.2390003204
9   | Tue Aug 02 09:07:06 UTC 2022 | 21.6520004272
10  | Tue Aug 02 10:07:13 UTC 2022 | 22.0620002747
...

To me this actually seems quite a bit less versatile than querying the database directly, but it does have the advantage of transparently handling history data that is partitioned across multiple database tables.

There is probably a way to use localized time in the query and to show localized time in the returned data, but it is not immediately apparent.

The printDataSet function was lifted from a forum post by @JordanCClark (thank you).

Here is a somewhat updated query that I now prefer, as it joins the sqlth_te table so that its fields can be included in the results.

SELECT 
  to_timestamp(t_stamp/1000.0) AT TIME ZONE 'UTC' AS UTC
, to_timestamp(t_stamp/1000.0) AT TIME ZONE 'US/Central' AS US_Central
, to_timestamp(t_stamp/1000.0) AS ts
--, tags.*
, tags.tagpath
--, data.* 
-- , data.tagid
, data.intvalue
, data.floatvalue
, data.stringvalue
, data.dataintegrity
-- , data.t_stamp
FROM sqlt_data_1_20231103 data
JOIN sqlth_te tags ON tags.id = data.tagid
WHERE tags.tagpath = LOWER('BoP Power/PM-10-00/FREQ')
AND t_stamp BETWEEN (EXTRACT(EPOCH FROM TIMESTAMP '2023-11-04 00:00:00.000')*1000) AND (EXTRACT(EPOCH FROM TIMESTAMP '2023-11-04 01:00:00.000')*1000)
  • Update 2023-11-10:
    • Changed to_timestamp(t_stamp/1000) to to_timestamp(t_stamp/1000.0).

If you divide by 1000.0 instead of 1000, you should retain the milliseconds in the timestamptz output. Note that to_timestamp() is defined to return a timestamptz, so it is fundamentally UTC-based, and will be transferred to a java.util.Date with that UTC point-in-time intact.
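
For example, with an arbitrary millisecond value (shown here with a UTC session time zone):

SELECT to_timestamp(1659312377123 / 1000);    -- integer division: 2022-08-02 00:06:17+00, the .123 is lost
SELECT to_timestamp(1659312377123 / 1000.0);  -- 2022-08-02 00:06:17.123+00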

Using the AT TIME ZONE clause converts to a non-UTC-based timestamp, which is unreliable when different database clients use different connection time zones. You should only use that clause when it is nested inside a string conversion, as allowing a plain timestamp to travel through JDBC exposes that unreliability.
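
For example (a sketch against the table shown earlier; the format pattern is arbitrary), nesting the conversion inside to_char() means only a plain string crosses JDBC:

SELECT to_char(to_timestamp(t_stamp / 1000.0) AT TIME ZONE 'US/Central',
               'YYYY-MM-DD HH24:MI:SS.MS') AS us_central_str
FROM sqlt_data_1_20220729;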

Thanks for the divide by 1000.0 suggestion. I failed to notice that my results were all missing milliseconds. I'll update my examples above.

Regarding the use of AT TIME ZONE: I added that because at one point I had some uncertainty about which timezone was being represented in the output. Even now, when I use just to_timestamp(t_stamp/1000.0) in the Designer's Database Query Browser, it gives me the time in my gateway's timezone (not UTC). I have just been considering the string interpretation.

No, it is printing the time in the gateway timezone, but the underlying java object is timezoneless.

I recommend you read this topic about datetime objects in Ignition in general:

Then consider that PostgreSQL uses the same technique for its timestamp with time zone datatype (aka timestamptz), except that the underlying storage format is UTC microseconds instead of milliseconds (and a non-epoch zero point).

So, timestamptz in PostgreSQL is fundamentally timezoneless, and is only shown with a timezone when converted to a string in an execution context that has a time zone. Same when parsing a string into a timestamp--uses the execution context's timezone, then stores/computes with the UTC microseconds.
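
This is easy to see by rendering the same timestamptz literal (borrowed from the examples earlier in this thread) under two different session time zones:

SET TIME ZONE 'UTC';
SELECT TIMESTAMPTZ '2001-02-16 20:38:40.12-08';  -- 2001-02-17 04:38:40.12+00
SET TIME ZONE 'US/Central';
SELECT TIMESTAMPTZ '2001-02-16 20:38:40.12-08';  -- 2001-02-16 22:38:40.12-06

Both results are the same point in time; only the string rendering differs.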

Further note: part of the reason IA uses 64-bit UTC milliseconds for its t_stamp columns in the historian is that they obviously cannot be misinterpreted ("bungled", "warped", "fumbled", et cetera) by some stupid database, and 64-bit integers are supported data types in all usable databases.

(Date/time data types are a true cluster-[expletive] across the various SQL brands--only slowly getting better.)


I understand that; as stated, I was only using the text output (the printed string interpretation).

I simply used AT TIME ZONE to force the string representation for the specific time zone(s) I desired; then I had no doubts as to which timezone the string representation was associated with.

I understand your points require consideration when one is feeding the query results into variables/objects for further processing and not just looking at the text output. However, in that case I suggest using Ignition's built-in system functions. https://docs.inductiveautomation.com/display/DOC81/system.tag.queryTagHistory