queryTagHistory returns all zeros for a tag history provider

Hi all,
I'm wondering if anyone else has run into this issue and can help me out.

The tag was online, reading data, and recording it to our database, and I can see the data when I add the tag to a chart. But when I open the exported file, every value is zero. Changing the endTime to system.date.now() includes data from the first of the current month, but everything before that is still all zeros.

It only seems to affect tags in our Edge server's tag history provider. Tags in [default] work fine.

startTime = system.date.getDate(2023, 1, 1) # yyyy, mm, dd
endTime = system.date.getDate(2024, 1, 1)

dataSet = system.tag.queryTagHistory(paths=['[IGN-Edge]Proleit Tracking/Utilities/Nitrogen/Outlet Flow/Value'], 
	startDate=startTime, endDate=endTime, 
	intervalMinutes=5, aggregationMode="Maximum", returnFormat='Wide')
spreadsheet = system.dataset.dataSetToExcel(True, [dataSet])
filePath = "C:\\Users\\username\\Documents\\N2_flow_rate_2023.xls"
system.file.writeFile(filePath, spreadsheet) 

Possibly related: I made an earlier post about a pre-processed partition table with some corruption. That table was from 2020 (IIRC), though, which is well outside this query's date range.

If you have any ideas about fixing this, it would be greatly appreciated! Thanks!

I would start small: select only one hour's or one day's worth of data. A year of 5-minute-interval data is a LOT of rows (roughly 105,000). I'd be surprised if you weren't timing out. Also, before writing the data out to a file, loop through the dataset in the Script Console to confirm it actually contains valid values.
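A minimal sketch of that sanity check. In the Script Console you'd iterate the real dataset with `ds.getRowCount()` / `ds.getValueAt(row, col)`; here `count_nonzero_samples` is a hypothetical helper fed a plain list of rows so the logic is easy to try outside Ignition. Column 0 is assumed to be t_stamp in Wide format.

```python
# Hypothetical helper: count rows whose value columns hold something nonzero.
# Inside Ignition you would first flatten the dataset, e.g.:
#   rows = [[ds.getValueAt(r, c) for c in range(ds.getColumnCount())]
#           for r in range(ds.getRowCount())]
def count_nonzero_samples(rows):
    nonzero = 0
    for row in rows:
        # skip column 0 (t_stamp); flag the row if any value is nonzero
        if any(v not in (0, 0.0, None) for v in row[1:]):
            nonzero += 1
    return nonzero

# fake [t_stamp, value] rows for a quick check
sample = [
    ["2023-01-01 00:00", 0.0],
    ["2023-01-01 00:05", 12.7],
]
print(count_nonzero_samples(sample))  # 1
```

If this prints 0 against the real dataset, the zeros are coming back from the query itself, not from the Excel export step.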

If the year in endTime was a typo and you were only trying to return one day of data anyway, startTime and endTime would be identical. You can advance endTime by one day with system.date.addDays or set the time explicitly. Also note that getDate's month parameter is 0-based, so a month of 1 is February.
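To make the 0-based month concrete, here is a pure-Python stand-in (the real call is system.date.getDate, which inherits java.util.Calendar's 0-based months; `get_date_zero_based` is only an illustration of that convention):

```python
from datetime import datetime, timedelta

# Illustrative stand-in for system.date.getDate's 0-based month
def get_date_zero_based(year, month, day):
    return datetime(year, month + 1, day)

start = get_date_zero_based(2023, 0, 1)  # month 0 = January, so Jan 1, 2023
end = start + timedelta(days=1)          # one day of history, not zero
print(start.strftime("%Y-%m-%d"), end.strftime("%Y-%m-%d"))
```

So getDate(2023, 1, 1) in the original script is actually February 1, 2023, not January 1.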

I appreciate the reply!

Adding a timeout parameter was only necessary for the default tag provider (the tags that actually return correct historical data). There's no error handling, so a read timeout would stop the script.
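One way to keep a timeout from killing the whole script is a small try/except wrapper. This is a generic sketch: `safe_query` and `timing_out_query` are hypothetical names, and inside Ignition you'd pass something like `lambda: system.tag.queryTagHistory(paths=[...], rangeHours=1)` as the callable.

```python
# Hypothetical wrapper: run a history-query callable, swallow the error,
# and return a default instead of letting the exception stop the script.
def safe_query(query_fn, default=None):
    try:
        return query_fn()
    except Exception as e:
        print("history query failed: %s" % e)
        return default

# stand-in for a query that hits a read timeout
def timing_out_query():
    raise IOError("Read timed out")

print(safe_query(timing_out_query))  # None
```

A usage note: returning a sentinel like None lets the rest of the script decide whether to retry with a smaller range or skip that tag, instead of dying mid-export.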

I tried checking the values in the Script Console, and they're all zeros even with only an hour of data.

I did notice that some of the partition tables don't seem to follow the normal sqlt_data_ naming scheme.
I'm probably going to need to reach out to support for this.