Tag History goes bonkers with very large number

I have a type of device that, when a board fails, drives the tag to a very large number.
By large I mean something like 340,282,346,638,528,860,000,000,000,000,000,000,000. When this happens, the tag history system constantly stores that number to the history database, despite the tag being set to only store on change.

For instance, I had 3 tags that this happened to over the last few days. These devices only update every 30 minutes, yet I have 57,264 rows of data stored for those three tags, with almost every row holding the same number. I see 4-5 entries for every minute.

Version 7.8.4 (b2016082217)
OPC Tag datatype = Float

That sounds like very unhelpful behaviour from the device!

340,282,346,638,528,860,000,000,000,000,000,000,000 is the maximum value that can be held in a single-precision floating point number, shown here after being cast to a double. I’d check whether the device is sending the same binary value (01111111011111111111111111111111, i.e. 0x7f7fffff) each time. If it is, I’d assume there is a problem with the Ignition historical logger and let IA know.
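For example, you can decode that bit pattern from any Python prompt (or the script console, since this only uses the standard struct module):

```python
import struct

# 0x7f7fffff is the bit pattern of the largest finite IEEE-754
# single-precision value (FLT_MAX).
raw_bits = 0x7f7fffff
as_float = struct.unpack('>f', struct.pack('>I', raw_bits))[0]

print(as_float)           # 3.4028234663852886e+38
print('%.0f' % as_float)  # 340282346638528859811704183484516925440,
                          # which rounds to the 340,282,346,638,528,860,000,... shown above

# The reverse direction: pack a suspect reading back to its raw bits to see
# whether the device really repeats the identical pattern every poll.
print(hex(struct.unpack('>I', struct.pack('>f', as_float))[0]))  # 0x7f7fffff
```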

[quote=“AlThePal”]That sounds like very unhelpful behaviour from the device!

340,282,346,638,528,860,000,000,000,000,000,000,000 is the maximum value that can be held in a single-precision floating point number, shown here after being cast to a double. I’d check whether the device is sending the same binary value (01111111011111111111111111111111, i.e. 0x7f7fffff) each time. If it is, I’d assume there is a problem with the Ignition historical logger and let IA know.[/quote]

Yeah, definitely. I'm not sure what causes the device to send that particular number, but it usually correlates with a bad board, and it seems to be the same number every time. It makes the tag history system freak out and write constant records. The device isn't updating more than every 30 minutes, so there is no reason, other than a bug in the tag history system, for it to store multiple records per minute when it is set to only store on a change. When I realized what was going on I had to go back and delete millions of rows out of the SQLTH_Data table. After doing that, 3 more tags on various devices did the same thing over the weekend and ran it up to 52k entries, despite the value never changing.
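For anyone else cleaning up after this, a grouped count against the history data table shows how badly each tag has piled up. This is only a sketch run from the Ignition script console; the datasource name ("History") and table name (sqlth_data) are placeholders, so swap in your own datasource and whatever data/partition tables your install actually uses.

```python
# Sketch only: count history rows per tag to spot runaway logging.
# "History" and sqlth_data are placeholders for your datasource and
# data/partition table names.
query = """
    SELECT tagid, COUNT(*) AS row_count
    FROM sqlth_data
    GROUP BY tagid
    ORDER BY row_count DESC
"""
rows = system.db.runQuery(query, "History")
for row in rows:
    print("tagid %s has %s rows" % (row["tagid"], row["row_count"]))
```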

I guess I could use scaling with a clamp on the tags that are normally affected to get around it.
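Something along these lines is the filtering logic I have in mind; this is just an illustration, not any specific Ignition feature, and the function name and ceiling value are made up:

```python
# Illustration only: filter out the FLT_MAX sentinel a failing board sends.
# Pick a ceiling safely above anything the instrument can legitimately read.
FLT_MAX = 3.4028234663852886e+38   # decimal value of the 0x7f7fffff pattern
CEILING = 1.0e6

def clamp_reading(value, ceiling=CEILING):
    """Return the reading if it looks plausible, otherwise None so the
    caller can discard it or hold the last known good value."""
    if value is None or value >= ceiling:
        return None
    return value

print(clamp_reading(72.4))     # 72.4
print(clamp_reading(FLT_MAX))  # None
```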