SQL query crashes client

I have a large SQL query (by large I mean it's pulling data from 20 tables and joining it) that had been working great until a particular point. The query pulls 15 minutes of data to perform calculations. I've been having trouble completing calculations for a specific time period: the query takes over 90 seconds and then ALWAYS crashes the client, whereas it usually takes about 2 seconds to complete. The only thing I've noticed about the data is that there is a duplicate timestamp for 2 consecutive values.
I've put in a try/except in an attempt to handle the error if there is one, but it does not even seem to throw an exception, it just crashes the client.

My question is: Can the problem be this duplicate timestamp? How can I handle it inside the query, or prevent Ignition from crashing entirely?
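One way to handle it inside the query on SQL Server 2008 R2 is `ROW_NUMBER() OVER (PARTITION BY ts ...)` to keep one row per timestamp. The same idea can also be applied to the fetched result rows in script; here is a minimal Python sketch (the column layout, timestamp first, is an assumption):

```python
# Keep only the first row seen for each timestamp before running
# calculations, so duplicate timestamps can't multiply the join.
def dedupe_by_timestamp(rows):
    seen = set()
    unique = []
    for row in rows:
        ts = row[0]          # assumption: timestamp is the first column
        if ts not in seen:
            seen.add(ts)
            unique.append(row)
    return unique

rows = [
    ("5:03PM", 10.0),
    ("5:04PM", 11.5),
    ("5:04PM", 11.7),   # duplicate timestamp
    ("5:05PM", 12.0),
]
print(dedupe_by_timestamp(rows))   # 3 rows, one per timestamp
```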

This is the only 15 minute set of data out of about 500,000+ records that is crashing the client.

Anyone run into a sql query crashing their client?

Just to mention, I found several duplicate timestamps in the next chunks of data and, while the queries are slow, they do not crash the client…
Unfortunately, I do not have control over the duplicate timestamp issue, as it comes from a client's proprietary system.

What version of Ignition are you using?

Version 7.3.3 (b570)
Java Version Sun Microsystems Inc. 1.6.0_31
and Microsoft SQL Server 2008 R2

I talked to one of the developers about what was encountered. Can you check whether there is an hs_err_pidXXXX file somewhere on that machine? It sounds like it's a Java crash. If you can find that file and send it in or attach it to this thread, we can take a closer look at this.

Searching for all or part of the file name ('hs_err_pid' and 'hs_err_pidXXXX') yielded no results. I also searched for 'hs_err_pid' as a word or phrase inside files, with no results either… Any idea where it might be? I never trust the older Windows (Server 2003 R2) search tool.

The XXXX refers to a number that is automatically assigned to that file, so I can't tell you what it would be. Try looking on the desktop, in Documents, the Java folder, and the Ignition folder. For some reason the file isn't placed in a consistent location on all machines.

Looked manually in all of the above folders and more… tried every other folder I could think of. Did a search for each part of the filename ('hs', 'err', 'pid') and nothing similar turned up. Any other options?
Is there some logging I can turn on? I can force the error/client crash to happen just by running the query on that chunk of data again. Could it be the version of Windows or Java?
Thanks for your help so far.

You may be running out of client memory. You could look at that as a possible cause. Another option is upgrading to 7.4: there were changes in Jython 2.5 (which is used in 7.4) that prevent the JVM from closing when it runs out of memory. I would look into the memory issue first. If you still can't find anything, I would suggest calling into tech support so we can look at the system.

Yea, I think I need to let the customer know the computer they installed Ignition on is a little dated to be a server. It's a 3.2 GHz Pentium 4 with 2 GB of RAM. Ignition has a max of 1 GB allocated, and with only 1 client open it is using 836 MB of that.
I’ll look into this and see if the usage spikes during the bad SQL query.

Is the database also on the same computer?

Also, you mentioned that they were pulling the 15 minutes of data from 20 tables to run calculations, how many rows are they pulling?

Interesting. I ran the script again on a set of data that was causing it to crash, and this time, while monitoring the Gateway overview, I noticed that when the query finally timed out (due to being a slow query) the memory used by Ignition dropped to 30 MB (whereas it HAD been at 850 MB). This time the client did not crash and the rest of the script continued. I think you're on to something with the memory…

[quote]Is the database also on the same computer?

Also, you mentioned that they were pulling the 15 minutes of data from 20 tables to run calculations, how many rows are they pulling?[/quote]

No, the database is on a much more powerful server.

15 minutes of data, 1 minute intervals. So it should be 15 rows from each table unless there are duplicate time stamps.

You can adjust the client memory in the Designer: go to Project Properties > Client > Launching. Just keep in mind that if you're using 32-bit Java you can only use 1 GB of memory in total, so make your adjustment accordingly. Give that a try and see if it works.

Yea, I don’t want to give Ignition more than half of the total system memory.

I did some testing in the Designer and found that every table with a duplicate timestamp was creating a duplicate row across all the columns in the join. For instance, if each table had 2 rows with a '5:04PM' timestamp, the duplication compounded with every additional table joined; it appeared to grow exponentially, because I ended up with several hundred rows… That was most likely the problem.
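That blow-up is multiplicative: each table carrying a duplicated timestamp doubles the row count of the join, so a single duplicate in each of N tables turns one timestamp into 2^N rows. A small sketch of the effect, joining made-up in-memory rows on their timestamp key:

```python
# Simulate joining N tables on a timestamp key where each table has
# 2 rows for the same timestamp: the join yields 2**N matching rows.
def join_on_timestamp(left_rows, right_rows):
    joined = []
    for l in left_rows:
        for r in right_rows:
            if l[0] == r[0]:              # match on timestamp
                joined.append(l + r[1:])  # merge the value columns
    return joined

# 10 fake tables, each with two rows sharing the '5:04PM' timestamp
tables = [[("5:04PM", i), ("5:04PM", i + 0.5)] for i in range(10)]

result = tables[0]
for t in tables[1:]:
    result = join_on_timestamp(result, t)

print(len(result))   # 2 rows per table across 10 tables -> 2**10 = 1024
```

With 20 tables the same duplicate would produce over a million rows, which is consistent with a query that normally takes 2 seconds suddenly running 90+ seconds and exhausting client memory.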

Whoever wrote the timestamp conversion script for this database rounded to the nearest minute: seconds 0 through 29 round down, and seconds 30 through 59 round up. Thus if two consecutive times are 5:03:35PM and 5:04:29PM, they both round to 5:04PM.
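For anyone following along, the rounding scheme described above can be reproduced in a few lines; this is a sketch of the logic, not the client's actual conversion script:

```python
from datetime import datetime, timedelta

def round_to_minute(ts):
    # Seconds 0-29 round down, 30-59 round up, matching the
    # conversion script described above.
    if ts.second >= 30:
        ts += timedelta(minutes=1)
    return ts.replace(second=0, microsecond=0)

a = round_to_minute(datetime(2012, 1, 1, 17, 3, 35))  # 5:03:35 PM
b = round_to_minute(datetime(2012, 1, 1, 17, 4, 29))  # 5:04:29 PM
print(a == b)   # both collapse to 5:04 PM -> True
```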

Interesting that this is the first time this has happened so far in the data.

Thanks for your help, I will be selling them 7.4 as well though!

Sorry, I misread about the client's memory, I will try increasing it a little. Thanks.

I am glad that you were able to source out the problem. That is a really strange way to handle the time stamps.

I just saw that there was another post, sorry, let me know if increasing the memory helped at all.

The memory increase appears to have helped; it has not been crashing as frequently. In fact, I'm running through the last 6 months of data and it has been powering through so far. The problem is still the duplicate timestamps. We wrote a script to truncate the timestamps, but found that there is a problem at the equipment level: the MODBUS polling device is occasionally acquiring data at less-than-1-minute intervals (it seems to gather every 59.75 seconds).
There just seems to be a new issue every time another is resolved.
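A minimal sketch of the truncation approach, and why the 59.75 s polling interval can still collide: dropping the seconds only merges readings that land inside the same minute, which is exactly what the drifting poll rate occasionally produces (times here are made up):

```python
from datetime import datetime

def truncate_to_minute(ts):
    # Drop the seconds entirely instead of rounding to the
    # nearest minute.
    return ts.replace(second=0, microsecond=0)

# Polling every 59.75 s instead of 60 s drifts, so occasionally two
# consecutive readings fall inside the same minute and still collide.
t1 = datetime(2012, 1, 1, 17, 3, 0)
t2 = datetime(2012, 1, 1, 17, 3, 59, 750000)  # 59.75 s later
print(truncate_to_minute(t1) == truncate_to_minute(t2))  # True
```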

From the way you're describing the problem, it sounds like someone was making small adjustments in different places to accommodate some issue elsewhere in the system. Now that you're fixing it, it's causing a domino effect. I am glad to hear that you're narrowing it down.

With the polling rate falling short of 1 minute, have you thought about changing it to 500 ms? This would only add an additional 15 sets of data. However, I don't know how that would affect the calculations you're going to be doing with the 20 tables.