Easy Chart DB Pen Resolution

I have a client that is using Transaction Groups as their historian. One of their tables has 47 columns and inserts new rows at a 5-second interval.

When using DB Pens with the Easy Chart, it tries to pull every data point within the selected timeframe. Is there a way to set a resolution or grab every nth row? I know it can be done with the Basic Chart, since you load the data directly and a properly structured query would accomplish it. The Easy Chart's historical settings let you set data resolution or density, but there's nothing equivalent for DB Pens?

For example, if a user selects 5 days' worth of data, the Easy Chart is trying to handle roughly 4 million data points (47 pens at a 5-second interval over 5 days). But there is no way to tell the Easy Chart to return only 1000 data points spaced evenly apart.

Any ideas or thoughts will be appreciated! Thanks!

There is no way to set the resolution for the DB pens that I am aware of. You can use the Max Selection to limit how much data they are allowed to pull, though that is not always applicable.

Honestly, the basic chart is probably the better option in this case.

Or you can tell them that if they want to pull 5 days of data, they should expect it to take a bit to retrieve. I’m sure that will stop the complaints :rofl:

This is why my time series database cache module exists. You can pull millions of rows if you need to, without crashing your client. (My NoteChart module is needed for cache access from EasyNoteChart DB pens.)

Thanks for the ideas. I actually figured it out last night.

I’m going to provide a numeric text field so a user can enter the number of data points they want to display (the default will be 500). When they make a timeframe selection, the screen will divide the number of hours/minutes by the data points and then build a string that will be used in the "where" property of the Easy Chart. Like this: WHERE date_part('minute', t_stamp) IN (0,10,20,30,40,50)

With a script I’ll be able to iterate through the timeframe and build that "where" string as they zoom in/out.
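A minimal sketch of that script, as I'd picture it: pick a minute step so roughly the requested number of points come back, then emit the date_part() filter for the Easy Chart's "where" property. The function name is illustrative, and the `::integer` cast assumes PostgreSQL; for ranges long enough that the step exceeds an hour you'd switch to hour-based filtering instead.

```python
def build_where_clause(range_minutes, target_points):
    # Minutes between samples so we return about target_points rows.
    step = max(1, range_minutes // target_points)
    # Cap at one sample per hour; beyond that, filter on hours instead.
    step = min(step, 60)
    in_list = ",".join(str(m) for m in range(0, 60, step))
    return "date_part('minute', t_stamp)::integer IN (%s)" % in_list
```

For the 5-day example above (7200 minutes at 500 points) this yields a 14-minute step, i.e. `IN (0,14,28,42,56)`.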

My little bit of testing showed that it reduces the sluggish response from 10-30 seconds down to 1-2 seconds.

It’s working…somewhat.

The transaction group timestamps are not uniform: logging every 5 seconds does not mean a timestamp at 0, 5, 10, 15, 20 seconds and so on. It might be 4, 9, 14, 19… and each table is different.

So one of my WHERE clauses is as follows:
date_part('second',t_stamp)::integer = 0 AND date_part('minute',t_stamp)::integer IN (0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58)

If I want a record every other minute, then I also have to pin 'seconds' to a specific value, because at any given minute there will be multiple records (one every 5 seconds). But with a variable t_stamp it may get hairy…

Any thoughts? I also have to consider that the interval may change, meaning the every-5-seconds pattern can shift as the groups are started/stopped or the Gateway restarts. I can’t rely on the seconds always being uniform.

Try using the window function row_number() in a view in your db to pick out just the first row in any given minute. It’ll have to be organized as a nested query because window functions aren’t allowed in WHERE or HAVING clauses. Something like this:

    SELECT * FROM (
        SELECT *, row_number() OVER (PARTITION BY date_trunc('minute', t_stamp) ORDER BY t_stamp) AS row_in_minute
        FROM original_table_name
    ) subq
    WHERE row_in_minute = 1

pturmel - not 100% sure I follow what you suggested. Are you saying to build a nested view with row_number() and point the Easy Chart to the view instead of the source transaction group tables?

What I did do: when the script builds the WHERE clause for the Easy Chart, it first runs a SELECT DISTINCT on the transaction group table to get the time-value integers that actually exist, and then builds the date_part string for specific seconds/minutes/hours depending on the time range that was selected.
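As a sketch of that DISTINCT-based variant: the distinct-seconds query itself would run through the gateway (e.g. via Ignition's system.db.runQuery), so here the result is just passed in as a list, and the clause is anchored on one second value that really exists in the table. The function name and the choice of the smallest second are illustrative assumptions.

```python
def build_where_from_actual(distinct_seconds, minute_step):
    # Anchor on a second value actually present in the table,
    # e.g. 4 if the stamps land on :04, :09, :14, ...
    anchor = min(distinct_seconds)
    minutes = ",".join(str(m) for m in range(0, 60, minute_step))
    return ("date_part('second', t_stamp)::integer = %d "
            "AND date_part('minute', t_stamp)::integer IN (%s)"
            % (anchor, minutes))
```

This still breaks if the anchor second drifts after a group restart, which is exactly the non-uniformity problem raised earlier in the thread.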

Taking what you suggested may work better; in fact, if I can build a proper view with row_number(), then I wouldn’t need the time values and could just return every nth row based on the number of data points the end user wants.

Yes. You’ll get one row per minute, no matter how many are really there.

The partition by clause determines the size of the segment to collapse into one row.