Query help sql date range between grouping 7am to 7am

I think I got it.

After the `)t`, I put a left join of (most of that same query) as sub3.

I joined on

on format(dateadd(hh,-8, t.t_stamp), 'yyyy-MMM-dd') = sub3.day
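Worth noting: `FORMAT()` is comparatively slow in SQL Server. A sketch of the same day bucket using a date cast instead, assuming `sub3.day` is (or can be made) a `date` column:

```sql
-- Same 7am-to-7am day bucket as the FORMAT() join, but CAST ... AS date
-- is typically much cheaper for SQL Server to evaluate than FORMAT().
on cast(dateadd(hh, -8, t.t_stamp) as date) = sub3.day
```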

query takes 1.175 seconds right now

gets me the last 60 days of 7am-to-7am manual info, auto info, and code durations for one machine

Thanks for the help

Wow, that's quite a long time. How many rows does that return?

38 rows is the final result

it is gathering a ton more initially from both the subqueries

one of them is collecting the total of all time when the machine was running

one is getting manual and auto data for the last 60 days, no repeats

:astonished: That seems like ages for only 38 rows.

I will suggest that there is a better way. I don't really know/understand your schema or what you're trying to accomplish, but any time a query takes over a couple hundred ms to run and I'm not collecting 10,000+ rows, I take a step back.

2 Likes

it is collecting over 10,000 rows before the final trim

fetches 30,000 rows for one machine for the section

select	line, code, eventTable.t_stamp as t_stamp
from	eventTable
where	-- lower bound: the last event before 7am, 60 days back
	t_stamp >= coalesce((
			SELECT TOP 1 t_stamp FROM eventTable
			WHERE eventTable.t_stamp < DATEADD(hh, 7, DATEADD(dd, DATEDIFF(dd, 0, GETDATE() - 60), 0))
				and code < 260 and Line = 'B2'
			ORDER BY t_stamp DESC), '2021-01-01')
	-- upper bound: the first event after 7am today, else 7am today plus 5 hours
	and t_stamp <= coalesce((
			SELECT TOP 1 t_stamp FROM eventTable
			WHERE eventTable.t_stamp > DATEADD(hh, 7, DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0))
				and code < 260 and Line = 'B2'
			ORDER BY eventTable.t_stamp), DATEADD(hour, 5, DATEADD(hh, 7, DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0))))
	

500ms here for 30k+ rows

60ms the other subquery gets 200 rows

trims it all down to 38 rows, organized by day over 60 days

it would show all 60 days, but they don't run every day

I thought 1.2 seconds was doing really well for this

Though I also think I will set it as a query tag that gets driven once a day, when I figure out how in the next few hours.

Is there a way to make the subqueries much faster?

I am getting an error index 73 is out of range.

trying to speed up the query
if I am using this in my query:

subq.Line = case when :machine=1  then 'b1'
                 when :machine=2  then 'b2'
                 when :machine=3  then 'b3'
                 when :machine=4  then 'b4'
                 when :machine=5  then 'b5'
                 when :machine=6  then 'b6'
                 when :machine=11 then 'c1'
                 when :machine=12 then 'c2'
                 when :machine=13 then 'c3'
                 when :machine=14 then 'c7'
                 when :machine=15 then 'c5'
                 when :machine=16 then 'c6'
                 else null end )

Can I do it in the declare instead?
W3Schools, unless I missed it, didn't really cover DECLARE and SET.
So before my query I would write something like:

declare @thisVariable varchar(50)
set @thisVariable = (case when :machine=1  then 'b1'
                          when :machine=2  then 'b2'
                          when :machine=3  then 'b3'
                          when :machine=4  then 'b4'
                          when :machine=5  then 'b5'
                          when :machine=6  then 'b6'
                          when :machine=11 then 'c1'
                          when :machine=12 then 'c2'
                          when :machine=13 then 'c3'
                          when :machine=14 then 'c7'
                          when :machine=15 then 'c5'
                          when :machine=16 then 'c6'
                          else null end)

or if I say

declare @thisVariable varchar(50)
set @thisVariable = 'B2'

Then I get an "index 55 out of range" error.


Getting 1.5 to 3 seconds for the query when I use Execute, now that I am using the case statement for each machine.


I found that uptime and downtime are stored in the PLC, in tags, and I am now adding them to transactions, so I am removing the join for durations from the query for at least the few machines where I can.


How can I speed the query up? How do I fix those index errors, and can I use DECLARE and SET to handle some of the case statement early?

This is grouped by the day, offset by 8 hours.

How can I group in sets of 8 hours, though?

I think you need to add an index on t_stamp and include the fields you're returning in that index.

Subqueries are typically very fast. You probably don't have an index on t_stamp so it has to do a table scan to figure out what records apply to your query. You want to avoid table scans.

You probably don't have an index on the other 2 things in your where clause but I would check out performance after adding an index to t_stamp before going crazy with indexes. Too many indexes can be a bad thing too. I typically find that adding an index to t_stamp is sufficient on Ignition data.
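A sketch of that kind of covering index, using the table and column names from the query earlier in the thread (adjust to your schema):

```sql
-- Nonclustered index on t_stamp, with the returned columns included
-- so the subquery can be answered entirely from the index (no table scan).
CREATE NONCLUSTERED INDEX IX_eventTable_tstamp
ON eventTable (t_stamp)
INCLUDE (line, code);
```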

It's good to put Ignition and SQL on different servers on large systems. I don't think that applies to what you're doing in this case but it's something to keep in mind if you're doing a lot of transactions.

1 Like

If you're using MSSQL server, look into windowing functions.
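For instance, a window function can attach a per-day total to every row without a self-join or correlated subquery. A minimal sketch, assuming the columns from the earlier query:

```sql
-- Count of events per machine per day, carried on every row,
-- computed in one pass over the table via OVER (PARTITION BY ...).
SELECT line,
       code,
       t_stamp,
       COUNT(*) OVER (PARTITION BY line, CAST(t_stamp AS date)) AS events_that_day
FROM eventTable;
```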

1 Like

A CASE statement is for doing inline conditional stuff. You don't want to assign that to a variable, because that would mean you're doing dynamic SQL, which is something you should avoid at all costs.

It's hard to tell what your schema is, but typically you'll have a lookup table with a list of indexes (machine numbers) and descriptions. So you'd have a machineID and the machine name in that table. You could add another field to classify machines by groups or however you want to organize them. This allows you to collect the data with the machine index in your table and query it by the index. If you want to show the machine name, you just join with your lookup table, or use it to filter your data by a machine group.
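A minimal sketch of that pattern; the table and column names here are made up for illustration:

```sql
-- Hypothetical lookup table mapping machine IDs to line names and groups.
CREATE TABLE machineLookup (
    machineID    int PRIMARY KEY,
    machineName  varchar(10) NOT NULL,  -- e.g. 'b2', matches eventTable.line
    machineGroup varchar(20) NULL       -- optional classification
);

-- Filter by the integer index instead of a big CASE on the name:
SELECT e.line, e.code, e.t_stamp
FROM eventTable e
JOIN machineLookup m ON m.machineName = e.line
WHERE m.machineID = :machine;
```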

1 Like

I did index my table.
Tested my indexing to verify that the queries were using the index.
The query is fantastic.

Yet I still need help finding a way to group into 8-hour sections for one of the requests I have been given.

I am using MSSQL.

I will check out the windowing functions thanks.

I avoid joins like the plague, but I do have one to get descriptions of some things.

This is a bad habit. Joins, when properly indexed themselves, are very efficient. Way more than large CASE statements.

5 Likes

100% agree.

1 Like

I avoid big case statements like the plague too. I appreciate the help.

If you have a way to group on 8 hours, I would much appreciate it.
I am going to look into the windowing function which might solve it as you suggested, just trying to meet a deadline on another thing.

I posted a comprehensive solution for grouping by shift some time ago (fully dynamic table-driven shift definitions):

It relies on a PostgreSQL table-generating function for efficiency and convenience, but that can be replaced with a recursive common table expression.

The key is, given a timestamp, produce the timestamp for the beginning of the shift that falls in. As a computed column in a subquery. Then the outer query can group on that start-of-shift timestamp.

The comprehensive solution provides a joinable, dynamically-generated set of shift periods that can also be used to truncate timespans.
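For fixed 8-hour shifts the start-of-shift column can be computed with integer division on hours. A sketch; the anchor literal and shift length are assumptions to adjust (pick an anchor on a shift boundary at or before your earliest data):

```sql
-- Bucket each t_stamp into its 8-hour shift, anchored at 07:00.
-- DATEDIFF gives whole hours since the anchor; integer division by 8
-- then multiplication by 8 snaps that down to the shift start.
SELECT shiftStart, COUNT(*) AS events
FROM (
    SELECT DATEADD(hour,
                   (DATEDIFF(hour, '2021-01-01 07:00', t_stamp) / 8) * 8,
                   '2021-01-01 07:00') AS shiftStart,
           *
    FROM eventTable
) t
GROUP BY shiftStart
ORDER BY shiftStart;
```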

1 Like

A dynamically-generated set of shifts sounds good for this.
I don't understand how it works yet, though if I had shift times that changed, I would definitely use it.

I was hoping that, since I was already offsetting days with addition for formatting, there would be a way to scale the t_stamp for grouping: scale down so 8 hours groups as one hour, or scale up so each 8-hour shift maps to one day, and then reverse it afterward. That seemed like the best option.

Is there a way to multiplex and demultiplex the grouping of t_stamps like that?

Do this.

1 Like

Thanks, I can do that. Even with just that bit quoted, I had to read it a few times to grasp it. Thanks again.

Seems so obvious now that you said it. Thanks so much.