I am doing a search on a DB table to generate a dataset for a template repeater. When the search returns too large a result set it can bog down the system, because all of the templates get loaded. I don't know what the per-display limit for the repeater is, but if I just have the query load the full dataset there is no slowdown; if the repeater is connected and the result has a large number of rows, Ignition will hang and eventually shut down the client.
What I am thinking of doing is:
A parameter on the root container that holds the named query result for the search --> the full search dataset.
A template repeater that grabs rows x -> x+20 from that dataset to build its own dataset, showing the first 20 rows of data.
A button that shifts the repeater to row x+20 as the new start point.
This way it never tries to load more than 20 rows into templates at a time, but the operator can still page through the full search results.
The part I am currently stuck on is how to get the full dataset into a filtered sub-dataset that changes based on user input on screen. I don't see a way to do this in an expression, so I'm thinking I would need runScript to generate the sub-dataset. Before I go down that route, I'm looking for any other ideas / solutions.
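For reference, the kind of runScript helper I'm picturing would be something like this (a rough sketch only; the function and property names are placeholders, and it just uses the standard system.dataset functions):

```python
# Project script module, e.g. "paging" (names are placeholders).
# Returns a pageSize-row slice of the source dataset starting at startRow.
def getPage(source, startRow, pageSize=20):
	headers = list(source.getColumnNames())
	end = min(startRow + pageSize, source.getRowCount())
	rows = [[source.getValueAt(r, c) for c in range(source.getColumnCount())]
			for r in range(startRow, end)]
	return system.dataset.toDataSet(headers, rows)
```

The repeater's dataset could then be bound with something like runScript("paging.getPage", 0, {Root Container.searchResults}, {Root Container.startRow}), assuming a version of Ignition where runScript accepts extra arguments.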
The template repeater is a memory hog. Give the client as much memory as you can, and use either a scripted binding or a LIMIT clause in your SQL to ensure there aren't too many rows. You might find my view() expression function helpful in local filtering cases (part of the free Simulation Aids module).
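For instance, a LIMIT-based page query might look roughly like this (a sketch only, assuming MySQL/Postgres-style LIMIT/OFFSET; the table, columns, and property names are placeholders):

```python
# Hypothetical button or propertyChange script: fetch only one page from the DB.
# searchTerm, startRow, and pageSize are placeholder custom properties.
root = event.source.parent
query = """
	SELECT id, description, status
	FROM work_orders
	WHERE description LIKE ?
	ORDER BY id
	LIMIT ? OFFSET ?
"""
page = system.db.runPrepQuery(query, ['%' + root.searchTerm + '%', root.pageSize, root.startRow])
# Hand the page to the repeater (templateParams is the repeater's dataset property).
root.getComponent('Template Repeater').templateParams = system.dataset.toDataSet(page)
```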
view() is an expression function, not a scripting function. So you can make a custom dataset property that uses the view() expression in its binding and then grab that property in your scripting.
You could also install that module and then use the expression language from scripting as well, via system.util.evalExpression, if you really want to call view() directly in a script.
RowCount and StartRow are set by the Next/Prev navigation buttons, whose visibility is toggled on/off based on high/low limits computed from TotalRows, StartRow, and RowCount.
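Something along these lines (a sketch only; the property names are illustrative):

```python
# Hypothetical 'Next' button actionPerformed script: advance one page but never
# past the end of the results. TotalRows, StartRow, and RowCount are custom
# properties on the root container.
root = event.source.parent
lastStart = max(root.TotalRows - root.RowCount, 0)
root.StartRow = min(root.StartRow + root.RowCount, lastStart)
# A 'Prev' button mirrors this: root.StartRow = max(root.StartRow - root.RowCount, 0)
# The buttons' Visible properties are then bound to comparisons of StartRow
# against 0 and lastStart.
```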
Ewww! Don't recompute the select string. That will have horrible performance. Use the view() function's support for args to process dynamic values without reparsing the PseudoSQL.
Just for my clarification, how does recomputing the select string change the performance? Is it an order-of-operations issue, in that on a change it recalculates the view statement, which then changes the select string, which causes another change, so it runs the view statement again?
Any time the first argument to view() changes, or the dataset's column names change, the internal jython script that solves the request must be regenerated and then compiled before it is executed with the rest of the arguments. If the string and dataset columns still match, the previously compiled jython is simply executed with the fresh arguments.
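To illustrate the effect (this is not the module's actual code, just a sketch of the caching behavior described above):

```python
# Conceptual illustration only: results of the expensive parse/codegen/compile
# step are cached, keyed on the PseudoSQL string plus the source dataset's
# column names. A cache miss pays the full cost; a hit just re-executes the
# already-compiled code with the fresh argument values.
_compiled = {}

def compile_pseudosql(pseudoSQL, columnNames):
	# Stand-in for the expensive regenerate-and-compile step.
	def run(dataset, *args):
		return dataset   # placeholder result
	return run

def run_view(pseudoSQL, dataset, *args):
	key = (pseudoSQL, tuple(dataset.getColumnNames()))
	fn = _compiled.get(key)
	if fn is None:
		fn = compile_pseudosql(pseudoSQL, key[1])   # expensive path
		_compiled[key] = fn
	return fn(dataset, *args)                       # cheap path: fresh args only
```

This is why passing dynamic values as view() args, with a constant select string, stays on the cheap path, while rebuilding the select string forces the expensive path on every change.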
Turn on the function's DEBUG logger to get the actual internal script it generated, and get timing details for every execution.