JSON return format for Named query in script

Is there an easy way to get a named query called from within a script to return JSON, similar to the option given when calling a named query from a property binding?

Currently I loop through the returned dataset and create the JSON format myself; however, for queries with a large number of returned rows this really slows down execution time.

I think there are two ways you could reduce that overhead:

A. You can actually try this; it may give you exactly what you are looking for:
system.util.jsonEncode(system.dataset.toPyDataSet(system.db.runNamedQuery(project, path, params)))
This converts the result into a PyDataSet and then into JSON. The compute overhead is still done on the client, but it may be faster.

B. Moving the formatting onto your database and adjusting your query to build a JSON structure at runtime, so that the query returns a JSON string you can parse with system.util.jsonDecode() into the object you need. Something like this should help you accomplish that: https://www.sqlshack.com/convert-sql-server-results-json/

Thanks for pointing me in the right direction. I implemented your option B on a Postgres DB using the json_agg() function.

SELECT json_agg(t)
FROM ("OLD Query") t

This returned a one-cell dataset with the JSON object perfectly formatted within.
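On the scripting side, pulling the JSON back out of that dataset only takes a few lines. A minimal sketch, assuming a hypothetical named query path and no parameters:

# Run the named query whose SELECT is wrapped in json_agg()
# ("Path/To/MyQuery" is a placeholder; gateway-scoped scripts also need the project name argument)
ds = system.db.runNamedQuery("Path/To/MyQuery", {})
jsonText = ds.getValueAt(0, 0)            # the single cell holding the JSON string
data = system.util.jsonDecode(jsonText)   # now an ordinary list/dict to work with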

Now to update all the places where we did this manually :frowning:


# loss_id holds the dataset returned by your named query (placeholder name below)
loss_id = yourNamedQueryDataset
pyData = system.dataset.toPyDataSet(loss_id)

for row in range(pyData.getRowCount()):
    self.getSibling("loss_code_lbl").custom.loss_id = pyData.getValueAt(row, 'id')

You can also try it this way.

I just want to throw this ideas post in the thread - hopefully people see it, agree this would be really convenient native functionality, and IA can add it in a release!

The more Perspective-focused a project becomes, the more often you end up working with arrays of objects instead of datasets. It would be an awesome feature if you could grab the data this way without having to execute another script to convert it every time you want to manipulate it.

Just wanted to chime in, since I was searching for this answer myself.

This proposed solution doesn't seem equivalent to what I want, as there are differences between a named query binding with "json" return format and JSON encoding a PyDataSet.

For example, using a named query binding with the "json" return format gives the following:

{
  "Id": 1,
  "IsCritical": false,
  "Order": 1,
  "ValueTypeId": 1,
  "ValueType": "float",
  "ValidationOperatorId": 3,
  "ValidationOperator": "Greater Than",
  "ComparisonValueFloat": 7,
  "ComparisonValueInt": null,
  "ComparisonValueBool": null,
  "ComparisonValueString": null,
  "ComparisonValueDatetime": null,
  "ComparisonValueDropdown": null
}

But using the proposed solution (system.util.jsonEncode after system.dataset.toPyDataSet) gives the following:

{"columns":[{"name":"Id","type":"java.lang.Long"},{"name":"IsCritical","type":"java.lang.Boolean"},{"name":"Order","type":"java.lang.Integer"},{"name":"ValueTypeId","type":"java.lang.Long"},{"name":"ValueType","type":"java.lang.String"},{"name":"ValidationOperatorId","type":"java.lang.Long"},{"name":"ValidationOperator","type":"java.lang.String"},{"name":"ComparisonValueFloat","type":"java.lang.Double"},{"name":"ComparisonValueInt","type":"java.lang.Integer"},{"name":"ComparisonValueBool","type":"java.lang.Boolean"},{"name":"ComparisonValueString","type":"java.lang.String"},{"name":"ComparisonValueDatetime","type":"java.util.Date"},{"name":"ComparisonValueDropdown","type":"java.lang.Long"}],"rows":[[1,false,1,1,"float",3,"Greater Than",7.0,null,null,null,null,null]]}

I have these functions in my project library; maybe they'll be enough for most people - they are for me:


def to_dicts(ds):
	"""
	Make a list of dicts from a dataset.
	
	Each row is converted to a dict, with keys matching the dataset's column names.
	"""
	columns = system.dataset.getColumnHeaders(ds)
	return [dict(zip(columns, row)) for row in system.dataset.toPyDataSet(ds)]


def make_key(row, columns, keys):
	"""Build a dict key from one or more column values of a row (multiple values are joined with '_')."""
	if isinstance(keys, list):
		return '_'.join(str(row[columns.index(k)]) for k in keys)
	else:
		return str(row[columns.index(keys)])

def to_dict(ds, keys):
	"""
	Make a dict from a dataset.
	
	Each row is converted to a dict key/value pair
		- the key is built from the values (cast to strings) in the columns specified by keys,
		- the value is a dict with all the values in that row, with keys matching the column names.
	"""
	columns = system.dataset.getColumnHeaders(ds)
	return {
		make_key(row, columns, keys): dict(zip(columns, row))
		for row in system.dataset.toPyDataSet(ds)
	}

to_dicts is straightforward: it creates a list of dictionaries, one for each row in the dataset.
to_dict does pretty much the same, except that instead of a list of dicts it builds a dict of dicts, the keys being the values in the columns passed as the 'keys' argument.

Example:
headers = ["foo", "bar", "baz"]
data = [
	["foo1", "bar1", "baz1"],
	["foo2", "bar2", "baz2"],
	["foo3", "bar3", "baz3"],
	["foo4", "bar4", "baz4"],
]
ds = system.dataset.toDataSet(headers, data)

>>> import json
>>> print json.dumps(utils.dataset.to_dicts(ds), indent=2)
[
  {
    "foo": "foo1", 
    "bar": "bar1", 
    "baz": "baz1"
  }, 
  {
    "foo": "foo2", 
    "bar": "bar2", 
    "baz": "baz2"
  }, 
  {
    "foo": "foo3", 
    "bar": "bar3", 
    "baz": "baz3"
  }, 
  {
    "foo": "foo4", 
    "bar": "bar4", 
    "baz": "baz4"
  }
]

>>> print json.dumps(utils.dataset.to_dict(ds, 'foo'), indent=2)
{
  "foo1": {
    "foo": "foo1", 
    "bar": "bar1", 
    "baz": "baz1"
  }, 
  "foo2": {
    "foo": "foo2", 
    "bar": "bar2", 
    "baz": "baz2"
  }, 
  "foo3": {
    "foo": "foo3", 
    "bar": "bar3", 
    "baz": "baz3"
  }, 
  "foo4": {
    "foo": "foo4", 
    "bar": "bar4", 
    "baz": "baz4"
  }
}
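Tying this back to the original question: feeding the result of to_dicts into system.util.jsonEncode should get you very close to what the binding's "json" return format produces. A quick sketch (the named query path and parameters are placeholders, and utils.dataset is just where these functions live in my examples):

# Placeholder path/params; adjust for your project
ds = system.db.runNamedQuery("Path/To/MyQuery", {})
rows = utils.dataset.to_dicts(ds)           # list of {column: value} dicts
jsonText = system.util.jsonEncode(rows)     # JSON string, one object per row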

I would suggest looking for the JSON function for your specific database, like I showed in my previous comment on this thread for a Postgres install. That was the easiest way I found, because I was also having issues with the PyDataSet and encoding method.