In Ignition Perspective, I want to replace numeric values (1, 2, 3) in a table with textual representations like "moderate", "tolerable", "danger", etc.
Options:
- Do it when generating the data, e.g. with a CASE expression in SQL (see the sketch after this list).
- Run a script transform on the Table component's data property to replace the numbers in the required column.
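For the SQL route, a minimal sketch (table and column names here are hypothetical; swap in your own):

SELECT
    CASE risk
        WHEN 1 THEN 'moderate'
        WHEN 2 THEN 'tolerable'
        WHEN 3 THEN 'danger'
        ELSE ''
    END AS risk_text
FROM my_table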
If you need more help, supply more information:
- What is the datasource?
- Is it JSON or a dataset?
Right now it's just a fixed dataset I copied onto the table, but in the future it will be a Named Query linked to a Connection.
OK, I'll try conditionally changing the table with an external view and something like:
if({view.params.rowData.RIESGO} == "1", "Important",
if({view.params.rowData.RIESGO} == "2", "tolerant",
if({view.params.rowData.RIESGO} == "3", "fine", "")))
If it's going to be a Named Query, set the binding's return format to json instead of auto. Then attach the following transform:
def transform(self, value, quality, timestamp):
    # Pay attention to the type of the returned values. If they're actually strings,
    # you'll need to cast them before running them through the dict, or change 1 to "1".
    translations = {
        1: "Important",
        2: "tolerant",
        3: "fine"
    }
    # This list should contain each of the columns you need to translate.
    columns_to_translate = [
        "population"
    ]
    new_data = []
    for row in value:
        row_object = {}
        for column in row:
            if column in columns_to_translate:
                data = translations.get(row[column], "")
            else:
                data = row[column]
            row_object[column] = data
        new_data.append(row_object)
    return new_data
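For illustration, the json return format hands the transform a list of row dicts; here's the same mapping applied to hypothetical rows (runnable in the Script Console):

rows = [
    {"city": "Quito", "population": 1},
    {"city": "Lima", "population": 3},
]
translations = {1: "Important", 2: "tolerant", 3: "fine"}
# dict(r, population=...) copies each row with the translated value swapped in
print [dict(r, population=translations.get(r["population"], "")) for r in rows]
# -> population becomes 'Important' and 'fine' respectively (dict key order may vary)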
Will do tomorrow, good day.
If you have a lot of data, running a script to update every row might be inefficient. A solution I implemented, and as implied by @Transistor, can be embedded in the query itself with appropriate formatting. Note my example is from PostgreSQL, but the concept should be transferable. If Named Queries had an option to return JSON like query bindings do, this wouldn't be as complicated (*cough* feature request *cough*).
Essentially, update your main query to generate the styling values based on a criterion (level, in this case), then use an outer query to convert that into pseudo-JSON results.
SELECT FORMAT(
    '{
    "level":{"value":%1$s, "style":{"backgroundColor": "%3$s", "color": "%4$s"}},
    "text":{"value":"%2$s", "style":{"backgroundColor": "%3$s", "color": "%4$s"}}
    }', level, REPLACE(text, '"', ''''), bgColor, fgColor) AS json_formatted
FROM (
    SELECT
        level,
        text,
        CASE
            WHEN level < 200 THEN '#F1F1F1'
            WHEN level < 400 THEN '#F1F1F1'
            WHEN level < 600 THEN '#FBFFC7'
            WHEN level < 800 THEN '#FFD4D4'
            WHEN level < 1000 THEN '#FA7D7D'
            ELSE '#3B3B3B'
        END AS bgColor,
        CASE
            WHEN level < 200 THEN '#828282'
            WHEN level < 400 THEN '#000000'
            WHEN level < 600 THEN '#000000'
            WHEN level < 800 THEN '#000000'
            WHEN level < 1000 THEN '#FFFFFF'
            ELSE '#FFFFFF'
        END AS fgColor
    FROM my_table
    WHERE ...
) AS styled
This returns rows of results that are formatted like JSON, but aren't combined into a single object yet. Pump those results through a quick list comprehension and voilà, your data is ready to be bound to the table.
ds = system.db.runNamedQuery("yourNamedQuery", params)
pyDs = system.dataset.toPyDataSet(ds)  # convert so the rows are iterable
data = [system.util.jsonDecode(row[0].replace("\n", "")) for row in pyDs]  # the \n is a byproduct of the outer query, so we strip it here
EDIT: replaced literal_eval() with system.util.jsonDecode() per reply from @PGriffith.
This will yield a JSON-formatted structure compatible with the table's data object.
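For reference, each decoded row ends up shaped like the table's cell/style objects (values illustrative):

{
    "level": {"value": 350, "style": {"backgroundColor": "#F1F1F1", "color": "#000000"}},
    "text": {"value": "some message", "style": {"backgroundColor": "#F1F1F1", "color": "#000000"}}
}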
If your data has a lot of rows, this solution should be more performant.
If your data has lots of columns needing this treatment, then a pure Python solution like @cmallonee's is probably better.
Ewww! No, don't go there. eval() is almost never safe.
@pturmel my understanding was that ast.literal_eval() was much safer than plain eval(), as the former only accepts pre-defined Python literal types, whereas the latter executes arbitrary code during evaluation. Is literal_eval() that worrisome?
If you have JSON, then use system.util.jsonDecode. ast.literal_eval is 'safer' in the sense that it's a gun with the bullets taken out, but safety aside, it's also dramatically slower than parsing JSON - standing up the AST parsing mechanism is expensive.
Not all JDBC drivers support SQL scripts, so mileage may vary depending on the DB in use.
Ironyeti1's example in #6 is a single statement, not a script.
Well, that's what I get for replying before I think.
Thank you @PGriffith, I compared system.util.jsonDecode() against ast.literal_eval() and it was definitely faster.
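For anyone wanting to reproduce the comparison, a rough Script Console micro-benchmark (loop count arbitrary; the payload happens to be valid as both JSON and a Python literal):

import time
import ast

payload = '{"level": {"value": 350, "style": {"backgroundColor": "#F1F1F1"}}}'
iterations = 10000

# time Ignition's JSON parser
start = time.time()
for _ in xrange(iterations):
    system.util.jsonDecode(payload)
print "jsonDecode:   %.3fs" % (time.time() - start)

# time the AST-based literal parser
start = time.time()
for _ in xrange(iterations):
    ast.literal_eval(payload)
print "literal_eval: %.3fs" % (time.time() - start)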