Are comments in Named Queries known to not work correctly?

Why do you draw this distinction?

Because scripted queries cannot participate in asynchronous bindings. (Well, not without going through your [expletive] to get to your elbow.)


Ah I see, that is an important distinction. Not even using system.util.invokeAsynchronous?

Normal bindings, like expression bindings, must return a value immediately. You can fake it by returning an initial value then retriggering (from invokeAsynchronous or similar) to return a final value, but it is ugly.
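Roughly, that hack looks like this in a Vision client script (a sketch only -- the component, the results dataset custom property, and the named query path are placeholders, not anything from this thread):

# Sketch of the "return a placeholder, then retrigger" pattern. The slow query
# runs off the GUI thread; the result is pushed back on the GUI thread later.
def refreshResults(component):
    def doWork():
        # Long-running work happens off the EDT so the client stays responsive.
        rows = system.db.runNamedQuery("MyFolder/MyQuery", {"limit": 100})
        def writeBack():
            # Property writes go back onto the GUI thread.
            component.results = rows  # 'results' is an assumed dataset custom property
        system.util.invokeLater(writeBack)
    system.util.invokeAsynchronous(doWork)

Whatever is bound to that property just shows the stale/placeholder value until the write-back lands, which is exactly the ugliness I mean.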

Asynchronous bindings (tag and query binding) do not have to supply a value immediately--they supply the result value when it is ready, period.

Ignition has no way to run a script as an asynchronous binding. (And no way to add this via the SDK, either. Grrr.)

(This is one of the reasons the tag() expression function is so terrible compared to regular/indirect tag binding.)

So say I want to run a query and show its results in a templated widget in Vision -- what would need to be done to avoid locking the GUI? Would it be a message request to the gateway to run a script, with the final step of the script loading the results into a memory tag that is bound to the template? I continuously struggle with the line between client / gateway functionality.

Just use a parameterized named query in the template.

That was my original plan, it is just resisting my efforts.

@Kevin.Herron I can reproduce the gateway crashing behavior that I mentioned earlier.

If I run an unlimited query in the Testing tab of a given Named Query, then go back to Authoring, add a LIMIT 100 clause where the query was previously unbounded, and then go back to Testing and hit Execute Query again, it hard-crashes my gateway with no log evidence.

I am running Ignition in an ArgoCD instance, using the official 8.1.31 image.

My (in development) query (with LIMIT in place) is:

WITH delta AS (
  SELECT
    tag_historian.t_stamp AS ts,
    TO_TIMESTAMP(:timestamp, 'YYYY-MM-DD HH24:MI:SS') - tag_historian.t_stamp AS dt
  FROM
    tag_historian
  ORDER BY dt ASC
  LIMIT 100
),
data AS (
  SELECT
    tag_historian.t_stamp AS ts,
    delta.dt AS x,
    CAST(tag_historian.pt03_pressure AS FLOAT) AS y
  FROM tag_historian
  JOIN delta ON tag_historian.t_stamp = delta.ts
)
SELECT ts, x, y FROM data WHERE y IS NOT NULL;

Upon further tinkering, it may just be the original unlimited query that is bringing down the gateway. The ArgoCD instance does not indicate that the entire pod failed; the gateway just hard-restarted itself.

Yeah, probably. Careful with the rope.


Sometimes it gives a Java heap error, which I think is the appropriate way to handle this. The rest of the time it hard-crashes the gateway into a reboot, with no logged information. I don't think I need to elaborate much on why the handled gateway error is preferable to a nearly invisible unhandled exception that forces the entire gateway to restart. Besides production stability concerns, it somewhat limits my ability to develop the system if I have to worry about crashing the entire thing in the process.

We have upgraded the memory allocated to the gateway in the configuration file to 4096MB (it was 1024 by default), and that at least makes me have to try a little harder to crash the whole thing. I have it running in a container specifically to give it access to as many server resources as it needs, but it won't use them.

Two questions:

Is there a way to set the gateway to use host memory instead of a hardcoded limit?
Is there a way to encourage the gateway to emit those java heap errors instead of choosing death?

I think you're stuck with the -Xmx settings. That is host memory, it's just a limit on it.

edit: err, you're using containers, you mean something else by host I think. Ignition sees the container as its "host". It does not know it's in a container.

Maybe. It depends. Without the wrapper logs it's hard to say. You might be getting sniped by the Linux OOM killer, or you might be getting restarted by the service wrapper "watchdog" that restarts an unresponsive gateway. The latter can be disabled in ignition.conf.
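If it is the OOM killer, the kernel on the node usually records the kill, so something like this on the host running the pod is worth a look (generic Linux commands, not specific to your setup):

# Look for OOM-killer activity in the kernel log on the node.
dmesg -T | grep -i -E 'killed process|out of memory'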

I think you're stuck with the -Xmx settings. That is host memory, it's just a limit on it.

Is that set under Java Additional Parameters, as in the documentation, like this? (I added line 4)

# Java Additional Parameters
wrapper.java.additional.1=-Ddata.dir=data
#wrapper.java.additional.2=-Xdebug
#wrapper.java.additional.3=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=*:8000
wrapper.java.additional.4=-Xmx4096m
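For reference, the stock ignition.conf also has dedicated heap properties, which looks like the more conventional place to change this (values illustrative, matching what we set above):

# Initial Java Heap Size (in MB)
wrapper.java.initmemory=1024
# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=4096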

edit: err, you're using containers, you mean something else by host I think. Ignition sees the container as its "host". It does not know it's in a container.

The container is configured with unbounded memory access to the server it is running on -- I believe that should expose something like 128GB of memory. If I can get Ignition running against the "host memory limit", it should mirror that?

Maybe. It depends. Without the wrapper logs it's hard to say. You might be getting sniped by the Linux OOM killer, or you might be getting restarted by the service wrapper "watchdog" that restarts an unresponsive gateway. The latter can be disabled in ignition.conf .

What exactly do you mean by wrapper logs? And the watchdog doesn't leave a log of its confirmed kills? It's highly perplexing that nothing seems to be claiming credit for this, especially since sometimes the gateway can protect itself with the Java heap errors.

The "wrapper logs" are the log files you'll find on the filesystem at $IGNITION/logs. They can contain additional information that isn't piped through the logging system that requires Ignition is running, in particular messages from the service wrapper end up here.

Is there a wrapper.ping.timeout property in your ignition.conf? Maybe it's not explicitly set any more, if not...

https://wrapper.tanukisoftware.com/doc/english/prop-ping-timeout.html
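Per that Tanuki page, the entry would look something like this in ignition.conf -- a sketch, with an illustrative value; 0 disables the ping timeout entirely:

# Seconds the wrapper waits for a JVM ping response before it decides the
# gateway is hung and restarts it; 0 disables the ping timeout.
wrapper.ping.timeout=300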

Is there a wrapper.ping.timeout property in your ignition.conf ? Maybe it's not explicitly set any more, if not...

I will investigate and get back to you.

We were unable to get those logs, unfortunately.