Perspective session: StatusRoutes/ Requested session for "project", does not exist

Hi,

I'm having an issue with a remote machine running Ignition. When I try to launch a Perspective session, it asks me to authenticate. After I enter my login details, it initializes the client and then gets stuck on "Authenticating...". Checking the logs, I found the information below.

Logger: StatusRoutes
Message: Requested session for "[project_name]", does not exist

I can launch the Designer and open the project; the Perspective components I've created are still there.

Is this possibly a firewall issue?

Cheers,

EDIT below (additional info)

Just an update on this one: checking the Chrome console, I'm getting the issue below.

[Image 1]

[Image 2]

So my main issue is that I can access the Perspective pages via a jump host. However, when I try to access them via the public website (ports 80 and 443 forwarded to 8088), Perspective keeps getting stuck at the authentication stage.

The machine is an Amazon EC2 instance with a load balancer forwarding ports 80 and 443 to 8088. Firewalls are currently disabled.

Hi, can anyone help? Any Ignition Perspective staff? Mark is my integrator for our project, and this issue is now impacting setup, testing and delivery.

Thanks,

Patrick.

How is your authentication configured (e.g. Identity Provider)?

For testing purposes, does the Ignition system behave the same way when using simple “internal” auth?

Some tips and recommendations:

  1. The load balancer should be an application type (HTTP).

  2. The customer will be given an A record to use on their domain.

  3. Recommend setting up 2 listeners on the load balancer. The first is on port 443 (standard HTTPS); be sure to use Amazon's certs. The second is on port 80 (standard HTTP), but it should only redirect to port 443. That way we only allow 443 traffic.

  4. The security group associated with the LB should only allow 80 and 443.

  5. For the target groups, use Ignition’s standard port 8043 behind the scenes. That way they don’t have to worry about running Ignition on port 443, just use defaults.

  6. Set the health check settings to the HTTPS protocol at path / with success codes 200,301,302. The success codes are really important; otherwise the health check will fail.

  7. Enable stickiness on the target groups (under attributes). Set the duration to 1 hour or 1 day (whatever time period they want, based on whether a client will stay open for that period). We also use the round-robin load-balancing algorithm. A rough Terraform sketch of this setup follows the list.
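Putting those recommendations together, here's a rough Terraform sketch using the newer aws_lb (application load balancer) resources rather than the classic aws_elb. The resource names, subnet/VPC variables, security group, instance reference, and ACM certificate ARN below are placeholders for illustration, not from any config posted in this thread:

# Rough ALB sketch following tips 1-7 above (all names/variables are placeholders)
resource "aws_lb" "ignition" {
  name               = "ignition-alb"
  load_balancer_type = "application"   # tip 1: application (HTTP) type
  internal           = false
  subnets            = var.public_subnet_ids
  security_groups    = [aws_security_group.ignition_alb.id]  # tip 4: allow only 80/443
}

# tip 5: target Ignition's standard HTTPS port 8043 behind the scenes
resource "aws_lb_target_group" "ignition" {
  name     = "ignition-tg"
  port     = 8043
  protocol = "HTTPS"
  vpc_id   = var.vpc_id

  # tip 6: HTTPS health check at / with success codes 200,301,302
  health_check {
    protocol = "HTTPS"
    path     = "/"
    matcher  = "200,301,302"
  }

  # tip 7: stickiness so a Perspective session keeps hitting the same gateway
  stickiness {
    type            = "lb_cookie"
    cookie_duration = 3600   # 1 hour; adjust to the expected client session length
  }
}

resource "aws_lb_target_group_attachment" "ignition" {
  target_group_arn = aws_lb_target_group.ignition.arn
  target_id        = aws_instance.ignition.id   # placeholder instance reference
  port             = 8043
}

# tip 3 (first listener): HTTPS on 443 using an ACM certificate
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.ignition.arn
  port              = 443
  protocol          = "HTTPS"
  certificate_arn   = var.acm_certificate_arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.ignition.arn
  }
}

# tip 3 (second listener): HTTP on 80 that only redirects to 443
resource "aws_lb_listener" "http_redirect" {
  load_balancer_arn = aws_lb.ignition.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

Note that this assumes the Ignition gateway already has SSL enabled so that port 8043 is actually listening; otherwise the target group health check will fail.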

Hi Nathan, IDP is the default Ignition / internal gateway IDP.

Just noticed the ports. You don’t want 80 and 443 both forwarded to 8088.

https (port 443) should be forwarded to 8043
http (port 80) would be forwarded to 8088 (strongly recommend against using http in this case)

Ignition Port reference here.

Does that help?

Hi Nathan, I think it would be good if we could set up a meeting with my client's IT so we can troubleshoot the problem. Just to confirm, are you based here in Australia? Cheers.

I believe the Cromarty team is assisting with support from iControls (AUS distributor).

I’m out of the US.

Ok, noted. I'll set up a meeting with them then. Cheers, Nathan.

Hi Nathan,

Using the HTTP port gives the same issue, and I don't have an SSL certificate to enable the HTTPS port.

Perhaps you can spot any glaring issues; here's the load balancer config from my client:

resource "aws_elb" "ignition-lb-pub" {
name        = "${var.stack_name}-${var.environment}-elb-public"
subnets     = [data.aws_subnet.public.id]
security_groups = [aws_security_group.ignition-elb-public.id]
internal    = false

# ignition
listener {
    lb_port           = 443
    lb_protocol       = "https"
    instance_port     = 8088
    instance_protocol = "http"
    ssl_certificate_id= var.ssl_certificate_arn
}

health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 10
    interval            = 30
    target              = "HTTP:8088"
}

tags  = {
    Name       = "${var.stack_name}-${var.environment}-elb-public"
    owner      = var.owner
    stack_name = var.stack_name
    created_by = "terraform"
}

}

In that case I think lb_port should be 80 and lb_protocol should be http. This maps incoming port 80 (http) to port 8088 on Ignition. I’m guessing that the ssl_certificate_id line should be removed for http.

To get to TLS/HTTPS (once an SSL cert is ready), the mapping would be 443 (https) to 8043, with the SSL cert required. The recommended setting at that point, for cloud-hosted instances, is to use HTTPS rather than HTTP.
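For reference, a minimal sketch of those two listener variants inside the posted aws_elb resource (the variable names follow the config above; for the HTTPS case the health_check target would also need to point at 8043):

# HTTP-only, until an SSL certificate is available:
listener {
  lb_port           = 80
  lb_protocol       = "http"
  instance_port     = 8088
  instance_protocol = "http"
  # no ssl_certificate_id for plain http
}

# HTTPS once a certificate is in place (recommended for cloud-hosted gateways):
listener {
  lb_port            = 443
  lb_protocol        = "https"
  instance_port      = 8043
  instance_protocol  = "https"
  ssl_certificate_id = var.ssl_certificate_arn
}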

@mark.benedicto did you get the issue resolved?