Nginx configuration

We are trying to configure an nginx reverse proxy.

Even though connecting to our 7.9 server works just fine, trying to connect to our 8.1 server times out after pressing the login button.

Watching the requests sent to the server, we’ve noticed that a GET request is sent to http://10.10.10.11/idp[...] instead of http://10.10.10.11:8100/idp[…], which understandably times out, as nothing is listening on port 80. What’s even weirder is that this request is sent to the correct port when talking to the Ignition 8.1 server directly.

Our servers are located at:
Ignition 7.9 → 10.10.10.10:8088
Ignition 8.1 → 10.10.10.11:8100

Our nginx config file is:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
	worker_connections 768;
}

http {
	sendfile on;
	tcp_nopush on;
	types_hash_max_size 2048;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
	ssl_prefer_server_ciphers on;

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	gzip on;

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
  
  server {
    # Ignition 7.9 server
    listen 8088;

    location /main/system/mobile {
      valid_referers blocked server_names ~.*project=(?:proj1|proj2).*;

      set $REQ_CAN_PASS "false";

      if ( $arg_project ~ ^(?:proj1|proj2)$ ) {
        set $REQ_CAN_PASS "true";
      }
      if ( $invalid_referer = '' ) {
        set $REQ_CAN_PASS "true";
      }

      if ( $REQ_CAN_PASS = "false" ) {
       return 404;
      }

      resolver 1.1.1.1;
      proxy_pass http://10.10.10.10:8088;
    }
    
    location / {
      return 404;
    }
  }

  map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
  }

  upstream perspective {
    server 10.10.10.11:8100;
  }

  server {
    # Ignition 8.1 server
    listen 8100;

    location / {
      set $REQ_CAN_PASS "false";

      if ( $uri ~ /data/perspective/(?:client|project|login)/(?:proj1|proj2) ) {
        set $REQ_CAN_PASS "true";
      }

      if ( $uri ~ /res/perspective/.* ) {
        set $REQ_CAN_PASS "true";
      }

      if ( $uri ~ /data/perspective/(?:themes|style-classes|fonts|hello|translation|keepalive).* ) {
        set $REQ_CAN_PASS "true";
      }

      if ( $uri ~ /idp/.* ) { 
        set $REQ_CAN_PASS "true";
      }

      if ( $http_upgrade = "websocket" ) {
        set $REQ_CAN_PASS "true";
      }

      if ( $REQ_CAN_PASS = "false" ) {
        return 404;
      }

      proxy_pass http://perspective;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_set_header Host $host;
    }

  }

}

Okay, this is just a guess because I’m not very familiar with nginx, but in your 8.1 config the proxy_pass value does not have a port on it, so I assume it’s going to use 80 there.

Thanks for the input.

proxy_pass http://perspective points to the perspective upstream, which I declared above like so:

upstream perspective {
    server 10.10.10.11:8100;
}

Which contains the port.

Just to make sure, I tried changing proxy_pass http://perspective to proxy_pass http://10.10.10.11:8100, which gives the same results.

Meaning that something else must be wrong.
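One guess we haven't tried yet: reverse-proxied applications often rebuild absolute redirect URLs (like the IdP login redirect) from the Host header or the standard X-Forwarded-* headers, and our config sends proxy_set_header Host $host;, which strips the port. If that's what the gateway uses, something like this (untested, just a sketch) might preserve the port:

```nginx
location / {
    proxy_pass http://perspective;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # Keep the original port in the Host header (was: $host, which drops it)
    proxy_set_header Host $host:$server_port;
    # Standard forwarding headers, in case the gateway reads these instead
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```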

As a temporary fix we ended up making nginx also listen on port 80 for any connections, and forwarding those to the ignition 8.1 server.

The problem with this naive solution is that we are forced to keep port 80 open in our router as well.
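For reference, the stopgap looks roughly like this (reconstructed, details assumed): a catch-all server on port 80 that forwards everything to the 8.1 gateway.

```nginx
server {
    listen 80;

    location / {
        # Forward everything to the Ignition 8.1 gateway
        proxy_pass http://10.10.10.11:8100;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```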

Is there a more elegant solution we haven’t thought of?

@allnet did you solve this problem? I am also having this issue. :frowning:

I thought it may be an issue with the gateway > config > web server not having its public ports set, but they are correct and I have turned off auto-detect.

If I disable the need for a login (only for testing), the rest of my application seems to work just fine. It's only when I try to log in that it drops my port and uses port 80. :frowning:

Unfortunately no.

As usual, any temporary fix tends to become permanent.

I am thinking of opening a service request later, but this issue is not high on my priority list.