Opened 2 years ago

Closed 2 years ago

#1517 closed defect (worksforme)

defective routing with multiple interfaces and domains

Reported by: bertothunder@… Owned by:
Priority: major Milestone:
Component: nginx-core Version: 1.13.x
Keywords: Cc:
uname -a: Linux localhost 4.4.0-109-generic #132-Ubuntu SMP Tue Jan 9 19:52:39 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.13.10
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
configure arguments: --with-pcre --with-http_stub_status_module --prefix=/opt/nginx/1.13.10 --conf-path=/etc/nginx/nginx.conf --with-http_ssl_module --with-select_module --with-file-aio --with-threads --with-http_secure_link_module --with-pcre-jit --sbin-path=/opt/nginx/1.13.10/bin/nginx --with-http_realip_module

Description

This is happening on a customer's system running nginx 1.13.10 on a DigitalOcean VPS with Ubuntu 16.04/16.10.

The host has two NICs with example IPs 192.10.134.103 (eth0) and 192.19.10.223 (eth1). DNS records point xxx.dev.domain1.com / yyy.dev.domain2.com to the eth0 IP, and xxx.test.domain1.com / yyy.test.domain2.com to the eth1 IP.

We had two existing server vhosts for app1.test.domain1.com and app2.test.domain2.com, which proxy_pass to a Tomcat server running the corresponding webapps. Everything was working fine.

The server blocks are similar to:

# app1
server {
   listen 443 ssl http2;
   server_name app1.test.domain1.com;

   [...]

   location / {
      [...]
      proxy_pass http://<tomcat>:<port>/app1;
   }
}

# app2
server {
   listen 443 ssl http2;
   server_name app2.test.domain2.com;

   [...]

   location / {
      [...]
      proxy_pass http://<tomcat>:<port>/app2;
   }
}

Unimportant details (SSL options, etc.) are omitted from the config; as noted, this was working fine.

The issue arose when we added a new server vhost for app1.dev.domain1.com:

# app1 dev
server {
   listen 443 ssl http2;
   server_name app1.dev.domain1.com;

   [...]

   location / {
      [...]
      proxy_pass http://<dev_tomcat>:<port>/app1;
   }
}

With this vhost enabled, any request to either app1.test.domain1.com or app2.test.domain2.com is routed through the new .dev.domain1.com vhost, no matter what we do, returning a 404 because the dev Tomcat does not run the webapps expected for test.

We enabled an upstream log format to confirm this: nginx always routes any .test.domainX.com request to .dev.domain1.com.

We checked that the DNS records point to the right IPs on the right NICs.

What actually made this work was changing the listen directives to bind to specific IPs:

# app1 test
server {
   listen 192.19.10.223:443 ssl http2;
   [rest unchanged]
}

# app2 test
server {
   listen 192.19.10.223:443 ssl http2;
   [rest unchanged]
}

# app1 dev
server {
   listen 192.10.134.103:443 ssl http2;
   [ rest unchanged ]
}

With this change, everything works as expected, and routing no longer makes a mess.

I could not find anything about this in the documentation, and I don't understand how a request for xxx.test.domain1.com would be accepted by the xxx.dev.domain1.com vhost, since the server_name does not match.

I would have expected nginx to reject the requests, since the Host does not match any server_name. But without a specific IP address in the listen directive, the request is accepted, routed, and proxied to the (wrong) Tomcat.

Is this right??

Change History (4)

comment:1 by Maxim Dounin, 2 years ago

If a request does not match any server_name in the server{} blocks configured for the listening socket, the request is handled by the default server for that socket; see the Request processing article. The default server can be marked explicitly with the default or default_server parameter of the listen directive; otherwise the first server defined for the listening socket is used.
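To illustrate the point above (a minimal sketch with hypothetical names, not the reporter's actual config): marking the intended default explicitly avoids relying on definition order, and a catch-all default can simply reject requests with unknown hostnames:

```nginx
# Minimal sketch (hypothetical names). Without default_server, the first
# server{} defined for the *:443 socket becomes the implicit default and
# receives every request whose Host matches no server_name on that socket.
server {
    listen 443 ssl http2 default_server;  # explicit default for this socket
    server_name _;                        # catch-all, never matches a real name
    return 444;                           # close the connection for unknown hosts
}

server {
    listen 443 ssl http2;
    server_name app1.test.domain1.com;
    # ... proxy_pass etc. ...
}
```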

Given that include files are loaded in alphabetical order, and assuming you've used include files for your server{} blocks, I suspect that app1.dev.domain1.com became the default server, as it comes before app1.test.domain1.com alphabetically. This doesn't explain why names explicitly defined in other servers' server_name directives do not work for you, but given your other questions I suspect you are using domain names which do not match the server_name directives.
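For example, with a conf.d-style include (a hypothetical layout, assuming one vhost per file), the file that sorts first on a shared socket provides the implicit default server:

```nginx
# nginx.conf (hypothetical layout):
#   include /etc/nginx/conf.d/*.conf;
#
# Files are loaded in alphabetical order, so for a shared *:443 socket:
#   app1.dev.domain1.com.conf    <- loaded first: implicit default server
#   app1.test.domain1.com.conf
#   app2.test.domain2.com.conf
```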

Please read the Request processing article; it explains how nginx selects the server{} block used to process a request. If after reading it you still think there is a bug in nginx, please provide a full minimal nginx configuration that demonstrates the bug.

comment:2 by bertothunder@…, 2 years ago

Thanks for the explanation, mdounin. It indeed makes sense that app1.dev.domain1.com became the default server, given it was the first one in order.

But no, the domain names and server_name values were a full match. The server_name directives are defined as app1.dev.domain1.com and app2.dev.domain2.com for DEV (DNS A records point to the eth0 IP), and app1.test.domain1.com and app2.test.domain2.com for TEST, with the right DNS A records as well.

That's what confused me about the issue, since I would not expect nginx to accept and route a request through a vhost whose server_name is a different domain.

comment:3 by Maxim Dounin, 2 years ago

OK, so the remaining question is why requests to app1.test.domain1.com and app2.test.domain2.com were not processed in the server blocks with matching server_name directives, but were handled by the default server instead. Please provide the full nginx configuration which demonstrates the problem (nginx -T might help here).

comment:4 by Maxim Dounin, 2 years ago

Resolution: worksforme
Status: new → closed

Feedback timeout.

Note: See TracTickets for help on using tickets.