Opened 2 years ago

Closed 2 years ago

Last modified 2 years ago

#1613 closed defect (invalid)

Nginx uses server's ip address instead of its domain name while verifying as a load balancer

Reported by: andyaskov@… Owned by:
Priority: minor Milestone:
Component: other Version: 1.10.x
Keywords: LoadBalancer Cc:
uname -a: Linux server1.domain.name 3.10.0-693.17.1.el7.x86_64 #1 SMP Sun Jan 14 10:36:03 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.10.3
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-file-aio --with-threads --with-ipv6 --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_ssl_module --add-module=/home/admin/ngx_devel_kit-0.3.0 --add-module=/home/admin/nginx-goodies-nginx-sticky-module-ng-c78b7dd79d0d --add-module=/home/admin/set-misc-nginx-module-0.31 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'

Description

I am using nginx as a load balancer. I want to change the configuration, but I'm facing an error. The previous configuration had a 'proxy_ssl_name' directive set to frontend.domain.name, and each backend server had a certificate with both frontend.domain.name and its own domain.name listed in the SAN. Now I want to drop frontend.domain.name and have nginx verify each server against its own domain.name.

Now each of the two servers has a certificate with its own domain.name only. I have removed 'proxy_ssl_name' and replaced each server's IP address with its domain.name. As far as I understand, 'proxy_ssl_name' overrides the address from the 'proxy_pass' directive, so nginx should now verify against each server's domain.name. But instead I see this error in the logs:


2018/08/08 11:59:17 [error] 13542#13542: *21 upstream SSL certificate does not match "servers_sticky" while SSL handshaking to upstream, client: 192.168.128.78, server: server1.domain.name, request: "GET /devclient/testapp/index.html HTTP/1.1", upstream: "https://server1.ip.address:8448/devclient/testapp/index.html", host: "server1.domain.name"


Where does nginx take server1.ip.address from? And why does it verify against server1.ip.address instead of server1.domain.name?


server {

    error_log /opt/path/logs/nginx/lb_error.log warn;
    access_log /opt/path/logs/nginx/lb_security.log security_log if=$ssl_client_s_dn;

    ssl_certificate /opt/path/nginx/certs/nginx.crt;
    ssl_certificate_key /path/nginx/certs/nginx.key;

    server_name server1.domain.name;

    add_header X-Load-Balancer $server_name always;

    include secure-headers.conf;
    include enable_ssl_verify_client_rest.conf;
    add_header Pragma no-cache always;
    add_header Cache-Control no-cache,must-revalidate,private always;

    proxy_set_header Host $host:$server_port;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # Clear the Connection header to enable keep-alive
    proxy_set_header Connection "";

    listen 8443;

    limit_req zone=rest burst=150 nodelay;

    proxy_read_timeout 150;

    set_escape_uri $ssl_client_cert_for_tomcat $ssl_client_raw_cert;

    ssl_client_certificate /opt/path/nginx/certs/auth_ca.crt;

    location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        limit_req zone=rest burst=150 nodelay;
        proxy_pass https://servers_sticky;
        proxy_ssl_certificate /opt/path/nginx/certs/nginx-s2s.crt;
        proxy_ssl_certificate_key /opt/path/nginx/certs/nginx-s2s.key;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Adding a header for server-to-server authentication.
        # It should also wipe the header data if provided by the client.
        proxy_set_header X-SSL-Client-S-DN $ssl_client_s_dn;
        proxy_set_header X-Client-Cert $ssl_client_cert_for_tomcat;
        proxy_set_header REQUEST-ORIGIN REST;
    }

}


New configuration:

upstream servers_sticky {
    sticky path=/ secure httponly;
    keepalive 16384;
    server server1.domain.name:8448 max_fails=30 fail_timeout=15s weight=100;
    server server2.domain.name:8448 max_fails=30 fail_timeout=15s weight=100;
}

upstream servers {
    keepalive 16384;
    server server1.domain.name:8448 max_fails=30 fail_timeout=15s weight=100;
    server server2.domain.name:8448 max_fails=30 fail_timeout=15s weight=100;
}

proxy_ssl_verify on;


Old configuration:

upstream servers_sticky {
    sticky path=/ secure httponly;
    keepalive 16384;
    server server1.ip.address:8448 max_fails=30 fail_timeout=15s weight=100;
    server server2.ip.address:8448 max_fails=30 fail_timeout=15s weight=100;
}

upstream servers {
    keepalive 16384;
    server server1.ip.address:8448 max_fails=30 fail_timeout=15s weight=100;
    server server2.ip.address:8448 max_fails=30 fail_timeout=15s weight=100;
}

proxy_ssl_name frontend.domain.name;

proxy_ssl_verify on;

Change History (2)

comment:1 by Maxim Dounin, 2 years ago

Resolution: invalid
Status: new → closed

In your configuration the proxy_pass directive uses the servers_sticky name:

proxy_pass https://servers_sticky;

So by default nginx will use this name as the proxy_ssl_name. Since the certificates provided by your backend servers do not contain this name, certificate verification results in an error.

Note that the upstream blocks as used in your configuration do not modify name verification in any way. You can think of these blocks as a more sophisticated replacement for name resolution. Much like with name resolution, all servers in an upstream{} block are expected to be identical and to return a correct certificate for the name written in the proxy_pass directive.

Note well that upstream: "https://server1.ip.address:8448/..." as shown in the logs contains the IP address of the server nginx was talking to when the error happened. The IP address is shown to make it possible to identify which server triggered the error when there are multiple servers.
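
One way to make verification pass with such an upstream{} block is to set proxy_ssl_name explicitly to a name that every backend certificate contains. A minimal sketch (the shared SAN name is hypothetical, not from this ticket; the CA path is taken from the configuration above):

```nginx
location / {
    proxy_pass https://servers_sticky;
    proxy_ssl_verify on;
    # Without this directive, nginx verifies against the proxy_pass
    # host, i.e. the upstream group name "servers_sticky".
    # "backends-shared.domain.name" stands for a SAN entry present
    # in every backend certificate.
    proxy_ssl_name backends-shared.domain.name;
    proxy_ssl_trusted_certificate /opt/path/nginx/certs/auth_ca.crt;
}
```

Alternatively, each backend certificate could include the upstream group name itself in its SAN, so that the default verification passes unchanged.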

in reply to:  1 comment:2 by andyaskov@…, 2 years ago

My mistake - I thought that without proxy_ssl_name nginx would use the server names inside the corresponding upstream{} block for verification, rather than the exact address from proxy_pass. Thanks for your answer, it's much clearer to me now.
