Opened 3 years ago
Closed 3 years ago
#2318 closed defect (invalid)
SSL handshaking SSL_shutdown after 24h
Reported by: | Owned by:
Priority: blocker | Milestone: nginx-1.21
Component: other | Version:
Keywords: | Cc:
uname -a: Linux nginx-rabbitmq-85f6cb6997-h6r6q 5.4.0-1064-azure #67~18.04.1-Ubuntu SMP Wed Nov 10 11:38:21 UTC 2021 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.21.3
Description
We deployed nginx into a pod in Azure AKS. After 24h the service shuts down the SSL handshaking, nginx throws the errors below, and the service no longer accepts new requests.
2022/02/08 08:29:51 [crit] 23#23: *348004 SSL_shutdown() failed (SSL: error:14094123:SSL routines:ssl3_read_bytes:application data after close notify) while SSL handshaking, client: xxx.xxx.xx, server: 0.0.0.0:8000
2022/02/08 08:48:51 [crit] 23#23: *350873 SSL_shutdown() failed (SSL: error:14094123:SSL routines:ssl3_read_bytes:application data after close notify) while SSL handshaking, client: xxx.xxx.xx.7, server: 0.0.0.0:8000
2022/02/08 08:58:47 [error] 23#23: *352368 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.46.64.166, server: 0.0.0.0:9000, upstream: "xxx.xxx.xx.185:5672", bytes from/to client:243/8, bytes from/to upstream:8/243
2022/02/08 08:58:52 [crit] 23#23: *352385 SSL_shutdown() failed (SSL: error:14094123:SSL routines:ssl3_read_bytes:application data after close notify) while SSL handshaking, client: xxx.xxx.xx.7, server: 0.0.0.0:8000
Change History (5)
comment:1 by , 3 years ago
Description: modified (diff)
comment:2 by , 3 years ago
Description: modified (diff)
comment:4 by , 3 years ago
Hello, we have the following configuration in the pod. We do not understand why we need to restart the pod before the service accepts requests again, and then only for one day.
#user  nobody;
worker_processes auto;

#error_log /var/log/nginx/error.log info;
#pid       logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        # Plain-HTTP health-check endpoint.
        location = /healthz {
            access_log off;
            add_header 'Content-Type' 'application/json';
            return 200 '{"status":"UP"}';
        }
    }
}

stream {
    server {
        listen 8883 ssl;

        # SSL: terminate TLS and require a verified client certificate.
        ssl_verify_client       on;
        #ssl_verify_depth       1;
        ssl_protocols           TLSv1.2 TLSv1.3;
        ssl_ciphers             HIGH:!aNULL:!MD5;
        ssl_certificate         /home/nginx/certs/tls.crt;
        ssl_certificate_key     /home/nginx/certs/tls.key;
        ssl_trusted_certificate /home/nginx/certs/ca.crt;
        ssl_client_certificate  /home/nginx/certs/ca.crt;

        # Enable only when the CA has a CRL endpoint created.
        ssl_crl /tmp/nginx/crl/CAS.crt;

        # Proxy: re-encrypt traffic to the upstream over SSL.
        proxy_ssl             on;
        proxy_ssl_server_name on;
        #proxy_ssl_session_reuse on;
        proxy_pass rabbitmq:8883;
    }

    server {
        listen xxxx;

        # Proxy
        proxy_ssl_server_name on;
        proxy_pass rabbitmq:xxxx;
    }
}
comment:5 by , 3 years ago
Resolution: → invalid
Status: new → closed
There are no limits like 24h in nginx.
The "recv() failed (104: Connection reset by peer) while proxying and reading from upstream" error in your logs suggests that the particular connection was reset by the upstream server. To find out why this happened, consider investigating your upstream server and/or the network configuration.
The other errors, "SSL_shutdown() failed ... while SSL handshaking", happen when SSL_shutdown() is called right after the SSL handshake. Most likely this is caused by client certificate verification errors. Check the logs at the info level to find out what goes wrong; there should be messages like "... [info] ... client SSL certificate verify error: ... while SSL handshaking, client: ..., server: ..." which explain the failure.
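For reference, a minimal sketch of enabling info-level logging; the path here simply mirrors the commented-out error_log line in the configuration above and is an assumption about this deployment:

# Log at the "info" level so certificate verification failures
# ("client SSL certificate verify error: ...") show up in the log.
error_log /var/log/nginx/error.log info;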
If you need further help to find out what goes wrong in your configuration, consider using support options available.
The SSL_shutdown() errors reported are likely expected on connection timeout: while nginx is shutting down the SSL layer of the connection, some data is received from the client. This means the application layer of the connection wasn't properly terminated (due to the timeout). Since the stream module is usually not involved at the application layer, these errors can be considered harmless.
A common suggestion would be to adjust timeouts; the proxy_timeout directive is the most suitable one in the stream module, as in the sketch below.
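An illustrative sketch only, based on the stream server from comment 4 (the 20m value is an assumption chosen for long-lived MQTT/AMQP connections, not a value from this ticket; the nginx default is 10m):

stream {
    server {
        listen 8883 ssl;

        # ... SSL and proxy settings as in comment 4 ...

        # Close the session when no data is forwarded in either
        # direction for 20 minutes; tune this to the clients'
        # keepalive interval.
        proxy_timeout 20m;

        proxy_pass rabbitmq:8883;
    }
}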
You can apply this patch to lower the logging level of such errors: