#2318 closed defect (invalid)

SSL handshaking SSL_shutdown after 24h

Reported by: guillermo.lopez.pando2@… Owned by:
Priority: blocker Milestone: nginx-1.21
Component: other Version:
Keywords: Cc:
uname -a: Linux nginx-rabbitmq-85f6cb6997-h6r6q 5.4.0-1064-azure #67~18.04.1-Ubuntu SMP Wed Nov 10 11:38:21 UTC 2021 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.21.3

Description (last modified by Sergey Kandaurov)

We have deployed nginx into a pod in Azure AKS. After 24 hours the service stops completing SSL handshakes, nginx logs the following errors, and the service no longer accepts new requests.

2022/02/08 08:29:51 [crit] 23#23: *348004 SSL_shutdown() failed (SSL: error:14094123:SSL routines:ssl3_read_bytes:application data after close notify) while SSL handshaking, client: xxx.xxx.xx, server: 0.0.0.0:8000
2022/02/08 08:48:51 [crit] 23#23: *350873 SSL_shutdown() failed (SSL: error:14094123:SSL routines:ssl3_read_bytes:application data after close notify) while SSL handshaking, client: xxx.xxx.xx.7, server: 0.0.0.0:8000
2022/02/08 08:58:47 [error] 23#23: *352368 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.46.64.166, server: 0.0.0.0:9000, upstream: "xxx.xxx.xx.185:5672", bytes from/to client:243/8, bytes from/to upstream:8/243
2022/02/08 08:58:52 [crit] 23#23: *352385 SSL_shutdown() failed (SSL: error:14094123:SSL routines:ssl3_read_bytes:application data after close notify) while SSL handshaking, client: xxx.xxx.xx.7, server: 0.0.0.0:8000

Change History (5)

comment:1 by guillermo.lopez.pando2@…, 10 months ago

Description: modified (diff)

comment:2 by Sergey Kandaurov, 10 months ago

Description: modified (diff)

The SSL_shutdown() errors reported are likely expected on connection timeout: while the SSL layer of the connection is being shut down, some data is still received from the client. This means the application layer of the connection was not properly terminated (due to the timeout). Since the stream module is not usually involved at the application layer, these errors can be considered harmless.
A common suggestion is to adjust timeouts (in the stream module, the proxy_timeout directive is the most relevant).
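As a minimal sketch of that suggestion (the port and backend name are taken from the reporter's configuration; the 30m value is an illustrative assumption, not a recommendation from this ticket), proxy_timeout would be raised inside the stream server block:

```nginx
stream {
    server {
        listen 8883 ssl;

        # proxy_timeout limits how long a proxied connection may stay idle
        # in either direction before nginx closes it (default: 10m).
        # Raise it if clients are legitimately idle for longer periods,
        # e.g. MQTT keepalive intervals when proxying to RabbitMQ.
        proxy_timeout 30m;

        proxy_pass rabbitmq:8883;
    }
}
```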

You can apply this patch to lower the logging level of such errors:

# HG changeset patch
# User Sergey Kandaurov <pluknet@nginx.com>
# Date 1644330927 -10800
#      Tue Feb 08 17:35:27 2022 +0300
# Node ID a736a7a613ea6e182ff86fbadcb98bb0f8891c0b
# Parent  1add55d236522616ce34ffaa4dc697a76d3d41a4
SSL: logging level of "application data after close notify".

Such fatal errors are reported by OpenSSL 1.1.1, and similarly by BoringSSL,
if application data is encountered during SSL shutdown, which started to be
observed on the second SSL_shutdown() call after SSL shutdown fixes made in
09fb2135a589 (1.19.2).  The error means that the client continues to send
application data after receiving the "close_notify" alert (ticket #2318).
Previously it was reported as SSL_shutdown() error of SSL_ERROR_SYSCALL.

diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c
--- a/src/event/ngx_event_openssl.c
+++ b/src/event/ngx_event_openssl.c
@@ -3385,6 +3385,12 @@ ngx_ssl_connection_error(ngx_connection_
 #endif
             || n == SSL_R_WRONG_VERSION_NUMBER                       /*  267 */
             || n == SSL_R_DECRYPTION_FAILED_OR_BAD_RECORD_MAC        /*  281 */
+#ifdef SSL_R_APPLICATION_DATA_AFTER_CLOSE_NOTIFY
+            || n == SSL_R_APPLICATION_DATA_AFTER_CLOSE_NOTIFY        /*  291 */
+#endif
+#ifdef SSL_R_APPLICATION_DATA_ON_SHUTDOWN
+            || n == SSL_R_APPLICATION_DATA_ON_SHUTDOWN               /*  291 */
+#endif
 #ifdef SSL_R_RENEGOTIATE_EXT_TOO_LONG
             || n == SSL_R_RENEGOTIATE_EXT_TOO_LONG                   /*  335 */
             || n == SSL_R_RENEGOTIATION_ENCODING_ERR                 /*  336 */

comment:3 by Sergey Kandaurov <pluknet@…>, 10 months ago

In 8009:a736a7a613ea/nginx:

SSL: logging level of "application data after close notify".

Such fatal errors are reported by OpenSSL 1.1.1, and similarly by BoringSSL,
if application data is encountered during SSL shutdown, which started to be
observed on the second SSL_shutdown() call after SSL shutdown fixes made in
09fb2135a589 (1.19.2). The error means that the client continues to send
application data after receiving the "close_notify" alert (ticket #2318).
Previously it was reported as SSL_shutdown() error of SSL_ERROR_SYSCALL.

comment:4 by guillermo.lopez.pando2@…, 10 months ago

Hello, we have the following configuration in the pod. We do not understand why we need to restart the pod before the service accepts requests again, and then only for one day.

#user nobody;

worker_processes auto;
#error_log /var/log/nginx/error.log info;
#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location = /healthz {
            access_log off;
            add_header 'Content-Type' 'application/json';
            return 200 '{"status":"UP"}';
        }
    }
}

stream {
    server {
        listen 8883 ssl;

        # SSL
        ssl_verify_client on;
        #ssl_verify_depth 1;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_certificate /home/nginx/certs/tls.crt;
        ssl_certificate_key /home/nginx/certs/tls.key;
        ssl_trusted_certificate /home/nginx/certs/ca.crt;
        ssl_client_certificate /home/nginx/certs/ca.crt;
        # Active only when the CA has a CRL endpoint created
        ssl_crl /tmp/nginx/crl/CAS.crt;

        # proxy
        proxy_ssl on;
        proxy_ssl_server_name on;
        #proxy_ssl_session_reuse on;
        proxy_pass rabbitmq:8883;
    }

    server {
        listen xxxx;

        # proxy
        proxy_ssl_server_name on;
        proxy_pass rabbitmq:xxxx;
    }
}

comment:5 by Maxim Dounin, 10 months ago

Resolution: invalid
Status: new → closed

There are no limits like 24h in nginx.

The recv() failed (104: Connection reset by peer) while proxying and reading from upstream error in your logs suggests that the particular connection was reset by the upstream server. To find out why this happened consider investigating your upstream server and/or the network configuration.

The other errors, SSL_shutdown() failed ... while SSL handshaking, happen when SSL_shutdown() is called right after the SSL handshake. Most likely this is caused by client certificate verification errors. Check the logs at the info level to find out what goes wrong; there should be other messages like ... [info] ... client SSL certificate verify error: ... while SSL handshaking, client: ..., server: ... which will explain the failure.
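To capture those info-level messages, the error log level has to be lowered; in the configuration posted above, the error_log directive is commented out, so nginx logs at its compiled-in default level. A minimal sketch (the log path matches the commented-out line in the reporter's config):

```nginx
# Log at "info" level so client certificate verification failures during
# the SSL handshake are recorded; they are not shown at the default
# "error" level. Revert to a quieter level once diagnosed.
error_log /var/log/nginx/error.log info;
```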

If you need further help to find out what goes wrong in your configuration, consider using support options available.
