Opened 8 years ago

Closed 7 years ago

#1026 closed defect (wontfix)

Abrupt linger disconnect on SSL offload connections through nginx 1.10.1 on FreeBSD based proxy

Reported by: iamasmith.home@…
Owned by:
Priority: major
Milestone:
Component: other
Version: 1.10.x
Keywords:
Cc:
uname -a: FreeBSD testh1 10.3-RELEASE FreeBSD 10.3-RELEASE #0 r297264: Fri Mar 25 02:10:02 UTC 2016 root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
nginx -V: nginx version: nginx/1.10.1
built with OpenSSL 1.0.2h 3 May 2016
TLS SNI support enabled
configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx-error.log --user=www --group=www --modules-path=/usr/local/libexec/nginx --with-debug --with-file-aio --with-ipv6 --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx-access.log --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gzip_static_module --with-http_gunzip_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_stub_status_module --with-http_sub_module --with-pcre --with-http_v2_module --with-stream=dynamic --with-stream_ssl_module --with-threads --with-mail=dynamic --without-mail_imap_module --without-mail_pop3_module --without-mail_smtp_module --with-mail_ssl_module --with-http_ssl_module

Description

Relevant settings

These settings ensure that chunked encoding is passed directly to the upstream server rather than being buffered until the request is complete:

proxy_http_version 1.1;
proxy_request_buffering off;

Description

I have a client application, which one of my developers has been working on, that gets its connection dropped during large POST operations whenever the upstream application raises an error.

The application may produce a 409 Conflict error at the beginning of a large file-upload POST from this client (OS X, written using the CFNetwork framework), and this causes the nginx server to enter the lingering state to consume the remainder of the POST body. Unfortunately, if the client is being proxied over an SSL connection, nginx gets a 0-byte SSL_read() and SSL_get_error() returns 5 (SSL_ERROR_SYSCALL). This causes nginx to believe that the client has closed the connection (which it hasn't), so it cleans up and drops the connection.
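
For illustration, here is a minimal sketch (not nginx source; the helper is hypothetical, only the OpenSSL calls are real API) of how this condition surfaces at the OpenSSL 1.0.x API: a 0-byte SSL_read() where SSL_get_error() reports SSL_ERROR_SYSCALL with errno 0 is the classic signature of a TCP-level EOF that arrived without a TLS close_notify.

/* Hypothetical helper illustrating the error signature described above. */
#include <errno.h>
#include <openssl/ssl.h>

static int read_or_classify_eof(SSL *ssl, void *buf, int len)
{
    int n = SSL_read(ssl, buf, len);

    if (n > 0) {
        return n;                   /* got n bytes of application data */
    }

    switch (SSL_get_error(ssl, n)) {
    case SSL_ERROR_ZERO_RETURN:
        return 0;                   /* clean TLS shutdown (close_notify seen) */
    case SSL_ERROR_SYSCALL:
        if (errno == 0) {
            return -1;              /* bare TCP FIN, no close_notify */
        }
        /* fall through */
    default:
        return -2;                  /* some other error: log or retry */
    }
}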

The ability to ensure that we aren't accumulating uploaded files on the proxy is important for this application, so we need to pass the request through like this, without buffering it.

Additional points

If using plain HTTP, nginx drains the lingering request properly after sending the error response, and the client does not get disconnected. The client then reports the received error response rather than the disconnect error that one gets out of the CFNetwork framework when a request doesn't complete.

It has been verified that it is nginx dropping the connection, not the client: running the client software in its working plain-HTTP mode and routing the request through stunnel to the nginx server reproduces the same failure.

This is not observable at all until the POST reaches a reasonable size, presumably once it fills any buffer/queue at the server prior to the switch to linger handling.

By configuring a weak cipher (RC4-MD5) as the only allowed setting, I was able to analyse the traffic and confirm that, after the proxy sent the 409 back to the client, the client continued trying to send chunks of data but was disconnected by the nginx proxy.

I also tried adding an ERR_peek_last_error() call to ngx_ssl_handle_recv() when it detected SSL_ERROR_SYSCALL, but this matched what was coming back as errno (0) and didn't provide any additional indication of the cause of the failure.

On some occasions the linger works as expected; in most cases, however, the connection is terminated before the client finishes sending.

All testing has been done locally, and I have experimented with extending the linger values as a matter of course, even though I can see that this isn't the problem.

Test Configuration

user www www;
worker_processes 5;

events {
    worker_connections 4096;
    debug_connection   192.168.178.1;
}

http {
    include      /usr/local/etc/nginx/mime.types;
    default_type application/octet-stream;

    proxy_buffering off;
    large_client_header_buffers 4 12k;
    proxy_http_version 1.1;

    server {
        listen 192.168.178.124:443 ssl;
        ssl_certificate     /mytest/server.crt;
        ssl_certificate_key /mytest/server.key;
        access_log /mytest/custom.log;
        error_log  /mytest/error.log warn;
        underscores_in_headers on;

        location /api {
            client_max_body_size 0;
            keepalive_timeout 0;
            proxy_request_buffering off;
            proxy_set_header BasePath "/api";
            proxy_set_header HOST $http_host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_read_timeout 1800;
            proxy_send_timeout 1800;
            proxy_pass http://127.0.0.1:8081;
        }
    }
}

Debug events from the disconnect.

2016/07/15 13:44:41 [debug] 13841#100298: *9 SSL_read: 4096
2016/07/15 13:44:41 [debug] 13841#100298: *9 lingering read: 4096
2016/07/15 13:44:41 [debug] 13841#100298: *9 SSL_read: 4096
2016/07/15 13:44:41 [debug] 13841#100298: *9 lingering read: 4096
2016/07/15 13:44:41 [debug] 13841#100298: *9 SSL_read: 4096
2016/07/15 13:44:41 [debug] 13841#100298: *9 lingering read: 4096
2016/07/15 13:44:41 [debug] 13841#100298: *9 SSL_read: 0
2016/07/15 13:44:41 [debug] 13841#100298: *9 SSL_get_error: 5
2016/07/15 13:44:41 [debug] 13841#100298: *9 peer shutdown SSL cleanly
2016/07/15 13:44:41 [debug] 13841#100298: *9 lingering read: 0
2016/07/15 13:44:41 [debug] 13841#100298: *9 http request count:1 blk:0
2016/07/15 13:44:41 [debug] 13841#100298: *9 http close request
2016/07/15 13:44:41 [debug] 13841#100298: *9 http log handler
2016/07/15 13:44:41 [debug] 13841#100298: *9 free: 00000008026DF000
2016/07/15 13:44:41 [debug] 13841#100298: *9 free: 000000080250D000
2016/07/15 13:44:41 [debug] 13841#100298: *9 free: 0000000802482000, unused: 1
2016/07/15 13:44:41 [debug] 13841#100298: *9 free: 000000080250C000, unused: 0
2016/07/15 13:44:41 [debug] 13841#100298: *9 free: 00000008026E0000, unused: 2949
2016/07/15 13:44:41 [debug] 13841#100298: *9 close http connection: 17
2016/07/15 13:44:41 [debug] 13841#100298: *9 SSL_shutdown: 1
2016/07/15 13:44:41 [debug] 13841#100298: *9 event timer del: 17: 1468590286811
2016/07/15 13:44:41 [debug] 13841#100298: *9 reusable connection: 0
2016/07/15 13:44:41 [debug] 13841#100298: *9 free: 0000000802487000
2016/07/15 13:44:41 [debug] 13841#100298: *9 free: 0000000802406800
2016/07/15 13:44:41 [debug] 13841#100298: *9 free: 00000008024E0800, unused: 24
2016/07/15 13:44:41 [debug] 13841#100298: *9 free: 00000008024E0E00, unused: 400

Attachments (1)

patch-src-http-ngx_http_request.c (740 bytes) - added by iamasmith.home@… 8 years ago.
Disable shutdown on SSL connections in ngx_http_set_lingering_close


Change History (21)

comment:1 by Maxim Dounin, 8 years ago

The goal of the lingering close code is to prevent a TCP RST from being sent to the client before it has been able to read the response from the socket buffer. To do so, nginx calls shutdown(SHUT_WR) to send a FIN to the client while still reading incoming data.

From the description of the problem it looks like the client is able to see the FIN from nginx and properly reacts by closing its side of the connection, but fails to process the previously received response. This looks like something to be fixed in the client, not in nginx.
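
In outline, that pattern looks something like the following minimal sketch, using plain POSIX calls rather than nginx's event-driven machinery (the helper name is made up for illustration, and the real code also bounds the drain with lingering_time/lingering_timeout):

/* Sketch of lingering close: half-close the write side (sends FIN),
 * then keep draining client data so the kernel never answers with RST. */
#include <sys/socket.h>
#include <unistd.h>

static void lingering_close_sketch(int fd)
{
    char buf[4096];

    shutdown(fd, SHUT_WR);          /* FIN to the client; reads still work */

    for ( ;; ) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) {
            break;                  /* EOF or error: safe to close fully */
        }
        /* discard whatever the client was still sending */
    }

    close(fd);
}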

comment:2 by iamasmith.home@…, 8 years ago

It is a good point; however, there is a difference in behaviour between the HTTP implementation and the SSL implementation, and the cause of the failure seems to be related to aborting the linger drain on a detected SSL_ERROR_SYSCALL failure.

Additionally, we need to maintain an OSX client and the limitations of the CFNetwork implementation, although undesirable, are clear.

But thanks, I hadn't spotted the write-side shutdown prior to the linger handler starting... perhaps another area for me to experiment with.

comment:3 by iamasmith.home@…, 8 years ago

Many thanks mdounin@ that was bang on the money.

I would argue that you can't safely shut down a socket for write under an SSL connection, since to maintain the connection for read the SSL protocol needs to be able to write to the socket.

In my case the attached patch resolved the problem completely.

by iamasmith.home@…, 8 years ago

Disable shutdown on SSL connections in ngx_http_set_lingering_close

comment:4 by iamasmith.home@…, 8 years ago

So, summarising what is actually happening.

  1. The response is sent back to the client.
  2. shutdown(SHUT_WR) is issued on the socket.
  3. The client can no longer maintain the TLS layer, so it cannot read any more and the connection abruptly drops.
  4. CFNetwork processes the disconnect as a fatal error rather than allowing access to the response.

On an HTTP connection we don't need to maintain the read/write nature of the socket to keep the protocol alive, so the shutdown(SHUT_WR) is safe. Here it results in an abrupt loss of the connection.

comment:5 by Maxim Dounin, 8 years ago

The client _can_ maintain the SSL/TLS layer; it could continue sending data but doesn't want to. This is more or less a client implementation detail.

On the other hand, with the suggested patch there will be no indication to clients that no further data should be sent on a connection being closed, so properly implemented clients won't be able to react correctly. That is, don't expect the patch to be accepted.

What may make sense for SSL clients is using SSL_shutdown() (ngx_ssl_shutdown()) before or instead of shutdown(SHUT_WR). But whether it will work depends on the client.

I would also recommend you look closely at the response returned by your backend. Having Content-Length defined may make a huge difference to the client.
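
For reference, the suggested alternative amounts to something like this minimal sketch (assuming an established SSL *ssl on descriptor fd; this illustrates the idea rather than nginx code, and comment:10 below tries essentially this inside ngx_http_set_lingering_close):

/* Send a TLS close_notify first, so an SSL-aware client sees a
 * protocol-level close instead of a bare TCP FIN. */
#include <openssl/ssl.h>
#include <sys/socket.h>

static void tls_half_close_sketch(SSL *ssl, int fd)
{
    int rc = SSL_shutdown(ssl);     /* 0: close_notify sent, peer's not yet seen */

    if (rc == 0) {
        /* keep reading while lingering; the peer may still send data
           followed by its own close_notify */
    }

    shutdown(fd, SHUT_WR);          /* optionally still half-close TCP */
}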

comment:6 by iamasmith.home@…, 8 years ago

Also great feedback.

I did consider what one might lose if shutdown wasn't used on the connection, and for many folks it wouldn't make any difference if they were using request buffering. We simply cannot use request buffering due to the potential sizes of the files and the limited storage on the proxy. Clearly, in a request-buffering scenario the response is not going to be available until the entire request reaches nginx and it begins to forward it on to the application server, and in that scenario lingering doesn't actually happen at all.

We basically have two options in the client code: either stop using the Apple frameworks and use something like libcurl, or accommodate this limitation in the proxy server, because we really don't have that much influence over what is going on inside CFNetwork.

Given that a) we have an iOS client which is also going to see this problem, b) policy control over the network stack might be important to us using standard Apple tools, so sidestepping it with another HTTP implementation is undesirable, and c) regardless of the true owner of the bug (and I fully accept what you say about shutdown telling the client to stop, while the client could continue to send over the socket to maintain the SSL connection), I suspect that we wouldn't see a particularly rapid response from Apple to fix the bug.

I can tell you aren't particularly enthusiastic about this kind of change, but I'm offering it up. If you would consider this behaviour being turned on by a flag in the config (I'm willing to enhance the patch to do this if required), then that would be fine.

I can of course maintain my own fork with the changes that I need but if there is any merit in the change coming upstream then I'll help if I can.

comment:7 by iamasmith.home@…, 8 years ago

(For policy control over the stack I mean things like the standard mechanisms for adding additional trusted roots using Apple profiles, of course.)

comment:8 by iamasmith.home@…, 8 years ago

Actually, I can see other scenarios where it may be desirable to inhibit the socket shutdown.

If somebody happens to be using something like stunnel, ngrok, or an SSL offload agent of some kind, then nginx's shutdown(SHUT_WR) may cause problems, because something other than the client is actually receiving it.

How about if I create an option to select this behaviour, either generally or just for HTTPS connections?

comment:9 by Maxim Dounin, 8 years ago

As explained in comment:5, the mode suggested (without shutdown(SHUT_WR)) will prevent properly implemented clients from tearing down a connection. Therefore it's unlikely to be accepted unless there are serious reasons to do so.

Please also see the other recommendations in comment:5, about SSL_shutdown() and Content-Length.

comment:10 by iamasmith.home@…, 8 years ago

OK, interesting. I reviewed the traffic sent by the client over plain HTTP this time and found that Content-Length is being sent, so it would appear that chunking isn't a factor here.

I had assumed that the transmission was chunked, because with proxy_http_version missing the error would take some time to appear. But I didn't have a complete capture in which Wireshark could decode the request to show the header at that point: the connection was terminated early, Wireshark only decodes HTTP messages that it can see in their entirety, and I couldn't just read the raw connection as the data was encrypted.

I will have a look at what can be done with the server response, as this doesn't have a Content-Length at present. That is partially because some of the content returned is streamed and the length isn't available (so in those cases the actual response is chunked), but I'm going to look at adding it for anything that could happen while a request is still incoming.

In the meantime, I also tested with the patch change below to see what effect SSL_shutdown() would have.

--- src/http/ngx_http_request.c.orig	2016-05-31 14:47:02.000000000 +0100
+++ src/http/ngx_http_request.c	2016-07-21 09:29:08.562311000 +0100
@@ -3216,11 +3216,21 @@
         }
     }
 
-    if (ngx_shutdown_socket(c->fd, NGX_WRITE_SHUTDOWN) == -1) {
-        ngx_connection_error(c, ngx_socket_errno,
-                             ngx_shutdown_socket_n " failed");
-        ngx_http_close_request(r, 0);
-        return;
+    if (c->ssl) {
+        int s = SSL_shutdown(c->ssl->connection); 
+        ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "SSL_shutdown() [ngx_http_lingering_close]: %d", s);
+        if (s < 0 && ERR_peek_error()) {
+            int sslerr = SSL_get_error(c->ssl->connection, s);
+            ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
+                           "SSL_get_error [ngx_http_lingering_close]: %d", sslerr);
+        }
+    } else { 
+        if (ngx_shutdown_socket(c->fd, NGX_WRITE_SHUTDOWN) == -1) {
+            ngx_connection_error(c, ngx_socket_errno,
+                                 ngx_shutdown_socket_n " failed");
+            ngx_http_close_request(r, 0);
+            return;
+        }
     }
 
     if (rev->ready) {

The call succeeds:

2016/07/21 08:33:52 [debug] 14457#100096: *3 event timer add: 17: 5000:1469090037131
2016/07/21 08:33:52 [debug] 14457#100096: *3 SSL_shutdown() [ngx_http_lingering_close]: 0
2016/07/21 08:33:52 [debug] 14457#100096: *3 http lingering close handler
2016/07/21 08:33:52 [debug] 14457#100096: *3 SSL_read: 4096
2016/07/21 08:33:52 [debug] 14457#100096: *3 lingering read: 4096
2016/07/21 08:33:52 [debug] 14457#100096: *3 SSL_read: 4096
2016/07/21 08:33:52 [debug] 14457#100096: *3 lingering read: 4096
2016/07/21 08:33:52 [debug] 14457#100096: *3 SSL_read: 4096
2016/07/21 08:33:52 [debug] 14457#100096: *3 lingering read: 4096
2016/07/21 08:33:52 [debug] 14457#100096: *3 SSL_read: 4096
2016/07/21 08:33:52 [debug] 14457#100096: *3 lingering read: 4096

But the CFNetwork framework breaks in the same way:

didCompleteWithError ERROR Optional(Error Domain=NSURLErrorDomain Code=-1005 "The network connection was lost."

Last edited 8 years ago by iamasmith.home@…

comment:11 by iamasmith.home@…, 8 years ago

Just finished... it took a little while because I don't maintain this bit, but I managed to stop Restlet from chunking the responses by wrapping them in a Representation object whose hasKnownSize() returns true.

It definitely returns Content-Length now.

The OSX CFNetwork API still fails in exactly the same fashion.

comment:12 by Maxim Dounin, 8 years ago

You may also check whether disabling keepalive makes any difference (that is, Connection: close in the nginx response), though I doubt it will help.

comment:13 by Maxim Dounin, 8 years ago

Overall, it looks like a CFNetwork problem: it seems to be unable to handle any form of connection close when using SSL, even a perfectly clean one. It would be a good idea to report this problem to Apple.

I don't think anything can be done about it without breaking well-behaving clients at the same time, at least at the lingering-close level. What may help is always fully reading/discarding the body after returning a response, but that's not something nginx is currently able to do when using proxy_request_buffering off.

comment:14 by iamasmith.home@…, 8 years ago

Yes, currently the keepalive timeout is set to 0 and we do see Connection: close.

One thought here is that CFNetwork is in the User-Agent string, so I could do something conditional with the response based on that. But I want to explore whether a broader setting might be useful for folks testing sites through ngrok or similar.

stunnel, ngrok and the like maintain the actual TCP connection themselves, as would an SSL offloader device such as a Juniper SSL VPN box. The shutdown(SHUT_WR) or SSL_shutdown() in these scenarios isn't going to reach the client, and the results with an intermediate proxy of this sort are going to be more variable with the connection shutdown than without.

If you think there is merit in accommodating a feature around this that can be optionally enabled, then I'm willing to create the functionality based on a setting, possibly also using the User-Agent string to detect CFNetwork or Safari (because I'm sure Safari is essentially the same code).

in reply to:  13 ; comment:15 by iamasmith.home@…, 8 years ago

Replying to mdounin:

....

What may help is always fully reading/discarding the body after returning a response, but that's not something nginx is currently able to do when using proxy_request_buffering off.

I may be misunderstanding something here, but I thought the lingering handler actually did discard the data. If we don't shut down the connection for write, isn't this exactly what it's doing? With the original patch nginx certainly eats the remainder of the request quite happily.

Last edited 8 years ago by iamasmith.home@…

in reply to:  15 comment:16 by Maxim Dounin, 8 years ago

Replying to iamasmith.home@…:

I may be misunderstanding something here, but I thought the lingering handler actually did discard the data. If we don't shut down the connection for write, isn't this exactly what it's doing? With the original patch nginx certainly eats the remainder of the request quite happily.

Lingering close is about closing a connection with lingering: that is, closing the connection but avoiding an RST if the client sends more data. This ensures that the client has a chance to see the response returned. It doesn't try to understand what exactly it reads and discards; it may be some request body bytes, another request, or whatever.

Discarding the body is somewhat different. It's not about closing a connection; it's about discarding the body of a request to maintain protocol synchronization. It is essentially reading the request body and throwing it away. It can only read the request body bytes, and should not try to read anything beyond that, as the connection may be used later for other requests. This is what nginx normally does when it returns, e.g., a static file and doesn't need the body even if one is present; see ngx_http_discard_request_body().

Lingering close cannot be done properly without an actual connection close (that is, shutdown()), or the client won't be able to find out that nginx is closing the connection and will continue sending data on it (e.g., more requests on the same connection). Discarding the body, in contrast, can be done without closing the connection at all.
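
To make the distinction concrete, here is a minimal sketch of body discarding (a hypothetical helper using plain POSIX reads, not the actual ngx_http_discard_request_body() implementation): exactly the remaining Content-Length bytes are read and dropped, and the connection stays usable afterwards.

/* Read and drop exactly `remaining` body bytes; the connection
 * remains open and protocol-synchronized for further requests. */
#include <sys/types.h>
#include <sys/socket.h>

static int discard_body_sketch(int fd, size_t remaining)
{
    char buf[4096];

    while (remaining > 0) {
        size_t  want = remaining < sizeof(buf) ? remaining : sizeof(buf);
        ssize_t n = recv(fd, buf, want, 0);

        if (n <= 0) {
            return -1;              /* client went away mid-body */
        }

        remaining -= (size_t) n;    /* the bytes read are simply dropped */
    }

    return 0;
}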

comment:17 by iamasmith.home@…, 8 years ago

I think I need to see this in action with keep-alive working. I'm still trying to coerce the application server into sending anything but Connection: close in the header.

All my tests so far have finished with the connection being closed by the lingering close, because the lingering request handler still eventually calls ngx_close_connection() via ngx_http_close_request() and ngx_http_close_connection().

I do note, though, that the connection itself doesn't get closed if the request count isn't 0, and I suspect I need to get our keep-alive working to test. But I'm not sure what it would be doing in the lingering handler anyway if it hadn't been told to close the connection by the server.

Our problem is that in lingering we are trying to close down the connection with the client, but it seems the client is not willing to accept that until the full request has gone out. Any discarding of a request body on a keep-alive connection surely wouldn't hit the lingering handler, so the scenario we are looking at is the end of the connection anyway, and in this configuration we have no possibility of multiplexed requests on the same connection.

comment:18 by Maxim Dounin, 8 years ago

It doesn't matter what the application server returns in its response headers. The connection between nginx and the client depends only on the nginx configuration.

On the other hand, keeping the connection alive is not something nginx is able to do after an early response from a backend server when using proxy_request_buffering off. Even with keepalive switched on, nginx will close the connection in such a case, because it wasn't able to read the whole body from the client and can't properly discard the rest of it.

As of now, you have two basic options to work around the problem you are seeing:

  • Read the whole body in your application server before returning a response.
  • Avoid using proxy_request_buffering off so nginx will read the body for you.

Obviously, the real solution is to fix the client. As previously suggested, you may focus on this instead.

comment:19 by iamasmith.home@…, 8 years ago

Since the connection is going to pass to lingering anyway, and with the patch I originally discussed the lingering body handler will discard the data and then terminate the connection, I think my best option is to maintain the patch, or perhaps a variant of it that only applies this behaviour when a CFNetwork client is used.

The size of the POSTs and the scalability of the nodes in question mean that buffering requests on the proxy isn't a viable solution, and reading the whole body in the application server means reworking some of the application team's frameworks (restlet.org on Tomcat), which is a tough ask.

I know CFNetwork seems to be the broken piece in all this. I think I'm going to have to maintain a variant of my patch as an interim measure and report the bug to Apple. To be honest, though, I very much doubt Apple will fix the problem, or even take the time for the right folks to understand it, unless it affects one of their own commercial environments.

Last edited 8 years ago by iamasmith.home@…

comment:20 by Maxim Dounin, 7 years ago

Resolution: wontfix
Status: new → closed

Closing this; it doesn't look like there is anything we can do in nginx to improve things without breaking well-behaving clients at the same time. The available workarounds are listed in comment:18.
