Opened 4 years ago

Closed 4 years ago

Last modified 4 years ago

#2045 closed defect (invalid)

error: upstream prematurely closed connection while reading response header on 408 or 444

Reported by: terence.nexleaf.org@…
Owned by:
Priority: minor
Milestone:
Component: nginx-core
Version: 1.19.x
Keywords:
Cc: terence.nexleaf.org@…
uname -a: Linux host 5.8.4-1-default #1 SMP Wed Aug 26 10:53:09 UTC 2020 (64fe492) x86_64 Linux
nginx -V: nginx version: nginx/1.19.2
built by gcc 9.3.0 (Alpine 9.3.0)
built with OpenSSL 1.1.1g 21 Apr 2020
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-perl_modules_path=/usr/lib/perl5/vendor_perl --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-Os -fomit-frame-pointer' --with-ld-opt=-Wl,--as-needed

Description

Is there a good way to signal to an upstream NGINX server that it should close the connection with a client without closing the connection to the upstream server? This is likely(?) related but not the same as https://trac.nginx.org/nginx/ticket/1005

Here is a trivially simple and useless example configuration:

# test.conf

server {
  server_name test;
  listen 8000;

  location /proxy/ {
    rewrite ^/proxy(.*)$ $1 break;
    proxy_pass http://127.0.0.1:8000;
  }

  location /200 {
    return 200;
  }

  location /444 {
    return 444;
  }

  location /408 {
    return 408;
  }

  location /444-reset {
    reset_timedout_connection on;

    return 444;
  }

  location /408-reset {
    reset_timedout_connection on;

    return 408;
  }
}

This can be run with:

docker run --net=host --rm -v $PWD/test.conf:/etc/nginx/conf.d/test.conf nginx:mainline-alpine

When running curl in verbose mode:

curl -v http://127.0.0.1:8000/proxy/408
curl -v http://127.0.0.1:8000/proxy/444

The client sees the following (identical for all of these requests, apart from the GET line):

*   Trying 127.0.0.1:8000...
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET /proxy/408 HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.72.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.19.2
< Date: Mon, 14 Sep 2020 19:13:36 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
< 
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.19.2</center>
</body>
</html>
* Connection #0 to host 127.0.0.1 left intact

All of these responses have "Connection: keep-alive", which is not what I want from the upstream, but I don't know how to "forward" the 444 or 408 to the upstream so it can close the connection in the right place.

The following is logged in the NGINX logs for those requests:

2020/09/14 19:13:36 [error] 30#30: *1 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: test, request: "GET /proxy/408 HTTP/1.1", upstream: "http://127.0.0.1:8000/408", host: "127.0.0.1:8000"
127.0.0.1 - - [14/Sep/2020:19:13:36 +0000] "GET /408 HTTP/1.0" 408 0 "-" "curl/7.72.0" "-"
127.0.0.1 - - [14/Sep/2020:19:13:36 +0000] "GET /proxy/408 HTTP/1.1" 502 157 "-" "curl/7.72.0" "-"

127.0.0.1 - - [14/Sep/2020:19:13:41 +0000] "GET /444 HTTP/1.0" 444 0 "-" "curl/7.72.0" "-"
2020/09/14 19:13:41 [error] 30#30: *4 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: test, request: "GET /proxy/444 HTTP/1.1", upstream: "http://127.0.0.1:8000/444", host: "127.0.0.1:8000"
127.0.0.1 - - [14/Sep/2020:19:13:41 +0000] "GET /proxy/444 HTTP/1.1" 502 157 "-" "curl/7.72.0" "-"

A slightly separate and maybe unrelated question is about the "reset_timedout_connection" feature. Based on the name I would expect it to reset connections for both the 408 and 444 responses, but (as documented) it only affects the 444 response. Is there a reason it doesn't reset all timed-out connections? It seems like it would make sense, but maybe I don't understand why/when I should be using the feature.

Again using curl (the client sees the same 502 response as above):

curl -v http://127.0.0.1:8000/proxy/408-reset
curl -v http://127.0.0.1:8000/proxy/444-reset

And in the NGINX logs:

127.0.0.1 - - [14/Sep/2020:19:19:48 +0000] "GET /408-reset HTTP/1.0" 408 0 "-" "curl/7.72.0" "-"
2020/09/14 19:19:48 [error] 30#30: *10 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: test, request: "GET /proxy/408-reset HTTP/1.1", upstream: "http://127.0.0.1:8000/408-reset", host: "127.0.0.1:8000"
127.0.0.1 - - [14/Sep/2020:19:19:48 +0000] "GET /proxy/408-reset HTTP/1.1" 502 157 "-" "curl/7.72.0" "-"

127.0.0.1 - - [14/Sep/2020:19:13:43 +0000] "GET /444-reset HTTP/1.0" 444 0 "-" "curl/7.72.0" "-"
2020/09/14 19:13:43 [error] 30#30: *7 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: test, request: "GET /proxy/444-reset HTTP/1.1", upstream: "http://127.0.0.1:8000/444-reset", host: "127.0.0.1:8000"
127.0.0.1 - - [14/Sep/2020:19:13:43 +0000] "GET /proxy/444-reset HTTP/1.1" 502 157 "-" "curl/7.72.0" "-"

Change History (3)

comment:1 by Maxim Dounin, 4 years ago

Resolution: invalid
Status: new → closed

Is there a good way to signal to an upstream NGINX server that it should close the connection with a client without closing the connection to the upstream server?

Basic options are:

  • Return a response with X-Accel-Redirect to a location which will do what you want.
  • Configure error_page to handle errors appropriately.
  • Configure proxy_intercept_errors and error_page to intercept some specific errors returned by the upstream server.
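
For example, here is a minimal (untested) sketch of the third option, applied to the configuration from the description. The prematurely closed upstream connection surfaces on the proxying side as a 502, which can be redirected to a location that drops the client connection (the @drop name is illustrative):

location /proxy/ {
  rewrite ^/proxy(.*)$ $1 break;
  proxy_pass http://127.0.0.1:8000;

  # handle error responses (code >= 300) received from the upstream
  # via error_page instead of passing them through to the client
  proxy_intercept_errors on;

  # the premature close from the upstream is reported as a 502;
  # redirect it to a location that drops the client connection
  error_page 502 = @drop;
}

location @drop {
  # the non-standard code 444 closes the connection
  # without sending a response
  return 444;
}

Note that proxy_intercept_errors only matters for error codes actually received from the upstream; a prematurely closed upstream connection already produces a locally generated 502, so the error_page line alone should cover this particular case.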

A slightly separate and maybe unrelated question is about the "reset_timedout_connection" feature. Based on the name I would expect it to reset connections for both the 408 and 444 responses, but (as documented) it only affects the 444 response. Is there a reason it doesn't reset all timed-out connections? It seems like it would make sense, but maybe I don't understand why/when I should be using the feature.

The main goal of this directive is to reset connections which time out while sending a response. That is, when send_timeout occurs, with "reset_timedout_connection on;" nginx will instruct the kernel to drop the data from the socket buffer when the socket is closed, instead of trying to send the data until TCP times out as well. This directive isn't triggered by manually returning 408, as there is no actual timeout.
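
In configuration terms, a minimal sketch of the intended use (the timeout value is illustrative):

server {
  listen 8000;

  # if the client stops accepting data for 10 seconds while a
  # response is being sent, time the connection out...
  send_timeout 10s;

  # ...and reset it instead of closing it gracefully, so buffered
  # data is discarded rather than retransmitted until TCP gives up
  reset_timedout_connection on;
}

Under the hood the socket is closed with SO_LINGER set to a zero timeout, so an RST is sent and any data left in the socket buffer is discarded.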

If you have further questions, consider using the support options available.

comment:2 by terence.nexleaf.org@…, 4 years ago

Thanks for the follow-up! I wasn't sure if this was the right place to put the question, and I wasn't sure if there was a category other than "defect" (which I only noticed after I submitted it), but I'm guessing this probably should have gone to nginx-devel@…?

If it's an actual timeout, rather than a timeout error code like I was using, there's still the question of how to handle that on the downstream server (since the 408 isn't actually returned). Or must the downstreams all be set to larger timeouts than their upstreams, so the upstreams time out first rather than the other way around?
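
For concreteness, this is the kind of layering I mean on the proxying side; the directives are real, but the values are illustrative (the upstream's client_header_timeout and client_body_timeout default to 60s):

location /proxy/ {
  proxy_pass http://127.0.0.1:8000;

  # deliberately larger than the upstream's own 60s client
  # timeouts, so the upstream times out before we do
  proxy_connect_timeout 5s;
  proxy_read_timeout 75s;
  proxy_send_timeout 75s;
}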

comment:3 by Maxim Dounin, 4 years ago

I'm guessing this probably should have gone to nginx-devel@…?

Given the questions are about configuring nginx, not about nginx development, I would rather suggest the nginx@ one.
