id	summary	reporter	owner	description	type	status	priority	milestone	component	version	resolution	keywords	cc	uname	nginx_version
2045	error: upstream prematurely closed connection while reading response header on 408 or 444	terence.nexleaf.org@…		"Is there a good way for an upstream server to signal to a downstream NGINX proxy that it should close its connection with the client, without the upstream abruptly closing its own connection to the proxy? This is likely related to, but not the same as, https://trac.nginx.org/nginx/ticket/1005

Here is a trivial, contrived example configuration:

{{{
# test.conf
server {
  server_name test;
  listen 8000;

  location /proxy/ {
    rewrite ^/proxy(.*)$ $1 break;
    proxy_pass http://127.0.0.1:8000;
  }

  location /200 {
    return 200;
  }

  location /444 {
    return 444;
  }

  location /408 {
    return 408;
  }

  location /444-reset {
    reset_timedout_connection on;
    return 444;
  }

  location /408-reset {
    reset_timedout_connection on;
    return 408;
  }
}
}}}

This configuration can be run with:

{{{
docker run --net=host --rm -v $PWD/test.conf:/etc/nginx/conf.d/test.conf nginx:mainline-alpine
}}}

When running curl in verbose mode:

{{{
curl -v http://127.0.0.1:8000/proxy/408
curl -v http://127.0.0.1:8000/proxy/444
}}}

The client sees the following (identical for every request apart from the GET line):

{{{
*   Trying 127.0.0.1:8000...
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET /proxy/408 HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.72.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.19.2
< Date: Mon, 14 Sep 2020 19:13:36 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
< 
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.19.2</center>
</body>
</html>
* Connection #0 to host 127.0.0.1 left intact
}}}

All of these responses carry ""Connection: keep-alive"", which is not what I want, but I don't know how to ""forward"" the 444 or 408 through the proxy so that the client connection is closed in the right place.

NGINX logs the following for those requests:

{{{
2020/09/14 19:13:36 [error] 30#30: *1 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: test, request: ""GET /proxy/408 HTTP/1.1"", upstream: ""http://127.0.0.1:8000/408"", host: ""127.0.0.1:8000""
127.0.0.1 - - [14/Sep/2020:19:13:36 +0000] ""GET /408 HTTP/1.0"" 408 0 ""-"" ""curl/7.72.0"" ""-""
127.0.0.1 - - [14/Sep/2020:19:13:36 +0000] ""GET /proxy/408 HTTP/1.1"" 502 157 ""-"" ""curl/7.72.0"" ""-""

127.0.0.1 - - [14/Sep/2020:19:13:41 +0000] ""GET /444 HTTP/1.0"" 444 0 ""-"" ""curl/7.72.0"" ""-""
2020/09/14 19:13:41 [error] 30#30: *4 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: test, request: ""GET /proxy/444 HTTP/1.1"", upstream: ""http://127.0.0.1:8000/444"", host: ""127.0.0.1:8000""
127.0.0.1 - - [14/Sep/2020:19:13:41 +0000] ""GET /proxy/444 HTTP/1.1"" 502 157 ""-"" ""curl/7.72.0"" ""-""
}}}
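
For what it's worth, the closest workaround I can think of (a sketch I have not verified; the /close location name is made up) is to catch the 502 that the proxy generates when the upstream closes the connection, and turn it into a 444 on the proxying side, so the client connection is at least closed there:

{{{
  location /proxy/ {
    rewrite ^/proxy(.*)$ $1 break;
    proxy_pass http://127.0.0.1:8000;

    # When the upstream read fails, nginx generates a 502 itself;
    # route that to an internal location that closes the client
    # connection with 444 instead of sending the 502 error page.
    error_page 502 = /close;
  }

  location = /close {
    internal;
    return 444;
  }
}}}

This still cannot distinguish a deliberate 444/408 from the upstream from a genuine upstream failure, which is why it is not really what I am asking for.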


A slightly separate, and maybe unrelated, question is about the ""reset_timedout_connection"" directive. Based on the name I would expect it to reset the connection for both the 408 and 444 responses, but (as documented) it only affects the 444 response. Is there a reason it doesn't reset all timed-out connections? It seems like it would make sense, but maybe I don't understand why/when I should be using this feature.
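
To illustrate my reading of the documentation (which may be wrong): ""reset_timedout_connection"" sends an RST when NGINX itself times a client out, or when it closes a connection with the non-standard 444 code, e.g.:

{{{
server {
  listen 8000;

  # Per the docs, this affects connections NGINX times out itself
  # (e.g. a client stalling past client_body_timeout) and 444 closes;
  # a plain ""return 408"" is an ordinary response, so it is untouched.
  reset_timedout_connection on;
  client_body_timeout 10s;
}
}}}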

Again using curl (the client sees the same 502 response as above):

{{{
curl -v http://127.0.0.1:8000/proxy/408-reset
curl -v http://127.0.0.1:8000/proxy/444-reset
}}}

And in the NGINX logs:

{{{
127.0.0.1 - - [14/Sep/2020:19:19:48 +0000] ""GET /408-reset HTTP/1.0"" 408 0 ""-"" ""curl/7.72.0"" ""-""
2020/09/14 19:19:48 [error] 30#30: *10 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: test, request: ""GET /proxy/408-reset HTTP/1.1"", upstream: ""http://127.0.0.1:8000/408-reset"", host: ""127.0.0.1:8000""
127.0.0.1 - - [14/Sep/2020:19:19:48 +0000] ""GET /proxy/408-reset HTTP/1.1"" 502 157 ""-"" ""curl/7.72.0"" ""-""

127.0.0.1 - - [14/Sep/2020:19:13:43 +0000] ""GET /444-reset HTTP/1.0"" 444 0 ""-"" ""curl/7.72.0"" ""-""
2020/09/14 19:13:43 [error] 30#30: *7 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: test, request: ""GET /proxy/444-reset HTTP/1.1"", upstream: ""http://127.0.0.1:8000/444-reset"", host: ""127.0.0.1:8000""
127.0.0.1 - - [14/Sep/2020:19:13:43 +0000] ""GET /proxy/444-reset HTTP/1.1"" 502 157 ""-"" ""curl/7.72.0"" ""-""
}}}"	defect	closed	minor		nginx-core	1.19.x	invalid		terence.nexleaf.org@…	Linux host 5.8.4-1-default #1 SMP Wed Aug 26 10:53:09 UTC 2020 (64fe492) x86_64 Linux	"nginx version: nginx/1.19.2
built by gcc 9.3.0 (Alpine 9.3.0) 
built with OpenSSL 1.1.1g  21 Apr 2020
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-perl_modules_path=/usr/lib/perl5/vendor_perl --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-Os -fomit-frame-pointer' --with-ld-opt=-Wl,--as-needed"
