Opened 7 years ago
Last modified 7 years ago
#1293 new enhancement
nginx http proxy stops sending request data after first byte of server response is received
| Reported by: | | Owned by: | |
|---|---|---|---|
| Priority: | minor | Milestone: | |
| Component: | nginx-module | Version: | 1.10.x |
| Keywords: | http proxy upstream | Cc: | |
uname -a:

```
[root@f266269b8112 nginx]# uname -a
Linux f266269b8112 4.1.12-61.1.19.el7uek.x86_64 #2 SMP Fri Nov 18 17:00:09 PST 2016 x86_64 x86_64 x86_64 GNU/Linux
```

nginx -V:

```
[root@f266269b8112 nginx]# nginx-debug -V
nginx version: nginx/1.10.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --user=nginx --group=nginx --with-file-aio --with-threads --with-ipv6 --with-http_addition_module --with-http_auth_request_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_ssl_module --add-module=/home/cozheng/nginx_git/modules/nginx-module-vts-0.1.11 --add-module=/home/cozheng/nginx_git/modules/nginx_upstream_check_module-master --add-module=/home/cozheng/nginx_git/modules/ngx_http_accounting_module-master --add-module=/home/cozheng/nginx_git/modules/nginx-limit-upstream-master --add-module=/home/cozheng/nginx_git/modules/ngx_http_dyups_module-master --add-module=/home/cozheng/nginx_git/modules/nginx-sticky-module-ng-08a395c66e42 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-debug
```
Description
I have an upstream service that accepts large input files via the PUT method. The server sends the first part of its response (status code + headers) before the client has completely sent all of its request data. Questionable design choices aside, I believe this is within the RFC 2616 specification (section 8.2.2):
> If an origin server receives a request that does not include an Expect request-header field with the "100-continue" expectation, the request includes a request body, and the server responds with a final status code before reading the entire request body from the transport connection, then the server SHOULD NOT close the transport connection until it has read the entire request, or until the client closes the connection. Otherwise, the client might not reliably receive the response message. However, this requirement is not to be construed as preventing a server from defending itself against denial-of-service attacks, or from badly broken client implementations.
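For testing purposes, an upstream with this behaviour can be approximated in a few lines of Python. This is a hypothetical stand-in for the real service, not the actual backend: port 17999 matches the upstream in the proxy.conf below, and the JSON response body is made up.

```python
# Hypothetical stand-in for the upstream: read the request headers, reply
# with a final 200 before consuming the request body, and then keep reading,
# as RFC 2616 section 8.2.2 requires of such a server.
import socket

HOST, PORT = "127.0.0.1", 17999  # assumed; matches the upstream port below

with socket.create_server((HOST, PORT)) as srv:
    conn, _ = srv.accept()
    with conn:
        # Read until the end of the request headers.
        buf = b""
        while b"\r\n\r\n" not in buf:
            chunk = conn.recv(4096)
            if not chunk:
                break
            buf += chunk

        # Honour Expect: 100-continue so the client starts sending the body.
        if b"100-continue" in buf.lower():
            conn.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")

        # Send the final status *before* reading the body...
        conn.sendall(
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: application/json\r\n"
            b"Content-Length: 2\r\n\r\n{}"
        )

        # ...but keep draining the request so the client can finish sending.
        while conn.recv(65536):
            pass
```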
Because this happens on the back-end, and there may be some client-side buffering still going on, it is a little difficult to observe. Using a mix of pcaps and gdb on nginx, I can see that nginx stops sending data at almost exactly the moment the server sends its response:
```
curl -k -vvv --header 'Accept: application/json' -i --max-time 3600 \
    --header 'X-Detect-Content-Type: true' --header 'Content-Type: application/x-gzip' \
    --noproxy '*' -u xxx:yyy -T ./input.tar.gz -XPUT \
    'http://localhost:8001/qq/xxxxxxxxxxxxxxxxxxxxx/yyyyyyyyyyyy/?zzzzzzzzzzzzzzzzzzzzzz'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* About to connect() to slcao604 port 8001 (#0)
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8001 (#0)
* Server auth using Basic with user 'xxx'
> PUT /qq/xxxxxxxxxxxxxxxxxxxxx/yyyyyyyyyyyy/?zzzzzzzzzzzzzzzzzzzzzz HTTP/1.1
> Authorization: Basic ..............................................==
> User-Agent: curl/7.29.0
> Host: localhost:8001
> Accept: application/json
> X-Detect-Content-Type: true
> Content-Type: application/x-gzip
> Content-Length: 6900473
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
} [data not shown]
  3 6738k    0     0    3  224k      0   1992  0:57:44  0:01:55  0:55:49     0
```
A gdb session on nginx shows that the function that writes request data to the upstream fires repeatedly as the transfer progresses:
```
Breakpoint 7, ngx_http_upstream_send_request_body (do_write=1, u=0x20459f0, r=0x2044650)
    at src/http/ngx_http_upstream.c:1959
1959            rc = ngx_http_read_unbuffered_request_body(r);
(gdb) continue
Continuing.

Breakpoint 7, ngx_http_upstream_send_request_body (do_write=1, u=0x20459f0, r=0x2044650)
    at src/http/ngx_http_upstream.c:1959
1959            rc = ngx_http_read_unbuffered_request_body(r);
(gdb) continue
Continuing.

Breakpoint 7, ngx_http_upstream_send_request_body (do_write=1, u=0x20459f0, r=0x2044650)
    at src/http/ngx_http_upstream.c:1959
1959            rc = ngx_http_read_unbuffered_request_body(r);
(gdb) continue
Continuing.
```
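For anyone reproducing this, a breakpoint like the one above can be set by attaching gdb to the (single) worker process; a debug build is assumed and the pgrep pattern is illustrative:

```
gdb -p "$(pgrep -f 'nginx: worker process')"
(gdb) break ngx_http_upstream_send_request_body
(gdb) continue
```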
As soon as the response comes back from the server, the curl data transfer suddenly stalls out:
```
< HTTP/1.1 200 OK
< Server: nginx/1.10.2
< Date: Tue, 13 Jun 2017 00:38:04 GMT
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
<
{ [data not shown]
  3 6738k    0     4    3  224k      0    127 15:05:34  0:29:58 14:35:36     0
```
In the gdb session, I see do_write go to zero directly after this:
```
Breakpoint 7, ngx_http_upstream_send_request_body (do_write=0, u=0x20459f0, r=0x2044650)
    at src/http/ngx_http_upstream.c:1959
1959            rc = ngx_http_read_unbuffered_request_body(r);
(gdb) continue
Continuing.

Breakpoint 7, ngx_http_upstream_send_request_body (do_write=0, u=0x20459f0, r=0x2044650)
    at src/http/ngx_http_upstream.c:1959
1959            rc = ngx_http_read_unbuffered_request_body(r);
(gdb) continue
```
Tuning the lingering_close option does not appear to resolve this issue. Setting lingering_close to "on" causes the request data to pause immediately after the first byte of the 200 OK response is received, and then to flush out all at once after the upstream server has finished sending its entire response body.
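To make the stall easier to observe without pcaps, a sketch like the following can stream a PUT body through the proxy and report when response bytes first arrive relative to upload progress. Host, port, and path are placeholders mirroring the curl example above (no auth is sent), and the body size matches its Content-Length.

```python
# Hypothetical observation helper: upload a body and note when the response
# begins; if the bug triggers, "sent" stops increasing shortly afterwards.
import socket
import time

HOST, PORT = "127.0.0.1", 8001   # the proxy's listen port from proxy.conf
SIZE, CHUNK = 6_900_473, 16_384  # body size matching the curl example

s = socket.create_connection((HOST, PORT))
s.sendall(
    b"PUT /qq/test HTTP/1.1\r\n"          # placeholder path
    b"Host: localhost:8001\r\n"
    b"Content-Type: application/x-gzip\r\n"
    + ("Content-Length: %d\r\n\r\n" % SIZE).encode()
)

s.setblocking(False)             # poll both directions from one loop
sent, saw_response = 0, False
start = time.monotonic()
while sent < SIZE:
    try:
        sent += s.send(b"\0" * min(CHUNK, SIZE - sent))
    except BlockingIOError:
        time.sleep(0.01)         # send buffer full: the proxy stopped reading
    try:
        data = s.recv(4096)
        if data and not saw_response:
            saw_response = True
            print("response began at %.1fs with %d/%d bytes sent"
                  % (time.monotonic() - start, sent, SIZE))
        if not data:
            break                # proxy closed the connection
    except BlockingIOError:
        pass
print("upload ended: %d of %d bytes" % (sent, SIZE))
```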
I have done most of my testing against nginx 1.10.2, but I tried 1.13.1, and it seemed to exhibit the same behavior.
I am attaching a debug-level error.log from one of the transfers. I think that the interesting part starts around line 1083, when the server's response comes back.
My nginx.conf:
```nginx
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
}

http {
    vhost_traffic_status_zone;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format logstash '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent $bytes_sent '
                        '"$http_referer" "$http_user_agent" '
                        '$request_time $request_length '
                        '$upstream_addr '
                        '$upstream_response_time '
                        '$upstream_http_x_trans_id '
                        '$http_x_forwarded_for '
                        '$ssl_protocol/$ssl_cipher';

    access_log /var/log/nginx/access.log logstash;

    #sendfile on;
    keepalive_timeout 65;

    # throttling config
    # end of throttling config

    include /etc/nginx/conf.d/*.conf;
}
```
My /etc/nginx/conf.d/proxy.conf:
```nginx
upstream storagebackend {
    server node1.example.com:17999 weight=10;
    check interval=3000 rise=2 fall=3 timeout=1000 type=http;
    check_http_send "GET /healthcheck HTTP/1.1\r\nHost: localhost\r\n\r\n";
    check_http_expect_alive http_2xx;
}

server {
    listen 8001;
    server_name vip.example.com;

    client_body_timeout 900s;
    client_max_body_size 0;
    client_header_timeout 900s;

    # test
    lingering_close off;

    location / {
        proxy_pass http://storagebackend;
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_buffering off;
        proxy_max_temp_file_size 0;
        proxy_read_timeout 900s;
        proxy_send_timeout 900s;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        allow 127.0.0.1;
    }
}

server {
    listen 9990;
    server_name stats;

    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
    }
}
```
Attachments (1)
Change History (2)
by , 7 years ago
Attachment: error.20170612_002.sanitized.log.gz added
comment:1 by , 7 years ago
Type: defect → enhancement
As of now, nginx stops sending the request body as soon as it receives the full header of the upstream server's response and starts returning the response to the client. This is more or less in line with what RFC 7230 says in section 6.5:
> A client sending a message body SHOULD monitor the network connection
> for an error response while it is transmitting the request. If the
> client sees a response that indicates the server does not wish to
> receive the message body and is closing the connection, the client
> SHOULD immediately cease transmitting the body and close its side of
> the connection.
That is, as of now nginx treats any response as a response "that indicates the server does not wish to receive the message body". This could probably be improved to cease transmitting the body only in certain cases (perhaps only on 4xx/5xx errors). Alternatively, a configuration option could be introduced to continue sending the body for as long as nginx is able to.
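Expressed as a sketch (illustrative Python, not nginx source; the abort_on_error flag is a hypothetical stand-in for the configuration option mentioned above), the proposed decision would look something like:

```python
# Illustrative sketch of the proposed behaviour: keep streaming the request
# body for informational/success/redirect statuses, and cease only when the
# upstream signals an error (or always, if configured to abort on any final
# response). "abort_on_error" is a hypothetical knob, not an nginx directive.
def should_keep_sending_body(status: int, abort_on_error: bool = True) -> bool:
    if status < 400:                # 1xx/2xx/3xx: upstream accepted the request
        return True
    return not abort_on_error       # 4xx/5xx: stop sending by default
```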
Previously discussed at least here.
debug error log for ticket #1293