#2252 closed defect (fixed)
Rate limiting rules turn chunked transfer into invalid request
Reported by: | | Owned by: |
---|---|---|---
Priority: | minor | Milestone: |
Component: | nginx-module | Version: | 1.19.x
Keywords: | | Cc: | simon.attila72@…
uname -a: | Linux ip-10-1-1-20.eu-central-1.compute.internal 4.14.238-182.422.amzn2.x86_64 #1 SMP Tue Jul 20 20:35:54 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | |
nginx -V:
nginx version: nginx/1.20.0
built by gcc 7.3.1 20180712 (Red Hat 7.3.1-13) (GCC)
built with OpenSSL 1.1.1g FIPS 21 Apr 2020
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-compat --with-debug --with-file-aio --with-google_perftools_module --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_degradation_module --with-http_flv_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_mp4_module --with-http_perl_module=dynamic --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_xslt_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-threads --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E'
Description
We managed to define a configuration that triggers very strange behaviour in Nginx.
I am creating this ticket to determine whether this is a bug or expected behaviour that is simply not covered by the documentation.
We have a uWSGI Python webapp fronted by Nginx; Nginx receives the requests from an AWS ELB.
When we added the following rate limiting rules to Nginx, clients using chunked transfer encoding started to receive "HTTP 400: Bad Request (Missing body)" for their POST requests:
# large file rate limit (per ip)
# large file: >=20MB
# doc: https://www.nginx.com/blog/rate-limiting-nginx/
map $content_length $length_limit {
    # >=100Mb OR 20-99MB
    "~^([0-9]{9})|([2-9][0-9]{7})" 1;
    default 0;
}

map $length_limit $limit_key {
    0 "";
    1 $binary_remote_addr;
}

# max 10 large files per minute
limit_req_zone $limit_key zone=req_zone:1m rate=10r/m;
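To sanity-check the map regex above, it can be mirrored in Python. This is an illustrative sketch, not nginx code; `is_large` is an invented helper name. nginx map regexes are matched unanchored apart from the explicit `^`, so `re.search` reproduces that behaviour:

```python
import re

# The same pattern the map block applies to $content_length:
# nine digits (>= 100 MB) or eight digits starting with 2-9 (20-99 MB).
LARGE_BODY = re.compile(r"^([0-9]{9})|([2-9][0-9]{7})")

def is_large(content_length: str) -> bool:
    # Mirrors $length_limit: True when the rate limit should apply.
    return LARGE_BODY.search(content_length) is not None

for cl in ["1048576", "19999999", "20000000", "99999999", "100000000", ""]:
    print(repr(cl), is_large(cl))
```

Note that an empty `$content_length` (as with chunked requests, which carry no Content-Length header) maps to 0, i.e. no limiting, so the chunked case was expected to bypass the limit entirely.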
Tests confirm that the above rate limiting rules work as expected (when chunked transfer encoding is not used).
If we remove the rules, POSTs with chunked transfer encoding complete successfully again.
The documentation says "When HTTP/1.1 chunked transfer encoding is used to send the original request body, the request body will be buffered regardless of the directive value unless HTTP/1.1 is enabled for proxying."
Our guess is that, in the case of chunked transfer encoding, Nginx buffers the whole request, calculates the content length, and passes it to our app.
But when the above rate limiting rules are added, they reference the $content_length variable and initialize it with an empty value (because it is unavailable at the beginning of a chunked transfer). This makes Nginx believe that the content length is already known, so it skips the content-length calculation and passes the request along without the Content-Length header, which finally triggers the "HTTP 400: Bad Request (Missing body)" condition.
The above behaviour might be a bug.
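The failure mode on the backend side can be illustrated with a minimal WSGI sketch (hypothetical; the actual application uses a framework that performs an equivalent check): when nginx forwards the body without CONTENT_LENGTH, the request is rejected as having no body.

```python
import io

def application(env, start_response):
    # When nginx skips the content-length calculation (the chunked case
    # described above), CONTENT_LENGTH never reaches the backend and the
    # request is rejected even though a body was sent.
    if not env.get("CONTENT_LENGTH"):
        start_response("400 Bad Request", [("Content-Type", "text/plain")])
        return [b"Missing body"]
    length = int(env["CONTENT_LENGTH"])
    body = env["wsgi.input"].read(length)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"got %d bytes" % len(body)]

# Simulated calls: with and without a forwarded CONTENT_LENGTH.
statuses = []
start = lambda status, headers: statuses.append(status)
application({"CONTENT_LENGTH": "6", "wsgi.input": io.BytesIO(b"foobar")}, start)
application({"wsgi.input": io.BytesIO(b"foobar")}, start)
print(statuses)
```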
Change History (6)
comment:1 by , 3 years ago
There's also an intentionally undocumented variable $proxy_internal_body_length in the proxy module that represents the non-cacheable calculated request body length, as passed in Content-Length to the backend; unfortunately, it doesn't exist in the fastcgi module.
comment:2 by , 3 years ago
Just for the record, $proxy_internal_body_length is for proxy_set_body and is mostly unrelated to this issue. It is just a coincidence that proxy is not affected, since $proxy_internal_body_length is not cacheable.
For fastcgi/uwsgi it might make sense to improve the $content_length variable to avoid caching in the case of chunked transfer encoding. We already do something very similar while reading the request body with request buffering switched off, see a08fad30aeac.
Patch:
# HG changeset patch
# User Maxim Dounin <mdounin@mdounin.ru>
# Date 1633116687 -10800
#      Fri Oct 01 22:31:27 2021 +0300
# Node ID 64de47d8f941dc0ba6b5284a343c856f12daf59c
# Parent  bfad703459b4e2416548ac66f548e96c2197d9cc
Fixed $content_length cacheability with chunked (ticket #2252).

diff --git a/src/http/ngx_http_variables.c b/src/http/ngx_http_variables.c
--- a/src/http/ngx_http_variables.c
+++ b/src/http/ngx_http_variables.c
@@ -1179,6 +1179,10 @@ ngx_http_variable_content_length(ngx_htt
         v->no_cacheable = 0;
         v->not_found = 0;

+    } else if (r->headers_in.chunked) {
+        v->not_found = 1;
+        v->no_cacheable = 1;
+
     } else {
         v->not_found = 1;
     }
Review and testing appreciated.
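The cacheability issue the patch addresses can be modelled outside nginx. The sketch below is a toy Python model (all names are invented; this is not nginx's implementation): limit_req evaluates the variable before the body is buffered, and without the no_cacheable flag the empty result is cached and reused later.

```python
class Var:
    """Toy model of an nginx variable with optional result caching."""

    def __init__(self, compute):
        self.compute = compute
        self.no_cacheable = False
        self._cached = None

    def get(self):
        if self._cached is not None and not self.no_cacheable:
            return self._cached
        self._cached = self.compute()
        return self._cached

buffered_length = {"value": ""}             # unknown until the body is read
content_length = Var(lambda: buffered_length["value"])

early = content_length.get()                # limit_req key: "" gets cached
buffered_length["value"] = "6"              # body buffered, length now known
late = content_length.get()                 # uwsgi_param: stale "" (the bug)

content_length.no_cacheable = True          # what the patch does for chunked
fixed = content_length.get()                # recomputed: "6"
print(repr(early), repr(late), repr(fixed))
```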
comment:3 by , 3 years ago
The patch looks good to me.
Basic tests used to demonstrate the issue:
# HG changeset patch
# User Sergey Kandaurov <pluknet@nginx.com>
# Date 1633381127 -10800
#      Mon Oct 04 23:58:47 2021 +0300
# Node ID 2cfee2d5a9ec679b2658dc318a5655e7f33661e8
# Parent  2fc629d84e822564029a3bd45e21f6791cdcd3dd
Tests: uwsgi request body tests.

diff --git a/uwsgi_body.t b/uwsgi_body.t
new file mode 100644
--- /dev/null
+++ b/uwsgi_body.t
@@ -0,0 +1,136 @@
+#!/usr/bin/perl
+
+# (C) Sergey Kandaurov
+# (C) Nginx, Inc.
+
+# Test for uwsgi backend with request body.
+
+###############################################################################
+
+use warnings;
+use strict;
+
+use Test::More;
+
+BEGIN { use FindBin; chdir($FindBin::Bin); }
+
+use lib 'lib';
+use Test::Nginx;
+
+###############################################################################
+
+select STDERR; $| = 1;
+select STDOUT; $| = 1;
+
+my $t = Test::Nginx->new()->has(qw/http uwsgi limit_req/)
+	->has_daemon('uwsgi')->plan(5)
+	->write_file_expand('nginx.conf', <<'EOF');
+
+%%TEST_GLOBALS%%
+
+daemon off;
+
+events {
+}
+
+http {
+    %%TEST_GLOBALS_HTTP%%
+
+    limit_req_zone $content_length zone=one:1m rate=1000r/s;
+
+    server {
+        listen       127.0.0.1:8080;
+        server_name  localhost;
+
+        limit_req zone=one;
+
+        location / {
+            uwsgi_pass 127.0.0.1:8081;
+            uwsgi_param CONTENT_LENGTH $content_length if_not_empty;
+        }
+    }
+}
+
+EOF
+
+$t->write_file('uwsgi_test_app.py', <<END);
+
+def application(env, start_response):
+    start_response('200 OK', [('Content-Type','text/plain')])
+    if "CONTENT_LENGTH" not in env:
+        return b"SEE-THIS"
+    cl = int(env.get('CONTENT_LENGTH'))
+    rb = env.get('wsgi.input').read(cl)
+    return b"cl=%d '%s'" % (cl, rb)
+
+END
+
+my $uwsgihelp = `uwsgi -h`;
+my @uwsgiopts = ();
+
+if ($uwsgihelp !~ /--wsgi-file/) {
+	# uwsgi has no python support, maybe plugin load is necessary
+	push @uwsgiopts, '--plugin', 'python';
+	push @uwsgiopts, '--plugin', 'python3';
+}
+
+open OLDERR, ">&", \*STDERR; close STDERR;
+$t->run_daemon('uwsgi', '--socket', '127.0.0.1:' . port(8081), @uwsgiopts,
+	'--wsgi-file', $t->testdir() . '/uwsgi_test_app.py',
+	'--logto', $t->testdir() . '/uwsgi_log');
+open STDERR, ">&", \*OLDERR;
+
+$t->run();
+
+$t->waitforsocket('127.0.0.1:' . port(8081))
+	or die "Can't start uwsgi";
+
+###############################################################################
+
+like(http_get('/'), qr/SEE-THIS/, 'uwsgi no body');
+
+like(http_get_length('/', 'foobar'), qr/cl=6 'foobar'/, 'uwsgi body');
+like(http_get_length('/', ''), qr/cl=0 ''/, 'uwsgi empty body');
+
+# limit_req is used to cache $content_length before it's calculated
+
+TODO: {
+local $TODO = 'not yet';
+
+like(http_get_chunked('/', 'foobar'), qr/cl=6 'foobar'/, 'uwsgi chunked');
+like(http_get_chunked('/', ''), qr/cl=0 ''/, 'uwsgi empty chunked');
+
+}
+
+###############################################################################
+
+sub http_get_length {
+	my ($url, $body) = @_;
+	my $length = length $body;
+	return http(<<EOF);
+GET $url HTTP/1.1
+Host: localhost
+Connection: close
+Content-Length: $length
+
+$body
+EOF
+}
+
+sub http_get_chunked {
+	my ($url, $body) = @_;
+	my $length = sprintf("%x", length $body);
+	return http(<<EOF);
+GET $url HTTP/1.1
+Host: localhost
+Connection: close
+Transfer-Encoding: chunked
+
+$length
+$body
+0
+
+EOF
+}
+
+###############################################################################
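For readers without the Test::Nginx harness, the two request helpers above can be approximated in Python. `build_request` is an invented name; the framing below uses CRLF line endings (the Perl heredocs send bare LF, which nginx also accepts) and emits a lone terminating chunk for an empty chunked body:

```python
def build_request(url: str, body: bytes, chunked: bool = False) -> bytes:
    """Build raw HTTP/1.1 requests like the Perl helpers above (sketch)."""
    head = (f"GET {url} HTTP/1.1\r\n"
            f"Host: localhost\r\n"
            f"Connection: close\r\n")
    if chunked:
        head += "Transfer-Encoding: chunked\r\n\r\n"
        payload = b""
        if body:  # one data chunk: hex size, CRLF, data, CRLF
            payload += b"%x\r\n%s\r\n" % (len(body), body)
        payload += b"0\r\n\r\n"  # terminating zero-length chunk
    else:
        head += f"Content-Length: {len(body)}\r\n\r\n"
        payload = body
    return head.encode("ascii") + payload

print(build_request("/", b"foobar", chunked=True).decode())
```

Sending the chunked variant through the limited vhost is what exposes the cached, empty `$content_length`.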
For the record.
Assuming you're passing the request content length to the backend within uwsgi_param:

uwsgi_param CONTENT_LENGTH $content_length;

You might want to avoid damaging $content_length early in request processing, e.g. by replacing it with $http_content_length. Obviously, though, this won't fix the present setup of limiting requests with chunked bodies. As a workaround, you could set up double proxying so that the backend always receives a calculated Content-Length.