#2063 closed defect (invalid)
nginx reports seemingly incorrect “upstream split a header line in FastCGI records”
| Reported by: | | Owned by: | |
|---|---|---|---|
| Priority: | minor | Milestone: | |
| Component: | documentation | Version: | 1.19.x |
| Keywords: | | Cc: | |
| uname -a: | Linux 4e4771383dfc 5.8.14-arch1-1 #1 SMP PREEMPT Wed, 07 Oct 2020 23:59:46 +0000 x86_64 Linux | | |
nginx -V:
nginx version: nginx/1.19.3
built by gcc 9.3.0 (Alpine 9.3.0)
built with OpenSSL 1.1.1g 21 Apr 2020
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-perl_modules_path=/usr/lib/perl5/vendor_perl --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-Os -fomit-frame-pointer' --with-ld-opt=-Wl,--as-needed
Description
This is a follow-up to https://github.com/symfony/symfony/issues/38462, where steps to reproduce the problem can be found.
After enabling debug mode, I got
[debug] http fastcgi parser: -2
[debug] upstream split a header line in FastCGI records
[error] upstream sent too big header while reading response header from upstream
before all headers could be sent. Note that the [error] entry seems inconsistent with the preceding [debug] entries.
I believe it has something to do with stderr, so I checked https://github.com/nginx/nginx/commit/593dec8b35e3997a18592e678845dedea28a57ef, but I am still unable to work out what is going on.
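
For reference, a minimal sketch of how debug-level messages like those above are enabled. This assumes an nginx binary compiled with the --with-debug configure option (the nginx -V output above does not list it, so a separate debug build would be needed); the error_log directive and its debug level are standard nginx configuration.

```nginx
# Minimal sketch: log debug-level messages such as
# "http fastcgi parser: -2" and
# "upstream split a header line in FastCGI records".
# Requires an nginx binary built with --with-debug.
error_log /var/log/nginx/error.log debug;
```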
Change History (3)
comment:1, 4 years ago
| Resolution: | → invalid |
|---|---|
| Status: | new → closed |
comment:2, 4 years ago
So I assumed the issue came from expose_php (https://www.php.net/manual/fr/ini.core.php#ini.expose-php), which sends a header before any log output, but it still doesn't work, which means all logs must fit in fastcgi_buffer_size.
Is there a way to avoid putting logs in the same buffer as the response headers?
comment:3, 4 years ago
> Is there a way to avoid putting logs in the same buffer as the response headers?
FastCGI records carrying the stderr stream can be skipped by nginx if they appear before the first stdout record, or after the end of the headers. If the stderr stream is interleaved with the stdout stream, both must fit into fastcgi_buffer_size.
The error message suggests that the amount of data sent before the response header is fully available is larger than fastcgi_buffer_size.

From the description it looks like the error you see is completely correct, and indeed a result of errors being redirected to FastCGI stderr. The amount of data sent by the upstream, from the start of the response header to its end, is larger than fastcgi_buffer_size, likely because errors are being returned between the headers; hence the error. If you want nginx to handle a large volume of errors interleaved with the response headers, consider using a larger fastcgi_buffer_size. Alternatively, consider reducing the number of errors being generated.

The debug message "upstream split a header line in FastCGI records" is generated when nginx encounters a header line that extends beyond the data available for parsing, which is usually limited by FastCGI record boundaries. The message can also appear when the available data is instead limited by the buffer size, which is what seems to happen in your case. This is not a bug, though: the message is purely for debugging and merely shows what nginx is doing; it is not expected to be perfectly accurate. The error message that follows explains the real problem.
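
As a hedged illustration of the suggested fix: the directives below are standard nginx FastCGI settings, but the socket path and buffer sizes are hypothetical examples, not recommendations.

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/run/php-fpm.sock;  # hypothetical upstream socket

    # Single buffer for the first part of the FastCGI response:
    # the response headers plus any stderr records interleaved
    # with them. The default is 4k or 8k (one memory page).
    fastcgi_buffer_size 32k;

    # Buffers for the remainder of the response.
    fastcgi_buffers 8 16k;
}
```

Note that raising fastcgi_buffer_size only accommodates the symptom; reducing the volume of errors written to FastCGI stderr, as suggested above, addresses the cause.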