Opened 4 years ago

Last modified 4 years ago

#1904 accepted defect

sendfile with io-threads - nginx mistakenly considers premature client connection close if client sends FIN at response end

Reported by: Shmulik Biran
Owned by:
Priority: minor
Milestone:
Component: nginx-core
Version: 1.17.x
Keywords: sendfile io-threads premature
Cc:
uname -a: Linux rd10.it.qwilt.com 3.10.0-862.3.2.el7.x86_64 #1 SMP Mon May 21 23:36:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.17.6
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC)
configure arguments: --prefix=../build-dir --with-threads --with-debug

Description

Hi,
The scenario is as follows:

  1. Nginx is configured to work with sendfile and io-threads.
  2. Client sends a request, and after receiving the entire content it sends a FIN-ACK, closing the connection.
  3. Nginx occasionally considers the transaction as prematurely closed by the client even though the FIN-ACK packet acks the entire content.

The effect I've seen is that "$body_bytes_sent" holds partial data (up to the last "successful" sendfile call) and "$request_completion" is empty. I guess there are other effects, though these are the ones I'm using, so they are the ones that popped up.

From what I've managed to understand from the code, it looks like the read_event_handler "ngx_http_test_reading" is called before the completed task from the io-thread is handled by the main thread, effectively making nginx think the client closed the connection earlier than it actually did.
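
For context, here is a rough, self-contained sketch of what I believe that check boils down to (plain POSIX calls and my own names, not nginx's actual source): a one-byte MSG_PEEK recv() returning 0 is taken as the client having closed the connection, even if a sendfile task that already delivered the rest of the response is still waiting to be reaped from the io-thread.

#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>

/* Returns 1 if the peer appears to have closed its side of the connection. */
static int
peer_closed(int fd)
{
    char     buf[1];
    ssize_t  n;

    n = recv(fd, buf, 1, MSG_PEEK);

    if (n == 0) {
        /* FIN received: this is the point where nginx logs
           "client prematurely closed connection" and finalizes the request */
        return 1;
    }

    if (n == -1 && errno != EAGAIN && errno != EWOULDBLOCK) {
        return 1;    /* hard error, also treated as a closed connection */
    }

    return 0;        /* nothing to read yet, or pipelined data is pending */
}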

I've managed to reproduce it on the latest nginx with a rather simple config, but it's timing-sensitive, so it doesn't happen on every transaction. I saw that using a bigger file with a rate limit increases the chances.

Config:

worker_processes  1;

events {
    worker_connections 1024;
}

http {
    keepalive_timeout 120s;
    keepalive_requests 1000;

    log_format main "$status\t$sent_http_content_length\t$body_bytes_sent\t$request_completion";
    access_log  logs/access.log  main;
    error_log  logs/error.log  info;

    aio threads;
    sendfile on;
    limit_rate 10m;

    server {
        listen 0.0.0.0:1234 reuseport;

        location = /test-sendfile-close {
            alias files/10mb;
        }
    }
}
  • files/10mb is a file of size 10MB, created with "dd" (dd if=/dev/zero of=files/10mb bs=10M count=1)

I then tail -F the access log and the error log, and send requests in a loop from the same machine:

while true; do wget -q "http://10.1.1.1:1234/test-sendfile-close"; done

The output I get in the error log and access log (in this order) for a good transaction is:

2019/12/17 14:52:34 [info] 137444#137444: *1 client 10.1.1.1 closed keepalive connection
200	10485760	10485760	OK

But every few transactions I get this output instead:

2019/12/17 14:52:38 [info] 137444#137444: *7 client prematurely closed connection while sending response to client, client: 10.1.1.1, server: , request: "GET /test-sendfile-close HTTP/1.1", host: "10.1.1.1:1234"
200	10485760	3810520	

As you can see, the reported number of sent bytes is lower than the actual value, and $request_completion is empty.

I understand that the closer the client is to nginx, the higher the chance this happens, but it's not just a lab issue - we've seen this in a field trial with clients at a distance of ~30ms RTT, under higher load of course.

If there is a need for any other information, or anything else, I'll be glad to provide it.
I appreciate the help, and in general - this great product you've built!

Thank you,
Shmulik Biran

Change History (4)

comment:1 by Maxim Dounin, 4 years ago

Milestone: nginx-1.17
Priority: major → minor
Status: new → accepted

Yes, thanks. There is a known race condition when using "aio threads;" and sendfile() on Linux. It was previously reported at least here:

http://mailman.nginx.org/pipermail/nginx-devel/2015-May/006940.html

Unfortunately, there is no easy fix.

comment:2 by Shmulik Biran, 4 years ago

Thank you for the fast response.

I was trying to work around it locally by setting "r->read_event_handler" to "ngx_http_block_reading" when sendfile is on. I was hoping sendfile would simply fail to write to the socket, so I'd get the same end result of a premature client connection close, but detected from the thread performing the sendfile.
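
To illustrate what I was hoping for (just my assumption, shown with plain POSIX calls rather than nginx's wrappers): if the client has really torn the connection down, a later sendfile() from the io-thread fails with EPIPE or ECONNRESET, so the premature close would be noticed by the sending side itself.

#include <sys/types.h>
#include <sys/sendfile.h>
#include <signal.h>
#include <errno.h>

static ssize_t
send_chunk(int sock_fd, int file_fd, off_t *offset, size_t count)
{
    ssize_t  n;

    /* normally done once at startup; shown inline here for brevity -
       report EPIPE via errno instead of killing the process with SIGPIPE */
    signal(SIGPIPE, SIG_IGN);

    n = sendfile(sock_fd, file_fd, offset, count);

    if (n == -1 && (errno == EPIPE || errno == ECONNRESET)) {
        /* the peer really is gone: the thread doing the transfer
           can report the premature close itself */
    }

    return n;
}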

The first simple naive test seems to work (though I did get two logs in a row complaining about a premature connection close, for a single transaction).

I'm far from an nginx expert and surely didn't do it right, but I'd appreciate it if you could share your opinion on whether this approach can work, or alternatively when it breaks / why I shouldn't go in this direction.

By the way, my initial attempt was to just "remember" during "ngx_http_test_reading" that an EOF was seen while there is a pending async sendfile task, and then, when the task completes, check this indication and finalize the request at that point. However, it felt too delicate to change without stronger knowledge of nginx's flows.
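
Roughly what I had in mind, as a self-contained sketch with made-up names (nothing here is nginx's real structure):

#include <stddef.h>

typedef struct {
    unsigned  aio_pending:1;   /* a sendfile() task is still in an io-thread */
    unsigned  eof_seen:1;      /* the read handler observed the client's FIN */
    size_t    bytes_sent;      /* what has actually been pushed out so far   */
    size_t    bytes_total;     /* full response length                       */
} request_state_t;

/* Called from the read-event path when a FIN is detected. */
static void
on_client_eof(request_state_t *r)
{
    if (r->aio_pending) {
        r->eof_seen = 1;       /* defer: the in-flight task may still have
                                  delivered the rest of the response */
        return;
    }
    /* no task in flight: this really is a premature close */
}

/* Called on the main thread once the io-thread task completes. */
static void
on_aio_complete(request_state_t *r, size_t sent)
{
    r->aio_pending = 0;
    r->bytes_sent += sent;

    if (r->eof_seen && r->bytes_sent < r->bytes_total) {
        /* only now report "client prematurely closed connection" */
    }
}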

Thanks,
Shmulik Biran

comment:3 by Carlos, 4 years ago

I'm observing the same behavior without threads.

sendfile on;
aio on;
directio 1m;
Linux nginx1 5.4.0-48-generic #52-Ubuntu SMP Thu Sep 10 10:58:49 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
nginx version: nginx/1.19.3
built by gcc 9.3.0 (Ubuntu 9.3.0-10ubuntu2)
built with OpenSSL 1.1.1f  31 Mar 2020 (running with OpenSSL 1.1.1g  21 Apr 2020)
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.19.3/debian/debuild-base/nginx-1.19.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

comment:4 by Maxim Dounin, 4 years ago

> I'm observing the same behavior without threads.

By "the same behavior" you mean "client prematurely closed connection" errors when testing with known-to-behave-correctly clients, such as wget in the ticket description?

Note that the "client prematurely closed connection while sending response to client" errors often happen in practice with real clients. And the race condition in question only happens when using "aio threads;". So if you are simply seeing these errors in the logs (or small numbers in $body_bytes_sent), these are likely real errors.
