Opened 5 years ago
Last modified 5 years ago
#1904 accepted defect
sendfile with io-threads - nginx mistakenly considers premature client connection close if client sends FIN at response end
Reported by: | Shmulik Biran | Owned by: | |
---|---|---|---|
Priority: | minor | Milestone: | |
Component: | nginx-core | Version: | 1.17.x |
Keywords: | sendfile io-threads premature | Cc: | |
uname -a: | Linux rd10.it.qwilt.com 3.10.0-862.3.2.el7.x86_64 #1 SMP Mon May 21 23:36:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | ||
nginx -V: | nginx version: nginx/1.17.6, built by gcc 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC), configure arguments: --prefix=../build-dir --with-threads --with-debug |
Description
Hi,
The scenario is as follows:
- Nginx is configured to work with sendfile and io-threads.
- Client sends a request, and after receiving the entire content it sends a FIN-ACK, closing the connection.
- Nginx occasionally treats the transaction as prematurely closed by the client, even though the FIN-ACK packet acknowledges the entire content.
The effect I've seen is that "$body_bytes_sent" holds a partial count (up to the last "successful" sendfile call) and "$request_completion" is empty. I guess there are other effects too; these are just the variables I use, so they are the ones that surfaced.
From what I've managed to understand from the code, it looks like the read_event_handler "ngx_http_test_reading" runs before the main thread handles the completed task from the io-thread, effectively making Nginx think the client closed the connection earlier than it actually did.
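To illustrate the probe itself, here is a small standalone C program (illustrative only, not nginx code) showing the signal such an EOF check reacts to: a one-byte MSG_PEEK recv() returns 0 as soon as the peer has closed the connection, even though everything written earlier was delivered successfully:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int      sv[2];
        char     buf[16];
        ssize_t  n;

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
            perror("socketpair");
            return 1;
        }

        write(sv[0], "response", 8);        /* "server" sends the body   */
        read(sv[1], buf, sizeof(buf));      /* "client" reads all of it  */
        close(sv[1]);                       /* ...and closes (sends FIN) */

        /* the server-side EOF probe now reports 0 - the same signal
         * that makes nginx conclude the client closed prematurely */
        n = recv(sv[0], buf, 1, MSG_PEEK);
        printf("recv(MSG_PEEK) = %zd%s\n", n, n == 0 ? " (peer closed)" : "");

        return 0;
    }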
I've managed to reproduce it on the latest nginx with a rather simple config, but the race is timing sensitive, so it doesn't happen on every transaction. I saw that using a bigger file with a rate limit increases the chances.
Config:
    worker_processes 1;

    events {
        worker_connections 1024;
    }

    http {
        keepalive_timeout 120s;
        keepalive_requests 1000;

        log_format main "$status\t$sent_http_content_length\t$body_bytes_sent\t$request_completion";
        access_log logs/access.log main;
        error_log logs/error.log info;

        aio threads;
        sendfile on;
        limit_rate 10m;

        server {
            listen 0.0.0.0:1234 reuseport;

            location = /test-sendfile-close {
                alias files/10mb;
            }
        }
    }
- files/10mb is a file of size 10MB, created with "dd" (dd if=/dev/zero of=files/10mb bs=10M count=1)
I then tail -F the access log and the error log, and send these requests from the same machine:
while true; do wget -q "http://10.1.1.1:1234/test-sendfile-close"; done
The output I get in the error log and the access log (in that order) for a good transaction is:
2019/12/17 14:52:34 [info] 137444#137444: *1 client 10.1.1.1 closed keepalive connection
200 10485760 10485760 OK
But every few transactions I get this output instead:
2019/12/17 14:52:38 [info] 137444#137444: *7 client prematurely closed connection while sending response to client, client: 10.1.1.1, server: , request: "GET /test-sendfile-close HTTP/1.1", host: "10.1.1.1:1234"
200 10485760 3810520
As you can see, the reported byte count is lower than the actual content length, and $request_completion is empty.
I understand that the closer the client is to Nginx, the higher the chance of this happening, but it's not just a lab issue - we've seen it in a field trial with clients at a distance of ~30ms RTT, with higher load of course.
If you need any other information, or anything else, I'll be glad to provide it.
I appreciate the help, and in general - this great product you've built!
Thank you,
Shmulik Biran
Change History (2)
comment:1 by , 5 years ago
Milestone: | nginx-1.17 |
---|---|
Priority: | major → minor |
Status: | new → accepted |
Yes, thanks. There is a known race condition when using

    aio threads;

and sendfile() on Linux. It was previously reported at least here: http://mailman.nginx.org/pipermail/nginx-devel/2015-May/006940.html

Unfortunately, there is no easy fix.

comment:2 by , 5 years ago
Thank you for the fast response.
I was trying to work around it locally by setting "r->read_event_handler" to "ngx_http_block_reading" when sendfile is on. I was hoping sendfile() would simply fail to write to the socket, so I'd get the same end result of a premature client connection close, but detected by the thread performing the sendfile.
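For concreteness, here is an untested sketch of that handler switch. The placement - where nginx normally installs "ngx_http_test_reading" while the response body is being sent, i.e. ngx_http_set_write_handler() in src/http/ngx_http_request.c - and the exact guard are my assumptions:

    ngx_http_core_loc_conf_t  *clcf;

    clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);

    if (r->connection->sendfile && clcf->aio == NGX_HTTP_AIO_THREADS) {
        /* don't probe the socket for EOF while offloaded sendfile()
         * tasks may still be in flight; let the sendfile() call in
         * the io-thread run into the closed socket instead */
        r->read_event_handler = ngx_http_block_reading;

    } else {
        r->read_event_handler = r->discard_body ?
                                ngx_http_discarded_request_body_handler:
                                ngx_http_test_reading;
    }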
The first simple, naive test seems to work (though I did get two consecutive log lines complaining about a premature connection close for a single transaction).
I'm far from an Nginx expert and surely didn't do it right, but I'd appreciate your opinion on whether this approach can work - or alternatively, where it breaks and why I shouldn't go in this direction.
By the way, my initial attempt was to just "remember" during "ngx_http_test_reading" that an EOF was seen while an async sendfile task was still pending, and then, when the task completes, check that indication and finalize the request at that point. However, it felt too delicate to change without stronger knowledge of nginx's flows.
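Roughly, the idea looked like this (not a patch - "eof_pending" is a hypothetical flag I made up, not an existing nginx field). Inside "ngx_http_test_reading", once EOF is detected:

    if (r->blocked) {
        /* an offloaded task is still in flight: remember the EOF
         * instead of finalizing the request right away */
        r->eof_pending = 1;    /* hypothetical flag */
        return;
    }

    ngx_http_finalize_request(r, NGX_HTTP_CLIENT_CLOSED_REQUEST);

Then, in the thread-event completion path, once r->blocked drops back to zero, the same flag would be checked and the request finalized there.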
Thanks,
Shmulik Biran