Changes between Initial Version and Version 1 of Ticket #1870, comment 1
Timestamp: 10/11/19 15:50:26 (5 years ago)
From the numbers (connection closed at the 2G point) it looks like the problem is that `sendfile()` doesn't block on sending at all, due to a fast network connection and a slow enough disk. As such, it sends up to the maximum number of bytes it can send on Linux, 2G. Currently this limit, if ever reached, is not accounted for by the top-level code, and sending is not automatically retried. An obvious workaround is to use `sendfile_max_chunk`, which is specifically designed to avoid cases where nginx spins in `sendfile()` for a long time when the network connection is faster than the disk subsystem.

Further, it looks like using `sendfile_max_chunk` is required here anyway, as "Time Spent" is 1 minute. If the above analysis is correct, this means that all other connections in the worker process in question were blocked for 1 minute, which is certainly not acceptable behaviour.

Not sure if we need/want to do anything with this. The `sendfile_max_chunk` directive already exists, resolves this, and has to be used in such configurations anyway. It might be a good idea to introduce some reasonable default for `sendfile_max_chunk`, though.
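A minimal configuration sketch of the workaround described above. The server block, location path, and the `512k` chunk size are illustrative assumptions, not values taken from the ticket:

```nginx
http {
    sendfile on;

    server {
        listen 80;

        # Hypothetical location serving large static files.
        location /downloads/ {
            # Cap the amount of data passed to a single sendfile() call,
            # so the worker returns to the event loop between chunks
            # instead of spinning in sendfile() while every other
            # connection in the same worker process is blocked.
            # 512k is an assumed example value.
            sendfile_max_chunk 512k;
        }
    }
}
```

In nginx versions contemporary with this ticket, `sendfile_max_chunk` defaults to 0, meaning no per-call limit, which is why the directive must be set explicitly in configurations like the one described here.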