Opened 10 years ago
Closed 10 years ago
#680 closed defect (invalid)
bug in ngx_http_upstream_process_body_in_memory
Reported by: | Eran Kornblau | Owned by: | |
---|---|---|---|
Priority: | minor | Milestone: | 1.7 |
Component: | nginx-core | Version: | 1.6.x |
Keywords: | upstream buffer input_filter | Cc: | |
uname -a: | Linux pa-nginx-vod-stg1 3.5.0-41-generic #64~precise1-Ubuntu SMP Thu Sep 12 16:50:04 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux | ||
nginx -V: | nginx version: nginx/1.6.0; built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5); configure arguments: --add-module=/opt/nginx-vod-module/ --add-module=/usr/local/src/nginx_mod_akamai_g2o/ --with-http_stub_status_module --with-file-aio --conf-path=/opt/nginx-vod-module-saas/conf/nginx.conf | ||
Description
I believe that in ngx_http_upstream_process_body_in_memory the for (;;) loop should be replaced with while (u->length).
In the module I'm working on, I know in advance the exact size of upstream response (since I'm issuing an HTTP range request) and I manipulate the upstream buffer to point to my buffer in order to avoid memory copy operations.
When my custom u->input_filter returns, u->length is 0, which should indicate that the response has arrived in full. However, since the buffer is exactly the size of the response, the next loop iteration finds no free space, enters if (size == 0), and the upstream request is finalized with NGX_ERROR.
I can probably work around this by passing a slightly larger buffer, but that means another call to recv will be made for no reason.
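To illustrate, here is a toy Python model of the loop structure being discussed. It is simplified and hypothetical, not the actual nginx C code; the function and parameter names are mine, with `length` standing in for u->length:

```python
# Toy model of the receive loop in ngx_http_upstream_process_body_in_memory.
# `length` plays the role of u->length (bytes still expected); the upstream
# sends exactly that many bytes and then keeps the connection open.

def process_body_in_memory(length, buf_size, use_while_length=False):
    buf_used = 0
    while True:
        if use_while_length and length == 0:
            return "DONE"              # the proposed while (u->length) exit
        size = buf_size - buf_used     # free space left in the buffer
        if size == 0:
            return "NGX_ERROR"         # what the current for ( ;; ) loop does
        n = min(size, length)          # simulated recv(); 0 => would block
        if n == 0:
            return "NGX_AGAIN"         # wait for more data / connection close
        buf_used += n                  # input_filter appends to the buffer
        length -= n                    # and decrements u->length


# Exact-size buffer fails today, but would succeed with the proposed exit:
assert process_body_in_memory(4096, 4096) == "NGX_ERROR"
assert process_body_in_memory(4096, 4096, use_while_length=True) == "DONE"
# With one spare byte, the loop instead blocks on an extra recv():
assert process_body_in_memory(4095, 4096) == "NGX_AGAIN"
```

The model shows the ordering issue: the free-space check runs before any check of the remaining length, so an exact-fit response errors out even though nothing more is expected.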
Thank you,
Eran
Change History (3)
comment:1 by , 10 years ago
Resolution: | → invalid |
---|---|
Status: | new → closed |
comment:2 by , 10 years ago
Resolution: | invalid |
---|---|
Status: | closed → reopened |
Thanks for your reply.
In order to try to prove my point, I reproduced the issue using the built-in SSI module.
Yes, it is very artificial, but IMHO it shows that this is a bug; the steps are outlined below.
I will gladly post an official patch for this if you think that will get the change into the main code branch faster, but it will probably be beneficial if we can agree on the problem/solution first.
- Add the following locations to nginx.conf:

    location /test/ {
        alias /usr/local/nginx/html/;
        ssi on;
    }

    location /remote/ {
        proxy_pass http://localhost:10000/;
    }
- Copy the text below to /usr/local/nginx/html/ssi.htm:
<!--# include virtual="/remote/test" set="myvar" -->
<!--# echo var="myvar" default="no" -->
- Save the Python code below to some file and execute it:

    import socket
    import time

    HOST = ''        # listen on all interfaces
    PORT = 10000
    CONTENT_LENGTH = 4096 - len('HTTP/1.1 200 OK\r\nContent-Length: 0000\r\n\r\n')
    MAX_CHUNK_SIZE = 1024

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((HOST, PORT))
    s.listen(1)
    conn, addr = s.accept()
    conn.sendall(('HTTP/1.1 200 OK\r\nContent-Length: %s\r\n\r\n' % CONTENT_LENGTH).encode())
    sent = 0
    while sent < CONTENT_LENGTH:
        curSize = min(MAX_CHUNK_SIZE, CONTENT_LENGTH - sent)
        sent += conn.send(b'a' * curSize)
        time.sleep(1)
    conn.close()
- curl -i 'your-nginx-host/test/ssi.htm'
What happens here is that the request fails even though the default 4096-byte buffer is big enough to hold it. If the CONTENT_LENGTH variable in the Python script is reduced by 1 byte, the request succeeds.
You could claim that for a 4096-byte buffer only 4095 bytes are actually usable. What I consider a bug, however, is that even in the 4095-byte case, ngx_http_upstream_process_body_in_memory issues another call to recv() after it already knows it has the whole response body.
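For reference, the buffer arithmetic the script relies on can be checked directly (this assumes, as the script does, that the status line, headers, and body all share the same 4096-byte buffer):

```python
# Buffer arithmetic behind the reproduction above.
header = 'HTTP/1.1 200 OK\r\nContent-Length: 4055\r\n\r\n'

assert len(header) == 41
assert 4096 - len(header) == 4055      # CONTENT_LENGTH computed in the script
assert len(header) + 4055 == 4096      # the buffer is filled exactly, so the
                                       # next loop iteration sees size == 0
```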
Another possible solution, if you think the change I suggested is problematic, is to add if (u->length == 0) break; after the call to u->input_filter in ngx_http_upstream_process_body_in_memory.
Please let me know if any more details are needed.
Thanks,
Eran
comment:3 by , 10 years ago
Resolution: | → invalid |
---|---|
Status: | reopened → closed |
As previously said, another call to recv() is needed to detect connection close. On systems where it's not needed it won't be called.
The upstream module is designed to work with various protocols and in many cases. In particular, it is expected to work when the upstream response size isn't known, or when connections are kept alive between requests. If you think you see room for improvement, please follow this article and propose a patch. Claiming that there is a bug which manifests itself only with your module isn't likely to result in anything good.
In this particular case, I think the change suggested is wrong, as it will make it impossible to detect connection close by the upstream server in some cases.
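For illustration, the property that the final recv() relies on is that a read on a connection closed by the peer returns zero bytes. A minimal sketch of this using a Unix socket pair (not nginx itself; the two sockets merely stand in for the upstream and nginx ends of the connection):

```python
import socket

a, b = socket.socketpair()     # stand-ins for the upstream and nginx ends
a.sendall(b'response-body')
a.close()                      # upstream closes the connection

chunks = []
while True:
    data = b.recv(1024)
    if not data:               # recv() returning 0 bytes signals EOF;
        break                  # without an extra recv(), the close goes unseen
    chunks.append(data)

assert b''.join(chunks) == b'response-body'
b.close()
```

Skipping that last recv() means the reader never observes the close; this is the case the maintainer's reply is protecting.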