Opened 7 years ago

Closed 7 years ago

#1228 closed defect (fixed)

ngx_http_slice_module bug when using nginx as a reverse proxy server

Reported by: zhaiyan0011@…
Owned by:
Priority: minor
Milestone: 1.11
Component: nginx-module
Version: 1.10.x
Keywords: slice module
Cc:
uname -a: Linux 2.6.32-642.13.1.el6.x86_64
nginx -V: 1.10.2

Description

  • nginx: 1.10.2
  • uname: Linux 2.6.32-642.13.1.el6.x86_64

We use nginx as a reverse proxy server with ngx_http_slice_module enabled, and we have run into a problem:
After the client receives the first slice body, nginx never sends the second slice body, and the client connection hangs.
While debugging, we found that before the second slice body is sent to the client, the client connection's write event is in the following state:
ready is 1, delayed is 1
After the upstream module reads part of the response body from the upstream via ngx_event_pipe_read_upstream(), the buffers become full, but because the write event is still marked delayed, the response cannot be sent to the client. Since no event ever calls write_event_handler to reset the delayed and timedout flags, the upstream module is stuck in this cycle.
We suspect the problem is caused by the wrong handler being executed for the timer event.
Please check out the attachment for more details.
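
For reference, a simplified sketch of the mechanism involved (illustrative only, not verbatim nginx source): limit_rate throttles output by marking the connection's write event as delayed and arming a timer, and output only resumes when a handler that knows about the delay clears the flags after the timer fires. If that timer expires while a subrequest handler without such logic is active, the flags stay set and the rest of the response is never sent.

/* Illustrative sketch (not verbatim nginx source) of how a limit_rate
 * delay is set and how it is normally cleared again. */

static void
throttle_output(ngx_connection_t *c, ngx_msec_t delay)
{
    /* the write filter postpones output when the configured rate
     * is exceeded */
    c->write->delayed = 1;
    ngx_add_timer(c->write, delay);
}

static void
resume_delayed_output(ngx_connection_t *c)
{
    /* normally run from the main request's write event handler when
     * the delay timer fires; in the subrequest case reported here the
     * active handler has no equivalent of this, so the flags stay set
     * and the connection hangs with ready = 1, delayed = 1 */
    if (c->write->timedout && c->write->delayed) {
        c->write->timedout = 0;
        c->write->delayed = 0;
        /* ... continue sending the buffered response ... */
    }
}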

Attachments (1)

one nginx_slice_module bug report( By Yanbin Zhai ).pdf (216.5 KB ) - added by zhaiyan0011@… 7 years ago.
Detail of bug


Change History (6)

by zhaiyan0011@…, 7 years ago

Attachment added: Detail of bug

comment:1 by Roman Arutyunyan, 7 years ago

This looks like a known issue. When limit_rate/sendfile_max_chunk are used with addition/slice/ssi, output can stop at some point. We're currently working on this.

As a workaround, you can disable limit_rate.

comment:2 by zhaiyan0011@…, 7 years ago (in reply to comment:1)

Replying to arut:

This looks like a known issue. When limit_rate/sendfile_max_chunk are used with addition/slice/ssi, output can stop at some point. We're currently working on this.

As a workaround, you can disable limit_rate.

Thank you for your reply.
Our service requires rate limiting, so disabling limit_rate is not an option for us. We have tried to work around the problem by modifying the ngx_http_slice_module code, and so far the change appears to be effective.
But we are looking for an official fix. Thank you!

ngx_http_slice_filter_module.c:

@@ -105,6 +105,9 @@
     ngx_http_slice_ctx_t            *ctx;
     ngx_http_slice_loc_conf_t       *slcf;
     ngx_http_slice_content_range_t   cr;
 
+    /* get parent request */
+    ngx_http_request_t  *pr = r->parent;
+
     ctx = ngx_http_get_module_ctx(r, ngx_http_slice_filter_module);
     if (ctx == NULL) {

@@ -184,6 +187,14 @@
     rc = ngx_http_next_header_filter(r);
 
     if (r != r->main) {
+
+        if (pr->connection->write->delayed) {
+
+            if (pr->connection->write->timedout && pr->connection->write->ready) {
+
+                ngx_add_timer(pr->connection->write, 0);
+            }
+        }
 
         return rc;
     }

Last edited 7 years ago by zhaiyan0011@…

comment:3 by Maxim Dounin, 7 years ago

See also ticket #776 and this thread.

comment:4 by Maxim Dounin <mdounin@…>, 7 years ago

In 6961:903fb1ddc07f/nginx:

Moved handling of wev->delayed to the connection event handler.

With post_action or subrequests, it is possible that the timer set for
wev->delayed will expire while the active subrequest write event handler
is not ready to handle this. This results in request hangs as observed
with limit_rate / sendfile_max_chunk and post_action (ticket #776) or
subrequests (ticket #1228).

Moving the handling to the connection event handler fixes the hangs observed,
and also slightly simplifies the code.

comment:5 by Maxim Dounin, 7 years ago

Resolution: fixed
Status: new → closed

Fix committed.
