Opened 7 years ago

Closed 7 years ago

Last modified 7 years ago

#1317 closed defect (fixed)

load_balance failed in ngx_stream_proxy_module because of "pending buffers"

Reported by: qleeAI@… Owned by:
Priority: minor Milestone: 1.13
Component: nginx-module Version: 1.13.x
Keywords: stream proxy load_balance Cc:
uname -a: Linux kenan-desktop 4.4.0-83-generic #106-Ubuntu SMP Mon Jun 26 17:54:43 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.13.4
built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4)
configure arguments: --with-debug --with-stream --with-stream_ssl_preread_module --with-pcre --with-zlib=../zlib-1.2.8

Description

If s->connection->read->ready is true when entering ngx_stream_proxy_module, ngx_stream_proxy_downstream_handler is called; it reads some client data and stores it in u->upstream_out.

If the connection to the upstream fails and ngx_stream_proxy_next_upstream() is entered to connect to the next upstream server, u->upstream_out is not NULL, so the stream session is finalized.

In the function ngx_stream_proxy_next_upstream():

    if (u->upstream_out || u->upstream_busy || (pc && pc->buffered)) {
        ngx_log_error(NGX_LOG_ERR, s->connection->log, 0,
                      "pending buffers on next upstream");
        ngx_stream_proxy_finalize(s, NGX_STREAM_INTERNAL_SERVER_ERROR);
        return;
    }
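
For reference, the chains checked above live in ngx_stream_upstream_t. The following is a sketch abridged from src/stream/ngx_stream_upstream.h as of the 1.13 branch; the inline comments are added here and unrelated members are elided:

typedef struct {
    ngx_peer_connection_t   peer;             /* connection to the upstream */
    /* ... */
    ngx_chain_t            *upstream_out;     /* client data queued for the upstream,
                                                 e.g. bytes consumed during ssl_preread */
    ngx_chain_t            *upstream_busy;    /* client data currently being written
                                                 to the upstream */
    ngx_chain_t            *downstream_out;   /* upstream data queued for the client */
    ngx_chain_t            *downstream_busy;  /* upstream data currently being written
                                                 to the client */
    /* ... */
} ngx_stream_upstream_t;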

Example

Obviously, this example is only meant to describe the problem; we do not use a configuration like this in a production environment.

The nginx configuration file looks like this:

daemon off;
master_process off;

events {
    worker_connections  1024;
}

stream {
    upstream backend {
        server 10.10.10.10:80;
        server 115.231.25.133:80;
    }
    server {
        listen 80;
        ssl_preread on;
        preread_buffer_size 64;
        proxy_pass backend;
        proxy_connect_timeout 3s;
    }
}

When curl is used to send an HTTP request, nginx only tries the first server.

error.log

2017/07/12 22:43:37 [error] 25361#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:80, upstream: "10.10.10.10:80", bytes from/to client:9/0, bytes from/to upstream:0/0
2017/07/12 22:43:37 [error] 25361#0: *1 pending buffers on next upstream while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:80, upstream: "10.10.10.10:80", bytes from/to client:9/0, bytes from/to upstream:0/0

Attachments (1)

stream-udp-next-upstream (1.2 KB) - added by Roman Arutyunyan 7 years ago.


Change History (7)

by Roman Arutyunyan, 7 years ago

Attachment: stream-udp-next-upstream added

comment:1 by Roman Arutyunyan, 7 years ago

Thanks for reporting this. Please try the attached patch.

comment:2 by qleeAI@…, 7 years ago (in reply to comment:1)

Replying to arut:

Thanks for reporting this. Please try the attached patch.

This patch works for me. Thanks.

comment:3 by Andrey Zelenkov <zelenkov@…>, 7 years ago

In 1199:08f6eacf1cfe/nginx-tests:

Tests: stream proxy next upstream with ssl_preread (ticket #1317).

Ensure that next TCP upstream can be selected with pending buffers.

comment:4 by Roman Arutyunyan <arut@…>, 7 years ago

In 7098:7bfbf73db920/nginx:

Stream: relaxed next upstream condition (ticket #1317).

When switching to a next upstream, some buffers could be stuck in the middle
of the filter chain. A condition existed that raised an error when this
happened. As it turned out, this condition prevented switching to a next
upstream if ssl preread was used with the TCP protocol (see the ticket).

In fact, the condition does not make sense for TCP, since after successful
connection to an upstream switching to another upstream never happens. As for
UDP, the issue with stuck buffers is unlikely to happen, but is still possible.
Specifically, if a filter delays sending data to upstream.

The condition can be relaxed to only check the "buffered" bitmask of the
upstream connection. The new condition is simpler and fixes the ticket issue
as well. Additionally, the upstream_out chain is now reset for UDP prior to
connecting to a new upstream to prevent repeating the client data twice.
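
A minimal sketch of the relaxed check, reconstructed from the changeset description above rather than copied from the changeset itself (see 7098:7bfbf73db920 for the authoritative code):

    pc = u->peer.connection;

    /* only data buffered inside the upstream connection itself is fatal */
    if (pc && pc->buffered) {
        ngx_log_error(NGX_LOG_ERR, s->connection->log, 0,
                      "buffered data on next upstream");
        ngx_stream_proxy_finalize(s, NGX_STREAM_INTERNAL_SERVER_ERROR);
        return;
    }

    /* for UDP, drop pending client data so it is not repeated to the
       newly selected upstream; for TCP it is kept and sent after connect */
    if (s->connection->type == SOCK_DGRAM) {
        u->upstream_out = NULL;
    }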

comment:5 by Roman Arutyunyan, 7 years ago

Resolution: fixed
Status: new → closed

comment:6 by Roman Arutyunyan <arut@…>, 7 years ago

In 7142:b9d919b53593/nginx:

Stream: relaxed next upstream condition (ticket #1317).

When switching to a next upstream, some buffers could be stuck in the middle
of the filter chain. A condition existed that raised an error when this
happened. As it turned out, this condition prevented switching to a next
upstream if ssl preread was used with the TCP protocol (see the ticket).

In fact, the condition does not make sense for TCP, since after successful
connection to an upstream switching to another upstream never happens. As for
UDP, the issue with stuck buffers is unlikely to happen, but is still possible.
Specifically, if a filter delays sending data to upstream.

The condition can be relaxed to only check the "buffered" bitmask of the
upstream connection. The new condition is simpler and fixes the ticket issue
as well. Additionally, the upstream_out chain is now reset for UDP prior to
connecting to a new upstream to prevent repeating the client data twice.
