Opened 13 years ago

Closed 12 years ago

Last modified 10 years ago

#47 closed defect (fixed)

loop with backup servers and proxy_next_upstream http_404

Reported by: Yasar Semih Alev
Owned by: somebody
Priority: minor
Milestone:
Component: nginx-core
Version: 0.8.x
Keywords: backup upstream
Cc:
uname -a: FreeBSD izm-s2 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Thu Feb 17 02:41:51 UTC 2011 root@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
nginx -V: nginx: nginx version: nginx/1.1.7
nginx: built by gcc 4.2.1 20070719 [FreeBSD]
nginx: TLS SNI support enabled
nginx: configure arguments: --user=nginx --group=nginx --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --with-http_secure_link_module --with-http_random_index_module --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_geoip_module --with-mail --with-debug --with-mail_ssl_module --with-file-aio --with-cc-opt='-O2 -g -m64 -mtune=generic' --add-module=/home/nginx/nginx-1.1.7/ngx_http_bytes_filter_module --add-module=/home/nginx/nginx-1.1.7/nginx-udplog-module --add-module=/home/nginx/nginx-1.1.7/ngx_http_secure_download --add-module=/home/nginx/nginx-1.1.7/ngx_cache_purge

Description

Hi,

I added a backup upstream server and proxy_next_upstream http_404 to the configuration. When the primary upstream servers return 404, nginx tries the backup upstream, but if the backup also returns 404, nginx enters a loop on the backup upstream and requests the same file nonstop.
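
A minimal configuration along these lines reproduces the behaviour (the upstream name and addresses are made up for the example):

# hypothetical reproduction config: on a 404 from the primary servers
# nginx moves on to the backup; if the backup also answers 404,
# affected versions keep retrying that same backup forever
upstream pool {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080 backup;
}

server {
    listen 80;

    location / {
        proxy_pass http://pool;
        proxy_next_upstream error timeout http_404;
    }
}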

I looked into the problem and changed the following code in ngx_http_upstream.c, which solved it for me, but I'm not sure whether this is the correct fix.

<     if (ft_type == NGX_HTTP_UPSTREAM_FT_HTTP_404) {
<         state = NGX_PEER_NEXT;
<     } else {
<         state = NGX_PEER_FAILED;
<     }
---
>     state = NGX_PEER_FAILED;
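
For reference, the branch above sits in ngx_http_upstream_next(); an abridged sketch of the surrounding logic (exact code may differ between versions) shows why NGX_PEER_NEXT permits the loop:

/* ngx_http_upstream.c, ngx_http_upstream_next() -- abridged sketch */

if (ft_type == NGX_HTTP_UPSTREAM_FT_HTTP_404) {
    state = NGX_PEER_NEXT;      /* peer is not marked failed, so it
                                   stays eligible for reselection:
                                   the source of the loop */
} else {
    state = NGX_PEER_FAILED;    /* peer is marked failed and skipped
                                   on subsequent selections */
}

if (ft_type != NGX_HTTP_UPSTREAM_FT_NOLIVE) {
    u->peer.free(&u->peer, u->peer.data, state);
}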

Thanks.

Kind Regards

Semih Alev

Change History (5)

comment:1 by Maxim Dounin, 13 years ago

Status: new → accepted

Yes, thank you, it's a known problem. The current implementation of backup servers doesn't work well with "proxy_next_upstream http_404".

comment:2 by Maxim Dounin, 13 years ago

Summary: Upstream looping problem → loop with backup servers and proxy_next_upstream http_404

comment:3 by Maxim Dounin, 12 years ago

In [4622/nginx]:

Upstream: smooth weighted round-robin balancing.

For edge case weights like { 5, 1, 1 } we now produce { a, a, b, a, c, a, a }
sequence instead of { c, b, a, a, a, a, a } produced previously.

Algorithm is as follows: on each peer selection we increase current_weight
of each eligible peer by its weight, select peer with greatest current_weight
and reduce its current_weight by total number of weight points distributed
among peers.

In case of { 5, 1, 1 } weights this gives the following sequence of
current_weight's:

 a  b  c
 0  0  0  (initial state)

 5  1  1  (a selected)
-2  1  1

 3  2  2  (a selected)
-4  2  2

 1  3  3  (b selected)
 1 -4  3

 6 -3  4  (a selected)
-1 -3  4

 4 -2  5  (c selected)
 4 -2 -2

 9 -1 -1  (a selected)
 2 -1 -1

 7  0  0  (a selected)
 0  0  0
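
As a sanity check, here is a standalone sketch of this selection rule in C, outside nginx. The plain arrays stand in for nginx's actual peer structures, and the effective_weight failure handling described below is omitted; the program prints the smooth sequence a a b a c a a for weights { 5, 1, 1 }.

#include <stdio.h>

#define NPEERS  3

static const char *names[NPEERS]   = { "a", "b", "c" };
static const int   weight[NPEERS]  = { 5, 1, 1 };
static int         current[NPEERS] = { 0, 0, 0 };

static int
select_peer(void)
{
    int  i, best = -1, total = 0;

    for (i = 0; i < NPEERS; i++) {
        current[i] += weight[i];              /* grow each current_weight */
        total += weight[i];                   /* total weight points */

        if (best == -1 || current[i] > current[best]) {
            best = i;                         /* greatest current_weight wins */
        }
    }

    current[best] -= total;                   /* charge the selected peer */

    return best;
}

int
main(void)
{
    int  i;

    for (i = 0; i < 7; i++) {
        printf("%s ", names[select_peer()]);  /* prints: a a b a c a a */
    }
    printf("\n");

    return 0;
}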

To preserve weight reduction in case of failures the effective_weight
variable was introduced, which usually matches peer's weight, but is
reduced temporarily on peer failures.

This change also fixes loop with backup servers and proxy_next_upstream
http_404 (ticket #47), and skipping alive upstreams in some cases if there
are multiple dead ones (ticket #64).

comment:4 by Maxim Dounin, 12 years ago

Resolution: fixed
Status: accepted → closed

Fix committed, thanks.

comment:5 by sync, 12 years ago

In [4668/nginx]:

Merge of r4622, r4623: balancing changes.

*) Upstream: smooth weighted round-robin balancing.

*) Upstream: fixed ip_hash rebalancing with the "down" flag.

Due to weight being set to 0 for down peers, order of peers after sorting
wasn't the same as without the "down" flag (with down peers at the end),
resulting in client rebalancing for clients on other servers. The only
rebalancing which should happen after adding "down" to a server is one
for clients on the server.

The problem was introduced in r1377 (which fixed endless loop by setting
weight to 0 for down servers). The loop is no longer possible with new
smooth algorithm, so preserving original weight is safe.
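
For illustration (addresses invented), this concerns configurations of the following shape, where marking one server "down" should only remap the clients that were hashed to it:

upstream pool {
    ip_hash;
    server 10.0.0.1;
    server 10.0.0.2 down;   # only clients previously hashed to this
                            # server should be rebalanced
    server 10.0.0.3;
}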
