Opened 10 years ago

Closed 10 years ago

#694 closed defect (invalid)

Cannot disable proxy_next_upstream when using hash upstream directive

Reported by: Nathan Butler
Owned by:
Priority: minor
Milestone:
Component: nginx-core
Version: 1.7.x
Keywords: proxy_next_upstream, hash
Cc:
uname -a: Linux natedev 3.8.0-39-generic #57~precise1-Ubuntu SMP Tue Apr 1 20:04:50 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.7.8
TLS SNI support enabled
configure arguments: --with-debug --prefix=/usr/local --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/body --http-proxy-temp-path=/var/lib/nginx/proxy --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-sha1=/usr/lib --add-module=../ngx_devel_kit-0.2.18 --add-module=../lua-nginx-module-0.9.13 --add-module=../headers-more-nginx-module-0.25

Description

### Problem

proxy_next_upstream off does not work when the hash directive from ngx_http_upstream_module is used. Within a single request nginx does not retry a different upstream, but subsequent requests are routed to other upstreams until fail_timeout is exceeded. Since the key hashes to exactly one upstream, only that upstream should ever be tried.

Workaround: set max_fails=0 on each upstream server. This is suboptimal because requests are still sent to the failing upstream, and if that upstream is not down but merely slower than the proxy timeouts, connections can start to pile up on it.
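A minimal sketch of that workaround, reusing the upstream block from the configuration below (max_fails=0 disables failure accounting for a peer, so it is never marked down):

    upstream test {
        server 127.0.0.1:8081 max_fails=0;
        server 127.0.0.1:8082 max_fails=0;
        hash $arg_h;
    }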

### Configuration

# -*- mode: nginx -*-
#

user www-data;
worker_processes 1;
error_log /mnt/logs/nginx/error.log info;
pid /var/run/nginx.pid;
daemon off;

events {
    worker_connections 8096;
    use epoll;
}

http {
    log_format main '$remote_addr [$time_local] $request '
                    '"$status" $body_bytes_sent '
                    '"$request_time" '
                    '$upstream_addr '
                    '$upstream_status '
                    '$upstream_response_time';

    access_log /mnt/logs/nginx/access.log main;

    upstream test {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        hash $arg_h; # Toggle this to exhibit behavior
    }

    server {
        listen 80;
        server_name "";

        location / {
            proxy_pass http://test/;
            proxy_next_upstream off; # Toggle this to exhibit behavior
        }
    }
}

Running the following to start the 8081 upstream (nothing is started on 8082, so that upstream is down): python -m SimpleHTTPServer 8081

### Control Scenario: Not using hash directive, proxy_next_upstream set to off, upstream 8082 is down.

Behavior expected: Over a 20 second window with 20 requests, requests round robin between the two servers and we should see 502 responses when 8082 is hit (though due to max_fails and fail_timeout, 8082 will not be hit a full 50 percent of the time).

$ count=0 ; while ((count < 20 )) ; do curl -s localhost >/dev/null ; let count=$((count + 1)) ; sleep 1 ; done
127.0.0.1 [08/Jan/2015:12:59:57 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:12:59:58 -0500] GET / HTTP/1.1 "502" 172 "0.000" 127.0.0.1:8082 502 0.000
127.0.0.1 [08/Jan/2015:12:59:59 -0500] GET / HTTP/1.1 "200" 508 "0.002" 127.0.0.1:8081 200 0.002
127.0.0.1 [08/Jan/2015:13:00:00 -0500] GET / HTTP/1.1 "200" 508 "0.002" 127.0.0.1:8081 200 0.002
127.0.0.1 [08/Jan/2015:13:00:01 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:02 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:03 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:04 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:05 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:06 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:07 -0500] GET / HTTP/1.1 "200" 508 "0.002" 127.0.0.1:8081 200 0.002
127.0.0.1 [08/Jan/2015:13:00:08 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:09 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:10 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:11 -0500] GET / HTTP/1.1 "502" 172 "0.000" 127.0.0.1:8082 502 0.000
127.0.0.1 [08/Jan/2015:13:00:12 -0500] GET / HTTP/1.1 "200" 508 "0.002" 127.0.0.1:8081 200 0.002
127.0.0.1 [08/Jan/2015:13:00:13 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:14 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:15 -0500] GET / HTTP/1.1 "200" 508 "0.002" 127.0.0.1:8081 200 0.002
127.0.0.1 [08/Jan/2015:13:00:16 -0500] GET / HTTP/1.1 "200" 508 "0.001" 127.0.0.1:8081 200 0.001

Behavior observed: Indeed, a request fails whenever 8082's fail_timeout has elapsed and another attempt is made to it; within the fail_timeout window, the round-robin balancer marks 8082 as temporarily down and does not route requests to it. This is the correct behavior.

### Hash Scenario: Using hash directive, proxy_next_upstream set to off, upstream 8082 is down.

Behavior expected: Over a 20 second window with 20 requests, requests that hash to the 8082 upstream will always fail.

In this scenario we hash on the h query string argument: baz hashes to the 8081 upstream and foo hashes to the 8082 upstream.

$ count=0 ; while ((count < 20 )) ; do curl -s localhost?h=baz >/dev/null ; let count=$((count + 1)) ; sleep 1 ; done
127.0.0.1 [08/Jan/2015:13:04:04 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:05 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:06 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:07 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:08 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:09 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:10 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:11 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:12 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:13 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.000" 127.0.0.1:8081 301 0.000
127.0.0.1 [08/Jan/2015:13:04:14 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:15 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:16 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:17 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:18 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.000" 127.0.0.1:8081 301 0.000
127.0.0.1 [08/Jan/2015:13:04:19 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:20 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:21 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:22 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:23 -0500] GET /?h=baz HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001

$ count=0 ; while ((count < 20 )) ; do curl -s localhost?h=foo >/dev/null ; let count=$((count + 1)) ; sleep 1 ; done
127.0.0.1 [08/Jan/2015:13:03:13 -0500] GET /?h=foo HTTP/1.1 "502" 172 "0.001" 127.0.0.1:8082 502 0.001
127.0.0.1 [08/Jan/2015:13:03:14 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:15 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:16 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:17 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:19 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:20 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:21 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.002" 127.0.0.1:8081 301 0.002
127.0.0.1 [08/Jan/2015:13:03:22 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:23 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:24 -0500] GET /?h=foo HTTP/1.1 "502" 172 "0.000" 127.0.0.1:8082 502 0.000
127.0.0.1 [08/Jan/2015:13:03:25 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:26 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:27 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:28 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:29 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:30 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:31 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:32 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:33 -0500] GET /?h=foo HTTP/1.1 "301" 5 "0.001" 127.0.0.1:8081 301 0.001

Behavior observed: baz correctly hashes to 8081 and all of its requests go to that upstream. foo hashes to 8082; the first request 502s as expected, but subsequent requests are routed to 8081 until fail_timeout is exceeded. With proxy_next_upstream off I would expect requests within the fail_timeout window to fail immediately, because the key hashes only to 8082 and thus shouldn't go to another server.

Change History (6)

comment:1 by Roman Arutyunyan, 10 years ago

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream says: "The cases of error, timeout and invalid_header are always considered unsuccessful attempts, even if they are not specified in the directive." Your second server has exceeded max_fails because of the connection error, so it is skipped by the hash balancer.
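To illustrate the accounting, here is the same upstream block with nginx's documented defaults for max_fails and fail_timeout written out explicitly (the test config leaves them implicit, so the exact values here are spelled-out defaults, not part of the report):

    upstream test {
        server 127.0.0.1:8081 max_fails=1 fail_timeout=10s; # defaults, spelled out
        server 127.0.0.1:8082 max_fails=1 fail_timeout=10s; # one connection error marks this peer down for 10s
        hash $arg_h;
    }

With these defaults, the single refused connection to 8082 (surfaced to the client as a 502) is enough to mark that peer unavailable for 10 seconds, during which the hash balancer re-hashes to 8081.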

comment:2 by Nathan Butler, 10 years ago

So maybe this is intended behavior for proxy_next_upstream: setting it to off does not disable retries completely, and only setting max_fails to 0 does, which is confusing. However, the fact remains that the hash balancer still chooses another upstream when the preferred one has exceeded max_fails. This seems wrong: a hash balancer should only hash to one upstream.

comment:3 by Maxim Dounin, 10 years ago

Resolution: invalid
Status: new → closed

The "proxy_next_upstream" directive defines what to do if an error happens while talking to a server. It doesn't prevent nginx from selecting another server if the preferred one is known to be down as per max_fails/fail_timeout (or explicitly marked as down).

As for the behaviour of the hash balancer, it is as designed. Much like all other nginx balancing methods, it assumes that all servers in an upstream{} block are identical, and uses a hash function only to determine a preferred one. It then tries to use the preferred server if possible, and re-hashes to another server if not. This is compatible with what Cache::Memcached does by default (unless the no_rehash flag is set), and in line with what ip_hash does.
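To make the re-hash fallback concrete, a toy sketch in C (illustrative only: the peer states, the hash function, and the retry bound are all assumptions, not nginx's actual code):

    /* Toy model of hash balancing with re-hash fallback: the key hashes
     * to a preferred peer; if that peer is marked down, the balancer
     * derives a new hash and falls through to another peer. */
    #include <stdio.h>

    #define NPEERS 2

    /* hypothetical state mirroring the ticket: peer 1 (8082) is down */
    static const int peer_up[NPEERS] = { 1, 0 };

    /* toy stand-in hash; nginx's real hash differs */
    static unsigned toy_hash(const char *s, unsigned seed) {
        unsigned h = seed;
        while (*s)
            h = h * 31 + (unsigned char)*s++;
        return h;
    }

    /* returns a peer index, preferring the hashed one; -1 if all are down */
    static int pick_peer(const char *key) {
        for (unsigned attempt = 0; attempt < 20; attempt++) {
            int idx = toy_hash(key, attempt) % NPEERS;
            if (peer_up[idx])
                return idx; /* attempt 0: preferred peer; >0: re-hashed */
        }
        return -1;
    }

    int main(void) {
        /* a no_rehash-style mode would give up after attempt 0 instead */
        printf("h=foo -> peer %d\n", pick_peer("foo"));
        printf("h=baz -> peer %d\n", pick_peer("baz"));
        return 0;
    }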

As you already found yourself, if you want nginx to ignore any errors and always use preferred servers, you can set max_fails to 0.

comment:4 by Nathan Butler, 10 years ago

So is there any way to set the no_rehash flag on the hash directive?

comment:5 by Nathan Butler, 10 years ago

Resolution: invalid
Status: closed → reopened

So is there any way to set the no_rehash flag on the hash directive?

comment:6 by Maxim Dounin, 10 years ago

Resolution: invalid
Status: reopened → closed

There is no direct equivalent of the Cache::Memcached no_rehash flag in the hash balancer. As previously suggested, similar behaviour can be achieved by setting max_fails=0.
