﻿id	summary	reporter	owner	description	type	status	priority	milestone	component	version	resolution	keywords	cc	uname	nginx_version
694	Cannot disable proxy_next_upstream when using hash upstream directive	Nathan Butler		"### Problem

proxy_next_upstream off does not work as expected when the hash directive from ngx_http_upstream_module is used. nginx does not try a different upstream within the same request, but subsequent requests are routed to a different upstream until fail_timeout is exceeded. Since each request hashes to exactly one upstream, only that upstream should ever be attempted.

Workaround: set max_fails=0 on each upstream server. This is suboptimal because requests are still sent to the failing upstream, and if that upstream is not down but merely slower than the proxy timeouts, connections can start to pile up on it.
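A minimal sketch of that workaround, based on the upstream block from the configuration below (values are illustrative):

```nginx
upstream test {
    # max_fails=0 disables failure accounting for a server, so nginx
    # never marks it as temporarily down and the hash keeps mapping
    # each key to the same peer -- at the cost of still sending
    # requests to a peer that may be failing.
    server 127.0.0.1:8081 max_fails=0;
    server 127.0.0.1:8082 max_fails=0;
    hash $arg_h;
}
```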

### Configuration

# -*- mode: nginx -*-
#

user www-data;

worker_processes  1;

error_log  /mnt/logs/nginx/error.log info;

pid        /var/run/nginx.pid;

daemon off;

events {
    worker_connections  8096;
    use epoll;
}


http {
    log_format main '$remote_addr [$time_local] $request '
                    '""$status"" $body_bytes_sent '
                    '""$request_time"" '
                    '$upstream_addr '
                    '$upstream_status '
                    '$upstream_response_time';

    access_log /mnt/logs/nginx/access.log main;

    upstream test {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        hash $arg_h; # Toggle this to exhibit behavior
    }

    server {
        listen 80;
        server_name """";
        location / {
            proxy_pass http://test/;
            proxy_next_upstream off; # Toggle this to exhibit behavior
        }
    }
}

Running the following to create the 8081 upstream: python -m SimpleHTTPServer 8081 (nothing is started on 8082, so that upstream is down).
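For completeness, a self-contained sketch of this test setup, using python3's http.server as a stand-in for the Python 2 SimpleHTTPServer command above (nothing listens on 8082, so nginx would see connection failures there):

```shell
# Start the healthy upstream on 8081 (python3 equivalent of SimpleHTTPServer).
python3 -m http.server 8081 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1   # give the server a moment to bind

# 8081 answers; 8082 has no listener, so requests proxied there would fail.
code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8081/)
echo "8081 -> $code"

kill "$srv"
```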

### Control Scenario: Not using hash directive, proxy_next_upstream set to off, upstream 8082 is down.

Behavior expected: Over a 20-second window with 20 requests, requests will round-robin and we should see a 502 response whenever 8082 is tried (but due to max_fails and fail_timeout, 8082 will not be hit 50 percent of the time).

$ count=0 ; while ((count < 20 )) ; do curl -s localhost >/dev/null ; let count=$((count + 1)) ; sleep 1 ; done
127.0.0.1 [08/Jan/2015:12:59:57 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:12:59:58 -0500] GET / HTTP/1.1 ""502"" 172 ""0.000"" 127.0.0.1:8082 502 0.000
127.0.0.1 [08/Jan/2015:12:59:59 -0500] GET / HTTP/1.1 ""200"" 508 ""0.002"" 127.0.0.1:8081 200 0.002
127.0.0.1 [08/Jan/2015:13:00:00 -0500] GET / HTTP/1.1 ""200"" 508 ""0.002"" 127.0.0.1:8081 200 0.002
127.0.0.1 [08/Jan/2015:13:00:01 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:02 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:03 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:04 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:05 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:06 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:07 -0500] GET / HTTP/1.1 ""200"" 508 ""0.002"" 127.0.0.1:8081 200 0.002
127.0.0.1 [08/Jan/2015:13:00:08 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:09 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:10 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:11 -0500] GET / HTTP/1.1 ""502"" 172 ""0.000"" 127.0.0.1:8082 502 0.000
127.0.0.1 [08/Jan/2015:13:00:12 -0500] GET / HTTP/1.1 ""200"" 508 ""0.002"" 127.0.0.1:8081 200 0.002
127.0.0.1 [08/Jan/2015:13:00:13 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:14 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001
127.0.0.1 [08/Jan/2015:13:00:15 -0500] GET / HTTP/1.1 ""200"" 508 ""0.002"" 127.0.0.1:8081 200 0.002
127.0.0.1 [08/Jan/2015:13:00:16 -0500] GET / HTTP/1.1 ""200"" 508 ""0.001"" 127.0.0.1:8081 200 0.001

Behavior observed: Indeed, a request fails each time 8082's fail_timeout expires and a new attempt is made to it; while within the fail_timeout window, the round-robin balancing method marks 8082 as temporarily down and does not route requests to it. This is the correct behavior.

### Hash Scenario: Using hash directive, proxy_next_upstream set to off, upstream 8082 is down.

Behavior expected: Over a 20-second window with 20 requests, requests that hash to the 8082 upstream will always fail.

In this scenario we hash on the h query-string argument: baz hashes to the 8081 upstream and foo hashes to the 8082 upstream.

$ count=0 ; while ((count < 20 )) ; do curl -s localhost?h=baz >/dev/null ; let count=$((count + 1)) ; sleep 1 ; done
127.0.0.1 [08/Jan/2015:13:04:04 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:05 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:06 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:07 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:08 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:09 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:10 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:11 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:12 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:13 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.000"" 127.0.0.1:8081 301 0.000
127.0.0.1 [08/Jan/2015:13:04:14 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:15 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:16 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:17 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:18 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.000"" 127.0.0.1:8081 301 0.000
127.0.0.1 [08/Jan/2015:13:04:19 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:20 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:21 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:22 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:04:23 -0500] GET /?h=baz HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001

$ count=0 ; while ((count < 20 )) ; do curl -s localhost?h=foo >/dev/null ; let count=$((count + 1)) ; sleep 1 ; done
127.0.0.1 [08/Jan/2015:13:03:13 -0500] GET /?h=foo HTTP/1.1 ""502"" 172 ""0.001"" 127.0.0.1:8082 502 0.001
127.0.0.1 [08/Jan/2015:13:03:14 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:15 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:16 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:17 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:19 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:20 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:21 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.002"" 127.0.0.1:8081 301 0.002
127.0.0.1 [08/Jan/2015:13:03:22 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:23 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:24 -0500] GET /?h=foo HTTP/1.1 ""502"" 172 ""0.000"" 127.0.0.1:8082 502 0.000
127.0.0.1 [08/Jan/2015:13:03:25 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:26 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:27 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:28 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:29 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:30 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:31 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:32 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001
127.0.0.1 [08/Jan/2015:13:03:33 -0500] GET /?h=foo HTTP/1.1 ""301"" 5 ""0.001"" 127.0.0.1:8081 301 0.001

Behavior observed: baz hashes correctly to 8081 and all of its requests go to that upstream. foo hashes to 8082; the first request 502s as expected, but subsequent requests are routed to 8081 until fail_timeout is exceeded. With proxy_next_upstream off, I would expect requests to fail immediately while within the fail_timeout window, because they hash only to 8082 and thus should never go to another server.
"	defect	closed	minor		nginx-core	1.7.x	invalid	proxy_next_upstream, hash		Linux natedev 3.8.0-39-generic #57~precise1-Ubuntu SMP Tue Apr 1 20:04:50 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux	"nginx version: nginx/1.7.8
TLS SNI support enabled
configure arguments: --with-debug --prefix=/usr/local --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/body --http-proxy-temp-path=/var/lib/nginx/proxy --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-sha1=/usr/lib --add-module=../ngx_devel_kit-0.2.18 --add-module=../lua-nginx-module-0.9.13 --add-module=../headers-more-nginx-module-0.25"
