Opened 4 years ago

Closed 4 years ago

Last modified 17 months ago

#2022 closed defect (invalid)

nginx proxy_next_upstream behaves oddly

Reported by: jangys9510@… Owned by:
Priority: minor Milestone:
Component: documentation Version: 1.16.x
Keywords: proxy_next_upstream, upstream Cc: jangys9510@…
uname -a: Linux test.server 2.6.32-754.15.3.el6.x86_64 #1 SMP Tue Jun 18 16:25:32 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.16.0
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/home1/irteam/apps/nginx-1.16.0 --user=irteam --group=irteam --error-log-path=/home1/irteam/apps/nginx/logs/error.log --http-log-path=/home1/irteam/apps/nginx/logs/access.log --without-http_scgi_module --without-http_uwsgi_module --without-http_fastcgi_module --with-http_ssl_module --with-http_sub_module --with-http_dav_module --with-http_stub_status_module --add-module=../ngx_http_neoauth_module-1.0.14-x64

Description

nginx's proxy_next_upstream behaves oddly.

I want the request to be passed to the next server when the upstream that proxies to another country fails to return a response.
So I set up the upstream (proxy-to-other-country) like this:
---
upstream proxy-to-other-country {
    server de1-test.com max_fails=0;
    server de1-test.com:81 max_fails=0 backup; # for proxy_next_upstream
    keepalive 60;
    keepalive_requests 1000;
    keepalive_timeout 300s;
}

---
It works well. But the upstream that connects to Tomcat also retries when it fails.

upstream tomcat config
---
upstream tomcat {
    server 127.0.0.1:8080 max_fails=0;
    keepalive 30;
}
---
I thought nginx tried the next upstream only when the upstream had two or more servers.

But it retries more than twice even though there is only one server in the upstream.

I use nginx, Tomcat, and Java Spring.

Here are my nginx config, access log, and error log.

proxy config
---
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $x_forward_protocol_scheme;
proxy_http_version 1.1;
proxy_set_header Connection "";

proxy_buffering off; #default on
proxy_request_buffering off; #default on
proxy_buffer_size 32k; #default 4k|8k
proxy_buffers 20 32k; #default 8 4k|8k
proxy_busy_buffers_size 64k; # default 8k|16k
proxy_temp_file_write_size 64k; # default 8k|16k
proxy_connect_timeout 75s; #default 60s
proxy_read_timeout 1800s; #default 60s
proxy_send_timeout 1800s; #default 60s

proxy_next_upstream error timeout invalid_header non_idempotent http_502 http_504;
proxy_next_upstream_tries 2;
---
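
The location blocks are omitted here; roughly, and with placeholder paths, they look like this (the shared proxy settings above, including proxy_next_upstream, apply to both upstreams):
---
# placeholder paths, not the real ones; the proxy_* directives
# above apply to both locations
location /de/ {
    proxy_pass http://proxy-to-other-country;
}

location / {
    proxy_pass http://tomcat;
}
---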

log format
---
[$time_local] "$request" [$status] $body_bytes_sent $request_time "$http_referer" "$http_user_agent" $upstream_addr $upstream_connect_time $upstream_header_time $upstream_response_time $upstream_bytes_sent $upstream_bytes_received {$upstream_status}
---

access log
---
[31/Jul/2020:10:14:32 +0900] "GET /sections/a?lc=ko&ts=20200730144337&lm=1595927500 HTTP/1.1" [502] 552 0.510 "https://test.com/main" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/18.17763" 127.0.0.1:8080, 127.0.0.1:8080 0.000, - -, - 0.509, 0.000 3494, 0 0, 0 {502, 502}
[31/Jul/2020:10:14:32 +0900] "POST /ajax/a?ts=1596158072677&rl=14101 HTTP/1.1" [502] 552 0.135 "https://test.com/main" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Whale/2.7.98.24 Safari/537.36" 127.0.0.1:8080, 127.0.0.1:8080, 127.0.0.1:8080, 127.0.0.1:8080 0.000, 0.000, 0.000, - -, -, -, - 0.126, 0.000, 0.001, 0.005 4840, 4840, 4840, 0 0, 0, 0, 0 {502, 502, 502, 502}
---

error log
---
2020/07/31 10:14:32 [error] 111509#0: *108712 upstream prematurely closed connection while reading response header from upstream, client: xx.xx.xx.xxx, server: test.com, request: "POST /ajax/a?ts=1596158072677&rl=14101 HTTP/1.1", upstream: "http://127.0.0.1:8080/ajax/a?ts=1596158072677&rl=14101", host: "test.com", referrer: "https://test.com/main"
2020/07/31 10:14:32 [error] 111509#0: *108712 upstream prematurely closed connection while reading response header from upstream, client: xx.xx.xx.xxx, server: test.com, request: "POST /ajax/a?ts=1596158072677&rl=14101 HTTP/1.1", upstream: "http://127.0.0.1:8080/ajax/a?ts=1596158072677&rl=14101", host: "test.com", referrer: "https://test.com/main"
2020/07/31 10:14:32 [error] 111511#0: *108612 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: yy.yy.yy.yyy, server: test.com, request: "GET /sections/a?lc=ko&ts=20200730144337&lm=1595927500 HTTP/1.1", upstream: "http://127.0.0.1:8080/sections/a?lc=ko&ts=20200730144337&lm=1595927500", host: "test.com", referrer: "https://test.com/main"
2020/07/31 10:14:32 [error] 111511#0: *108612 connect() failed (111: Connection refused) while connecting to upstream, client: yy.yy.yy.yyy, server: test.com, request: "GET /sections/a?lc=ko&ts=20200730144337&lm=1595927500 HTTP/1.1", upstream: "http://127.0.0.1:8080/sections/a?lc=ko&ts=20200730144337&lm=1595927500", host: "test.com", referrer: "https://test.com/main"
2020/07/31 10:14:32 [error] 111509#0: *108712 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xxx, server: test.com, request: "POST /ajax/a?ts=1596158072677&rl=14101 HTTP/1.1", upstream: "http://127.0.0.1:8080/ajax/a?ts=1596158072677&rl=14101", host: "test.com", referrer: "https://test.com/main"
---
As you can see, I set proxy_next_upstream_tries to 2, but it retries more than twice (access log: {502, 502, 502, 502} means nginx made four attempts against the upstream).
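
Breaking the second access log line down against the log format above:
---
$upstream_addr   = 127.0.0.1:8080, 127.0.0.1:8080, 127.0.0.1:8080, 127.0.0.1:8080
$upstream_status = {502, 502, 502, 502}
# four comma-separated entries = four attempts, all to the same server
---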

I have questions about that.

  1. I don't want the tomcat upstream to retry by passing the request to the next server. Is that possible?
  2. Why does it retry when there is only one server in the upstream?
  3. Why does it retry more than twice?
  4. When the upstream returns 502, why can nginx temporarily not connect to it? (When this occurred, the server status was normal.)

If you know the answer to any of my questions, please let me know.

Thank you.

Change History (2)

comment:1 by Maxim Dounin, 4 years ago

Resolution: invalid
Status: new → closed

Using keepalive connections implies that nginx is required to retry the request when it fails due to an asynchronous close event. To do so, nginx allows an additional attempt to contact the upstream server if the request fails, even if there is only one server in the upstream block.
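
With the tomcat upstream shown above, the additional attempt is tied to the keepalive 30; directive. A sketch of the same upstream without keepalive, which avoids the asynchronous-close retry at the cost of a new connection per request:
---
upstream tomcat {
    server 127.0.0.1:8080 max_fails=0;
    # no "keepalive" directive: each request opens a fresh connection,
    # so there are no idle cached connections for the backend to close
    # asynchronously, and no keepalive-related extra attempt
}
---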

If you want to disable proxy_next_upstream completely, consider using proxy_next_upstream off; in the corresponding location block.
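
Assuming the tomcat upstream is proxied from a location along these lines (the actual location block is not shown in the ticket), that would be:
---
location / {
    proxy_pass http://tomcat;
    # do not pass failed requests to a next server at all;
    # the client receives the error from the first attempt
    proxy_next_upstream off;
}
---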

comment:2 by Maxim Dounin, 17 months ago

See also #2421.
