Opened 19 months ago
Last modified 15 months ago
#2490 new defect
The backup upstream's response inherits response values from the previous upstream server that failed.
Reported by:
Owned by:
Priority: minor
Milestone:
Component: nginx-module
Version: 1.23.x
Keywords:
Cc:
uname -a:
nginx -V:

/usr/local/nginx/sbin/nginx -V
nginx version: nginx/1.23.4
built by gcc 11.3.0 (Ubuntu 11.3.0-1ubuntu1~22.04)
configure arguments: --with-debug
Description (last modified)
With an upstream configuration that defines primary and backup servers as follows, if the primary server returns a response whose status code is listed in proxy_next_upstream and which carries a "Cache-Control: max-age=XX" header, the response from the backup server is cached, even though it has no "Cache-Control" header of its own.
upstream upstream_http {
    server unix:/run/nginx_1.sock max_fails=1 fail_timeout=10s;
    server unix:/run/nginx_2.sock max_fails=1 fail_timeout=10s backup;
}
primary upstream server's response:
HTTP/1.1 500 Internal Server Error
Server: -
Date: -
Content-Type: text/html
Content-Length: 174
Connection: keep-alive
Cache-Control: max-age=15
backup upstream server's response:
HTTP/1.1 200 OK
Server: -
Date: -
Content-Type: application/octet-stream
Content-Length: 30
Connection: keep-alive
Based on the debug log, the response from the backup server is marked "http cacheable: 1" when it is received, and it is then cached for the time specified by the "Cache-Control: max-age=15" header of the primary server's response.
[debug] 8278#0: *1 http write filter: l:0 f:0 s:184
[debug] 8278#0: *1 http file cache set header
[debug] 8278#0: *1 http cacheable: 1
[debug] 8278#0: *1 http proxy filter init s:200 h:0 c:0 l:30
[debug] 8278#0: *1 http upstream process upstream
The initialization performed when the request moves to the next upstream server appears to be insufficient, because applying the following patch prevents the backup response from being cached.
diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c
index 9cc202c9..1487e9ca 100644
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -1626,6 +1626,9 @@ ngx_http_proxy_reinit_request(ngx_http_request_t *r)
     r->upstream->pipe->input_filter = ngx_http_proxy_copy_filter;
     r->upstream->input_filter = ngx_http_proxy_non_buffered_copy_filter;
     r->state = 0;
+    if (r->cache != NULL) {
+        r->cache->valid_sec = 0;
+    }
 
     return NGX_OK;
 }
Is there a better way to perform this initialization, so that the servers within an upstream group do not affect each other's responses?
※ I understand that it is uncommon for a 500 response to carry a "Cache-Control: max-age=XX" header. However, my nginx reverse proxy sometimes receives such responses, and I want to cache them as a so-called negative cache.
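(As an aside, a proxy-side alternative for such negative caching, which does not depend on the upstream sending a Cache-Control header, is the proxy_cache_valid directive. A minimal sketch, where the 15s lifetime is an assumed value; an explicit Cache-Control header from the upstream still takes precedence over proxy_cache_valid:)

location / {
    proxy_cache        cache_all;
    proxy_pass         http://upstream_http;
    # cache 500 responses for 15 seconds even when the upstream
    # sends no Cache-Control header
    proxy_cache_valid  500 15s;
}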
I am attaching the configuration and debug log.
Configuration:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream upstream_http {
        server unix:/run/nginx_1.sock max_fails=1 fail_timeout=10s;
        server unix:/run/nginx_2.sock max_fails=1 fail_timeout=10s backup;
    }

    server {
        listen unix:/run/nginx_1.sock;
        access_log /var/log/nginx/access_y.log;
        error_log /var/log/nginx/error_1.log debug;

        location / {
            resolver 127.0.0.53;
            resolver_timeout 5s;
            proxy_http_version 1.1;
            proxy_pass http://fail.example.com/$request_uri;
            proxy_set_header Connection "";
            proxy_set_header Host "fail.example.com";
            proxy_pass_header x-accel-expires;
        }
    }

    server {
        listen unix:/run/nginx_2.sock;
        access_log /var/log/nginx/access_y.log;
        error_log /var/log/nginx/error_2.log debug;

        location / {
            resolver 127.0.0.53;
            resolver_timeout 5s;
            proxy_http_version 1.1;
            proxy_pass http://success.example.com/$request_uri;
            proxy_set_header Connection "";
            proxy_pass_header x-accel-expires;
        }
    }

    proxy_cache_path /var/data/cache/ levels=1:2 use_temp_path=off keys_zone=cache_all:365516 inactive=720m max_size=539553;

    server {
        listen 80;
        server_name localhost;
        access_log /var/log/nginx/access_y.log;
        error_log /var/log/nginx/error_x.log debug;

        proxy_cache_lock on;
        proxy_cache_lock_timeout 10s;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 1;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_next_upstream http_500 http_502 http_503 http_504 http_429 timeout;

        location / {
            proxy_cache cache_all;
            proxy_pass http://upstream_http;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
Attachments (4)
Change History (7)
by , 19 months ago: Attachment request_log.txt added
by , 19 months ago: Attachment debuglog_error_1.log added
by , 19 months ago: Attachment debuglog_error_2.log added
by , 19 months ago: Attachment debuglog_error_x.log added
comment:1 by , 19 months ago: Description modified (diff)
follow-up: 3 comment:2 by Sergey Kandaurov, 15 months ago

So you would like to cache 500 errors, but at the same time try the next upstream servers. This looks a bit contradictory, because once a 500 response is ever cached, it will be served from the cache, bypassing the upstream. This may happen if all servers have been tried and this is the last response within the upstream group, or if the response is served in a different location without proxy_next_upstream. This particular case can be addressed, though, by selectively disabling caching of such responses in the first place, e.g. by using proxy_no_cache.

Anyway, inheriting the cache status on next upstream from a previous unsuccessful attempt looks like a bug. Below is a more generic patch:

# HG changeset patch
# User Sergey Kandaurov <pluknet@nginx.com>
# Date 1695042260 -14400
#      Mon Sep 18 17:04:20 2023 +0400
# Node ID 47755c5238e310514b1a049f116edf6546fd8524
# Parent  4681f1faf86c9fa6aab1407260ed32a7be374b7f
Upstream: do not inherit cache valid_sec on next upstream.

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -4446,6 +4446,12 @@ ngx_http_upstream_next(ngx_http_request_
         return;
     }
 
+#if (NGX_HTTP_CACHE)
+    if (r->cache) {
+        r->cache->valid_sec = 0;
+    }
+#endif
+
     if (u->peer.connection) {
         ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                        "close http upstream connection: %d",
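(To illustrate the proxy_no_cache suggestion above, a minimal sketch; the map variable name $skip_neg_cache is hypothetical, and the cache zone is the one from the reporter's configuration:)

# map is declared at http level and evaluated lazily, at the time
# the caching decision is made
map $upstream_status $skip_neg_cache {
    default  0;
    500      1;
}

server {
    location / {
        proxy_cache     cache_all;
        proxy_pass      http://upstream_http;
        # do not store 500 responses in the cache
        proxy_no_cache  $skip_neg_cache;
    }
}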
comment:3 by , 15 months ago
Replying to Sergey Kandaurov:
> Anyway, inheriting the cache status on next upstream from a previous unsuccessful attempt looks like a bug. Below is a more generic patch:
> [...]
This should be in ngx_http_upstream_reinit(). It is not enough though, as u->cacheable, r->cache->updating_sec, and r->cache->error_sec have the same problem (check ngx_http_upstream_process_cache_control() and other header processing functions). And there is no easy fix for at least u->cacheable, since it cannot be simply reset.

Correct handling probably requires moving all the values as parsed from the upstream response headers into u->headers_in, and applying them to r->cache / r->upstream only when copying the response headers, much like we do for other values extracted from the upstream response headers. Some parts of this approach were already implemented in 8041:0784ab86ad08 (where it matters for parsing of a single response).
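(For illustration only, a sketch of what such a reset in ngx_http_upstream_reinit() might look like, covering just the r->cache fields named above; as noted, this is not a complete fix, since u->cacheable cannot simply be reset:)

    /* Sketch, not a complete fix: clear cache validity values parsed
     * from a failed attempt so the next upstream server does not
     * inherit them. Assumed placement: inside ngx_http_upstream_reinit().
     */
#if (NGX_HTTP_CACHE)
    if (r->cache) {
        r->cache->valid_sec = 0;
        r->cache->updating_sec = 0;
        r->cache->error_sec = 0;
    }
#endif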