Opened 12 months ago

Last modified 7 months ago

#2490 new defect

the backup upstream response inherits the response value of the previous upstream that failed. — at Version 1

Reported by: soukichi@… Owned by:
Priority: minor Milestone:
Component: nginx-module Version: 1.23.x
Keywords: Cc:
uname -a:
nginx -V: /usr/local/nginx/sbin/nginx -V
nginx version: nginx/1.23.4
built by gcc 11.3.0 (Ubuntu 11.3.0-1ubuntu1~22.04)
configure arguments: --with-debug

Description (last modified by soukichi@…)

When an upstream with primary and backup servers is configured as follows, and nginx receives a response from the primary server with a status code listed in proxy_next_upstream and a "Cache-Control: max-age=XX" header, the response from the backup server is cached even though it does not carry a "Cache-Control" header.

upstream upstream_http {
    server unix:/run/nginx_1.sock max_fails=1 fail_timeout=10s;
    server unix:/run/nginx_2.sock max_fails=1 fail_timeout=10s backup;
}

primary upstream server's response:

HTTP/1.1 500 Internal Server Error
Server: -
Date: -
Content-Type: text/html
Content-Length: 174
Connection: keep-alive
Cache-Control: max-age=15

backup upstream server's response:

HTTP/1.1 200 OK
Server: -
Date: -
Content-Type: application/octet-stream
Content-Length: 30
Connection: keep-alive

Based on the debug log, it appears that when the response from the backup server is received, it is marked "http cacheable: 1" and is cached for the duration specified by the "Cache-Control: max-age=15" header of the primary server's response (a small toy model of this carry-over follows the log excerpt below).

[debug] 8278#0: *1 http write filter: l:0 f:0 s:184
[debug] 8278#0: *1 http file cache set header
[debug] 8278#0: *1 http cacheable: 1
[debug] 8278#0: *1 http proxy filter init s:200 h:0 c:0 l:30
[debug] 8278#0: *1 http upstream process upstream
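To make the carried-over state easier to see, here is a small standalone toy model of the flow. This is not the nginx source: toy_cache_t, process_cache_control() and reinit_request() only loosely mirror ngx_http_cache_t, the Cache-Control parsing in src/http/ngx_http_upstream.c and ngx_http_proxy_reinit_request().

/* Toy model of the state carry-over (NOT the actual nginx source). */
#include <stdio.h>
#include <time.h>

typedef struct {
    time_t  valid_sec;             /* expiry derived from Cache-Control: max-age */
} toy_cache_t;

typedef struct {
    toy_cache_t  *cache;
} toy_request_t;

/* roughly what happens while the primary server's headers are parsed */
static void
process_cache_control(toy_request_t *r, const char *value)
{
    long  max_age;

    if (sscanf(value, "max-age=%ld", &max_age) == 1) {
        r->cache->valid_sec = time(NULL) + max_age;
    }
}

/* a no-op here, mirroring the fact that ngx_http_proxy_reinit_request()
 * resets parser/filter state but never touches r->cache->valid_sec */
static void
reinit_request(toy_request_t *r)
{
    (void) r;
}

int
main(void)
{
    toy_cache_t    cache = { 0 };
    toy_request_t  r = { &cache };

    process_cache_control(&r, "max-age=15");   /* primary: 500 + max-age=15   */
    reinit_request(&r);                        /* proxy_next_upstream retries */

    /* the backup response has no Cache-Control, yet valid_sec is still set,
     * so the 200 from the backup gets cached for the primary's 15 seconds */
    printf("valid_sec still %ld seconds in the future after the retry\n",
           (long) (cache.valid_sec - time(NULL)));

    return 0;
}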

It seems that re-initialization is insufficient when nginx moves on to the next upstream server, because applying the following patch prevents the backup response from being cached.

diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c
index 9cc202c9..1487e9ca 100644
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -1626,6 +1626,9 @@ ngx_http_proxy_reinit_request(ngx_http_request_t *r)
     r->upstream->pipe->input_filter = ngx_http_proxy_copy_filter;
     r->upstream->input_filter = ngx_http_proxy_non_buffered_copy_filter;
     r->state = 0;
+    if (r->cache != NULL) {
+        r->cache->valid_sec = 0;
+    }
 
     return NGX_OK;
 }

Is there a better way to re-initialize the request so that each server in the upstream group cannot affect the responses of the other servers?
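For example (untested, purely to illustrate the question), could the reset instead live in the generic upstream re-initialization path in src/http/ngx_http_upstream.c, for instance somewhere around ngx_http_upstream_reinit(), so that the other caching protocol modules (fastcgi, uwsgi, scgi) would be covered as well? A rough fragment of the idea, with the exact placement left open:

#if (NGX_HTTP_CACHE)
    /* untested sketch: forget validity inherited from the failed peer
     * before the next upstream server's response is processed */
    if (r->cache) {
        r->cache->valid_sec = 0;
        /* other header-derived cache state may need the same treatment */
    }
#endif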
※ I understand that it is not common for a 500 response to carry a "Cache-Control: max-age=XX" header. However, my nginx reverse proxy sometimes receives such responses and I want to cache them as a so-called negative cache.

I am attaching the configuration and debug log.

conf.

worker_processes  1;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    upstream upstream_http {
        server unix:/run/nginx_1.sock max_fails=1 fail_timeout=10s;
        server unix:/run/nginx_2.sock max_fails=1 fail_timeout=10s backup;
    }

    server {
        listen unix:/run/nginx_1.sock;

        access_log /var/log/nginx/access_y.log;
        error_log  /var/log/nginx/error_1.log debug;

        location / {
            resolver 127.0.0.53;
            resolver_timeout 5s;
            proxy_http_version 1.1;
            proxy_pass http://fail.example.com/$request_uri;
            proxy_set_header Connection "";
            proxy_set_header Host "fail.example.com";
            proxy_pass_header x-accel-expires;
        }
    }

    server {
        listen unix:/run/nginx_2.sock;

        access_log /var/log/nginx/access_y.log;
        error_log  /var/log/nginx/error_2.log debug;

        location / {
            resolver 127.0.0.53;
            resolver_timeout 5s;
            proxy_http_version 1.1;
            proxy_pass http://success.example.com/$request_uri;
            proxy_set_header Connection "";
            proxy_pass_header x-accel-expires;
        }
    }
    proxy_cache_path /var/data/cache/ levels=1:2 use_temp_path=off keys_zone=cache_all:365516 inactive=720m max_size=539553;

    server {
        listen 80;
        server_name  localhost;
        access_log /var/log/nginx/access_y.log;
        error_log  /var/log/nginx/error_x.log debug;

        proxy_cache_lock on;
        proxy_cache_lock_timeout 10s;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 1;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_next_upstream http_500 http_502 http_503 http_504 http_429 timeout;

        location / {
            proxy_cache cache_all;
            proxy_pass http://upstream_http;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}

Change History (5)

by soukichi@…, 12 months ago

Attachment: request_log.txt added

by soukichi@…, 12 months ago

Attachment: debuglog_error_1.log added

by soukichi@…, 12 months ago

Attachment: debuglog_error_2.log added

by soukichi@…, 12 months ago

Attachment: debuglog_error_x.log added

comment:1 by soukichi@…, 12 months ago

Description: modified (diff)