#1233 closed defect (invalid)

Stale 200 served even when backend returns 404, with new proxy_cache_background_update directive

Reported by: stackoverflow.com/users/2528298/ted-wilmont Owned by:
Priority: major Milestone:
Component: nginx-core Version: 1.11.x
Keywords: proxy_cache_background_update, proxy cache Cc:
uname -a: Darwin 14.3.0 Darwin Kernel Version 14.3.0: Mon Mar 23 11:59:05 PDT 2015; root:xnu-2782.20.48~5/RELEASE_X86_64 x86_64
nginx -V: nginx version: nginx/1.11.10


Hi there,

We have recently enabled proxy_cache_background_update and what a great feature it is.

However, we have noticed what looks like a bug. Hopefully it's just a misconfiguration on our part, but I thought I'd post it as I can't work out why this is happening.

We run a backend server that sets an X-Accel-Expires header of 3600 seconds.

The problem: with proxy_cache_background_update enabled, if we deleted a page on the origin server that was previously cached, nginx would continuously serve the old stale 200 page even though the backend was returning a 404 - even after the 3600 seconds had expired.

The 200 response was always "STALE" and never updated to a 404. To fix this, we had to disable proxy_cache_background_update. Once that was disabled, a correct 404 page was sent to the client.

Our config (in http context):

proxy_cache_use_stale  updating;
proxy_cache_background_update on;

As you can see, we are not using "proxy_cache_use_stale error;" or anything like that; just "updating".

Our proxy cache config:

proxy_cache_path /cache levels=1:2 keys_zone=one:256m max_size=128g inactive=14d;
proxy_cache_key $scheme$host$request_uri;

Our config for the site in question:

server {
    listen ssl http2;
    server_name www.oursite.co.uk oursite.co.uk;

    proxy_cache one;

    ssl_certificate     /certificates/www.oursite.co.uk.cer;
    ssl_certificate_key /certificates/www.oursite.co.uk.key;

    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_dhparam /certificates/dhparams.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    #ssl_stapling on;
    #ssl_stapling_verify on;
    #ssl_trusted_certificate /certificates/thawte.cer;

    resolver valid=300s;
    resolver_timeout 10s;

    location / {
        set $wordpress_auth "";
        if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
            set $wordpress_auth wordpress_logged_in_$1;
        }

        proxy_cache_bypass $wordpress_auth;
        proxy_no_cache $wordpress_auth;

        #add_header Strict-Transport-Security "max-age=31536000";
        add_header X-Cache $upstream_cache_status;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 180s;
    }
}

Hopefully somebody can shed some light?

Change History (8)

comment:1 Changed 21 months ago by stackoverflow.com/users/2528298/ted-wilmont


The original page had the X-Accel-Expires header set at 3600 seconds, but the new 404 on the same URL had no X-Accel-Expires header at all - only an Expires header set to: Wed, 11 Jan 1984 05:00:00 GMT.

comment:2 Changed 21 months ago by arut

The background cache updater receives an uncacheable response (expires in the past) and ignores it. What behaviour did you expect?

comment:3 Changed 21 months ago by stackoverflow.com/users/2528298/ted-wilmont

But that's not what happens when proxy_cache_background_update is disabled: then the correct 404 page is sent to the client and the cache entry is removed, as expected once the 3600 seconds have expired.

I would expect the STALE page to be removed and to forward on the correct 404.

Once the 3600 seconds have expired, the background cache updater fetches the URL once again from the origin server, and if it returns a 404, I'd expect that 404 to be forwarded to future clients. That's how I would anticipate it working, and it does work this way with proxy_cache_background_update disabled.

Maybe I'm not configuring this correctly? What config changes would produce the behaviour I expect?

Last edited 21 months ago by stackoverflow.com/users/2528298/ted-wilmont (previous) (diff)

comment:4 Changed 21 months ago by arut

When proxy_cache_background_update is disabled, there is a client to receive the 404 response (which is not cached anyway). In a background subrequest, there is no client to send the 404 to. Why would nginx cache an already-expired response? In a way, it's even "more stale" than the old one.

comment:5 Changed 21 months ago by stackoverflow.com/users/2528298/ted-wilmont

Thank you for your reply.

In that case, how do we configure nginx to drop the cache entry when the background updater receives a 404? Otherwise nginx will never forward the 404 for URLs that have been removed but were previously cached.

Last edited 21 months ago by stackoverflow.com/users/2528298/ted-wilmont (previous) (diff)

comment:6 Changed 21 months ago by arut

What you can do is stop taking the cache time from the response, using the "proxy_ignore_headers" directive, and instead set it directly with "proxy_cache_valid":

proxy_ignore_headers Expires;
proxy_cache_valid 404 1s;
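
For completeness, here is how those two lines might sit in a full location block (a sketch; the upstream name and the 1-hour value are illustrative, not from this thread):

    location / {
        proxy_pass http://backend;        # illustrative upstream name
        proxy_cache one;
        proxy_ignore_headers Expires;     # don't take cache time from the response
        proxy_cache_valid 200 1h;         # explicit lifetime for good responses
        proxy_cache_valid 404 1s;         # a revalidated 404 displaces the stale 200 almost at once
    }

With this, the background updater caches the 404 for one second, so subsequent client requests see a fresh 404 rather than the stale 200.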

comment:7 Changed 21 months ago by stackoverflow.com/users/2528298/ted-wilmont

That's interesting, and thank you for suggesting that.

However, we also cache image/video files from the origin server and deliver them from the nginx cache directly to the client. These have Expires headers of 3 months set on the origin server. We don't want a 3-month lifetime on all 200 responses, so how could we work around that?

Last edited 21 months ago by stackoverflow.com/users/2528298/ted-wilmont (previous) (diff)
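
One possible workaround (a sketch, not suggested in the thread; upstream name and patterns are illustrative): scope the directives per location, so media keeps the origin's long Expires while pages get explicit lifetimes:

    # Media: Expires from the origin is honoured, so the 3-month lifetime stays.
    location ~* \.(jpe?g|png|gif|mp4)$ {
        proxy_pass http://backend;
        proxy_cache one;
    }

    # Everything else: ignore Expires and set lifetimes per status code.
    location / {
        proxy_pass http://backend;
        proxy_cache one;
        proxy_ignore_headers Expires;
        proxy_cache_valid 200 1h;
        proxy_cache_valid 404 1s;
    }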

comment:8 Changed 20 months ago by mdounin

  • Resolution set to invalid
  • Status changed from new to closed

Closing this: the observed behaviour is as intended and not a bug. It is also not really related to proxy_cache_background_update, but rather a result of the proxy_cache_use_stale setting used. Background updates just make the behaviour more obvious and easier to observe.

There are two basic options to resolve this:

  • Make sure that resources cached do not return uncacheable results to revalidation requests, as suggested by Roman.
  • Avoid using proxy_cache_use_stale updating for an unbounded time; use the Cache-Control: stale-while-revalidate=<seconds> header instead. This lets you specify a maximum time after which a stale response will be removed.
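
A minimal sketch of the second option (header values are illustrative): have the origin send a bounded stale window instead of relying on an open-ended proxy_cache_use_stale updating:

    # Origin response header (illustrative values):
    #   Cache-Control: max-age=3600, stale-while-revalidate=60
    #
    # nginx side - "proxy_cache_use_stale updating" is no longer needed;
    # a stale entry is served for at most 60 seconds while the background
    # update runs (stale-while-revalidate is supported since nginx 1.11.10).
    proxy_cache one;
    proxy_cache_background_update on;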