Opened 6 years ago

Last modified 4 years ago

#1738 reopened defect

NGINX Not Honoring proxy_cache_background_update with Cache-Control: stale-while-revalidate Header

Reported by: Ian Stephens
Owned by:
Priority: minor
Milestone:
Component: other
Version: 1.15.x
Keywords: background update cache control stale-while-revalidate
Cc:
uname -a:
nginx -V: nginx version: nginx/1.15.9

Description

We are running NGINX in front of our backend server.

We are attempting to enable the proxy_cache_background_update feature to allow NGINX to update the cache asynchronously and serve STALE content while it does so.

However, we are noticing that STALE content is still delivered slowly, as if it were not being served from the cache. After an item expires, the time it takes to deliver a response to the client is clearly too long for a cache hit - you can tell NGINX is going to the backend server, fetching an update and serving the client within the same request.

Here is the relevant part of our NGINX configuration:

proxy_ignore_headers Expires;
proxy_cache_background_update   on;

Our backend server is delivering the following headers:

HTTP/1.1 200 OK
Date: Thu, 28 Feb 2019 21:07:09 GMT
Server: Apache
Cache-Control: max-age=1800, stale-while-revalidate=604800
Content-Type: text/html; charset=UTF-8

When fetching an expired page we do correctly see a STALE response in the headers:

X-Cache: STALE

However, this response is delivered very slowly, as if NGINX has contacted the backend server and refreshed the content in real time.
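As a sketch of the kind of check involved (the URL is a placeholder, and we assume the X-Cache header is exposed with add_header X-Cache $upstream_cache_status; in the nginx configuration):

# Hypothetical test URL; prints the response headers plus timing figures
curl -sS -D - -o /dev/null \
     -w 'time_starttransfer: %{time_starttransfer}s  time_total: %{time_total}s\n' \
     https://example.com/some-page

A response served from the cache should show small values for both timings; what we see after expiry is closer to a full backend round trip.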

NGINX version:

$ nginx -v
nginx version: nginx/1.15.9

It seems that nginx does honor serving stale content (as we have tested), but it also updates the cache from the backend within the same request, which causes the slow response time to the client. In other words, it appears to ignore the proxy_cache_background_update on; directive entirely and does not perform the update in the background in a separate subrequest (asynchronously).

We have also tried with

proxy_cache_use_stale updating;

However, the same behavior occurs. As far as I'm aware, there should also be no need for proxy_cache_use_stale updating; when the backend sets a Cache-Control: stale-while-revalidate header. The issue seems to be that NGINX honors serving STALE content, but it also updates the cache within the same client request - i.e. it is simply ignoring proxy_cache_background_update on;.

Change History (7)

comment:1 by Roman Arutyunyan, 6 years ago

It is true to a certain degree that nginx updates the stale cache entry in the context of the client request which triggered the update. But the client is supposed to receive the entire cached stale response quickly, without waiting for a new backend response. However, the next request in the same client connection will be delayed until the cache entry is updated. Depending on how you measure the response time, this may look like a slow response, while the actual cached response is served quickly.
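A minimal sketch of how the two cases can be told apart (placeholder URL; separate curl invocations open separate connections, while a single invocation with two URLs reuses one keepalive connection):

# Two separate invocations, two separate connections: each one should get
# the stale cached response quickly, even while an update is in progress
curl -sS -o /dev/null -w 'total: %{time_total}s\n' https://example.com/some-page
curl -sS -o /dev/null -w 'total: %{time_total}s\n' https://example.com/some-page

# One invocation, two URLs, one keepalive connection: the second request
# is delayed until the cache update triggered by the first one finishes
curl -sS -o /dev/null -o /dev/null -w 'total: %{time_total}s\n' \
     https://example.com/some-page https://example.com/some-page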

comment:2 by Maxim Dounin, 6 years ago

Resolution: duplicate
Status: new → closed

Duplicate of #1723.

comment:3 by Ian Stephens, 6 years ago (in reply to comment:2)

Resolution: duplicate
Status: closed → reopened

Replying to mdounin:

Duplicate of #1723.

Thank you for your reply.

This is not the issue we are discussing. We have tested this by doing the following:

We first ensure a valid cache entry is stored (HIT). Once we know the entry has expired, we run a test on a brand new connection, expecting a fast STALE. However, we don't get a fast STALE - we get a STALE that takes as long as fetching a new page from the backend.
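Roughly, the test looks like this (placeholder URL; every curl invocation below opens a brand new connection):

# 1. Prime the cache, then confirm a HIT on a repeat request
curl -sS -D - -o /dev/null https://example.com/some-page | grep -i x-cache
curl -sS -D - -o /dev/null https://example.com/some-page | grep -i x-cache

# 2. Wait for max-age (1800 seconds in our case) to pass, then time the
#    expected STALE response on a fresh connection
curl -sS -D - -o /dev/null -w 'total: %{time_total}s\n' \
     https://example.com/some-page | grep -iE 'x-cache|total'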

We only notice the behavior when using

Cache-Control: stale-while-revalidate

on the backend server.

We do not experience the behavior when we use a standard hardcoded configuration, for example:

proxy_cache_valid 200 3M;
proxy_cache_valid 304 0;
proxy_cache_valid 404 410 1m;
proxy_cache_use_stale updating;
proxy_cache_background_update on;

The config above does return fast STALE responses.

The issue seems to be with stale-while-revalidate headers. NGINX does honor returning STALE (no problem there), but the update is not performed in the background (proxy_cache_background_update).

comment:4 by Ian Stephens, 6 years ago (in reply to comment:1)

Replying to arut:

It is true to a certain degree that nginx updates the stale cache entry in the context of the client request which triggered the update. But the client is supposed to receive the entire cached stale response quickly, without waiting for a new backend response. However, the next request in the same client connection will be delayed until the cache entry is updated. Depending on how you measure the response time, this may look like a slow response, while the actual cached response is served quickly.

Sorry, I should have replied to you - not mdounin. My reply in comment:3 above covers our testing: we only see the slow STALE behavior when the backend sends Cache-Control: stale-while-revalidate, while the hardcoded proxy_cache_valid configuration returns fast STALE responses even on a brand new connection.

comment:5 by maxim, 6 years ago

Milestone: nginx-1.15

Ticket retargeted after milestone closed

comment:6 by capraynor@…, 4 years ago

Hi, Team:
Same issue here.

We are using proxy_cache_background_update and proxy_cache_use_stale.
Nginx returns the stale cached content very slowly.

Return value when executing nginx -v: nginx version: nginx/1.17.10 (Ubuntu)
Return value when executing uname -a: Linux dd3-raynor-ubuntu 5.4.0-33-generic #37-Ubuntu SMP Thu May 21 12:53:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

JFYI, the upstream returns the following headers:

HTTP/1.1 200 OK
Cache-Control: public, max-age=0
Content-Length: 8113
Content-Type: application/javascript; charset=UTF-8
Last-Modified: Wed, 10 Jun 2020 02:54:24 GMT
Accept-Ranges: bytes
ETag: W/"1fb1-1729c266f91"
Vary: Accept-Encoding
Access-Control-Allow-Origin: *
X-DNS-Prefetch-Control: off
X-Frame-Options: SAMEORIGIN
Strict-Transport-Security: max-age=15552000; includeSubDomains
X-Download-Options: noopen
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Date: Wed, 17 Jun 2020 02:53:30 GMT

Regards
Raynor


comment:7 by Maxim Dounin, 4 years ago

As long as Request 4 is in the same connection as Request 3, it is expected that Request 4 won't be processed until the background cache update initiated by Request 3 finishes, see ticket:1723#comment:1.
