Opened 6 years ago
Last modified 4 years ago
#1738 reopened defect
NGINX Not Honoring proxy_cache_background_update with Cache-Control: stale-while-revalidate Header
Reported by: Ian Stephens
Priority: minor
Component: other
Version: 1.15.x
Keywords: background update cache control stale-while-revalidate
nginx -V: nginx version: nginx/1.15.9
Description
We are running NGINX in front of our backend server.
We are attempting to enable the proxy_cache_background_update feature so that NGINX updates the cache asynchronously and serves STALE content while it does so.
However, we are noticing that STALE responses are still delivered slowly, as if they are not being served from the cache. After an item expires, the response time is clearly that of a full backend round trip - NGINX appears to contact the backend, fetch the update, and only then serve the client, all within the same request.
Here is our configuration from NGINX:
proxy_ignore_headers Expires;
proxy_cache_background_update on;
Our backend server is delivering the following headers:
HTTP/1.1 200 OK
Date: Thu, 28 Feb 2019 21:07:09 GMT
Server: Apache
Cache-Control: max-age=1800, stale-while-revalidate=604800
Content-Type: text/html; charset=UTF-8
When fetching an expired page, we do correctly see a STALE response header:
X-Cache: STALE
However, this response is very slow, as if NGINX has contacted the backend server and fetched the update in real time.
NGINX version:
$ nginx -v
nginx version: nginx/1.15.9
It seems that nginx does serve stale content (as we have tested), but it also updates the cache from the backend within the same request/thread, causing the slow response time to the client. In other words, it appears to be ignoring the proxy_cache_background_update on; directive entirely and not performing the update in a background subrequest (async).
We have also tried with
proxy_cache_use_stale updating;
However, the same behavior occurs. As far as I'm aware, there should also be no need for proxy_cache_use_stale updating; when the backend sets a Cache-Control: stale-while-revalidate header. The issue seems to be that NGINX honors serving STALE content but also updates the cache on the same thread that handles the request - i.e. it is simply ignoring proxy_cache_background_update on;.
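For completeness, here is a minimal sketch of the kind of setup involved; the cache path, zone name, and upstream address are illustrative placeholders, not our actual configuration:

```nginx
# Illustrative sketch only - path, zone name, and upstream are placeholders.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=7d use_temp_path=off;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache app_cache;
        proxy_ignore_headers Expires;

        # Expected behavior: serve the stale entry immediately and refresh
        # it in a background subrequest.
        proxy_cache_background_update on;
        proxy_cache_use_stale updating;

        # Expose MISS/HIT/STALE to the client for testing.
        add_header X-Cache $upstream_cache_status;
    }
}
```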
Change History (7)
follow-up: 4 comment:1 by , 6 years ago
follow-up: 3 comment:2 by , 6 years ago
Resolution: → duplicate
Status: new → closed
Duplicate of #1723.
comment:3 by , 6 years ago
Resolution: duplicate (removed)
Status: closed → reopened
Replying to mdounin:
Duplicate of #1723.
Thank you for your reply.
This is not the issue we are discussing. We have tested this by doing the following...
We ensure a valid cache entry is stored (HIT). Once we know the entry has expired, we run a brand-new connection test expecting a fast STALE. However, we don't get a fast STALE - we get a STALE response that takes just as long as fetching a new page from the backend.
We only notice the behavior when using
Cache-Control: stale-while-revalidate
on the backend server.
We do not experience the behavior when we use a standard hardcoded config, for example:

proxy_cache_valid 200 3M;
proxy_cache_valid 304 0;
proxy_cache_valid 404 410 1m;
proxy_cache_use_stale updating;
proxy_cache_background_update on;
The config above does return fast STALE responses.
The issue seems to be specific to the stale-while-revalidate header. NGINX does honor returning STALE (no problem there), but the update is not performed in the background (proxy_cache_background_update).
comment:4 by , 6 years ago
Replying to arut:
It is true to a certain degree that nginx updates the stale cache entry in the context of a client request which triggered the update. But the client is supposed to receive the entire cached stale response quickly without waiting for a new backend response. However the next request in the same client connection will be delayed until the cache entry is updated. Depending on how you measure the response time, this may look like a slow response, while the actual cached response is served quickly.
Sorry, I should have replied to you - not mdounin:
Replying to mdounin:
Duplicate of #1723.
Thank you for your reply.
This is not the issue we are discussing. We have tested this by doing the following...
We ensure a valid cache entry is stored (HIT). Once we know the entry has expired, we run a brand-new connection test expecting a fast STALE. However, we don't get a fast STALE - we get a STALE response that takes just as long as fetching a new page from the backend.
We only notice the behavior when using
Cache-Control: stale-while-revalidate
on the backend server.
We do not experience the behavior when we use a standard hardcoded config, for example:

proxy_cache_valid 200 3M;
proxy_cache_valid 304 0;
proxy_cache_valid 404 410 1m;
proxy_cache_use_stale updating;
proxy_cache_background_update on;
The config above does return fast STALE responses.
The issue seems to be specific to the stale-while-revalidate header. NGINX does honor returning STALE (no problem there), but the update is not performed in the background (proxy_cache_background_update).
comment:6 by , 4 years ago
Hi, Team:
Same issue here.
We are using proxy_cache_background_update and proxy_cache_use_stale.
Occasionally, Nginx does not return the stale content/cache when making more than one stale request.
We did not use a Cache-Control header.
Behavior:
Request 1: Missing cache, Request delay: 500 ms (MISS)(OK)
Request 2: Cache valid, Request delay: 5 ms (HIT)(OK)
Request 3: Cache invalid, Request delay: 5 ms (STALE)(OK)
Request 4 or 5 or more: Cache invalid, Request delay: 500 ms (HIT) (Not correct; requests should continue to return stale content until the latest cache has been fetched)
Is my understanding correct in Request 4?
Sorry for disturbing.
nginx version: nginx/1.17.10 (Ubuntu)
Linux dd3-raynor-ubuntu 5.4.0-33-generic #37-Ubuntu SMP Thu May 21 12:53:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Regards
Raynor
comment:7 by , 4 years ago
As long as Request 4 is in the same connection as Request 3, it is expected that Request 4 won't be processed until the background cache update initiated by Request 3 finishes, see ticket:1723#comment:1.
It is true to a certain degree that nginx updates the stale cache entry in the context of a client request which triggered the update. But the client is supposed to receive the entire cached stale response quickly without waiting for a new backend response. However the next request in the same client connection will be delayed until the cache entry is updated. Depending on how you measure the response time, this may look like a slow response, while the actual cached response is served quickly.
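The connection-level effect described here can be illustrated with a toy simulation. This is plain Python, not nginx: a stub server stands in for nginx, answering the "stale" request immediately and then keeping the connection busy while it "refreshes the cache", so the next request on the same keep-alive connection appears slow even though the stale response itself was fast.

```python
# Toy simulation of the keep-alive timing effect described above. NOT nginx -
# it only shows why measuring on one connection conflates the fast stale
# response with the refresh that blocks the connection afterwards.
import socket
import threading
import time

REFRESH_DELAY = 0.5  # stand-in for the backend fetch during the cache update

def server(listener):
    conn, _ = listener.accept()
    with conn:
        for i in range(2):
            conn.recv(1024)                    # read one request
            body = b"stale" if i == 0 else b"fresh"
            conn.sendall(
                b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s"
                % (len(body), body)
            )
            if i == 0:
                # The "background" update keeps this connection busy,
                # delaying the next request on it.
                time.sleep(REFRESH_DELAY)

def timed_request(conn):
    start = time.monotonic()
    conn.sendall(b"GET / HTTP/1.1\r\nHost: x\r\n\r\n")
    conn.recv(1024)
    return time.monotonic() - start

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
t_stale = timed_request(client)  # served immediately: fast
t_next = timed_request(client)   # waits for the "refresh": ~REFRESH_DELAY
client.close()

print(f"stale response: {t_stale:.3f}s, next request on same conn: {t_next:.3f}s")
```

Measuring request 4 on a fresh connection instead (e.g. curl with `Connection: close`) would avoid attributing the refresh time to the stale response.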