Opened 2 years ago

Closed 2 years ago

Last modified 2 years ago

#2286 closed defect (wontfix)

When I use the proxy_cache_background_update and proxy_cache_use_stale directives, expired cache content cannot be returned quickly.

Reported by: magicgaro@… Owned by:
Priority: minor Milestone:
Component: nginx-core Version: 1.19.x
Keywords: background update Cc: magicgaro@…
uname -a: Linux localhost.localdomain 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.19.9
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)
built with OpenSSL 1.1.1l 24 Aug 2021
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx --with-debug --with-cc-opt='-DNGX_LUA_USE_ASSERT -DNGX_LUA_ABORT_AT_PANIC -O2' --add-module=../ngx_devel_kit-0.3.1 --add-module=../echo-nginx-module-0.62 --add-module=../xss-nginx-module-0.06 --add-module=../ngx_coolkit-0.2 --add-module=../set-misc-nginx-module-0.32 --add-module=../form-input-nginx-module-0.12 --add-module=../encrypted-session-nginx-module-0.08 --add-module=../srcache-nginx-module-0.32 --add-module=../ngx_lua-0.10.20 --add-module=../ngx_lua_upstream-0.07 --add-module=../headers-more-nginx-module-0.33 --add-module=../array-var-nginx-module-0.05 --add-module=../memc-nginx-module-0.19 --add-module=../redis2-nginx-module-0.15 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.15 --add-module=../rds-csv-nginx-module-0.09 --add-module=../ngx_stream_lua-0.0.10 --with-ld-opt=-Wl,-rpath,/usr/local/nginx/luajit/lib --with-http_stub_status_module --with-http_ssl_module --with-http_realip_module --with-http_geoip_module --with-threads --with-http_v2_module --with-http_stub_status_module --with-http_sub_module --with-threads

Description (last modified by magicgaro@…)

When I use proxy_cache_background_update on; together with proxy_cache_use_stale timeout updating;, expired cache content can't be returned quickly. It seems to be delayed by about 200 milliseconds. The steps to reproduce are below.

The backend server sleeps for 5 seconds before responding.
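For reference, the relevant part of the configuration looks roughly like this (a sketch: the cache path, zone name, validity time, and backend address are illustrative placeholders rather than the exact values from my setup; the Cache response header is added from $upstream_cache_status):

proxy_cache_path /data/nginx/cache keys_zone=test_cache:10m inactive=10m;

server {
    listen 80;
    server_name test.lua;

    location / {
        proxy_pass http://127.0.0.1:8080;        # backend that sleeps 5 seconds before answering
        proxy_cache test_cache;
        proxy_cache_valid 200 10s;               # short validity so the entry expires quickly
        proxy_cache_use_stale timeout updating;  # serve stale content while the entry is updated
        proxy_cache_background_update on;        # refresh expired entries in the background
        add_header Cache $upstream_cache_status; # exposes MISS / HIT / STALE / UPDATING
    }
}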

1. Cache MISS, response time ~5s:
# time curl -i -H "Host:test.lua" "127.0.0.1"
HTTP/1.1 200 OK
Cache: MISS
real    0m5.085s

2. Cache HIT, response time ~0.004s:
# time curl -i -H "Host:test.lua" "127.0.0.1"
HTTP/1.1 200 OK
Cache: HIT
real    0m0.004s

3. Cache STALE, response time ~0.205s (this delay seems unreasonable):
# time curl -i -H "Host:test.lua" "127.0.0.1"
HTTP/1.1 200 OK
Cache: STALE
real    0m0.205s

4. Cache UPDATING, response time ~0.004s:
# time curl -i -H "Host:test.lua" "127.0.0.1"
HTTP/1.1 200 OK
Cache: UPDATING
real    0m0.004s

Is this a bug, or is it designed this way?

Change History (5)

comment:1 by magicgaro@…, 2 years ago

Description: modified (diff)

comment:2 by Maxim Dounin, 2 years ago

Resolution: wontfix
Status: new → closed

That's a known limitation of proxy_cache_background_update, see this thread for details, notably this message: since the response is actually finalized only once the background update finishes, the final cleanup which pushes the last bytes of the response might not happen until the update completes. As a result, with tcp_nopush on; the last bytes of the response might be delayed until the kernel decides to send them. One possible workaround is to disable tcp_nopush.

Note well that with proxy_cache_background_update, further requests on the same connection might be delayed until the background update completes, see #1723. In general it is a good idea to assume that a background update isn't faster than a normal update, though it makes it possible to return the response at the same time the update takes place.
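For illustration, a minimal sketch of the workaround (assuming the location block from the description above; the tcp_nopush line is the only actual change) would be:

location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_cache test_cache;
    proxy_cache_use_stale timeout updating;
    proxy_cache_background_update on;
    tcp_nopush off;   # do not hold the last bytes of the stale response
                      # until the background update has finished
}

Note that tcp_nopush only takes effect when sendfile is enabled, so the delay observed above assumes both are switched on at the http level.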

in reply to:  2 comment:3 by magicgaro@…, 2 years ago

Replying to Maxim Dounin:

That's a known limitation of proxy_cache_background_update, see this thread for details, notably this message: since the response is actually finalized only once the background update finishes, the final cleanup which pushes the last bytes of the response might not happen until the update completes. As a result, with tcp_nopush on; the last bytes of the response might be delayed until the kernel decides to send them. One possible workaround is to disable tcp_nopush.

Note well that with proxy_cache_background_update, further requests on the same connection might be delayed until the background update completes, see #1723. In general it is a good idea to assume that a background update isn't faster than a normal update, though it makes it possible to return the response at the same time the update takes place.

Thank you for the reply! I noticed another phenomenon, can you help me explain it? If I use the HEAD method to request the stale content, it is returned quickly.

comment:4 by Maxim Dounin, 2 years ago

Thank you for the reply! I noticed another phenomenon, can you help me explain it? If I use the HEAD method to request the stale content, it is returned quickly.

That's because nginx does not try to use tcp_nopush for HEAD requests, as it doesn't make sense there. Or, more strictly, it doesn't use tcp_nopush for any response which does not try to send a file along with some header; see the docs and src/os/unix/ngx_linux_sendfile_chain.c for details.
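For comparison (using the same test host as in the description), re-running step 3 of the description as a HEAD request:

# time curl -I -H "Host:test.lua" "127.0.0.1"

should return the Cache: STALE response in a few milliseconds rather than ~200 ms, since no file body is sent and tcp_nopush is never applied.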

in reply to:  4 comment:5 by magicgaro@…, 2 years ago

Replying to Maxim Dounin:

Thank you for the reply! I noticed another phenomenon, can you help me explain it? If I use the HEAD method to request the stale content, it is returned quickly.

That's because nginx does not try to use tcp_nopush for HEAD requests, as it doesn't make sense there. Or, more strictly, it doesn't use tcp_nopush for any response which does not try to send a file along with some header; see the docs and src/os/unix/ngx_linux_sendfile_chain.c for details.

I got it. Thank you very much for your answer.
