Custom Query (2296 matches)
Results (43 - 45 of 2296)
Ticket | Resolution | Summary | Owner | Reporter |
---|---|---|---|---|
#2557 | invalid | Odd STALE response under long-run stability test | ||
Description |
Hi, in a test environment we ran some 12h+ tests to verify an odd behaviour we noticed in production. A couple of scripts were requesting the same file, with a random sleep between a few seconds and 65 seconds. Nginx is acting as a reverse proxy and caching server. The file always comes from the same real server; in the worst case the difference between Date and Last-Modified on the real server is 6 seconds. Nginx has STALE activated, and the nginx cache is configured to cache "any" for 1 second. On nginx we are therefore expecting a difference between Date and Last-Modified of about 7-8 seconds, 9 seconds at most. Are we wrong? Here's the combined output of the scripts:

```
Date                            Cache status   Date - Last-Modified (s)   Next sleep
Fri Oct 27 00:28:26 CEST 2023   MISS           2                          81137 ms
Fri Oct 27 00:28:56 CEST 2023   MISS           2                          60115 ms
Fri Oct 27 00:29:47 CEST 2023   MISS           3                          60890 ms
Fri Oct 27 00:29:57 CEST 2023   STALE          13                         60968 ms
Fri Oct 27 00:30:48 CEST 2023   MISS           2                          26001 ms
Fri Oct 27 00:30:58 CEST 2023   STALE          12                         61950 ms
Fri Oct 27 00:31:14 CEST 2023   MISS           1                          58485 ms
Fri Oct 27 00:32:00 CEST 2023   MISS           1                          59154 ms
Fri Oct 27 00:32:13 CEST 2023   MISS           3                          27922 ms
```

You can spot a couple of STALE responses 12 and 13 seconds old. Logs are attached...
Here's the proxy cache configuration:

```
proxy_connect_timeout 2s;        #proxy_connect_timeout 10s;
proxy_send_timeout 2s;           #proxy_send_timeout 10s;
proxy_read_timeout 2s;           #proxy_read_timeout 5s;
proxy_buffering on;              #TODO TO TEST
proxy_cache_lock on;             #CACHE LOCK
proxy_cache_lock_timeout 100ms;  #CACHE LOCK, old value 1s
proxy_cache_lock_age 50ms;       #CACHE LOCK, old value default = 5s
proxy_cache_key $proxy_host$uri;
proxy_cache_methods GET HEAD POST;
proxy_cache_use_stale updating error timeout invalid_header http_500 http_502 http_503 http_504;  # http_404; No 404 stale response
proxy_cache_revalidate on;
proxy_cache_background_update on;
proxy_http_version 1.1;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
proxy_hide_header Accept-Encoding;  # TODO to be discussed
proxy_next_upstream error timeout invalid_header http_403 http_404 http_502 http_503 http_504;
proxy_set_header Accept-Encoding none;
proxy_set_header Connection "keep-alive";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_cache_path /data/nginx/manifest levels=1:1 keys_zone=cache-manifest:1m inactive=1s max_size=100m min_free=50m use_temp_path=off manager_files=10000 manager_threshold=1000ms manager_sleep=50ms loader_files=10000 loader_threshold=1000ms;
```

And here's the cache configuration in the relevant location:

```
set $cache cache-manifest;
proxy_pass http://usp-live-preprod;
proxy_cache_valid any 1s;
add_header X-Proxy-Cache $upstream_cache_status;
add_header Access-Control-Allow-Origin *;  # Can restrict this to specific origins.
add_header 'Access-Control-Expose-Headers' 'Date,X-CDN' always;
add_header 'Nginx-Cache-Name' $cache always;
```

Why are we sometimes getting a STALE response instead of a MISS? Here are some results:
- 2 responses were 11 seconds old
- 4 responses were 10 seconds old
- 7 responses were 9 seconds old

You can see that this happens only sometimes... BR Francesco |
|||
#2556 | invalid | Module ABI breakage with NGINX Plus R30 P1 | ||
Description |
The said binaries claim to be based on 1.25.1; however, with the h2 patch applied, the ngx_http_v2_connection_s structure would have changed to include two new members, which renders modules compiled against the actual 1.25.1 sources incompatible with the shipped binaries if they reference the h2 connection structure. |
|||
#2555 | fixed | Race when both aio threads and background subrequests are used | ||
Description |
When specific configurations are combined (e.g. aio threads + proxy_cache_background_update on), multiple threads can operate on the same request's connection structure, and https://hg.nginx.org/nginx/file/tip/src/event/ngx_event_openssl.c#l3361 can cause an ngx_ssl_recv() call from another thread to access a NULL c->ssl pointer. |