Custom Query (2311 matches)
Results (70 - 72 of 2311)
Ticket | Resolution | Summary | Owner | Reporter |
---|---|---|---|---|
#2540 | invalid | nginx stable 1.24 issues with cache file deletion under heavy load | ||
Description |
Hi, during some synthetic benchmarks on our nginx we observed a strange behaviour: after the benchmark was stopped and the cache files' validity had expired, we found a large number of files (>200K) that still appeared in lsof of the cache directory, while a find -type f of the same directory returned nothing. The space inside the directory was still in use. Our test:
see nginx initial status:
[root@conginx01 live]# find /data/nginx/live/ -type f
[root@conginx01 live]# lsof /data/nginx/live/ | grep deleted | wc -l
0
[root@conginx01 live]# df -h /data/nginx/live/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G     0  205G   0% /data/nginx/live
During the 10-minute test we measured:
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  196G  9.8G  96% /data/nginx/live
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  196G   10G  96% /data/nginx/live
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  196G  9.8G  96% /data/nginx/live
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  196G   10G  96% /data/nginx/live
at this point this is the status:
[root@conginx01 live]# find /data/nginx/live/ -type f | wc -l
345645
[root@conginx01 live]# lsof /data/nginx/live/ | grep deleted | wc -l
359
[root@conginx01 live]# df -h /data/nginx/live/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  195G   11G  96% /data/nginx/live
At this point everything looks FINE, and there isn't any traffic on the nginx server anymore. After a while we see:
[root@conginx01 live]# df -h /data/nginx/live/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  143G   63G  70% /data/nginx/live
[root@conginx01 live]# find /data/nginx/live/ -type f | wc -l
240997
[root@conginx01 live]# lsof /data/nginx/live/ | grep deleted | wc -l
11810
We still wait for all cache items to expire; finally we see:
[root@conginx01 live]# find /data/nginx/live/ -type f | wc -l
0
[root@conginx01 live]# df -h /data/nginx/live/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  134G   72G  65% /data/nginx/live
[root@conginx01 live]# date; lsof /data/nginx/live/ | grep deleted | wc -l
Wed Aug 30 14:39:41 CEST 2023
268690
[root@conginx01 live]# date; lsof /data/nginx/live/ | grep deleted | wc -l
Wed Aug 30 14:41:05 CEST 2023
268690
[root@conginx01 live]# date; lsof /data/nginx/live/ | grep deleted | wc -l
Wed Aug 30 14:42:21 CEST 2023
268690
Most of them are unique cache files; some have 2, 3, 4 or 5 occurrences, see:
[root@conginx01 live]# cat log | grep "/data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206"
nginx 3277770 nginx 2087r REG 0,47 47685 88106191 /data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206 (deleted)
nginx 3277774 nginx *637r REG 0,47 47685 88106191 /data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206 (deleted)
nginx 3277776 nginx 7101r REG 0,47 47685 88106191 /data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206 (deleted)
nginx 3277779 nginx *163r REG 0,47 47685 88106191 /data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206 (deleted)
nginx 3277789 nginx 94r REG 0,47 47685 88106191 /data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206 (deleted)
We are monitoring the disk usage with Nagios; in the previous tests "those files" stayed there from 16:39 yesterday until 10:36, when we started a new load test. In the meantime no clients were connected to nginx. kill -USR1 does not clean up the "(deleted)" files; kill -HUP cleans them up immediately. We are wondering whether this behaviour is correct. I'll put our cache configuration down here:
open_file_cache_errors on;
proxy_cache_path /data/nginx/live levels=1:1:2 keys_zone=cache-live:200m inactive=10m max_size=200g min_free=10g use_temp_path=off manager_files=10000 manager_threshold=5000ms manager_sleep=50ms loader_files=10000 loader_threshold=1000ms;
proxy_connect_timeout 2s;
proxy_send_timeout 2s;
proxy_read_timeout 2s;
proxy_buffering on;
proxy_cache_lock on;
proxy_cache_lock_timeout 100ms;
proxy_cache_lock_age 50ms;
proxy_cache_key $host$uri;
proxy_cache_methods GET HEAD POST;
proxy_cache_use_stale updating error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_cache_revalidate on;
proxy_cache_background_update on;
proxy_http_version 1.1;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
proxy_hide_header Accept-Encoding;
proxy_next_upstream error timeout invalid_header http_403 http_404 http_502 http_503 http_504;
proxy_set_header Accept-Encoding none;
proxy_set_header Connection "keep-alive";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_max_temp_file_size 0; # Get rid of [warn] 30083#30083: *511554 an upstream response is buffered to a temporary file /data/nginx_temp/0000054131 while reading upstream
proxy_buffers 64 8k; # http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers
gzip on;
gzip_proxied off;
Then in the appropriate location block we include this one:
set $cache cache-live;
proxy_cache_valid 400 401 403 405 2s;
proxy_cache_valid 404 412 500 501 502 503 504 505 509 0s;
proxy_cache_valid any 5m;
open_file_cache_valid 10m; #LIVE
open_file_cache max=200000 inactive=5m; #LIVE
gzip off;
Best Regards,
Francesco |
|||
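A plausible reading of the behaviour in #2540 (an assumption, not a confirmed diagnosis): the configuration combines open_file_cache max=200000 inactive=5m with open_file_cache_valid 10m, so worker processes may keep descriptors of cache files in their open-file caches even after the cache manager has unlinked them, and the tmpfs space is released only once those cached descriptors age out or the workers are replaced. That would also be consistent with the observed signal behaviour: USR1 only reopens log files, while HUP starts fresh worker processes and gracefully shuts down the old ones, closing every descriptor they held. A minimal configuration sketch for testing this hypothesis follows; the directives are real nginx directives, but the specific values are illustrative only.

```nginx
# Hypothetical test configuration for the scenario in #2540 (values are
# illustrative, not a recommendation). Keep the reporter's cache zone:
proxy_cache_path /data/nginx/live levels=1:1:2 keys_zone=cache-live:200m
                 inactive=10m max_size=200g min_free=10g use_temp_path=off;

# Variant A: disable descriptor caching entirely, so unlinked cache files
# are closed as soon as the last in-flight response finishes.
open_file_cache off;

# Variant B: keep descriptor caching, but let cached descriptors expire
# no later than the cache entries themselves.
#open_file_cache        max=200000 inactive=1m;
#open_file_cache_valid  1m;
```

If the `lsof | grep deleted` count then drops to zero shortly after traffic stops, the held descriptors come from the open file cache rather than from a leak.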
#2539 | fixed | --with-http_v3_module changes the layout of the ngx_connection_s structure that can be used by modules, but does not change the signature | ||
Description |
Nginx compiled with v3 support has the following signature:
./configure --with-http_v3_module && make -j8
egrep -ao '.,.,.,[01]{34}' objs/nginx
8,4,8,0000111111010111001110111111000110
Nginx compiled without v3 support will have the same signature:
./configure && make -j8
egrep -ao '.,.,.,[01]{34}' objs/nginx
8,4,8,0000111111010111001110101111000110
But nginx compiled with v3 support has a different struct ngx_connection_s layout: it has an additional 'quic' field - https://trac.nginx.org/nginx/browser/nginx/src/core/ngx_connection.h?rev=58afcd72446ff33811e773f1cabb7866a92a09a0#L153. Thus, if we compile a module for nginx without v3 support and try to load it into nginx with v3 support, the module will be loaded successfully because the versions and signatures match, but the module will behave badly because the structure layouts differ. |
|||
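To make the hazard described in #2539 concrete, here is a small, self-contained C sketch. It is illustrative only and is not nginx source code; the struct, field, and macro names are placeholders standing in for ngx_connection_s and the quic member added by the HTTP/3 build option. It shows why two builds can agree on version and signature strings while disagreeing on member offsets.

```c
#include <stdio.h>
#include <stddef.h>

/* Placeholder for a structure whose layout depends on a build option,
 * analogous to ngx_connection_s gaining a 'quic' pointer when nginx is
 * configured with --with-http_v3_module. */
struct conn {
    void *data;
#if defined(WITH_V3)
    void *quic;          /* present only in the v3-enabled build */
#endif
    int   fd;            /* its offset shifts when the field above appears */
};

int main(void)
{
    /* Compile once with -DWITH_V3 and once without to see the offsets
     * differ even though nothing else about the build changed. */
    printf("offsetof(struct conn, fd) = %zu\n", offsetof(struct conn, fd));
    return 0;
}
```

A module compiled against the layout without the extra field, but loaded into a server built with it, would read or write the wrong member, which matches the "behave badly" outcome the reporter describes.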
#2538 | duplicate | Site has TLS 1.2 connection despite being configured with TLS 1.3 only | ||
Description |
Hello, I'm running nginx on Archlinux. |
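The description of #2538 is truncated in this listing, so the actual configuration is not visible. As a hedged illustration of the scenario in the summary (server name and certificate paths below are placeholders): a TLS 1.3-only server block looks like the sketch that follows, and one common reason a TLS 1.2 handshake still succeeds against such a site is that another server block listening on the same IP and port, typically the default server, still allows TLSv1.2, since the protocol can be settled before the requested virtual server is selected.

```nginx
# Hypothetical TLS 1.3-only virtual host; names and paths are placeholders.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    ssl_protocols       TLSv1.3;
}
```

A quick client-side check is `openssl s_client -connect example.com:443 -servername example.com -tls1_2`; if that handshake completes, it is worth reviewing the ssl_protocols setting of every other server block sharing the same listen socket.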