Opened 9 years ago

Closed 8 years ago

#749 closed defect (worksforme)

Nginx not correctly cleaning fastcgi cache

Reported by: FredericA. Owned by:
Priority: minor Milestone:
Component: nginx-core Version: 1.7.x
Keywords: Cc:
uname -a: Linux 3.10.23-xxxx-grs-ipv6-64 #1 SMP Mon Dec 9 16:02:37 CET 2013 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.4.4
built by gcc 4.7.2 (Debian 4.7.2-5)
TLS SNI support enabled
configure arguments: --with-http_ssl_module --with-http_stub_status_module --add-module=/root/nginx/nginx-upload-progress-module-master/ --add-module=/root/nginx/nginx-push-stream-module-master/ --with-pcre=/root/nginx/pcre-8.33/ --add-module=/root/nginx/mapcache/build/nginx --with-http_secure_link_module

Description

Hello,

I'm using nginx as a cache for a busy backend (dynamic tiles server).
The configuration is basically as follows:

In the http block:
fastcgi_cache_path /dev/shm/nginx levels=1:2 keys_zone=fcgiA:50m inactive=30m max_size=2048m;

In the location block, something like:
fastcgi_cache fcgiA;
fastcgi_cache_key "fcgiA $request_uri";
fastcgi_cache_valid 30m;
fastcgi_pass unix:/var/run/sA.sock;
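For context, the two fragments fit together roughly as in the following minimal sketch; the listen port, the location path and the include of fastcgi_params are illustrative assumptions rather than the exact production config, and the events block and other boilerplate are omitted:

  http {
      fastcgi_cache_path /dev/shm/nginx levels=1:2 keys_zone=fcgiA:50m
                         inactive=30m max_size=2048m;

      server {
          listen 80;                                # assumed
          location /tiles/ {                        # assumed path
              fastcgi_cache       fcgiA;
              fastcgi_cache_key   "fcgiA $request_uri";
              fastcgi_cache_valid 30m;              # applies to 200, 301 and 302 responses
              fastcgi_pass        unix:/var/run/sA.sock;
              include             fastcgi_params;   # assumed; passes the standard FastCGI parameters
          }
      }
  }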

Unfortunately, nginx seems to use much more than 2048 MB, eventually saturating the /dev/shm partition (which is mounted as a tmpfs of 32 GB). proxy_cache works perfectly, by the way; I only have this problem with the fastcgi cache.

Do you think this problem could be related to the huge number of files stored in /dev/shm/nginx (>500,000), preventing the nginx cache manager process from doing its work? Maybe I should change the default "[loader_files=number] [loader_sleep=time] [loader_threshold=time]" parameters of fastcgi_cache_path, or increase the number of levels used to store cached files?
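For illustration, tuning those parameters could look something like the sketch below, with deeper hashing levels and explicit loader settings; the numbers are assumptions chosen to show the syntax, not tested recommendations:

  fastcgi_cache_path /dev/shm/nginx levels=2:2 keys_zone=fcgiA:50m
                     inactive=30m max_size=2048m
                     loader_files=200 loader_sleep=50ms loader_threshold=300ms;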

Thanks for your help!

Change History (2)

comment:1 by FredericA., 9 years ago (in reply to: description)

I found some errors like this in my logs:
"ngx_slab_alloc() failed: no memory in cache keys zone fcgiA"
I think this is related to my problem and I will increase the shared memory zone size (50m -> 100m).
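For reference, that change amounts to enlarging the keys_zone size in the fastcgi_cache_path directive, e.g.:

  fastcgi_cache_path /dev/shm/nginx levels=1:2 keys_zone=fcgiA:100m
                     inactive=30m max_size=2048m;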

comment:2 by Maxim Dounin, 8 years ago

Resolution: worksforme
Status: new → closed

There are various cases in which the cache manager will not be able to maintain the specified maximum cache size, notably:

  • before the cache loader has finished working;
  • over very short time periods (the cache manager only checks max_size once per 10 seconds if there are no inactive items to remove);
  • if cache files are added faster than a single process can remove them;
  • if there are cache entries locked for a long time, usually because nginx worker processes previously crashed or were killed.

In my practice, the most common cause has been worker process crashes due to bugs in 3rd-party modules.
