Opened 13 years ago

Closed 4 years ago

#157 closed defect (fixed)

cache max_size limit applied incorrectly with xfs — at Version 8

Reported by: Tom Kostin Owned by: somebody
Priority: minor Milestone:
Component: nginx-core Version: 1.2.x
Keywords: proxy_cache_path max_size Cc:
uname -a: Linux sfw4-p 2.6.18-238.12.1.el5 #1 SMP Tue May 31 13:22:04 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.2.0
built by gcc 4.1.2 20080704 (Red Hat 4.1.2-50)
TLS SNI support disabled
configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/tmp/client_body/ --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi/ --http-proxy-temp-path=/var/lib/nginx/tmp/proxy/ --with-http_ssl_module --with-http_stub_status_module --with-http_geoip_module

Description (last modified by Maxim Dounin)

No matter what I write in the inactive= parameter of the proxy_cache_path directive, it is always resolved to 10 minutes.

I tried different formats:
inactive=14d
inactive=2w
inactive=336h

but the result is always the same: 10 minutes.

Checked both by counting files in the cache and by manually doing ls -ltr in the cache dir.

This bug exists in 1.0.15 too.

This bug does NOT exist in 0.8.55 (the version we had to roll back to).

Relevant lines:

proxy_cache_path /ssd/two levels=1:2:2 keys_zone=static:2000m inactive=14d max_size=120000m;
proxy_temp_path /ssd/temp;

In one of the server blocks:

location /images {
    expires 5d;
    proxy_pass http://static-local.domain:80;
    proxy_cache_valid 2w;
    proxy_cache static;
    proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
}

Change History (8)

comment:1 by Maxim Dounin, 13 years ago

Looks strange, it works fine here and there is no apparent code which may cause such behaviour. Could you please reproduce the problem in some test environment with minimal load (just one test request), and provide the full config and a debug log (see http://wiki.nginx.org/Debugging) from request start till response removal from cache?

Actually, I would rather suggest that it's the max_size limit which forces cache file removal (this indeed changed in 1.0.1, and may theoretically result in incorrect behaviour on some exotic file systems). Could you please provide details on the file system used, the total size of the cache (as shown by "du -h"), as well as "ls -l" and "stat" output for several example cache files?
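For background: nginx charges each cache entry's estimated on-disk size against max_size, and that estimate takes the allocated blocks into account. A minimal sketch of that block-based accounting (an illustration of the idea, not nginx source):

#include <sys/stat.h>

/* Sketch, not nginx source: a block-based estimate of a cache file's
 * on-disk size.  st_blocks is counted in 512-byte units, so a file
 * system that temporarily over-reports allocated blocks inflates the
 * estimate, the running total crosses max_size too early, and the
 * cache manager evicts entries long before real disk usage warrants it. */

static off_t
estimated_cache_size(const struct stat *st)
{
    off_t  allocated = (off_t) st->st_blocks * 512;

    /* charge the larger of the logical and the allocated size */
    return (allocated > st->st_size) ? allocated : st->st_size;
}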

comment:2 by Tom Kostin, 13 years ago

File system is xfs.

Output from df -h:
/dev/cciss/c0d1p1 137G 16G 122G 12% /ssd

mount output:
/dev/cciss/c0d1p1 on /ssd type xfs (rw,nobarrier,inode64,allocsize=64m)

[root@host nginx]# stat /ssd/cache/f/ff/ff/420a652e7173818543b2945d0b3fffff

File: `/ssd/cache/f/ff/ff/420a652e7173818543b2945d0b3fffff'
Size: 12256 Blocks: 24 IO Block: 4096 regular file

Device: 6811h/26641d Inode: 94385039 Links: 1
Access: (0600/-rw-------) Uid: ( 101/ nginx) Gid: ( 502/ nginx)
Access: 2012-04-30 11:53:33.469525290 -0700
Modify: 2012-04-30 11:53:33.469525290 -0700
Change: 2012-04-30 11:53:33.469525290 -0700

[root@host nginx]# stat /ssd/cache/f/ff/fe/b13927e6b907618ca33e7d9cc03fefff

File: `/ssd/cache/f/ff/fe/b13927e6b907618ca33e7d9cc03fefff'
Size: 6961 Blocks: 16 IO Block: 4096 regular file

Device: 6811h/26641d Inode: 86051919 Links: 1
Access: (0600/-rw-------) Uid: ( 101/ nginx) Gid: ( 502/ nginx)
Access: 2012-04-30 09:42:26.358122808 -0700
Modify: 2012-04-30 09:42:26.358122808 -0700
Change: 2012-04-30 09:42:26.397122164 -0700

[root@host nginx]# ls -l /ssd/cache/f/ff/ff/420a652e7173818543b2945d0b3fffff
-rw------- 1 nginx nginx 12256 Apr 30 11:53 /ssd/cache/f/ff/ff/420a652e7173818543b2945d0b3fffff

comment:3 by Maxim Dounin, 13 years ago

I suspect the allocsize=64m is the culprit. As per http://stackoverflow.com/questions/7992828/how-to-modify-the-number-of-xfs-pre-allocated-blocks, xfs reports this preallocation in st_blocks till the file is closed, and this results in incorrect current cache size calculations in nginx, as nginx doesn't know that the allocated size shrinks after the file is closed. Could you please try without it?
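The effect is easy to observe directly. A small illustrative program (the path below is hypothetical) compares the st_blocks an open file reports via fstat() with what stat() reports after close(); on an affected mount, such as xfs with allocsize=64m, the first figure can include the preallocation and be far larger than the second:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int
main(void)
{
    const char  *path = "/ssd/prealloc-test";  /* hypothetical xfs mount */
    char         buf[12256];
    struct stat  st;
    int          fd;

    memset(buf, 'x', sizeof(buf));

    fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd == -1 || write(fd, buf, sizeof(buf)) != (ssize_t) sizeof(buf)) {
        perror("open/write");
        return 1;
    }

    /* while the file is open, st_blocks may include preallocation */
    fstat(fd, &st);
    printf("open:   st_size=%lld st_blocks=%lld\n",
           (long long) st.st_size, (long long) st.st_blocks);

    close(fd);

    /* after close, xfs trims the preallocation and st_blocks shrinks */
    stat(path, &st);
    printf("closed: st_size=%lld st_blocks=%lld\n",
           (long long) st.st_size, (long long) st.st_blocks);

    return 0;
}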

comment:4 by Tom Kostin, 13 years ago

Yes, without allocsize=64m the nginx cache works as expected.
Thanks.

comment:5 by Maxim Dounin, 13 years ago

Keywords: max_size added
Priority: major → minor
Status: new → accepted
Summary: inactive= parameter is ignored in proxy_cache_path directive → cache max_size limit applied incorrectly with xfs

Ok, I've updated the summary of the ticket and am leaving it open for now. We probably need a way to cope with this xfs behaviour.

comment:6 by Maxim Dounin, 6 years ago

See also #1712.

comment:7 by Maxim Dounin <mdounin@…>, 4 years ago

In 7669:52b34c3f89b4/nginx:

Too large st_blocks values are now ignored (ticket #157).

With XFS, using "allocsize=64m" mount option results in large preallocation
being reported in the st_blocks as returned by fstat() till the file is
closed. This in turn results in incorrect cache size calculations and
wrong clearing based on max_size.

To avoid too aggressive cache clearing on such volumes, st_blocks values
which result in sizes larger than st_size and eight blocks (an arbitrary
limit) are no longer trusted, and we use st_size instead.

The ngx_de_fs_size() counterpart is intentionally not modified, as
it is used on closed files and hence not affected by this problem.

comment:8 by Maxim Dounin, 4 years ago

Description: modified (diff)
Resolution: fixed
Status: accepted → closed

Should be fixed now.
