Opened 6 years ago
Closed 5 years ago
#1665 closed defect (wontfix)
Minimum zone size for limit_req strangely high
| Reported by: | | Owned by: | |
|---|---|---|---|
| Priority: | minor | Milestone: | |
| Component: | nginx-module | Version: | 1.14.x |
| Keywords: | limit_req page size | Cc: | |
| uname -a: | Linux hostname 3.10.108+ #1 SMP PREEMPT Thu Oct 18 17:03:12 CEST 2018 tilegx GNU/Linux | | |
| nginx -V: | nginx version: nginx/1.14.0 | | |
Description
Hi,
I'm running nginx on an embedded platform, namely a Tilera Tile-Gx36. The embedded http server is not intended for any heavy lifting, but for API calls that are possibly expensive. I tried using this config:
```
limit_req_zone 1 zone=global:32k rate=10r/s;
```
This works on my x86 computer, but not on the Tilera. I checked the zone requirements in the source code, and found this code:
```c
if (size < (ssize_t) (8 * ngx_pagesize)) {
    /* ... "zone \"%V\" is too small" ... */
}
```
The Tilera has a 64k page size. This limit means I have to allocate at least 512k of memory for each zone I want to define, which seems excessive on an embedded system that will run with a very low rate and a very low burst.
Is there any reason why this limit needs to be "at least 8 pages"?
Change History (3)
comment:1 by , 6 years ago
comment:2 by , 6 years ago
Thanks for the detailed answer. As this is a web server running an admin interface on an embedded device, with a very simple limit_req config, it sounds like we could safely patch nginx to reduce the limit to 4 pages. Thanks.
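For anyone in a similar situation, the local patch amounts to relaxing the size check quoted in the description (a sketch against `ngx_http_limit_req_module.c`, assuming nothing else in the deployment depends on the 8-page floor; not an upstream change):

```c
/* Relaxed from 8 * ngx_pagesize to 4 * ngx_pagesize (local patch only): */
if (size < (ssize_t) (4 * ngx_pagesize)) {
    ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                       "zone \"%V\" is too small", &value[i]);
    return NGX_CONF_ERROR;
}
```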
comment:3 by , 5 years ago
| Resolution: | → wontfix |
|---|---|
| Status: | new → closed |
Closing this: it doesn't look like there are practical reasons to change the limit to 4 pages.
This is because shared memory uses a slab allocator, which allocates memory by dividing a memory page into small fixed-size chunks. The absolute minimum for the slab allocator to work is 2 pages: one page is used for internal slab allocator structures, and the other for real allocations. This, however, is only enough for very simple cases, when all allocations from the memory zone use the same size.
With limit_req, the absolute minimum is 3 pages: with only 2 pages nginx won't be able to allocate the limit_req global structures, because there are two allocations, and they happen to use distinct slabs. This, however, does not take into account actual allocations of limit states, and, depending on the key size used, 3 pages may not be enough to store even one state.
Moreover, things become worse when different key sizes are used. When there are only a few pages available, only a few distinct states with differently sized keys can be stored. For example, if there are only 4 pages (with one of them used for slab allocator internal data), you'll be able to store a 64-byte state and a 128-byte state, but trying to store a 256-byte state will result in an error.
The "at least 8 pages" limit is a safety limit used by (almost) all modules which work with shared memory. It ensures that the specified shared memory size is big enough to store various global allocations, including slab allocator's own data, and there is some room to make allocations from.
Note well that in most real-world cases you actually need to store some states, and for things to work you need a reasonable shared memory size, in most cases more than 1 megabyte, since 1 megabyte is only enough to store about 16 thousand 64-byte states.
While changing this limit is possible, given the above the smallest practical value is about 4 pages. That would be about 256k on your platform, and I don't really think it makes a difference. On the other hand, the 8-page limit as used now seems better from the user experience point of view, as it ensures that the shared memory zone can be used for allocations of various sizes.