Opened 9 months ago

Closed 9 months ago

#2598 closed defect (duplicate)

ngx_http_limit_req_module documentation should specify that rate limiting works on a millisecond basis

Reported by: alexgarel@… Owned by:
Priority: minor Milestone:
Component: documentation Version: 1.25.x
Keywords: Cc:
uname -a: Linux off2 5.15.74-1-pve #1 SMP PVE 5.15.74-1 (Mon, 14 Nov 2022 20:17:15 +0100) x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.18.0
built with OpenSSL 1.1.1n 15 Mar 2022 (running with OpenSSL 1.1.1w 11 Sep 2023)
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -ffile-prefix-map=/build/nginx-x3gsRV/nginx-1.18.0=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-compat --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_sub_module

Description

While trying to use rate limiting on NGINX, I was a bit lost: I had specified a rate limit of 6000r/m, yet a client making 154 requests would have been rejected (I was in dry run mode, so no harm done).

I had set burst=100 because I thought burst was some extra allowance on top of the 6000 requests.

I found the explanation thanks to a Stack Overflow answer (stackoverflow.com/a/70989063/2886726) citing the nginx blog post:

In the example, the rate cannot exceed 10 requests per second. NGINX actually tracks requests at millisecond granularity, so this limit corresponds to 1 request every 100 milliseconds (ms). Because we are not allowing for bursts (see the next section), this means that a request is rejected if it arrives less than 100ms after the previous permitted one.

This information about millisecond granularity, and the fact that the rate limit is effectively translated into a minimum interval between individual requests, is essential and should be part of the reference documentation.

In my case, I can in fact set burst=6000 so that the limit effectively applies at minute granularity.
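For concreteness, the arithmetic here works out roughly as follows (a back-of-the-envelope sketch in Python, not actual nginx behaviour; the variable names are made up):

    # 6000r/m corresponds to 100 requests per second on average.
    rate_per_sec = 6000 / 60
    min_interval_ms = 1000 / rate_per_sec
    print(min_interval_ms)        # 10.0 -- roughly one permitted request every 10 ms
    # burst=100 only absorbs about 100 requests arriving faster than that pace,
    # so roughly 150 requests fired back to back will start being rejected,
    # while burst=6000 can absorb a whole minute's worth arriving at once.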

It is important because other rate limiting implementations may use different techniques, such as leaky buckets, which do not lead to the same type of configuration.

Change History (3)

comment:1 by Maxim Dounin, 9 months ago

Resolution: invalid
Status: new → closed

I had set burst=100 because I thought burst was some extra allowance on top of the 6000 requests.

The limit_req module limits the rate at which the client is allowed to make requests, not the total number of requests it is allowed to make within a minute.

The documentation clearly says that limiting is done using the "leaky bucket" method. This means that each request is added to a bucket and is allowed to pass as long as there is some room in the bucket, but is rejected while the bucket is full (see, for example, any general description of the algorithm). The size of the bucket is set with burst=, and the rate at which the bucket leaks is set with rate=. Further, this is explained in the docs:

... If the requests rate exceeds the rate configured for a zone, their processing is delayed such that requests are processed at a defined rate. Excessive requests are delayed until their number exceeds the maximum burst size in which case the request is terminated with an error. ...
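To make the bookkeeping concrete, the accounting above can be pictured roughly like this (a simplified Python sketch of the leaky-bucket idea, not nginx's actual source; all names and units are illustrative):

    # Simplified sketch of the leaky-bucket bookkeeping described above.
    class LeakyBucket:
        def __init__(self, rate_per_sec, burst):
            self.rate = rate_per_sec   # how fast the bucket leaks (rate=)
            self.burst = burst         # size of the bucket (burst=)
            self.excess = 0.0          # current bucket level, in requests
            self.last = None           # time of the last accepted request, in seconds

        def allow(self, now):
            if self.last is None:
                # First request for this key: start with an empty bucket.
                self.excess, self.last = 0.0, now
                return True
            # The bucket leaks continuously at the configured rate,
            # and each incoming request tries to add one more unit.
            level = max(self.excess - (now - self.last) * self.rate + 1, 0.0)
            if level > self.burst:
                return False           # bucket overflowed: reject the request
            self.excess, self.last = level, now
            return True

With burst= left at 0 the bucket holds nothing beyond the leak itself, which is why two closely spaced requests are rejected no matter how finely time is measured.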

While the blog post explanation involving time granularity might be somewhat easier to understand for some, it is not really correct. Depending on nginx options, the particular workload, and OS capabilities, the timekeeping granularity used by nginx might differ, see timer_resolution. Still, regardless of the timekeeping granularity in use, as long as burst= is not set or is set to a low value, adjacent requests are likely to be rejected, as two requests with 1 millisecond between them imply a rate of 1000 requests per second. Timekeeping granularity might affect calculation errors in some corner cases, such as when requests are seen by nginx as arriving at the same time or nearly the same time, but that's all.
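Using the sketch above, the point about adjacent requests looks like this (illustrative numbers only, matching the reporter's 6000r/m setting):

    # 6000r/m is 100 requests per second; burst is left at its default of 0.
    limiter = LeakyBucket(rate_per_sec=6000 / 60, burst=0)
    print(limiter.allow(0.000))   # True  -- the first request passes
    print(limiter.allow(0.001))   # False -- 1 ms later implies 1000 r/s, far above 100 r/s
    print(limiter.allow(0.010))   # True  -- 10 ms after the first accepted request is fine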

A better analogy would be the speeding ticket you'll get if you try to drive over the speed limit. Regardless of the time granularity being used, you'll get a ticket as long as you drive at a speed above the limit.

Closing this as a duplicate of #2253.

comment:2 by Maxim Dounin, 9 months ago

Resolution: invalid
Status: closed → reopened

comment:3 by Maxim Dounin, 9 months ago

Resolution: duplicate
Status: reopened → closed