Opened 4 years ago
Closed 4 years ago
#2346 closed defect (invalid)
limit_req_zone incorrectly blocks requests
| Reported by: | Maxim | Owned by: | |
|---|---|---|---|
| Priority: | minor | Milestone: | |
| Component: | nginx-module | Version: | 1.19.x |
| Keywords: | limit_req_zone | Cc: | |
| uname -a: | Linux 065bb1414626 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019 x86_64 GNU/Linux | | |
| nginx -V: | nginx version: nginx/1.21.6; built by gcc 10.2.1 20210110 (Debian 10.2.1-6); built with OpenSSL 1.1.1k 25 Mar 2021 (running with OpenSSL 1.1.1n 15 Mar 2022); TLS SNI support enabled; configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -ffile-prefix-map=/data/builder/debuild/nginx-1.21.6/debian/debuild-base/nginx-1.21.6=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie' | | |
Description
Hello!
I am trying to set up limit_req_zone for load balancing.
My config:
limit_req_zone $binary_remote_addr zone=block_1:10m rate=50r/s;
server {
    server_name test.local;
    listen 80;

    location / {
        limit_req zone=block_1;
        limit_req_status 418;
        root /etc/nginx/conf.d/;
    }
}
For testing I use yandex-tank with this config:
phantom:
  address: mysite.com:80 # [Target's address]:[target's port]
  uris:
    - /
  load_profile:
    load_type: rps # schedule load by defining requests per second
    schedule: line(50, 50, 1m) # constant 50 rps held for 1 minute
  headers:
    - "[Referer: /]"
  ssl: false
console:
  enabled: false # enable console output
telegraf:
  enabled: false # let's disable telegraf monitoring for the first time
overload:
  enabled: true
  package: yandextank.plugins.DataUploader
  token_file: "token.txt"
and the results are very strange:
for a target of 30 rps the actual rate is ~18 rps
https://overload.yandex.net/520419#tab=test_data&tags=&plot_groups=main&machines=&metrics=&slider_start=1651236460&slider_end=1651236520
for a target of 50 rps the actual rate is ~30 rps
https://overload.yandex.net/520421#tab=test_data&tags=&plot_groups=main&machines=&metrics=&slider_start=1651236627&slider_end=1651236687
docker-compose.yml
version: '3.5'
services:
  nginx:
    image: nginx
    volumes:
      - ./conf:/etc/nginx/conf.d
    ports:
      - '7777:80'
Change History (2)
comment:1 by , 4 years ago
| Description: | modified (diff) |
|---|---|
comment:2 by , 4 years ago
| Resolution: | → invalid |
|---|---|
| Status: | new → closed |
You are using limit_req without any burst set, so every request that arrives before 1/50 of a second has passed since the previous one will be rejected. The test results suggest this is happening, so the number of requests actually allowed is lower than the theoretical maximum. With a client that maintains the intervals between requests more precisely, you should be able to get closer to the configured limit.

On the other hand, it is quite normal for intervals between requests to vary in real life. To work in such real-life conditions, nginx provides the burst parameter, which makes it possible to tolerate traffic spikes while still maintaining the limit in the long run. Usually it is a good idea to set burst to the number of requests allowed over a couple of seconds. That is, something like the sketch below should be good enough for rate=50r/s.
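A minimal sketch of what that could look like, reusing the zone and location from the description (the burst=100 value is an assumption, roughly two seconds of traffic at 50 r/s):

limit_req_zone $binary_remote_addr zone=block_1:10m rate=50r/s;

server {
    server_name test.local;
    listen 80;

    location / {
        # queue short spikes of up to 100 requests instead of rejecting them,
        # while still enforcing 50 r/s on average
        limit_req zone=block_1 burst=100;
        limit_req_status 418;
        root /etc/nginx/conf.d/;
    }
}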
Depending on the particular use case, nodelay might also be a good idea, for instance as in the line sketched below.
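Assuming the same burst=100, requests within the burst would then be served immediately instead of being delayed to match the configured rate:

    limit_req zone=block_1 burst=100 nodelay;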
Note as well that 50r/s is a very large rate when limiting individual clients, and with burst properly set you may want to use much smaller values.

Hope this helps. If you need further help with configuring nginx, please use the support options available.