Opened 18 months ago

Closed 17 months ago

Last modified 17 months ago

#2496 closed defect (invalid)

UDP traffic bandwidth is not limited by proxy_upload_rate and proxy_download_rate

Reported by: m-cieslinski@… Owned by:
Priority: minor Milestone:
Component: nginx-module Version: 1.19.x
Keywords: udp ngx_stream_proxy_module proxy_upload_rate proxy_download_rate Cc: m-cieslinski@…
uname -a: Linux host 3.10.0-1160.80.1.el7.x86_64 #1 SMP Tue Nov 8 15:48:59 UTC 2022 x86_64 Linux
nginx -V: nginx version: nginx/1.19.7
built by gcc 10.2.1 20201203 (Alpine 10.2.1_pre1)
built with OpenSSL 1.1.1o 3 May 2022
TLS SNI support enabled
configure arguments: --with-compat --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-perl_modules_path=/usr/lib/perl5/vendor_perl --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-Os' --with-ld-opt=-Wl,--as-needed --with-stream

Description

Hi,
I have a problem with bandwidth limiting for UDP traffic in nginx Community Edition (non-enterprise). I've tested TCP/HTTP traffic and the bandwidth limit works fine, but for UDP it is not limiting at all. UDP traffic was also tested with a DNS config like the one at https://github.com/tatsushid/nginx-udp-lb-example/blob/master/nginx.conf.template, and still no limit was applied. Only the rate/requests-per-second limit works correctly. I followed this guide https://docs.nginx.com/nginx/admin-guide/security-controls/controlling-access-proxied-tcp/ and https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_upload_rate.
The UDP protocol in use is GELF UDP, proxied to an upstream Graylog server.
Nginx version tested: 1.19.X and latest.

events {
    worker_connections 1024;
}

stream {

    upstream gelf_tcp {
        hash $remote_addr;
        server graylog:12211 fail_timeout=10s;
    }

    server {
        listen 12201;
        proxy_timeout 10s;
        proxy_download_rate 20;
        proxy_upload_rate 11;
        proxy_connect_timeout 1s;
        proxy_pass gelf_tcp;
    }

    upstream gelf_udp {
        hash $remote_addr;
        server graylog:12211 fail_timeout=10s;
    }

    limit_conn_zone $binary_remote_addr zone=conn_perip:10m;

    server {
        listen 12201 udp;
        proxy_download_rate 20;
        proxy_responses 0;
        proxy_timeout 1s;
        proxy_upload_rate 11;
        proxy_pass gelf_udp;
        proxy_bind $remote_addr transparent;
        # limit_conn conn_perip 5;
        # limit_conn_log_level warn;
    }

    log_format proxy_log '$remote_addr [$time_local] '
                         '$protocol $status $bytes_sent $bytes_received '
                         '$session_time "$upstream_addr" '
                         '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

    access_log /var/log/nginx/access.log proxy_log buffer=32k;
    error_log /var/log/nginx/error.log warn;
}

Change History (6)

comment:1 by Maxim Dounin, 18 months ago

Only rate/request per sec is working correctly.

Note that with UDP, it is not possible to split a datagram into multiple packets. As such, the proxy_upload_rate and proxy_download_rate rates are applied on a per-packet basis. That is, once nginx hits a limit, it won't read any additional packets on the UDP session until the configured rate allows it.

For proxy_download_rate, this means that up to the recv socket buffer (on the socket to the upstream server) plus one packet (within nginx itself) can be buffered. For proxy_upload_rate, since the listening socket is shared among all clients, just one packet (within nginx itself) can be buffered; any additional packets will be dropped.
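The per-packet behaviour can be illustrated with a small token-bucket sketch (this is a simplified model for illustration only, not nginx's actual implementation; all names are made up):

```python
import time

class PacketRateLimiter:
    """Illustrative per-packet token-bucket limiter.

    Since a UDP datagram cannot be split, a whole packet is forwarded
    whenever the budget allows; reading then pauses until the configured
    byte rate permits the next packet.
    """

    def __init__(self, rate_bytes_per_sec, now=time.monotonic):
        self.rate = rate_bytes_per_sec
        self.now = now
        self.ready_at = now()  # time when the next packet may be read

    def delay_for(self, packet_len):
        """Return how long to wait before forwarding this packet."""
        t = self.now()
        wait = max(0.0, self.ready_at - t)
        # Charge the whole packet against the rate, even if it overshoots
        # the budget -- packets are never split.
        self.ready_at = max(t, self.ready_at) + packet_len / self.rate
        return wait

# With a 5 bytes/s limit and 4-byte packets, reads become ~0.8 s apart.
clock = [0.0]
lim = PacketRateLimiter(5, now=lambda: clock[0])
delays = []
for _ in range(3):
    d = lim.delay_for(4)
    delays.append(d)
    clock[0] += d  # pretend we slept for that long
print(delays)  # → [0.0, 0.8, 0.8]
```

The first packet passes immediately, and every later packet waits until the byte budget has refilled, matching the observed behaviour in the test run below.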

With the limitations mentioned above, both proxy_upload_rate and proxy_download_rate work as expected in tests here. For example, consider the following configuration:

stream {
    server {
        listen 9000 udp;
        proxy_pass 127.0.0.1:9001;
        proxy_download_rate 5;
    }
}

And the following test scripts for server and client:

$ cat test-udp-server.pl 
#!/usr/bin/perl

use warnings;
use strict;

use Socket;
use Time::HiRes qw/time/;

socket(my $socket, PF_INET, SOCK_DGRAM, 0)
	or die "socket: $!";
setsockopt($socket, SOL_SOCKET, SO_REUSEPORT, 1)
	or die "setsockopt: $!";
bind($socket, pack_sockaddr_in(9001, inet_aton("127.0.0.1")))
	or die "bind: $!";

my $peer;
my $msg = '';

while ($peer = recv($socket, $msg, 65536, 0)) {

	chomp($msg);
	my ($port, $ip) = unpack_sockaddr_in($peer);
	my $s = inet_ntoa($ip) . ':' . $port;
	my $t = sprintf("%.6f", time());

	print "<< ($s $t) $msg\n";

	for (1..10) {
		$t = sprintf("%.6f", time());
		print ">> ($s $t) bar\n";
		send($socket, "bar\n", 0, $peer);
	}
}


$ cat test-udp-client.pl 
#!/usr/bin/perl

use warnings;
use strict;

use Socket;
use Time::HiRes qw/time/;

socket(my $socket, PF_INET, SOCK_DGRAM, 0)
	or die "socket: $!";
setsockopt($socket, SOL_SOCKET, SO_REUSEPORT, 1)
	or die "setsockopt: $!";
connect($socket, pack_sockaddr_in(9000, inet_aton("127.0.0.1")))
	or die "connect: $!";

my $peer;
my $msg = '';

my $s = '127.0.0.1:9000';
my $t = sprintf("%.6f", time());

print ">> ($s $t) foo\n";
send($socket, "foo\n", 0);

while ($peer = recv($socket, $msg, 65536, 0)) {

	chomp($msg);
	my ($port, $ip) = unpack_sockaddr_in($peer);
	my $s = inet_ntoa($ip) . ':' . $port;
	my $t = sprintf("%.6f", time());

	print "<< ($s $t) $msg\n";
}

Test run results on the server side:

$ perl test-udp-server.pl
<< (127.0.0.1:30152 1685228441.707326) foo
>> (127.0.0.1:30152 1685228441.707620) bar
>> (127.0.0.1:30152 1685228441.707737) bar
>> (127.0.0.1:30152 1685228441.707867) bar
>> (127.0.0.1:30152 1685228441.707949) bar
>> (127.0.0.1:30152 1685228441.708032) bar
>> (127.0.0.1:30152 1685228441.708108) bar
>> (127.0.0.1:30152 1685228441.708200) bar
>> (127.0.0.1:30152 1685228441.708290) bar
>> (127.0.0.1:30152 1685228441.708380) bar
>> (127.0.0.1:30152 1685228441.708453) bar

Note that all 10 response packets are sent almost immediately. And here are the results on the client side:

$ perl test-udp-client.pl
>> (127.0.0.1:9000 1685228441.703600) foo
<< (127.0.0.1:9000 1685228441.710255) bar
<< (127.0.0.1:9000 1685228442.520890) bar
<< (127.0.0.1:9000 1685228443.382143) bar
<< (127.0.0.1:9000 1685228444.211470) bar
<< (127.0.0.1:9000 1685228445.073640) bar
<< (127.0.0.1:9000 1685228445.890434) bar
<< (127.0.0.1:9000 1685228446.720146) bar
<< (127.0.0.1:9000 1685228447.580390) bar
<< (127.0.0.1:9000 1685228448.409910) bar
<< (127.0.0.1:9000 1685228449.270287) bar

Note that the first response packet is delivered immediately, while the subsequent packets are delayed as per proxy_download_rate.
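As a quick sanity check of the gaps above (a small Python sketch; the 5 bytes/s rate and the 4-byte "bar\n" payload come from the configuration and script earlier in this comment):

```python
# Client-side receive timestamps copied from the test run above.
timestamps = [
    1685228441.710255, 1685228442.520890, 1685228443.382143,
    1685228444.211470, 1685228445.073640, 1685228445.890434,
    1685228446.720146, 1685228447.580390, 1685228448.409910,
    1685228449.270287,
]
gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]

# proxy_download_rate 5 with 4-byte packets ("bar\n") gives an expected
# spacing of 4 / 5 = 0.8 seconds between packets.
expected = 4 / 5
assert all(abs(g - expected) < 0.1 for g in gaps)
print(round(min(gaps), 3), round(max(gaps), 3))  # ~0.81 .. ~0.86
```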

If you still think there is a bug, please provide more details: notably, how you test, what you observe, and what you expect instead. A simple test script which demonstrates the problem would be awesome.

comment:2 by m-cieslinski@…, 17 months ago

First of all, thank you for your answer.

My configuration is WITHOUT proxy responses, i.e. it has "proxy_responses 0;". The official tests use configs with a few requests/responses, and in your Perl script the server responds to the client with UDP packets. In my case it is a fire-and-forget model: a UDP packet from the client, without ANY response from nginx or the Graylog server. I can only add that I'm using this configuration (https://storiesfromtheherd.com/building-a-simple-docker-based-graylog-journald-integration-96628653f81), but with only a "GELF UDP input" configured, not a "BEATS input". No rocket science, no other configs.

Moreover, rate limiting is also not working for UDP at all.

I still believe that without proxy_responses nginx doesn't work properly.

comment:3 by Maxim Dounin, 17 months ago

Resolution: invalid
Status: new → closed

Thanks for pointing this out - I missed proxy_responses 0; in your configuration.

Indeed, with proxy_responses 0; a UDP session is terminated immediately after the packet is sent to the backend, so each incoming packet starts a new session. And since proxy_upload_rate for UDP sockets works only within a UDP session, it essentially does nothing in such a configuration: it can only delay non-first packets of a session, but with proxy_responses 0; there are no such packets.

This does not look like a bug in nginx, though. Rather, it's how your nginx is configured. If you want proxy_upload_rate to work, consider reconfiguring nginx to maintain UDP sessions: that is, use a non-zero proxy_responses, and use proxy_timeout to control how long UDP sessions are maintained.
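A minimal sketch of that reconfiguration, based on the UDP server block from the ticket description (directive values are illustrative, not recommendations):

```nginx
server {
    listen 12201 udp;
    proxy_pass gelf_udp;
    proxy_upload_rate 11;
    # proxy_responses is left at its default (unlimited) instead of 0, so
    # the UDP session is not torn down after the first packet;
    # proxy_timeout then controls how long an idle session is kept.
    proxy_timeout 10s;
}
```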

comment:4 by m-cieslinski@…, 17 months ago

Hmmmm, okay, I will try this.

[SUCCESS] Scenario: rate limiting with proxy_responses (default config)
That's a really nice result (thank you again). Rate limiting is working fine. For burst requests, e.g. 100 per second, nginx forwarded only 5, as
expected (limit_conn conn_perip 5;).

[FAILED] Scenario: bandwidth limiting with proxy_download_rate set to 100.
All requests were forwarded during the tests.

Example netcat command, with the wait flag set to 0 seconds (payload: 1795 bytes):

for i in {1..100}; do echo '{"version": "1.1","host":"david.org","short_message":"Backtrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffA short message that helps you identify what is going on","full_message":"Backtrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuffBacktrace here\n\nmore stuff","level":1,"_user_id":9001,"_some_info":"dddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddfoo","_some_env_var":"bar"}' | nc 127.0.0.1 -u -w0 12201; done

Ehh, another problem comes with UDP "sessions": "2023/06/26 09:34:26 [alert] 98#98: 1024 worker_connections are not enough". Zero proxy_responses for UDP is a crucial part of getting maximum throughput with this setup (nginx <-> Graylog UDP input).

To sum up:
I'm wondering how rate limiting works now - with the default proxy_responses config - while bandwidth limiting does not?
Could you please tell me whether they are implemented in completely different code blocks or flows?

in reply to:  4 comment:5 by Maxim Dounin, 17 months ago

Replying to m-cieslinski@…:

[FAILED] Scenario Bandwidth Limiting with proxy_download_rate set 100.
All requests forwarded during tests.

The proxy_download_rate directive is not expected to limit or delay any packets from the client to the server. It applies only to the download direction, that is, to packets from the server to the client.

Further, it only works within a particular UDP session.

Example netcat command, with the wait flag set to 0 seconds (payload: 1795 bytes):

The -w flag of netcat is documented as follows:

     -w timeout
             If a connection and stdin are idle for more than timeout seconds,
             then the connection is silently closed.  The -w flag has no
             effect on the -l option, i.e. nc will listen forever for a
             connection, with or without the -w flag.  The default is no
             timeout.

That is, with -w0 netcat closes the connection as soon as it has read all data from stdin. Each new invocation of netcat creates a new connection (a new UDP session in the case of UDP) with its own distinct limits: note that proxy_download_rate only works within a particular connection / a particular UDP session.
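The difference can be demonstrated without nginx. The sketch below (hypothetical, with a local receiver standing in for the proxy's listening socket) sends several datagrams from one connected socket, so they all arrive from the same source address and port, which is what nginx would track as a single UDP session. Running `nc -w0` in a loop instead uses a fresh source port, and thus a fresh session with fresh limits, for every packet.

```python
import socket
import threading

# A local UDP receiver stands in for nginx's listening socket.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))  # pick a free ephemeral port
port = recv_sock.getsockname()[1]

sources = []

def receive(n):
    # Record the source address of each incoming datagram.
    for _ in range(n):
        _, addr = recv_sock.recvfrom(65536)
        sources.append(addr)

t = threading.Thread(target=receive, args=(5,))
t.start()

# One connected socket -> one source address/port for every datagram,
# i.e. a single UDP "session" from the proxy's point of view.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.connect(("127.0.0.1", port))
for i in range(5):
    send_sock.send(b"packet\n")
t.join()

print("distinct source ports:", len({p for _, p in sources}))  # 1
```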

Ehh, another problem comes with UDP "sessions": "2023/06/26 09:34:26 [alert] 98#98: 1024 worker_connections are not enough". Zero proxy_responses for UDP is a crucial part of getting maximum throughput with this setup (nginx <-> Graylog UDP input).

Tracking UDP sessions comes at a cost: nginx keeps a connection structure and associated information in memory to be able to match further packets to the connection, much as with TCP connections. As previously suggested, use proxy_timeout to control how long UDP sessions are maintained. It might also be a good idea to use a larger value for worker_connections.

If you have further questions on how to configure nginx, please use support options available.

comment:6 by m-cieslinski@…, 17 months ago

It was a pleasure to discuss this with you. Thanks, Maxim.
