Opened 5 years ago

Closed 5 years ago

Last modified 3 years ago

#2001 closed defect (worksforme)

nginx memory leak with large URL cookies and connections with long keepalive_requests

Reported by: 2clarkd@… Owned by:
Priority: major Milestone:
Component: nginx-core Version: 1.18.x
Keywords: memory leak cookies Cc: 2clarkd@…
uname -a: Linux adevhost 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4 23:02:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.18.0
built by gcc 7.3.1 20180303 (Red Hat 7.3.1-5) (GCC)
configure arguments:

Description (last modified by 2clarkd@…)

*summary*: Persisting upstream connections across a large number of transactions with "keepalive_requests 100000" leaks memory rapidly when requests carry large URL cookies (i.e. > 512 characters).
*impact*: memory is not freed until the large transaction count completes. Cookies retained in the memory pool of a re-used connection may be a security risk.
*workaround*: reduce keepalive_requests to a smaller value (the default is 100); the periodic connection reset frees the memory and so bounds the leak (see the snippet after this list).
*topics*: memory leak, large URL cookies, keepalive_requests
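
A minimal sketch of the workaround, assuming the same http-level context as the configuration below:

    keepalive_requests 100;  # default; forces periodic connection close, which frees the per-connection pool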
*configuration*:

sample URL command (cookie text suppressed; likely any 512-1024 byte string in one or more cookie variables will do):
curl --cookie "cookie1=1234567890....; cookie2=abcd...; " "http://adevhost/apath/file.sfx?var1=xyz&var2=abc&var3=zyx"
Note: the client test should run many connection instances (thousands are helpful), with each client re-issuing cookie-bearing requests over long-running connections. The curl example is only an overview of passing a large cookie, not the actual test (a sketch of one follows).
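
A minimal load sketch along those lines, assuming curl's URL-range globbing is available; the host, path, and query variables mirror the example above, while the extra "n" parameter, the cookie contents, and the request/connection counts are illustrative placeholders:

# ~1 KB of placeholder cookie data split across two cookie variables
COOKIE="cookie1=$(head -c 512 /dev/zero | tr '\0' 'A'); cookie2=$(head -c 512 /dev/zero | tr '\0' 'B')"

# Each curl process expands [1-1000] itself and re-uses one keep-alive
# connection for all 1000 requests; 100 background processes approximate
# many concurrent long-lived connections. Scale both counts as needed.
for i in $(seq 1 100); do
    curl -s --cookie "$COOKIE" \
        "http://adevhost/apath/file.sfx?var1=xyz&var2=abc&var3=zyx&n=[1-1000]" > /dev/null &
done
wait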

cat nginx_leak.conf

worker_processes auto;
worker_rlimit_nofile 100000;  # exacerbates leak

events {
    worker_connections 65536;
    # optimized to serve many clients with each thread, essential for linux
    use epoll;
    # accept as many connections as possible
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # access_log /var/log/nginx/access.log main;
    rewrite_log on;

    keepalive_timeout 30s;
    keepalive_requests 100000;

    proxy_connect_timeout 15s;
    proxy_send_timeout 15s;
    proxy_read_timeout 15s;
    client_body_timeout 15s;
    client_header_timeout 15s;
    send_timeout 15s;

    expires off;
    proxy_http_version 1.1;
    proxy_cache_bypass 1;
    proxy_no_cache 1;
    proxy_set_header Connection "";

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;

    # include /etc/nginx/conf.d/*.conf;
    # upstream and server blocks added here for simplification

    upstream upstream_leak {
        keepalive 300;
        server 127.0.0.1:8080;
    }

    server {
        listen 80 backlog=1280 default_server ipv6only=off;
        server_name $hostname;

        location ~* \.(sfx)$ {
            proxy_pass http://upstream_leak;
            break;
        }

        location ~ / {
            return 403;
        }
    }

    add_header Connection "keep-alive";
    add_header Keep-Alive "timeout=15";
}
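
A rough way to confirm the growth pattern while the load runs (a sketch; assumes a Linux host where the nginx processes are visible to ps):

# Sample worker RSS every 5 seconds; with keepalive_requests 100000 the RSS
# climbs for the life of the connections, while with the default 100 it
# plateaus as connections are periodically closed and their pools freed.
watch -n 5 'ps -C nginx -o pid,rss,cmd'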

Change History (5)

comment:1 by 2clarkd@…, 5 years ago

Description: modified (diff)

comment:2 by 2clarkd@…, 5 years ago

Description: modified (diff)

comment:3 by 2clarkd@…, 5 years ago

Description: modified (diff)

comment:4 by Maxim Dounin, 5 years ago

Quoting the keepalive_requests directive description:

Closing connections periodically is necessary to free per-connection memory allocations. Therefore, using too high a maximum number of requests could result in excessive memory usage and is not recommended.

The description of this ticket suggests that what you observe is exactly the case described in the quoted documentation. That is, what you are observing is not a memory leak, but rather memory usage implied by your configuration and use case.

If you think that nginx could use less memory in such a configuration, you may want to elaborate more on what you consider to be "rapid" and provide some actual test which demonstrates the problem.

Looking at your configuration suggests that nginx is expected to use at least one of the large_client_header_buffers per idle connection when large enough cookies are used. That is, about 8k is additionally allocated per connection, or ~8 megabytes for 1000 connections. While this might be noticeable, it hardly looks "rapid". There might be other connection-specific allocations though, and an actual test could help to find out why you observe rapid memory usage growth.
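
For reference, the allocation in question is sized by this directive (a sketch showing its built-in defaults, which the configuration above does not override):

http {
    # Up to 4 buffers of 8k each per connection; one buffer retained by an
    # idle keep-alive connection accounts for the ~8 MB per 1000 connections
    # estimated above.
    large_client_header_buffers 4 8k;
}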

comment:5 by Maxim Dounin, 5 years ago

Resolution: worksforme
Status: new → closed

Feedback timeout.
