Opened 2 years ago
Closed 2 years ago
#2365 closed defect (wontfix)
reload increases memory used (freed memory for config is not released back to the system)
Priority: major
Component: nginx-core
Keywords: reload memory
uname -a: Linux k1.lab 5.4.0-117-generic #132-Ubuntu SMP Thu Jun 2 00:39:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
nginx -V:
nginx version: nginx/1.18.0 (Ubuntu)
built with OpenSSL 1.1.1f 31 Mar 2020
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-7KvRN5/nginx-1.18.0=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-compat --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-mail=dynamic --with-mail_ssl_module
Description
Seen on 1.18.x and 1.23.x across 5 OSs (noted below in the semi-solution section).
Problem
An nginx reload (SIGHUP) makes a second copy of the config (which takes more memory), does its thing, and then frees the memory used by the second copy. Unfortunately, that memory is not released back to the system (though it is reused as needed, for example on a subsequent reload).
The bigger the config the bigger the growth.
In action
# RSS values below measured with: ps aux | grep nginx
# hard restart ➜ RSS 1500
# nginx -s reload ➜ RSS 5588
# nginx -s reload ➜ RSS 5612
# nginx -s reload ➜ RSS 5616
# hard restart ➜ RSS 1500
Note: this holds true with other forms of reload, since they are all essentially SIGHUP:
/bin/kill -s HUP $(cat /var/run/nginx.pid)
service nginx reload
Attempts to address
These did not have any effect:
- use mmap as memory allocator
- use tcmalloc as memory allocator
- build with no compiled in modules
One solution that did work for CentOS 7 and Cloudlinux 6 (but had no effect on Ubuntu 20.04, Almalinux 8, or Cloudlinux 6) was to use jemalloc.
On C7:
# hard restart ➜ RSS 5304
# nginx -s reload ➜ RSS 9660
# hard restart ➜ RSS 5304
# LD_PRELOAD="/usr/lib64/libjemalloc.so.1" nginx -s reload
➜ RSS 5304
uname -a for C7 (which was running nginx 1.23.0):
Linux 10-2-67-42.cprapid.com 3.10.0-1160.25.1.el7.x86_64 #1 SMP Wed Apr 28 21:49:45 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Change History (5)
comment:1, 2 years ago: Description modified
comment:2, 2 years ago: Description modified
comment:3, 2 years ago: Description modified
comment:4, 2 years ago
comment:5, 2 years ago
Resolution: → wontfix
Status: new → closed
> Unfortunately, it is not released back to the system
This is not something which depends on nginx. As you've mentioned, all allocated memory is properly freed by nginx. It's up to the system allocator (and, in some cases, to its tuning) to release the memory back to the system.
If you really care about this memory for some reason, you may want to tune (or change) the allocator being used. Tuning nginx's own allocations might also help in some cases: in particular, increasing NGX_CYCLE_POOL_SIZE to 128k or more might help to trigger mmap-based allocations in ptmalloc, the allocator used on most Linux systems, making it more likely that the memory is returned to the system. In practice, tuning M_MMAP_THRESHOLD / MALLOC_MMAP_THRESHOLD_ might also be needed, as the mmap threshold is dynamic by default; see mallopt(3). Note that this won't affect indirect allocations, such as those made by the OpenSSL library, and the obvious drawback of a large NGX_CYCLE_POOL_SIZE is additional memory wasted on small configurations.
Note, though, that for a reload to work you have to keep enough free memory not only for the additional configuration, but also for an additional set of worker processes with their data. As such, the system allocator's decision to cache the memory in question within the nginx master process can be seen as correct, since this memory is going to be reused on the next reload.
Closing this, as this does not look like something to fix in nginx.