Opened 6 years ago

Closed 6 years ago

#1587 closed defect (invalid)

memory leak with ngx_http_image_filter_module

Reported by: dyeldandi@…
Owned by:
Priority: major
Milestone:
Component: nginx-module
Version: 1.12.x
Keywords: image filter
Cc:
uname -a: Linux hostname 3.14.32-xxxx-grs-ipv6-64 #9 SMP Thu Oct 20 14:53:52 CEST 2016 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.12.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
built with OpenSSL 1.0.2l 25 May 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-openssl=openssl-1.0.2l --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-ld-opt=

Description

We have an nginx instance running on port 8889 that resizes/crops images with the http_image_filter module. It handles about 600K hits (resizes/crops) per day. The configuration file is attached.
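For illustration, a minimal sketch of the kind of location block such a setup typically uses (the real configuration is in the attached nginx.conf; the listen port 8889 is from the description and image_filter_buffer 32m is mentioned in comment:1 below, while the URI scheme and image root are assumptions):

    server {
        listen 8889;

        # Illustrative resize location; the regex and alias path are assumptions.
        location ~ ^/resize/(\d+)x(\d+)/(.+)$ {
            alias               /var/www/images/$3;
            image_filter        resize $1 $2;
            image_filter_buffer 32m;
        }
    }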

Over time its memory footprint grows at an approximate rate of 1.5 GB per day.

USER       PID %CPU %MEM    VSZ   RSS   TTY   STAT START   TIME COMMAND
root     26612  0.0  0.0  56764   492    ?     Ss   Jun25   0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
root     26613  1.8  4.5 796764  741116  ?     S<   Jun25 180:38 nginx: worker process
root     26614  1.5  3.6 643172  582292  ?     S<   Jun25 149:16 nginx: worker process
root     26615  5.1 15.4 2606648 2498452 ?     S<   Jun25 506:30 nginx: worker process
root     26616  4.0 13.4 2282260 2175168 ?     S<   Jun25 401:32 nginx: worker process
root     26617  2.9  8.8 1509224 1434856 ?     S<   Jun25 289:00 nginx: worker process
root     26618  6.9 20.4 3438772 3298556 ?     S<   Jun25 688:02 nginx: worker process
root     26619  3.4  9.0 1558788 1459996 ?     S<   Jun25 344:03 nginx: worker process
root     26620  2.3  6.6 1150284 1078564 ?     S<   Jun25 230:11 nginx: worker process

In about a week it grew from 400 MB to 10 GB.

The same nginx build without the image_filter module, serving static files and proxying, doesn't have this problem.

Attachments (1)

nginx.conf (4.0 KB) - added by dyeldandi@… 6 years ago.
nginx configuration


Change History (5)

by dyeldandi@…, 6 years ago

Attachment: nginx.conf added

nginx configuration

comment:1 by Maxim Dounin, 6 years ago

How many connected clients does each process have? Given that image_filter_buffer is set to 32m, 50 active clients can easily consume 1.5 GB (and even more, as there are other buffers as well), so this might not be a real leak even with a 10 GB process size. Checking stub_status numbers and/or netstat output should be enough.
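(For scale: with image_filter_buffer at 32m, 50 in-flight responses alone account for roughly 50 × 32 MB ≈ 1.6 GB.) A minimal stub_status sketch for checking the connection counts; the build already includes --with-http_stub_status_module, but the listener port and path below are illustrative assumptions:

    server {
        listen 127.0.0.1:8890;

        location = /basic_status {
            stub_status;
            allow 127.0.0.1;
            deny  all;
        }
    }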

Also, please check whether the stub_status and netstat numbers match, to make sure there are no socket leaks. Alternatively, you can check for socket leaks by doing a configuration reload and looking for "open socket <N> left in connection <M>" alerts in the nginx logs.
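A sketch of that check, using the error log path from the build information above:

    # reload the configuration, then look for socket-leak alerts
    nginx -s reload
    grep 'open socket' /var/log/nginx/error.log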

As long as the above doesn't show any problems, the most likely cause is a leak in the GD library. Check the version of the library you are using and test whether the latest library fixes things.
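On CentOS/RHEL, one way to see which GD version is installed and which library the binary is linked against (a sketch; the package name may differ on other systems):

    rpm -q gd
    ldd /usr/sbin/nginx | grep -i libgd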

comment:2 by dyeldandi@…, 6 years ago

According to netstat there aren't many, about 3 to 5 connections in total across all nginx workers. I restarted it with stub_status configured and will wait for the memory footprint to build up again so I can compare it with netstat.
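A sketch of that comparison, assuming a stub_status endpoint like the one sketched in comment:1 and the port 8889 listener from the description:

    # connections as reported by nginx itself (endpoint address is an assumption)
    curl -s http://127.0.0.1:8890/basic_status

    # established sockets on the image-filter port, for comparison
    netstat -tnp 2>/dev/null | grep ':8889 ' | grep -c ESTABLISHED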

Currently it is linked against gd 2.0.35.26.el7; I'll try to rebuild it against the latest gd.

comment:3 by dyeldandi@…, 6 years ago

Rebuilding with the latest GD did help! A memory leak was fixed in 2.0.36, but CentOS still ships 2.0.35.

Thanks a lot!

comment:4 by Maxim Dounin, 6 years ago

Resolution: invalid
Status: new → closed

Thanks for the information.
