Opened 3 years ago

Closed 3 years ago

#2226 closed enhancement (wontfix)

Please add cache compression

Reported by: gidiwe2427
Owned by:
Priority: minor
Milestone:
Component: documentation
Version: 1.19.x
Keywords:
Cc:
uname -a:
nginx -V: nginx version: nginx/1.21.1
built by gcc 9.3.0 (Ubuntu 9.3.0-10ubuntu2)
built with OpenSSL 1.1.1f 31 Mar 2020
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.21.1/debian/debuild-base/nginx-1.21.1=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

Description

Previously we used Redis for our page cache, and we compared compressing the data before saving it (and decompressing it before sending it to clients) against storing it as-is.

We did this after our Redis Enterprise Cloud server slowed down during traffic surges, and one of their engineers recommended enabling compression:
=> https://docs.redislabs.com/latest/ri/memory-optimizations/#compress-values

We achieved a 20-30% performance gain with this setup compared to storing the data directly, because compressing and decompressing on the CPU is very fast compared to disk read latency.

Here is a latency comparison between CPU and disk operations, scaled to human time:
=> https://formulusblack.com/wp-content/uploads/2019/02/Screen-Shot-2019-02-01-at-12.16.39-PM.png

If nginx had the ability to compress data before saving it, we could also use tmpfs, which uses RAM as storage, making it even faster.
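(For illustration only: nginx's proxy cache can already be placed on a tmpfs mount, without any compression support; the mount point, zone name, sizes, and backend address below are made-up examples, not part of this request.)

# hypothetical /etc/fstab entry creating a RAM-backed mount for the cache:
#   tmpfs  /var/cache/nginx/ram  tmpfs  size=512m  0  0

# inside the http { } block:
proxy_cache_path /var/cache/nginx/ram keys_zone=ramcache:10m max_size=400m;

server {
    location / {
        proxy_pass http://backend.example.com;   # hypothetical backend
        proxy_cache ramcache;
        proxy_cache_valid 200 10m;               # hypothetical cache lifetime
    }
}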

Change History (1)

comment:1 by Maxim Dounin, 3 years ago

Resolution: wontfix
Status: new → closed

While compression might be beneficial to reduce latency, note that compression is also one of the most important places where CPU time is spent on loaded nginx servers. It is generally unwise to do extra compression/decompression if it can be avoided.

On the other hand, it is trivial to configure nginx to cache compressed responses and uncompress them when sending to clients which do not support compression. This is usually also beneficial in terms of CPU usage, since compression results are used multiple times. Something like:

proxy_pass ...
proxy_set_header Accept-Encoding gzip;
gunzip on;

does the trick, see gunzip. If your backend cannot be configured to compress responses, you can configure additional proxying via nginx itself to do the compression.
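For reference, a slightly fuller sketch of that suggestion might look like the following; the cache zone, paths, addresses, and cache lifetime are hypothetical, and the second server block is only needed when the backend itself cannot compress:

# inside the http { } block; names and addresses are hypothetical
proxy_cache_path /var/cache/nginx/pagecache keys_zone=pagecache:10m max_size=1g;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8081;         # internal compression hop (see below)
        proxy_set_header Accept-Encoding gzip;    # ask for compressed responses
        proxy_cache pagecache;                    # the cache stores the gzipped body
        proxy_cache_valid 200 10m;                # hypothetical cache lifetime
        gunzip on;                                # decompress for clients without gzip support
    }
}

# only needed if the backend cannot be configured to compress responses:
# an internal server that proxies to the real backend and gzips on the way out
server {
    listen 127.0.0.1:8081;

    location / {
        proxy_pass http://backend.example.com;    # hypothetical backend
        gzip on;
        gzip_types text/css application/javascript application/json;
    }
}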
