Ticket #2543 (resolution: invalid): wrong "host" header when using upstreams. Reporter: girsch.ventx.de@…
Description

Hi,

we have a reverse proxy setup with 2 upstreams; the decision as to which upstream a request is routed to is based on the value of a specific header. Let's say we want to route requests to the upstreams based on the value of the User-Agent header:

    upstream banana {
        server banana-server;
    }

    upstream apple {
        server apple-server;
    }

    map $http_user_agent $proxied_server {
        default      apple;

        "~*Firefox*" apple;
        "~*Chrome*"  banana;
    }

    server {
        listen      8080;
        server_name localhost;

        location / {
            proxy_pass http://$proxied_server;
        }
    }

The request is sent to the right upstream IP by the reverse proxy, but instead of using the DNS name of the server specified inside the "upstream" block, the reverse proxy uses the name of the upstream block itself as the value of the Host header. So instead of "banana-server" it sends just "banana".
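For context: by default nginx sets the Host header to $proxy_host, which with "proxy_pass http://$proxied_server;" resolves to the upstream group name rather than the server name inside it. A minimal workaround sketch (not the resolution recorded in the ticket; the $proxied_host variable is just an illustrative name) is to map the desired Host value alongside the upstream choice and set the header explicitly:

    map $http_user_agent $proxied_host {
        default      "apple-server";
        "~*Chrome*"  "banana-server";
    }

    server {
        listen 8080;

        location / {
            proxy_pass       http://$proxied_server;
            # override the default Host ($proxy_host, i.e. the upstream group
            # name) with the DNS name the backend expects
            proxy_set_header Host $proxied_host;
        }
    }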

Ticket #2541 (resolution: invalid): TLS 1.2 connection on TLS 1.3 only site. Reporter: vp1981@…
Description

I configured nginx to accept only TLS 1.3 connections, and up to version 1.25.2 everything was fine. But since version 1.25.2, both curl and SSL Labs show me that the site is accessible over TLS 1.2 as well.

To force the use of TLS 1.3, I used a trick with the OPENSSL_CONF environment variable pointing to a file with the following content:

openssl_conf = default_conf

[default_conf]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Ciphersuites = TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384
Options = ServerPreference,PrioritizeChaCha

and the following configuration for the site:

  listen            443 ssl;
  listen       [::]:443 ssl;
  http2        on;
  server_name  isu.bkoty.ru;

  ssl_session_cache          shared:SSL:10m;
  ssl_session_timeout        10m;
  ssl_password_file          /etc/cert/hosts/isu.ppp;
  ssl_certificate            /etc/cert/hosts/isu.crt;
  ssl_certificate_key        /etc/cert/hosts/isu.key;
  ssl_protocols              TLSv1.3;
  ssl_prefer_server_ciphers  on;
  ssl_ecdh_curve             secp384r1;

To make nginx use the OPENSSL_CONF variable, I added the line

Environment=OPENSSL_CONF=/etc/nginx/openssl.conf

to the nginx.service file.
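A minimal sketch of how such an override is usually kept out of the packaged unit file (assuming a systemd-based setup; the drop-in path is the conventional one, not taken from the ticket):

    # /etc/systemd/system/nginx.service.d/override.conf
    # (can be created with "systemctl edit nginx")
    [Service]
    Environment=OPENSSL_CONF=/etc/nginx/openssl.conf

followed by "systemctl daemon-reload" and a restart of nginx so the master and worker processes see the variable.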

Now, to test the connection to the site I ran the command

$ curl -I -v --tlsv1.2 --tls-max 1.2 https://isu.bkoty.ru

and the site responded using TLS 1.2. I don't understand why TLS 1.2 is being used (I didn't configure it, right?). Has something changed in nginx regarding how the OpenSSL configuration is used?
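As a cross-check independent of curl, OpenSSL itself can be asked to negotiate TLS 1.2 only; a sketch, assuming openssl s_client is available on the client machine:

    # attempt a TLS 1.2-only handshake; a strictly TLS 1.3-only server
    # should refuse it instead of completing the handshake
    openssl s_client -connect isu.bkoty.ru:443 -servername isu.bkoty.ru -tls1_2 </dev/null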

P.S. Sorry, this might be my second ticket, I didn't manage to write the first one correctly.

Ticket #2540 (resolution: invalid): nginx stable 1.24 issues with cache file deletion under heavy load. Reporter: p4rancesc0@…
Description

Hi,

during some synthetic benchmarks on our nginx we faced a strange behaviour: after the benchmark was stopped and the cache files' validity had expired, we found a large amount of files (>200K) that still appeared in the lsof output for the cache directory, while a find -type f on the same directory returned nothing. The space inside the directory was still in use.

Our test:

  • using a custom client, we generate > 10 Gbit/s over 900 connections against one nginx server, asking every 4 seconds for new files that the backend will provide.
  • each file is requested 2 times.
  • /data/nginx/live is our caching directory

see the initial nginx status:

[root@conginx01 live]# find /data/nginx/live/ -type f
[root@conginx01 live]# lsof /data/nginx/live/ | grep deleted | wc -l
0
[root@conginx01 live]# df -h /data/nginx/live/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G     0  205G   0% /data/nginx/live
  • we start the benchmark for 10 minutes; we can easily saturate our 205GB RAM cache disk in a few minutes (circa 5).
  • after the cache disk is full, meaning it is around its size minus min_free, we keep running for 5 more minutes.

see:

 Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  196G  9.8G  96% /data/nginx/live

Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  196G   10G  96% /data/nginx/live

Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  196G  9.8G  96% /data/nginx/live


Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  196G   10G  96% /data/nginx/live
  • we stop the benchmark and wait 10 minutes for the cache to expire and be deleted.

during the 10-minute test we measured:

  • average response duration: 0.003 seconds
  • maximum response duration: 0.636 seconds
  • a total of 2169600 requests
  • a total of 41 errors on the client side
  • zero 4xx,5xx
  • 802022225512 bytes transferred, around 800GB

at this point this is the status:

[root@conginx01 live]# find /data/nginx/live/ -type f | wc -l
345645
[root@conginx01 live]# lsof /data/nginx/live/ | grep deleted | wc -l
359
[root@conginx01 live]# df -h /data/nginx/live/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  195G   11G  96% /data/nginx/live

At this point everything looks FINE; there isn't any traffic on the nginx server anymore.

after a while we see:

  • used space going down, OK
[root@conginx01 live]# df -h /data/nginx/live/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  143G   63G  70% /data/nginx/live
  • cache file number decreasing, OK
[root@conginx01 live]# find /data/nginx/live/ -type f | wc -l
240997
  • files in deleted status growing
[root@conginx01 live]# lsof /data/nginx/live/ | grep deleted | wc -l
11810

we keep waiting for all cache items to expire.

finally we see:

  • no more files in cache partition
[root@conginx01 live]# find /data/nginx/live/ -type f | wc -l
0
  • a lot of space still used in the cache partition
[root@conginx01 live]# df -h /data/nginx/live/
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           205G  134G   72G  65% /data/nginx/live
  • around 250K files in deleted status according to lsof, a stable value (see the sizing sketch after the listing below)
[root@conginx01 live]# date; lsof /data/nginx/live/ | grep deleted | wc -l
Wed Aug 30 14:39:41 CEST 2023
268690
[root@conginx01 live]# date; lsof /data/nginx/live/ | grep deleted | wc -l
Wed Aug 30 14:41:05 CEST 2023
268690
[root@conginx01 live]# date; lsof /data/nginx/live/ | grep deleted | wc -l
Wed Aug 30 14:42:21 CEST 2023
268690
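To relate the stable deleted count to the space still reported as used, the sizes lsof reports for those entries can simply be summed; a sketch, assuming the default lsof column layout where SIZE/OFF is the seventh field:

    # rough total of space still held by deleted cache files
    lsof /data/nginx/live/ | awk '/\(deleted\)/ { sum += $7 } END { printf "%.1f GiB\n", sum / 1024^3 }'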

Most of them are unique cache files; some have 2, 3, 4 or 5 occurrences:

see:

[root@conginx01 live]# cat log | grep "/data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206"
nginx   3277770 nginx 2087r   REG   0,47    47685 88106191 /data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206 (deleted)
nginx   3277774 nginx *637r   REG   0,47    47685 88106191 /data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206 (deleted)
nginx   3277776 nginx 7101r   REG   0,47    47685 88106191 /data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206 (deleted)
nginx   3277779 nginx *163r   REG   0,47    47685 88106191 /data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206 (deleted)
nginx   3277789 nginx   94r   REG   0,47    47685 88106191 /data/nginx/live/6/0/82/2f68c2d5bdd35176a5606507d0ee8206 (deleted)

We are monitoring the disk usage with Nagios; in the previous tests "those files" stayed there from 16:39 yesterday until 10:36, when we started a new load test. In the meanwhile no clients were connected to the nginx. kill -USR1 doesn't clean up the "(deleted)" files; kill -HUP cleans them up immediately.
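For reference, the two signals mentioned above go to the nginx master process; a sketch of the invocations, assuming the default pid file location /run/nginx.pid:

    # USR1 (reopen log files): did not release the deleted cache files
    kill -USR1 "$(cat /run/nginx.pid)"

    # HUP (reload configuration, workers are replaced): released them immediately
    kill -HUP "$(cat /run/nginx.pid)"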

We are wondering if this behaviour is correct.

I'll put down here our cache configuration:

    open_file_cache_errors      on; 
    proxy_cache_path            /data/nginx/live
                                levels=1:1:2
                                keys_zone=cache-live:200m
                                inactive=10m
                                max_size=200g
                                min_free=10g
                                use_temp_path=off
                                manager_files=10000
                                manager_threshold=5000ms
                                manager_sleep=50ms
                                loader_files=10000
                                loader_threshold=1000ms;

    proxy_connect_timeout       2s; 
    proxy_send_timeout          2s;
    proxy_read_timeout          2s; 
    proxy_buffering             on; 
    proxy_cache_lock            on;     
    proxy_cache_lock_timeout    100ms; 
    proxy_cache_lock_age        50ms;   
    proxy_cache_key             $host$uri;
    proxy_cache_methods         GET HEAD POST;
    proxy_cache_use_stale       updating error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_cache_revalidate      on;
    proxy_cache_background_update       on;
    proxy_http_version          1.1;
    proxy_ignore_headers        X-Accel-Expires Expires Cache-Control;
    proxy_hide_header           Accept-Encoding;

    proxy_next_upstream         error timeout invalid_header http_403 http_404 http_502 http_503 http_504;
    proxy_set_header            Accept-Encoding none;
    proxy_set_header            Connection "keep-alive";
    proxy_set_header            Host       $host;
    proxy_set_header            X-Real-IP  $remote_addr;
    proxy_max_temp_file_size    0; # Get rid of [warn] 30083#30083: *511554 an upstream response is buffered to a temporary file /data/nginx_temp/0000054131 while reading upstream,
    proxy_buffers               64 8k; #http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers
    gzip                        on;
    gzip_proxied                off;

Then, in the relevant location block, we include this:

   set $cache cache-live;
   proxy_cache_valid 400 401 403 405 2s;
   proxy_cache_valid 404 412 500 501 502 503 504 505 509 0s;
   proxy_cache_valid any 5m;
   open_file_cache_valid   10m; #LIVE
   open_file_cache         max=200000 inactive=5m; #LIVE
   gzip              off;

Best Regards, Francesco
