﻿__group__	ticket	summary	component	version	type	owner	status	created	_changetime	_description	_reporter
	86	"the ""if"" directive has problems in location context"	nginx-core		defect	somebody	accepted	2012-01-17T08:58:29Z	2023-06-12T21:36:31Z	"To start, I'm doing tricky stuff, so please don't point out the weird things; stay focused on the issue at hand.
I'm mixing a configuration with userdir and symfony2 (http://wiki.nginx.org/Symfony) for a development environment, php is using php-fpm and a unix socket.
The userdir configuration is classic, all your files in `~user/public_html/` will be accessible through `http://server/~user/`.
I add to this the fact that if you create a folder `~user/public_html/symfony/` and put a symfony project in it (`~user/public_html/symfony/project/`) it will have the usual symfony configuration applied (rewrites and fastcgi path split).

Here is the configuration:
{{{
    # match 1:username, 2:project name, 3:the rest
    location ~ ^/~(.+?)/symfony/(.+?)/(.+)$ {
        alias /home/$1/public_html/symfony/$2/web/$3;
        if (-f $request_filename) {
            break;
        }
        # if no app.php or app_dev.php, redirect to app.php (prod)
        rewrite ^/~(.+?)/symfony(/.+?)/(.+)$ /~$1/symfony/$2/app.php/$3 last;
    }

    # match 1:username, 2:project name, 3:env (prod/dev), 4:trailing ('/' or
    # end)
    location ~ ^/~(.+?)/symfony(/.+)/(app|app_dev)\.php(/|$) {
        root /home/$1/public_html/symfony$2/web;
        # fake $request_filename
        set $req_filename /home/$1/public_html/symfony$2/web/$3.php;
        include fastcgi_params;
        fastcgi_split_path_info ^((?U).+\.php)(/?.+)$;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $req_filename;
        fastcgi_pass unix:/tmp/php-fpm.sock;
    }
}}}

The second block (PHP backend) works on its own. The first block (files direct access) works on its own.

You can see that I already had a problem with PHP but worked around it by creating my own variable.

To help understand, here is a sample symfony project layout (I removed some folders to aid comprehension):
{{{
project/
    src/
        [... my php code ...]
    web/
        app_dev.php
        app.php
        favicon.ico
}}}

If I try to access `http://server/~user/symfony/project/favicon.ico` I see this in the logs:
{{{
2012/01/17 16:36:25 [error] 27736#0: *1 open() ""/home/user/public_html/symfony/project/web/favicon.icoavicon.ico"" failed (2: No such file or directory), client: 10.11.60.36, server: server, request: ""HEAD /~user/symfony/project/favicon.ico HTTP/1.1"", host: ""server""
}}}

If I remove the block that tests `$request_filename`, it works but I have to remove the rewrite as well.

The server is CentOS 5.7 and nginx comes from the EPEL repository.

Unfortunately my C skills are through the floor, so I can't really provide a better understanding of the problem. I tried to poke around the code, but without much luck."	"s ""hr"" berder"
	97	try_files and alias problems	nginx-core		defect	somebody	accepted	2012-02-03T11:46:46Z	2021-06-06T08:18:34Z	"{{{
# bug: request to ""/test/x"" will try ""/tmp/x"" (good) and
# ""/tmp//test/y"" (bad?)
location /test/ {
    alias /tmp/;
    try_files $uri /test/y =404;
}
}}}

{{{
# bug: request to ""/test/x"" will fallback to ""fallback"" instead of ""/test/fallback""
location /test/ {
    alias /tmp/;
    try_files $uri /test/fallback?$args;
}
}}}

{{{ 
# bug: request to ""/test/x"" will try ""/tmp/x/test/x"" instead of ""/tmp/x""
location ~ /test/(.*) {
    alias /tmp/$1;
    try_files $uri =403;
}
}}}

Or document special case for regexp locations with alias? See 3711bb1336c3.

{{{
# bug: request ""/foo/test.gif"" will try ""/tmp//foo/test.gif""
location /foo/ {
    alias /tmp/;
    location ~ gif {
        try_files $uri =405;
    }
}
}}}"	Maxim Dounin
	348	Excessive urlencode in if-set	nginx-core		defect		accepted	2013-05-02T10:25:35Z	2022-02-10T16:23:49Z	"Hello,

I had set up Apache with mod_dav_svn behind nginx acting as a front-end proxy, and while committing a copied file with brackets ([]) in its filename into that Subversion repository I found a bug in nginx.

How to reproduce it (configuration file is as simple as possible while still causing the bug):

{{{
$ cat nginx.conf 
error_log  stderr debug;
pid nginx.pid;
events {
    worker_connections  1024;
}
http {
    access_log access.log;
    server {
        listen 8000;
        server_name localhost;
        location / {
            set $fixed_destination $http_destination;
            if ( $http_destination ~* ^(.*)$ )
            {
                set $fixed_destination $1;
            }
            proxy_set_header        Destination $fixed_destination;            
            proxy_pass http://127.0.0.1:8010;
        }
    }
}

$ nginx -p $PWD -c nginx.conf -g 'daemon off;'
...
}}}

In second terminal window:

{{{
$ nc -l 8010
}}}

In third terminal window:

{{{
$ curl --verbose --header 'Destination: http://localhost:4000/foo%5Bbar%5D.txt' '0:8000/%41.txt'
* About to connect() to 0 port 8000 (#0)
*   Trying 0.0.0.0...
* Adding handle: conn: 0x7fa91b00b600
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fa91b00b600) send_pipe: 1, recv_pipe: 0
* Connected to 0 (0.0.0.0) port 8000 (#0)
> GET /%41.txt HTTP/1.1
> User-Agent: curl/7.30.0
> Host: 0:8000
> Accept: */*
> Destination: http://localhost:4000/foo%5Bbar%5D.txt
> 
}}}

Back in the second terminal window:

{{{
($ nc -l 8010)
GET /%41.txt HTTP/1.0
Destination: http://localhost:4000/foo%255Bbar%255D.txt
Host: 127.0.0.1:8010
Connection: close
User-Agent: curl/7.30.0
Accept: */*
}}}

The **problem is** that the Destination header was changed from `...foo%5Bbar%5D.txt` to `...foo%255Bbar%255D.txt`. This happens only when

- that `if ( $http_destination ~* ^(.*)$ )` is processed
- and URL (HTTP GET URL, not that Destination URL) also contains urlencoded (%41) character(s).

In other cases (URL does not contain urlencoded character or that `if` is not matched) the Destination header is proxy_passed untouched, which is expected behavior.

------

Note: Why do I need that `if ( $http_destination ~* ^(.*)$ )`? In this example it is simplified, but for that Subversion setup I have mentioned I need to rewrite the Destination from https to http when nginx proxy_passes from https to Apache over http.
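
For context, the non-simplified variant presumably looks something like this (a sketch of the described https-to-http rewrite, not the exact production config):

{{{
# rewrite the Destination scheme from https to http before proxying
set $fixed_destination $http_destination;
if ( $http_destination ~* ^https://(.*)$ )
{
    set $fixed_destination http://$1;
}
proxy_set_header Destination $fixed_destination;
}}}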

This bug also happens on nginx/0.7.67 in Debian Squeeze."	Petr Messner
	564	map regex matching affects rewrite directive	nginx-core		defect		accepted	2014-05-28T17:19:40Z	2020-02-19T18:42:50Z	"Using a regex in the `map` directive changes the capture groups in a rewrite directive. This happens only if the regex in `map` is matched. A minimal example config:


{{{
http {
        map $http_accept_language $lang {
                default en;
                 ~(de) de;
        }
        server {
                server_name test.local
                listen 80;
                rewrite ^/(.*)$ http://example.com/$lang/$1 permanent;
        }
}
}}}

Expected:

{{{
$ curl -sI http://test.local/foo | grep Location
Location: http://example.com/en/foo
$ curl -H ""Accept-Language: de"" -sI http://test.local/foo | grep Location
Location: http://example.com/de/foo
}}}


Actual:

{{{
$ curl -sI http://test.local/foo | grep Location
Location: http://example.com/en/foo
$ curl -H ""Accept-Language: de"" -sI http://test.local/foo | grep Location
Location: http://example.com/de/de
}}}

If I leave out the parentheses in `~(de) de;` (so it becomes `~de de;`), `$1` is simply empty:

{{{
$ curl -H ""Accept-Language: de"" -sI http://test.local/foo | grep Location
Location: http://example.com/de/
}}}"	Pascal Jungblut
	756	Client disconnect in ngx_http_image_filter_module	nginx-module		defect		accepted	2015-04-29T05:27:31Z	2015-04-29T12:29:24Z	"I have encountered a bug in ngx_http_image_filter_module when used in conjunction with ngx_http_proxy_module; the configuration is as follows:

{{{
location /img/ {
	proxy_pass http://mybucket.s3.amazonaws.com;
	image_filter resize 150 100;
}
}}}

The steps to reproduce are rather complicated as they depend on how TCP fragments the response coming from the proxy:

- If http://mybucket.s3.amazonaws.com returns, in the first TCP packet, a sizable amount of data (1-2k), the image is resized correctly.

- If http://mybucket.s3.amazonaws.com returns, in the first TCP packet, a small amount of data (HTTP header, or HTTP header + a few bytes), the content is marked as not an image and NGX_HTTP_UNSUPPORTED_MEDIA_TYPE is returned (disconnecting the client), irrespective of whether subsequent data would complete the response to a valid image.

Nginx appears to give up on waiting for data right away if the contents of the first TCP packet received from the proxy do not contain a valid image header, i.e. ngx_http_image_test() will return NGX_HTTP_IMAGE_SIZE, etc.

In my experience this was triggered by a subtle change in AWS S3 that introduced further fragmentation of the TCP responses.

Versions affected: 1.6.2, 1.6.3, 1.7.2, 1.8.0, etc. (all?)

Attaching a 1.8.0 patch that resolves it; the other versions can be fixed similarly.

I think a better fix would be to ""return NGX_OK"" if we do not have enough data in ""case NGX_HTTP_IMAGE_START"", and ""return NGX_HTTP_UNSUPPORTED_MEDIA_TYPE"" (as per the original code) if enough data has been read but it is really not an image, but this exceeds the scope of the fix and my use case.

nginx-devel thread: http://mailman.nginx.org/pipermail/nginx-devel/2015-April/006876.html
"	Dan Podeanu
	220	Feature Request - Per-server proxy_connect_timeout	nginx-module		enhancement	somebody	new	2012-09-18T16:04:34Z	2012-09-18T16:04:34Z	"Imagine a simple upstream block as follows:

{{{
upstream name {
    server     1.1.1.1;
    server     2.2.2.2 backup;
}
}}}

It would be really useful to have the ability to set ""per server"" proxy_connect_timeout timeouts. The reasoning for this is pretty straightforward, but I will explain:

It would be VERY useful for 'long route' (e.g. trans-atlantic / trans-pacific) connections to far-away backends to be able to give a backup server more time than the standard proxy_connect_timeout."	riddla riddla
	239	Support for large (> 64k) FastCGI requests	nginx-module		enhancement	somebody	accepted	2012-10-30T14:45:17Z	2022-12-19T22:12:02Z	"Currently, a hardcoded limit produces an '[alert] fastcgi request record is too big:...' message on the error output when requests larger than 64k are attempted to be sent with Nginx.

The improvement would be to handle larger requests based on configuration, if possible.
Something similar to the work already done on output buffers would be nice.
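
For illustration, such a configuration could mirror the existing buffer directives; something like (invented syntax, by analogy with fastcgi_buffers, purely for the sake of example):

{{{
# hypothetical directive controlling the request record size:
fastcgi_request_buffers 16 64k;
}}}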

The only current workaround is not to use FastCGI, i.e. revert to Apache, for example, which is a huge step backwards..."	https://stackoverflow.com/users/573152/bernard-rosset
	267	Introduce static variables	nginx-core		enhancement		new	2012-12-30T09:23:24Z	2021-04-26T16:02:18Z	"A very common scenario is checking your config files into whatever VCS/DVCS you have available. Most people doing this also check in things like IP addresses or passwords just to have their deploy scripts push them to servers. This is non-optimal, since these values change more often than not. The alternatives are either to pre-process your config scripts (defining your own variable scheme) or to use set from the rewrite module. As much as I'd like to use the latter, knowing that it is evaluated on every request makes it very hard to swallow.

With this said, **I'd like to see a 'static' variable introduced which would be evaluated when parsing the config**. The suggested scenario would be to include a `variables.conf` in whatever config scope is needed, which replaces $variable with the value set in `variables.conf`. The syntax for doing this is somewhat irrelevant, but `set foo bar;` is generally accepted within the rewrite module. Whatever improvements (or conflict avoidance) to naming are appreciated.
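
A minimal sketch of the intended usage (hypothetical semantics; here `set` would be evaluated once at parse time, and the file name and address are just examples):

{{{
# variables.conf - evaluated once when the config is parsed, not per request:
#     set $backend_addr 10.0.0.5;

include variables.conf;

location / {
    proxy_pass http://$backend_addr;
}
}}}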

Hopefully, more people than me would appreciate such a feature in nginx core."	Johan Bergström
	287	Add option to enable IP_TRANSPARENT	nginx-core		enhancement		new	2013-01-21T18:31:39Z	2018-08-02T05:25:11Z	"For Nginx to be able to respond to packets redirected with the Linux netfilter TPROXY target, the IP_TRANSPARENT option should be enabled. It would be nice to have this in Nginx as an additional parameter to the listen directive.

I have a patch that implements this for http, and will attach it to this ticket. The patch is made against 1.2.6, but also applies on 1.3.11."	Stijn Tintel
	360	Feature wish proxy_ignore_client_abort = force	nginx-core		enhancement		new	2013-05-23T19:34:21Z	2013-05-23T19:34:21Z	"Hi,

I have installed nginx to cache youtube on my network. Now, I have a few users with a fast F5 finger who reload if a video takes too long to load, or who close the browser tab. With this behaviour I get up to 5 or 10 streams for the same video, and nginx doesn't cancel the download even if the user no longer wants to see the video.

Here is a part of my configuration:
{{{
        # WEBM (MP4) Youtube Caching
        server {
                listen          127.0.0.1:3131;
                server_name     localhost;
                access_log      /media/daten/system/logs/nginx_youtube_cache_access.log main;
                error_log       /media/daten/system/logs/nginx_youtube_cache_error.log debug;

                location / {
                        resolver                        8.8.8.8;
                        proxy_pass                      ""http://$host"";
#                       proxy_temp_path                 ""/media/daten/system/nginx/tmp"";
                        proxy_buffering                 on;
                        proxy_cache                     yt-cache;
                        proxy_cache_valid               404      1m;
                        proxy_cache_valid               200      40d;
                        proxy_cache_key                 ""$scheme$host/id=$arg_id.itag=$arg_itag.range=$arg_range.algo=$arg_algorithm"";
                        proxy_ignore_headers            ""Set-Cookie"" ""Cache-Control"" ""Expires"" ""X-Accel-Expires"" ;
                        proxy_ignore_client_abort       off;
                        proxy_method                    GET;
#                       proxy_set_header                X-YouTube-Cache ""none.none@noweb.de"";
                        proxy_set_header                Accept ""video/*"";
                        proxy_set_header                User-Agent ""YouTube Cacher WEBM (nginx)"";
                        proxy_set_header                Accept-Encoding """";
                        proxy_set_header                Accept-Language """";
                        proxy_set_header                Accept-Charset """";
                        proxy_set_header                Cache-Control """";
                }
        }
}}}
Currently I have configured it so that squid will only forward youtube video streams to nginx, but maybe in the future nginx could do all this stuff for me. ... Let's see.

Now it would be nice to set proxy_ignore_client_abort = force to cancel the download if the user disconnects. I've seen a message like this in my logfile: ""client prematurely closed connection while reading upstream"". Then it would be nice if nginx would also cancel the download and delete the tmp file.

Thank you and best regards from Bavaria :-)

"	Thomas Freller
	376	log file reopen should pass opened fd from master process	nginx-core		enhancement		accepted	2013-06-14T05:17:06Z	2023-03-21T13:51:48Z	"When starting nginx all the log files (error_log, access_log) are created and opened by the master process and the filehandles passed to the worker while forking.

On SIGUSR1 the master reopens the files, chowns them, and then the worker reopens the files itself. This has several drawbacks:

* It is inconsistent behaviour and rather surprising (sudden change of ownership upon signal). If you really want to do it this way you should chown the files from the very beginning.
* It '''permits the unprivileged nginx user read and write access''' to the current log files, which is bad from a security perspective, since the unprivileged user also needs to be able to change into/read the log directory

A better solution may be to reopen the log files in the master process as currently done and then use the already available ngx_{read,write}_channel functions to pass the new filehandles down to the worker."	Tiziano Müller
	394	gzip module doesn't handle certain HTTP verbs/statuses	nginx-module		enhancement		reopened	2013-08-09T10:56:38Z	2020-03-27T08:01:45Z	"I use nginx as a proxy for a CardDAV server. For CardDAV, HTTP verbs like PROPFIND and REPORT are used, and these verbs often return large textual content (text/vcard, text/xml), so compression would be very useful (especially when the DAV collections are synced with a mobile device).

However, HTTP responses from the CardDAV server are only gzip-ed when they have been requested using GET. I think this could have two reasons:
* the gzip module restricts the HTTP verbs and doesn't work when using PROPFIND or REPORT, and/or
* the gzip module restricts the HTTP status codes and doesn't compress responses with 207 Multi-Status.

I think this is not correct behaviour because every HTTP response entity may and should be compressed when the matching Accept-Encoding was set by the client (and the Content-Type in the server config is correct).

Steps to reproduce:
* Set up a CalDAV/CardDAV server and access it via nginx proxy.
* In nginx, set up gzip compression for proxy requests and the appropriate content types.
* Send a GET request with Accept-Encoding: gzip -> the 200 response is gzip-ed
* Send a PROPFIND request with Accept-Encoding: gzip -> the 207 response is *not* gzip-ed

Expected result:
All responses for all requests should be gzip-ed."	Andrea Mayer
	426	log entire header and cookie	nginx-module		enhancement		new	2013-10-18T16:16:18Z	2013-10-18T16:16:18Z	"http://nginx.2469901.n2.nabble.com/Log-entire-header-amp-cookie-td7388501.html

This above nabble link explains the use case.

Basically I have a client who is getting the 400 ""request headers too large"" error. Before I blindly increase our limits, I was hoping to know exactly what they were sending that was causing the issue. Unfortunately there doesn't seem to be a way for me to log all headers; I have to specify each header I want logged. I can't even be sure of the header they are sending; perhaps it is a malicious header that I'm unaware of.

If the above nabble just didn't mention that the feature was added, I apologize. I have been unable to find any documentation that leads me to believe the feature was added though."	Chris Cooper
	430	Allow variables in userid_domain	nginx-module		enhancement		new	2013-10-30T12:37:26Z	2013-10-30T12:37:26Z	"It would be useful to use variables when setting userid_domain. In my case we have a multidomain configuration, with multiple subdomains, and the cookie should be valid for the top domain.

Ex: The domain is foo.bar

We have these subdomains: www.foo.bar and images.foo.bar.
When I access www.foo.bar, userid sends me a cookie valid only for the ""www.foo.bar"" domain. Inside the index.html of www.foo.bar there are some <img ...> tags pointing to images.foo.bar. When the browser tries to get these images, it gets another userid cookie, because the domain has changed.

If I could tell userid_domain to set the cookie domain to ""foo.bar"", this problem would disappear. After all, it is the same user.
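
In other words, the wished-for configuration would be something like (hypothetical; variable support in userid_domain is the missing part):

{{{
# hypothetical - userid_domain does not expand variables today:
set $mydomain foo.bar;
userid_domain $mydomain;
}}}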

Now, if I use variables in userid_domain this is what happens:
In vhost config:
    set
    userid_domain $mydomain;

I get this Set-Cookie header:
    Set-Cookie: atr_ut=wKgBglJw9HQPZBr0AwMDAg==; expires=Wed, 29-Oct-14 11:58:44 GMT; domain=$mydomain; path=/

Thanks."	Jordi Clariana
	557	autoindex_show_hidden_files (autoindex option to show hidden files)	nginx-module		enhancement		new	2014-05-08T00:36:45Z	2014-09-10T01:52:35Z	"Attached is a small patch to the autoindex module adding an option to choose whether to show hidden files.

The option can be set as follows:

syntax:  autoindex_show_hidden_files on | off;
default: autoindex_show_hidden_files off;
context: http, server, location

"	Shin Sterneck
	761	auth_request does not support query string/arguments	nginx-module		enhancement		new	2015-05-13T10:09:46Z	2021-01-14T14:14:29Z	"Having this in the config:
{{{
auth_request /users/v1/auth?usergroup=devel;
}}}
[[BR]]

The debug log from nginx shows:
{{{
2013/01/01 01:52:49 [notice] 1607#0: *125 ""^/users/(.*)"" matches ""/users/v1/auth?usergroup=devel"", client: 10.9.96.2, server: localhost, request: ""GET /runner/v1/status HTTP/1.1"", subrequest: ""/users/v1/auth?usergroup=devel"", host: ""10.9.96.81""
2013/01/01 01:52:49 [notice] 1607#0: *125 rewritten data: ""/v1/auth?usergroup=devel"", args: """", client: 10.9.96.2, server: localhost, request: ""GET /runner/v1/status HTTP/1.1"", subrequest: ""/users/v1/auth?usergroup=devel"", host: ""10.9.96.81""
}}}

The strace on upstream shows:
{{{
recv(6, ""GET /v1/auth%3Fusergroup=devel H""..., 8192, 0) = 507
}}}
[[BR]]

As can be seen, the question mark separating the path and the query string got urlencoded, and the whole query string became part of the path.

Checking the code of auth_request, it seems the subrequest is made without taking care of the args: NULL is passed.
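
A common workaround (untested sketch; the location and upstream names are illustrative) is to point auth_request at an internal location and put the query string in an explicit proxy_pass URI:

{{{
location = /_auth {
    internal;
    proxy_pass_request_body off;
    proxy_set_header Content-Length """";
    # the query string survives here because it is part of the proxy_pass URI
    proxy_pass http://auth_backend/users/v1/auth?usergroup=devel;
}

auth_request /_auth;
}}}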
"	rustler2000.livejournal.com
	712	limit_conn and internal redirects	documentation		defect		accepted	2015-02-03T23:38:55Z	2020-11-04T18:27:42Z	"It seems that limit_conn is only checked at the beginning of the request processing and is ignored in other processing stages. This sometimes results in somewhat unanticipated behaviour when dealing with internal redirects.

Consider an example:

{{{
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    listen       80;
    server_name  site.com;

    index index.html;

    limit_conn addr 20; # first rule

    location / {
        limit_conn addr 10; # second rule
        root /var/www;
    }
}
}}}

Since any request ends up in the only defined location, one would expect that the second rule would always be used. However, only the first rule is applied if we request http://site.com (that is, without the relative reference part). If we move the index directive inside the location, though, the second rule will be used without exception.
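
That is, with the index directive moved inside the location, the second rule applies without exception:

{{{
    location / {
        limit_conn addr 10; # second rule
        root /var/www;
        index index.html;
    }
}}}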

This may not be exactly a bug, but if this behaviour is ""by design"", some additional explanation might be worth adding to the documentation.
"	Alex Habarov
	246	Don't install config files for unused modules	nginx-core		enhancement	Ruslan Ermilov	assigned	2012-11-12T20:57:15Z	2022-06-06T06:57:21Z	"Hello,

This patch fixes a slightly annoying behavior whereby ""make install""
causes configuration files for charset, fastcgi, uwsgi and scgi
modules to be installed, even if those modules have been excluded from
the build.

Thanks!
Brian Waters"	Brian Waters
0.8.x	55	Opera version is detected incorrectly	nginx-module	0.8.x	defect	somebody	accepted	2011-11-19T03:03:55Z	2022-02-13T12:18:28Z	"In recent versions of the Opera browser the user-agent looks like this:
Opera/9.80 (Windows NT 6.1; U; MRA 5.8 (build 4661); ru) Presto/2.8.131 Version/11.11
That is, the version is indicated by Version/11.11 and not by Opera/9.80.
In the ngx_http_browser_module it is detected like this:
   { ""opera"",
    0,
    sizeof(""Opera "") - 1,
    ""Opera""},
Replacing this with
   { ""opera"",
    sizeof(""Opera "") - 1,
    sizeof(""Version/"") - 1,
    ""Version/""},
detects new versions correctly, but old versions will be a problem"	ilzhan.ya.ru
1.0.x	68	Include larger speed units for HttpLimitReqModule	nginx-module	1.0.x	enhancement	somebody	new	2011-12-14T06:35:25Z	2018-12-12T15:08:20Z	"I have a huge problem on my site with site scrapers, spam and spider bots. While having per-second and per-minute request limits helps mitigate a DDoS, it doesn't help with scrapers & spiders.

Most of these bots fetch pages at a similar rate to regular users, i.e. 2-3 per minute. The difference between these bots and users is the consistency over time; they can fetch hundreds or thousands of pages a day. I can't use iptables either, because they usually use one connection, and sometimes users with real browsers establish more connections than these robots.

My request is to extend the allowed rate units in the limit_req_zone directive to something larger than per minute, i.e. per hour or per day."	incognito2.myopenid.com
1.0.x	704	Nginx configure script can't detect groups reliably	nginx-core	1.0.x	defect		new	2015-01-26T19:38:10Z	2015-01-26T19:38:10Z	"As part of auto/unix, the configure script attempts to detect a ""nobody"" group by doing ""grep nobody /etc/group"".

However, if you have a nobody user in a different group, but no nobody group present on your box, then this test will pass incorrectly. For example, consider the following:


{{{
/etc/passwd:
nobody:x:65534:65534:Nobody:/dev/null:/sbin/nologin

/etc/group:
users:x:1000:nobody
}}}

In this situation, the configure script will test for a group called ""nobody"", but because the grep isn't anchored to the start of the line and doesn't include any context (i.e. the trailing colon), the ""users..."" line matches and nginx will compile with ""nobody"" as its default group.
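
A check anchored to the start of the line and including the trailing colon avoids the false match; demonstrated here against the sample /etc/group line from above:

{{{
# unanchored: false positive on the ""users"" line
$ printf 'users:x:1000:nobody\n' | grep nobody
users:x:1000:nobody
# anchored to the line start with the trailing colon: no false positive
$ printf 'users:x:1000:nobody\n' | grep '^nobody:'
$
}}}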

If this binary is then run without a ""user"" directive in its config file, nginx will fail to start because it can't drop privileges to the ""nobody"" group since it doesn't exist."	Gavin Chappell
1.0.x	52	urlencode/urldecode needed in rewrite and other places	nginx-module	1.0.x	enhancement	somebody	accepted	2011-11-13T19:45:17Z	2023-07-18T14:04:14Z	"If $http_accept contains spaces, they are passed on without encoding:

rewrite ^ /cgi-bin/index.pl?_requri=$uri&_accept=$http_accept break;
...
proxy_pass http://127.0.0.1:82; # mini-httpd listening"	joni-jones.ya.ru
1.0.x	88	HttpRewriteModule - Feature Request - enhanced control structures	nginx-module	1.0.x	enhancement	somebody	new	2012-01-23T17:32:13Z	2023-07-18T14:03:45Z	"I would love to be able to do

{{{
if (condition1 or condition2) {
}
}}}

and/or

{{{
if () {
} else {
}
}}}

and/or

{{{
if () {
} else if {
} else {
}
}}}



"	Will Rowe
1.0.x	129	include_shell directive	nginx-core	1.0.x	enhancement	somebody	new	2012-03-20T13:49:31Z	2012-03-20T13:49:31Z	"[http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:Configuration Lighttpd] has an include_shell directive which reads configuration from output of a command.

This would be a very useful feature to have in nginx too."	poly
1.0.x	165	Nginx worker processes don't seem to have the right group permissions	nginx-core	1.0.x	enhancement	somebody	accepted	2012-05-11T01:44:00Z	2014-10-20T22:09:46Z	"Package: nginx
Version: 1.2.0-1~squeeze (from Nginx repository, Debian version)

When a UNIX domain socket's permissions are set to allow the primary group of the nginx worker processes to read/write on it, the nginx worker processes fail to access it, with a 'permission denied' error logged.

Way to reproduce it: bind nginx to a PHP-FPM UNIX domain socket.

PHP-FPM socket configured as follows:
- User: www-data
- Group: www-data
- Mode: 0660

Nginx configured as follows:
- Worker processes spawned with the user 'nginx'
- User 'nginx' has 'www-data' as primary group

Details on the configuration can be found here: http://forum.nginx.org/read.php?2,226182

It would also be nice to check that any group of the nginx worker processes can be used for setting access permissions on sockets, not only the primary one."	https://stackoverflow.com/users/573152/bernard-rosset
1.0.x	225	Please support nested if statements with SSI	nginx-module	1.0.x	enhancement	somebody	new	2012-09-25T12:20:28Z	2013-08-26T11:49:11Z	I'm in the process of moving a site from an old server using apache to a new one using nginx. There is however one major problem, namely that the apache httpd mod_include allows nested #if statements. Would it be possible to enhance the nginx SSI module to support this feature, please?	launchpad.net/~blomqvist-janne
1.10.x	1010	Invalid request sent when serving error pages from upstream	nginx-core	1.10.x	defect		new	2016-06-30T09:56:20Z	2016-06-30T09:56:20Z	"The bug appears when proxy_request_buffering is off (but maybe not only then), and error pages are set to be served from an upstream using an internal redirect.

If a problem happens while the request body is read/sent to the upstream in a non-buffered fashion, and nginx tries to serve an error page, it rewrites the HTTP method from POST to GET, but keeps the old value of the Content-Length header when trying to serve the error page from the upstream.

This is done here: https://trac.nginx.org/nginx/browser/nginx/src/http/ngx_http_special_response.c#L575

For example:
===
POST /do HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 100

<body>
===
gets transformed into
===
GET /5xx.html HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 100

===

No body data is sent in the error page upstream request, even though it is declared in the header, which can cause the upstream server to wait for it. This is seen by the client as the request hanging until the configured upstream timeout.

Can this be fixed by clearing any Content-Length or Transfer-Encoding headers when error pages are served?"	sorin-manole@…
1.10.x	1085	multiple calls to make install from a read-only source fails to copy config files	other	1.10.x	defect		new	2016-09-28T08:43:55Z	2016-09-28T08:43:55Z	"Let's say you have the source of nginx and it is read-only.
Doing ""make install"" would preserve the read-only attributes. Then calling ""make install"" again would fail with ""Permission denied"", because cp would not overwrite read-only files.
I feel the correct behavior is for the install to succeed regardless of the read-only attributes of the source files.
"	stackoverflow.com/users/324204/sogartar
1.10.x	1238	Core dump when $limit_rate is set both in a map and in a location	nginx-core	1.10.x	defect		accepted	2017-04-06T17:00:21Z	2021-09-24T15:24:03Z	"This is a minimal server configuration used to reproduce the problem (only the map & server section, the rest is the default configuration from nginx.org centos 7 nginx-1.10.3 package).

{{{
map $arg_test $limit_rate {
        default 128k;
        test 4k;
}

server {
        listen 8080;
        location / {
                root /var/www;
                set $limit_rate 4k;
        }
}
}}}

If a request to an affected location is made, nginx crashes with the following stack.

{{{
Program terminated with signal 7, Bus error.
#0  ngx_http_variable_request_set_size (r=0x7fb5c2761650, v=<optimized out>, data=140418628385320) at src/http/ngx_http_variables.c:730
730	    *sp = s;

(gdb) thread apply all bt

Thread 1 (Thread 0x7fb5c1237840 (LWP 2648)):
#0  ngx_http_variable_request_set_size (r=0x7fb5c2761650, v=<optimized out>, data=140418628385320) at src/http/ngx_http_variables.c:730
#1  0x00007fb5c12e992d in ngx_http_rewrite_handler (r=0x7fb5c2761650) at src/http/modules/ngx_http_rewrite_module.c:180
#2  0x00007fb5c12a669c in ngx_http_core_rewrite_phase (r=0x7fb5c2761650, ph=<optimized out>) at src/http/ngx_http_core_module.c:901
#3  0x00007fb5c12a1b3d in ngx_http_core_run_phases (r=r@entry=0x7fb5c2761650) at src/http/ngx_http_core_module.c:847
#4  0x00007fb5c12a1c3a in ngx_http_handler (r=r@entry=0x7fb5c2761650) at src/http/ngx_http_core_module.c:830
#5  0x00007fb5c12ad0de in ngx_http_process_request (r=0x7fb5c2761650) at src/http/ngx_http_request.c:1910
#6  0x00007fb5c12ad952 in ngx_http_process_request_line (rev=0x7fb5c27bae10) at src/http/ngx_http_request.c:1022
#7  0x00007fb5c128de60 in ngx_event_process_posted (cycle=cycle@entry=0x7fb5c2745930, posted=0x7fb5c1575290 <ngx_posted_events>) at src/event/ngx_event_posted.c:33
#8  0x00007fb5c128d9d7 in ngx_process_events_and_timers (cycle=cycle@entry=0x7fb5c2745930) at src/event/ngx_event.c:259
#9  0x00007fb5c12944f0 in ngx_worker_process_cycle (cycle=cycle@entry=0x7fb5c2745930, data=data@entry=0x1) at src/os/unix/ngx_process_cycle.c:753
#10 0x00007fb5c1292e66 in ngx_spawn_process (cycle=cycle@entry=0x7fb5c2745930, proc=proc@entry=0x7fb5c1294460 <ngx_worker_process_cycle>, data=data@entry=0x1, 
    name=name@entry=0x7fb5c131c197 ""worker process"", respawn=respawn@entry=-3) at src/os/unix/ngx_process.c:198
#11 0x00007fb5c12946f0 in ngx_start_worker_processes (cycle=cycle@entry=0x7fb5c2745930, n=2, type=type@entry=-3) at src/os/unix/ngx_process_cycle.c:358
#12 0x00007fb5c1295283 in ngx_master_process_cycle (cycle=cycle@entry=0x7fb5c2745930) at src/os/unix/ngx_process_cycle.c:130
#13 0x00007fb5c127039d in main (argc=<optimized out>, argv=<optimized out>) at src/core/nginx.c:367
}}}"	sklochkov@…
1.10.x	1285	map regexp positional captures interfere with location regexp positional captures	documentation	1.10.x	defect		new	2017-05-31T07:28:06Z	2023-07-05T20:55:31Z	"I have map configuration like this:
    map $uri $rewrite_scheme {
        default              ""http"";
        ~*test ""https"";
    }

And location configuration like this:
        location ~ ^/__proxy_host/(host1|host2)/(.*)$ {
            proxy_pass $rewrite_scheme://$1/$2$is_args$args;
        }

Before I added the map configuration, the location worked as expected, but after adding the map all positional captures inside the location became empty. This is probably because the map is evaluated when the variable is referenced, which occurs after the location match.
If this is by design, it would be nice to document the behavior, as it is very hard to investigate.

As a workaround, named captures can be used like this:
        location ~ ^/__proxy_host/(?<rewrite_host>host1|host2)/(?<rewrite_path>.*)$ {
            proxy_pass $rewrite_scheme://$rewrite_host/$rewrite_path$is_args$args;
        }"	dimaslv@…
1.10.x	1521	Enabling open_file_cache may cause the index module to return 403 Forbidden	nginx-module	1.10.x	defect		new	2018-04-11T06:52:29Z	2018-04-11T16:37:24Z	"After enabling open_file_cache, the nginx index module returns 403 Forbidden directly for index files that cannot be read by nginx.
This does not happen when open_file_cache is disabled, since nginx does not need to read those files: it just passes the file path to the right upstream once it can stat the file. All it needs is read permission on the index file's directory, not on the index file itself.

Steps to reproduce:
1. Let the nginx worker run as user http
2. Let php-fpm run as user php
3. Create an index.php under the document root that can only be read by php, not by http
4. Enable open_file_cache, curl localhost, and you will find it returns 403
5. Disable open_file_cache, curl localhost, and you will find it works perfectly
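
A minimal configuration sketch for the reproduction above (the document root, listen port and the php-fpm socket path are assumptions, not taken from the ticket):

{{{
user http;                          # workers run as "http"; php-fpm is assumed to run as "php"

http {
    server {
        listen 80;
        root /srv/www;              # assumed docroot containing index.php owned by php, mode 0600
        index index.php;

        # with this enabled, the index module open()s index.php, gets EACCES, and returns 403
        open_file_cache max=1000 inactive=20s;

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/run/php-fpm.sock;   # assumed socket path
        }
    }
}
}}}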

After tracing ngx_http_index_module.c, I think the problem is that when open_file_cache is disabled, ngx_open_cached_file (called around ngx_http_index_module.c line 217) only stats the file. But with open_file_cache enabled, ngx_open_cached_file tries to open the file to get an fd, and then gets NGX_EACCES.

A workaround might be to call ngx_open_cached_file again as ngx_open_cached_file(NULL, &path, &of, r->pool) after the first call returns NGX_EACCES.

By the way, this problem also happens in nginx 1.13.9
1.10.x	1025	No country detected for requests with X-Forwarded-For 127.0.0.1 or any reserved IP address	nginx-module	1.10.x	enhancement		new	2016-07-15T10:43:40Z	2016-07-15T15:45:35Z	"I use ngx_http_geoip_module to detect origin country of every request including requests behind public proxy servers.

{{{
geoip_country /usr/local/etc/nginx/geobase/GeoIP-106_20160712.dat;
geoip_proxy_recursive on; # Use X-Forwarded-For
geoip_proxy 0.0.0.0/0;
}}}

If the X-Forwarded-For header contains an IP address from a reserved range (https://en.wikipedia.org/wiki/Reserved_IP_addresses), then

'''$geoip_country_code is empty """"'''

For example: X-Forwarded-For: 127.0.0.1

The GeoIP database has empty values for these IP ranges.
The GeoIP2 database has no values at all.

I think the nginx geoip module should query the geoip database again using REMOTE_ADDR if no country value is found using the X-Forwarded-For value.

How to repeat
test.php
{{{
<?
var_dump($_SERVER['HTTP_X_FORWARDED_FOR']);
var_dump($_SERVER['GEOIP_COUNTRY']);
?>
}}}

{{{
curl --header ""X-Forwarded-For: 127.0.0.1"" http://localhost/test.php
}}}

An alternative solution may be to put all valid (non-reserved) IP ranges into geoip_proxy, like:
{{{
geoip_proxy 1.0.0.0/8;
geoip_proxy 2.0.0.0/7;
geoip_proxy 4.0.0.0/6;
geoip_proxy 8.0.0.0/7;
geoip_proxy 11.0.0.0/8;
geoip_proxy 12.0.0.0/6;
geoip_proxy 16.0.0.0/4;
geoip_proxy 32.0.0.0/3;
geoip_proxy 64.0.0.0/3;
geoip_proxy 96.0.0.0/6;
geoip_proxy 100.0.0.0/10;
geoip_proxy 100.128.0.0/9;
geoip_proxy 101.0.0.0/8;
geoip_proxy 102.0.0.0/7;
geoip_proxy 104.0.0.0/5;
geoip_proxy 112.0.0.0/5;
geoip_proxy 120.0.0.0/6;
geoip_proxy 124.0.0.0/7;
geoip_proxy 126.0.0.0/8;
geoip_proxy 128.0.0.0/1;
}}}
but it may lower performance."	romamo@…
1.10.x	1114	New variable suggestion (Date/Time)	other	1.10.x	enhancement		new	2016-10-18T20:54:16Z	2016-10-18T20:54:16Z	"Is it possible to add a new variable to nginx that produces a GMT date in RFC 1123 format, for those of us who would like to manually set a ""Last-Modified"" header (and others) in our configuration files?

The current $date_gmt produces an RFC 850 (HTTP/1.0) date/time format, which is now obsolete.
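
For illustration, the desired usage might look like this; `$date_rfc1123` is a hypothetical variable name, not an existing nginx variable:

{{{
# hypothetical: $date_rfc1123 would render a date like Tue, 18 Oct 2016 20:54:16 GMT
add_header Last-Modified $date_rfc1123;
}}}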

All (date/time) headers require the new updated format."	mayhem30@…
1.10.x	1145	"Can't set redirection port to the port from the ""Host"" request header field"	nginx-module	1.10.x	enhancement		new	2016-11-26T19:56:58Z	2018-04-15T04:26:24Z	"Currently you can't set the redirection port to the one specified by the HTTP Host header field. There should be another possible value for {{{port_in_redirect}}} that does exactly this.

Example: The client requests path /test from the host example.com:4545. The server now responds with a redirection to the location http://example.com:4545/test2
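
For comparison, a sketch of the existing directives with their default values; the requested new mode (use the port from the Host header) does not exist yet:

{{{
server_name_in_redirect off;   # default: use the name from the Host request header
port_in_redirect        on;    # default: use the port from the listen directive; off omits the port
}}}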

Also, compare {{{server_name_in_redirect}}} and {{{port_in_redirect}}}; strangely, they do very different things although their names are similar:

* {{{server_name_in_redirect on}}} means the server name specified by the server_name directive is used
* {{{server_name_in_redirect off}}} means the server name specified in the ""Host"" request header field is used
but
* {{{port_in_redirect on}}} means the port specified by the listen directive is used
* {{{port_in_redirect off}}} means that no port is sent

This is not intuitive and also misleading, because {{{server_name_in_redirect off}}} actually sends a server name in the redirection packet."	exap@…
1.10.x	1154	Passing URG flag via nginx	nginx-module	1.10.x	enhancement		new	2016-12-12T10:10:24Z	2017-02-10T17:22:00Z	"I have a problem with URG flag and passing it via nginx. I use stream module for TCP connections.
Configuration for nginx:
{{{
stream {

    server {
        listen 6002;
        proxy_pass 127.0.0.1:8000;
        proxy_timeout 10;
    }

}
}}}

tcpdumps:
from sender:
{{{
sudo tcpdump  -i any dst port 6002
10:24:08.532935 IP 172.20.9.82.54296 > t40487.te4.local.x11-2: Flags [S], seq 1000, win 8192, length 0
10:24:08.609577 IP 172.20.9.82.54296 > t40487.te4.local.x11-2: Flags [.], ack 1034154399, win 8192, length 0
10:24:08.680219 IP 172.20.9.82.54296 > t40487.te4.local.x11-2: Flags [U], seq 1003:1005, win 8192, urg 0, length 2
}}}

on nginx host (collect everything from network):
{{{
tcpdump -i any src host  172.20.9.82 and not dst port 22
10:24:08.678898 IP 172.20.9.82.54296 > t40487.te4.local.6002: Flags [.], ack 2008903947, win 8192, length 0
10:24:08.749167 IP 172.20.9.82.54296 > t40487.te4.local.6002: Flags [U], seq 2816206677:2816206679, win 8192, urg 0, length 2
}}}
on application (check only application port):
{{{
tcpdump -i any dst port 8000
10:25:38.704441 IP localhost.52258 > localhost.irdmi: Flags [S], seq 361184170, win 65495, options [mss 65495,sackOK,TS val 5902248 ecr 0,nop,wscale 7], length 0
10:25:38.704471 IP localhost.52258 > localhost.irdmi: Flags [.], ack 1535641746, win 512, options [nop,nop,TS val 5902248 ecr 5902248], length 0
}}}

As you can see, the URG flag is not visible to the application.
I think nginx keeps the packet with the URG flag for itself :-)

Is it possible to pass it through to the application?"	jjagodzinski@…
1.10.x	1215	Add support for SHA2 (SHA3?) family for RFC2307 passwords for HTTP Basic authentication	nginx-module	1.10.x	enhancement		new	2017-03-10T16:17:42Z	2019-01-31T18:06:16Z	"The [https://nginx.org/en/docs/http/ngx_http_auth_basic_module.html#auth_basic_user_file auth_basic_user_file] docs state only `PLAIN`, `SHA` & `SSHA` schemes are supported for [https://tools.ietf.org/html/rfc2307 RFC 2307]-formatted passwords.

As the docs also warn, `SHA` should be avoided (you could actually issue the same warning for `SSHA`).

It would be best if this directive supported at least the password schemes that are considered safe, rather than merely outdated ones, for example `SSHA512`."	https://stackoverflow.com/users/573152/bernard-rosset
1.10.x	1230	proxy_next_upstream: Add a config to add other errors	other	1.10.x	enhancement		new	2017-03-30T15:48:16Z	2017-04-03T15:51:26Z	"Hi

I need to set proxy_next_upstream on ""410 Gone"" errors, but nginx does not support it.
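
A sketch of the requested configuration; the http_410 parameter is hypothetical and not currently accepted by nginx, and the upstream name is illustrative:

{{{
location / {
    proxy_pass http://backends;                    # assumed upstream name
    proxy_next_upstream error timeout http_410;    # http_410 is the requested addition
}
}}}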

Please add http_410 or, even better, a way to add custom error codes to proxy_next_upstream; that way you cover all future error codes and increase nginx's flexibility"	higuita
1.10.x	1262	connect_(timeout|error) option in proxy_next_upstream	nginx-module	1.10.x	enhancement		new	2017-05-01T05:57:02Z	2017-05-01T05:57:02Z	"Currently proxy_next_upstream supports only ""error"" or ""timeout"", that are [https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream defined] as:

error
   an error occurred '''while establishing a connection with the server, passing a request to it, or reading the response header''';
timeout
   a timeout has occurred '''while establishing a connection with the server, passing a request to it, or reading the response header''';

both conditions are pretty broad: it would be very helpful to have better control over which phase the error/timeout happens in. In one of our use cases it would be very useful to limit the failover to the connection-establishment phase, i.e. something along these lines:

connect_error
   an error occurred '''while establishing a connection with the server''';
connect_timeout
   a timeout has occurred '''while establishing a connection with the server''';
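
The proposed parameters might be used like this (hypothetical syntax, not currently supported by nginx):

{{{
# fail over to the next upstream only on connection-phase failures
proxy_next_upstream connect_error connect_timeout;
}}}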

We're running nginx 1.10.x on linux 4.4.x, but even in the current 1.13.x this is still not supported."	CAFxX@…
1.10.x	1293	nginx http proxy stops sending request data after first byte of server response is received	nginx-module	1.10.x	enhancement		new	2017-06-13T19:11:44Z	2017-06-14T03:49:00Z	"I have an upstream service that accepts large input files via PUT method. The server sends the first part of its response (response code + headers) before the client has completely sent all of its request data. Questionable design choices aside, I believe that this is within the RFC2616 specification:

{{{
If an origin server receives a request that does not include an
Expect request-header field with the ""100-continue"" expectation,
the request includes a request body, and the server responds
with a final status code before reading the entire request body
from the transport connection, then the server SHOULD NOT close
the transport connection until it has read the entire request,
or until the client closes the connection. Otherwise, the client
might not reliably receive the response message. However, this
requirement is not be construed as preventing a server from
defending itself against denial-of-service attacks, or from
badly broken client implementations.
}}}

Because this happens on the back-end, and there may be some client-side buffering still going on, it is a little difficult to observe. Using a mix of pcaps and gdb on nginx, I can see that it stops sending data at pretty much the exact time that the server sends its response:

{{{
curl -k -vvv --header 'Accept: application/json' -i --max-time 3600 --header 'X-Detect-Content-Type: true' --header 'Content-Type: application/x-gzip' --noproxy '*' -u xxx:yyy -T ./input.tar.gz -XPUT 'http://localhost:8001/qq/xxxxxxxxxxxxxxxxxxxxx/yyyyyyyyyyyy/?zzzzzzzzzzzzzzzzzzzzzz'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 -::- -::- -::- 0* About to connect() to slcao604 port 8001 (#0)

    Trying 127.0.0.1...
    Connected to localhost (127.0.0.1) port 8001 (#0)
    Server auth using Basic with user 'xxx'
    > PUT /qq/xxxxxxxxxxxxxxxxxxxxx/yyyyyyyyyyyy/?zzzzzzzzzzzzzzzzzzzzzz HTTP/1.1
    > Authorization: Basic ..............................................==
    > User-Agent: curl/7.29.0
    > Host: localhost:8001
    > Accept: application/json
    > X-Detect-Content-Type: true
    > Content-Type: application/x-gzip
    > Content-Length: 6900473
    > Expect: 100-continue
    >
    < HTTP/1.1 100 Continue
    } [data not shown]
    3 6738k 0 0 3 224k 0 1992 0:57:44 0:01:55 0:55:49 0
}}}

A gdb session of nginx shows that the functions that write data to the upstream are firing as the transfer progresses:

{{{
Breakpoint 7, ngx_http_upstream_send_request_body (do_write=1, u=0x20459f0, r=0x2044650)
at src/http/ngx_http_upstream.c:1959
1959 rc = ngx_http_read_unbuffered_request_body(r);
(gdb) continue
Continuing.

Breakpoint 7, ngx_http_upstream_send_request_body (do_write=1, u=0x20459f0, r=0x2044650)
at src/http/ngx_http_upstream.c:1959
1959 rc = ngx_http_read_unbuffered_request_body(r);
(gdb) continue
Continuing.

Breakpoint 7, ngx_http_upstream_send_request_body (do_write=1, u=0x20459f0, r=0x2044650)
at src/http/ngx_http_upstream.c:1959
1959 rc = ngx_http_read_unbuffered_request_body(r);
(gdb) continue
Continuing.
}}}

As soon as the response comes back from the server, the curl data transfer suddenly stalls out:

{{{
< HTTP/1.1 200 OK
< Server: nginx/1.10.2
< Date: Tue, 13 Jun 2017 00:38:04 GMT
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
<
{ [data not shown]
3 6738k 0 4 3 224k 0 127 15:05:34 0:29:58 14:35:36 0
}}}

In the gdb session, I see do_write go to zero directly after this:

{{{
Breakpoint 7, ngx_http_upstream_send_request_body (do_write=0, u=0x20459f0, r=0x2044650)
at src/http/ngx_http_upstream.c:1959
1959 rc = ngx_http_read_unbuffered_request_body(r);
(gdb) continue
Continuing.

Breakpoint 7, ngx_http_upstream_send_request_body (do_write=0, u=0x20459f0, r=0x2044650)
at src/http/ngx_http_upstream.c:1959
1959 rc = ngx_http_read_unbuffered_request_body(r);
(gdb) continue
}}}

Tuning the lingering_close option does not appear to resolve this issue. Setting lingering_close to ""on"" causes the request data to pause immediately after the 200 OK response first byte is received, and then flush it all out after the upstream server has finished sending its entire response body.
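
For reference, a sketch of the settings tried while debugging, with the observed effects as comments (listen port as in the config below):

{{{
server {
    listen 8001;
    lingering_close off;   # observed: request data stalls as soon as the early response arrives
    # lingering_close on;  # observed: data pauses, then flushes after the full response body
}
}}}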

I have done most of my testing against nginx 1.10.2, but I tried 1.13.1, and it seemed to exhibit the same behavior.

I am attaching a debug-level error.log from one of the transfers. I think that the interesting part starts around line 1083, when the server's response comes back.

My nginx.conf:

{{{
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log debug;
pid        /var/run/nginx.pid;


events {
    worker_connections  4096;
}


http {
    vhost_traffic_status_zone;
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format logstash '$remote_addr - $remote_user [$time_local] '
        '""$request"" $status $body_bytes_sent $bytes_sent '
        '""$http_referer"" ""$http_user_agent"" '
        '$request_time $request_length '
        '$upstream_addr '
        '$upstream_response_time '
        '$upstream_http_x_trans_id '
        '$http_x_forwarded_for '
        '$ssl_protocol/$ssl_cipher';
    access_log /var/log/nginx/access.log logstash;

    #sendfile        on;

    keepalive_timeout  65;

    # throttling config
    # end of throttling config

    include /etc/nginx/conf.d/*.conf;

}
}}}

My /etc/nginx/conf.d/proxy.conf:

{{{
upstream storagebackend {
        server node1.example.com:17999 weight=10;

        check interval=3000 rise=2 fall=3 timeout=1000 type=http;
        check_http_send ""GET /healthcheck HTTP/1.1\r\nHost: localhost\r\n\r\n"";
        check_http_expect_alive http_2xx;
}

server {
        listen 8001;
        server_name vip.example.com;
        client_body_timeout 900s;
        client_max_body_size 0;
        client_header_timeout 900s;
        # Testtttttttttttttt
        lingering_close off;

        location / {
                proxy_pass http://storagebackend;
                proxy_http_version 1.1;
                proxy_request_buffering off;
                proxy_buffering off;
                proxy_max_temp_file_size 0;
                proxy_read_timeout 900s;
                proxy_send_timeout 900s;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                allow 127.0.0.1;
        }
}

server {
    listen 9990;
    server_name stats;
    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
    }
}
}}}"	swimfrog@…
1.11.x	1162	Adding HTTP Forward Proxy support in core like Apache	nginx-core	1.11.x	enhancement		new	2016-12-22T17:21:10Z	2016-12-22T17:21:10Z	"To completely replace Apache on its major features, it would be important to implement a forward proxy feature in the nginx core (not just the proxy_pass hack, but also supporting the CONNECT method and proxy authentication).

OK, you can use nginx as a forward proxy by using proxy_pass with variables...
https://ef.gy/using-nginx-as-a-proxy-server

But it does not support the CONNECT method natively...

And you cannot use it with proxy authentication (407 vs 401 HTTP codes) natively...

like :
- https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxyrequests
- https://httpd.apache.org/docs/2.4/mod/mod_proxy_connect.html

And 

- http://stackoverflow.com/questions/7577917/how-does-a-http-proxy-utilize-the-http-protocol-a-proxy-rfc


To do that, I think the CONNECT method should be added to the core, like https://github.com/chobits/ngx_http_proxy_connect_module does...

And some modification to the http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html module to use 407 in place of 401 in the response, plus reading authentication from the `Proxy-Authorization` header instead of the `Authorization` header...

Plus an option like ProxyRequests to activate interpretation of requests like:
{{{
GET http://www.baidu.com/ HTTP/1.1
Host: www.baidu.com
Proxy-Connection: keep-alive
}}}

Like `proxy_pass http://$http_host$uri$is_args$args;`
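
A rough sketch of that variable-based workaround (no CONNECT support; the resolver address and listen port are assumptions):

{{{
server {
    listen 3128;
    resolver 8.8.8.8;    # assumed DNS resolver; required when proxy_pass contains variables
    location / {
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
}}}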

And correctly interpret the `Proxy-Connection: keep-alive` header..."	Mathieu CARBONNEAUX
1.11.x	1058	undocumented redirect?	documentation	1.11.x	defect		accepted	2016-08-24T09:54:57Z	2016-08-24T12:20:54Z	"When a URL is requested without a trailing slash, a 301 redirect to the same URL with a trailing slash always occurs.

example config:
location /dir {
                alias   /www/dir;
}

the same thing happens with this variant:
location /dir/ {
                alias   /www/dir/;
}


However, the documentation seems to describe this behavior only for locations with *_pass directives; maybe I looked in the wrong place, but this is all I found:

''If a location is defined by a prefix string that ends with the slash character, and requests are processed by proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass or memcached_pass, special processing is performed. In response to a request with URI equal to this string, but without the trailing slash, a permanent redirect with the code 301 will be returned to the URI with the slash appended.''

a complete configuration example

        location /ig {
                alias   /www/ig_build;
        }

$curl -I http://localhost:90/ig/infografika
HTTP/1.1 301 Moved Permanently
Server: nginx/1.11.3
Date: Wed, 24 Aug 2016 09:52:10 GMT
Content-Type: text/html
Content-Length: 185
Location: http://localhost:90/ig/infografika/
Connection: keep-alive


I also tested on version 1.4.2; same behavior.

If the directory does not exist, 404 is returned immediately, but if it exists and the request had no trailing slash, the redirect occurs.
"	roman.golova@…
1.11.x	1059	syntax check error when an upstream is used in proxy_pass using both http and https and is defined after	nginx-core	1.11.x	defect		new	2016-08-25T16:17:57Z	2016-11-08T11:33:39Z	"First case, upstream is defined before (alphabetically) its usage in proxy_pass, both in http and https:

[root@TEST_VPNA conf.d]# cat a_1.conf
    upstream backend  {
        server 10.3.1.110:8443;
    }
[root@TEST_VPNA conf.d]# cat a_2.conf
   server {
        listen                               443 ssl;
        server_name                          TRUC.domain.com;

        location /one {
            proxy_pass                       http://backend;
        }
        location /two {
            proxy_pass                       https://backend;
        }
 }

[root@TEST_VPNA conf.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Now, if the upstream definition is read afterwards, nginx tries to resolve the upstream name:

[root@TEST_VPNA conf.d]# mv a_1.conf a_3.conf

[root@TEST_VPNA conf.d]# nginx -t
nginx: [emerg] host not found in upstream ""backend"" in /etc/nginx/conf.d/a_2.conf:9
nginx: configuration file /etc/nginx/nginx.conf test failed


Note that there is no problem if upstream is used only in https (or http):

[root@TEST_VPNA conf.d]# cat a_2.conf
   server {
        listen                               443 ssl;
        server_name                          TRUC.domain.com;

        location /one {
            proxy_pass                       https://backend;
        }
        location /two {
            proxy_pass                       https://backend;
        }
 }
[root@TEST_VPNA conf.d]# cat a_3.conf
    upstream backend  {
        server 10.3.1.110:8443;
    }
[root@TEST_VPNA conf.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

There may be no use case for using the same upstream with both http and https, but this can prevent nginx from starting in case of a configuration mistake.
"	nicolas.jombart@…
1.11.x	1152	Custom error_page doesn't work for HTTP error 413	other	1.11.x	defect		reopened	2016-12-09T13:50:01Z	2023-08-03T16:38:05Z	"Native Nginx error page instead of custom is shown for 413 errors (404 works fine).


…
http {
…
  uwsgi_intercept_errors on;
  error_page 404 /error.404.html;
  error_page 413 /error.413.html;
…
  server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    root /www;
…
    location /error {
      internal;
    }
…
    location / {
      uwsgi_pass unix:/run/uwsgi.sock;
    }
  }
}"	splewako@…
1.11.x	985	request_id variable, needs more documentation	documentation	1.11.x	enhancement		new	2016-05-25T04:45:51Z	2016-05-25T11:37:15Z	"I'd like to use $request_id, but there are several things I need to know about it before I can use it, and the documentation does not answer them:

1. is that raw bytes? or a hexadecimal number as a string?
2. how long (how many characters) is it?
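
For reference, the nginx documentation (since 1.11.0) describes $request_id as a unique identifier generated from 16 random bytes, in hexadecimal, i.e. 32 hex characters; a usage sketch:

{{{
# expose the identifier to clients and logs; 32 hexadecimal characters
add_header X-Request-Id $request_id;
}}}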

I need to know these because, for example, I may want to put it into a query string or an HTML attribute, so I need to know whether I have to base64-encode it manually or not, and so on.

I think the documentation should tell us more about this variable. I understand I can look into the source code to see how it is generated, but that does not guarantee it will not change with the next version."	gabor.farkas@…
1.11.x	990	ssl_stapling_file does not work with multiple certificates	nginx-module	1.11.x	enhancement		reopened	2016-05-31T21:23:27Z	2017-07-02T01:22:20Z	"NGINX version 1.11.0 introduced the ability to use multiple certificates (ticket #814).

Changeset 6550:51e1f047d15d did not make any changes to enable multiple ssl_stapling_file directives.

As only one ssl_stapling_file directive can be specified, only one specified ssl_certificate will have a stapled OCSP response that matches the serial number of the certificate.

As no checking appears to be done to make sure the serial number in an ssl_stapling_file OCSP response matches that of a certificate, additional certificates will be served with stapled OCSP responses that are not valid for those certificates."	John Cook
1.11.x	1004	try_files outside of location{} triggered when no location{} matches	documentation	1.11.x	enhancement		new	2016-06-24T19:21:04Z	2016-06-24T19:21:04Z	"Please, specify in: http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files

That try_files inside server{} and outside of location{} is triggered only when the request does not match any location{} block.
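
A sketch of the behavior being described (paths are illustrative):

{{{
server {
    root /var/www;
    # applies only when no location below matches the request
    try_files $uri /fallback.html;

    location /app/ {
        # requests matching /app/ never hit the server-level try_files above
        try_files $uri /app/index.html;
    }
}
}}}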

It is very useful functionality and not mentioned at all.

I found out about it from http://stackoverflow.com/questions/13138318/nginx-try-files-outside-location/14993257#14993257"	stackoverflow.com/users/1336044/martin
1.11.x	1036	Add tcpi_total_retrans to tcp_info variables	other	1.11.x	enhancement		new	2016-08-01T10:17:30Z	2016-12-22T01:51:12Z	"I would like to be able to assess the impact of changes to transport protocol parameters more easily. Per session retransmission counters are really valuable for this measurement, where the global counters are too coarse.

Please add at least 'tcpi_total_retrans' to the existing list of tcp_info variables. Ideally the complete list would be made available, or a way to use the full set exposed by the kernel without further code changes.

I can think of use cases for several of the other data points also. Super-set here:
https://github.com/torvalds/linux/blob/master/include/uapi/linux/tcp.h

Thanks..."	craigt
1.11.x	1060	limit_req_zone add longer periods	nginx-module	1.11.x	enhancement		new	2016-08-29T19:00:33Z	2018-02-16T15:54:20Z	"Hi,
limit_req_zone supports only r/s and r/m, but to effectively stop brute-force login attempts it would help to have r/h and r/d (requests per hour, requests per day).
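
Hypothetical syntax, if r/h were accepted (zone name, size and rate are illustrative):

{{{
limit_req_zone $binary_remote_addr zone=login:10m rate=30r/h;   # r/h is not currently valid
limit_req zone=login burst=5 nodelay;
}}}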
Simple, elegant, effective."	Emanuelis
1.11.x	1082	rpmlint issues centos7	nginx-package	1.11.x	enhancement	thresh	assigned	2016-09-23T08:47:49Z	2023-06-16T20:39:28Z	"It would be nice if someone could fix these warnings.

$ rpmlint nginx-1.11.4-1.el7.ngx.x86_64.rpm
nginx.x86_64: W: no-version-in-last-changelog
nginx.x86_64: W: invalid-license 2-clause BSD-like license
nginx.x86_64: W: only-non-binary-in-usr-lib
nginx.x86_64: W: no-manual-page-for-binary nginx
nginx.x86_64: W: no-manual-page-for-binary nginx-debug
nginx.x86_64: E: unknown-key (MD5
nginx.x86_64: W: dangerous-command-in-%post chmod
1 packages and 0 specfiles checked; 1 errors, 6 warnings"	mjtrangoni@…
1.11.x	1083	"Enable gzip compression only for non ""text/html"" content"	nginx-module	1.11.x	enhancement		new	2016-09-24T17:05:21Z	2018-10-04T15:34:50Z	"I want to enable gzip HTTP (ngx_http_gzip_module) compression but only for static content (JS, CSS) and not for HTML.

HTTP compression can be exploited by BREACH or HEIST attacks. These attacks make it possible to ""guess"" SSL-encrypted secrets when the content is compressed.

Therefore I want to compress only the content that:
1. does not change on user input (attackers guess) and hence mitigates the possibility to use the attack,
2. does not contain any sensitive data (JS and CSS are public for anyone).

However according to the documentation:
""Responses with the “text/html” type are _always_ compressed.""
(see http://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_types )

This means that even when I set ""gzip_types"" to ""application/javascript text/css"", I automatically enable attackers to guess any sensitive/secret data contained in the HTML (e.g. email, credit card number, session ID in hyperlinks, CSRF tokens).
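
For reference, a minimal configuration illustrating the limitation described; even with this explicit list, text/html responses are still compressed:

{{{
gzip on;
gzip_types application/javascript text/css;   # text/html is always compressed regardless
}}}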

Can you make it possible to enable gzip compression only on certain supplied MIME types but not ""text/html"" (unless it is on the list too)?

Something like ""gzip_force_default_types"" setting that is ""on"" by default to keep backwards compatibility."	sustmi@…
1.11.x	1091	Add missing client certificate field variables	nginx-module	1.11.x	enhancement		new	2016-09-30T13:46:48Z	2017-02-20T23:39:10Z	"Hi

We make pretty extensive use of client cert auth, and it would be massively helpful if NGINX exposed all the issuer and subject elements, as we sometimes use them as a fine-grained logic filter. Our A/V team wants to use NGINX to replace their existing Apache HTTPD layer, and this is a barrier to that right now.

NGINX currently exposes the issuer and the subject DN but it'd be great to also expose the rest for issuer and also for subject:

C - Country
L - Locality
O - Organisation
OU - Organisational Unit
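
For illustration, how such a variable might be used if it existed; the variable name ($ssl_client_s_dn_ou) and OU value are hypothetical:

{{{
map $ssl_client_s_dn_ou $cert_ou_allowed {   # hypothetical variable name
    default 0;
    OpsTeam 1;                               # assumed OU value
}
}}}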

I would imagine (perhaps naively) that the majority of the logic already exists to do this, so I would hope it's not too much work.

If this is of interest but too much work, I can either try dusting off my C skills (in which case I'd ask for a quick pointer/link to relevant docs) or I might be able to persuade someone here at work to do the dev.

Cheers
"	Neil Craig
1.11.x	1104	. (dot) is not allowed in syslog tags	nginx-core	1.11.x	enhancement		new	2016-10-13T16:01:49Z	2016-10-14T13:35:38Z	"When parsing our nginx configuration file, nginx complains:

[emerg] syslog ""tag"" only allows alphanumeric characters and underscore

The only non-alphanumeric character in the tag is '.'. We have '.' in every other syslog stream, and our filters rely on this character."	thausler786@…
1.11.x	1119	Gzip_types support pattern matching	other	1.11.x	enhancement		new	2016-10-24T07:31:46Z	2022-09-14T02:47:25Z	"We have a backend API that returns responses with different content-type MIME types joined with different versions; for example, the pattern may look like `application/vnd.search+json; version=v5`. Since we have many MIME type/version combinations, possibly over 50, configuring the nginx conf file with each exact type would result in a very large configuration file.
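
A sketch of the hypothetical pattern support being requested (not valid in current nginx):

{{{
# hypothetical pattern support (not accepted today):
#   gzip_types application/vnd.search+json;*
# intended to match e.g. application/vnd.search+json; version=v5
gzip_types application/vnd.search+json;
}}}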
So can pattern matching be supported, so that we can set `application/vnd.search+json` in gzip_types and it will match `application/vnd.search+json` followed by any version, or set `application/vnd.search+json;*` to match all the possible types?"	jshen.thoughtworks.com@…
1.11.x	1151	Use sched_getaffinity() and CPU_COUNT() for ngx_ncpu on Linux	nginx-core	1.11.x	enhancement		new	2016-12-09T11:44:00Z	2017-07-06T14:58:41Z	"For better integration with systemd cgroups and containers like docker I humbly suggest using an approach similar to the one employed by nproc from coreutils to determine the number of cpus. 

Currently, when using a systemd unit (or docker) and limiting nginx to a few cores, setting worker_processes auto results in nginx spawning one worker per core in the system. I suggest changing this to use only the number of CPUs actually available.

Example:
nginx.conf
{{{
worker_processes auto;
}}}

/etc/systemd/system/nginx.service.d/cpu-affinity.conf
{{{
[Service]
CPUAffinity=0,1
}}}

Should only spawn two processes instead of four. The same goes when using docker, e.g.:
`docker run --cpuset-cpus 0,1 --rm -t -i -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf nginx`
should likewise spawn only two workers (note that the default image will only ever start a single worker).
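The core of the approach, as a rough sketch (an illustrative helper, not the actual patch attached here):

{{{
#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

/* number of CPUs this process may actually use; this is roughly
   what coreutils nproc does */
long
sketch_ncpu(void)
{
    cpu_set_t  set;

    /* the affinity mask honours systemd CPUAffinity= and
       docker --cpuset-cpus */
    if (sched_getaffinity(0, sizeof(cpu_set_t), &set) == 0) {
        return CPU_COUNT(&set);
    }

    /* fallback: all online CPUs, i.e. the current ngx_ncpu behaviour */
    return sysconf(_SC_NPROCESSORS_ONLN);
}
}}}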

I wrote a small patch that accomplishes this; not being a C coder or familiar with the nginx code base, it's very hacky. It does work, however."	nmeyer-otto@…
1.11.x	1164	Option to turn off TLS protocol errors in the logs	other	1.11.x	enhancement		new	2016-12-27T12:14:50Z	2016-12-27T12:14:50Z	I recently changed my nginx configuration to only accept TLSv1.2, as per the latest security recommendations. But it seems spammers only have access to TLSv1.0, and as a result my error log is flooded with TLS protocol errors. Just like log_not_found, there needs to be an option to turn off logging of these types of errors, so I don't need to empty the logs each day. Thank you.	jerrygrey@…
1.11.x	1179	Allow upstreams to be resolved using internal ngx resolver instead of getaddrinfo()	nginx-core	1.11.x	enhancement		new	2017-01-16T10:45:39Z	2017-04-28T19:35:34Z	"`ngx_http_upstream_server` currently uses `ngx_parse_url()` to parse the upstream server. `ngx_parse_url()` uses `ngx_inet_resolve_host()` to resolve the upstream server address, which in turn uses getaddrinfo() to resolve the hostname.

This causes nginx to only use the system resolvers to resolve upstream hostnames during startup. 

It would be nice if `ngx_parse_inet_url()` could consider/use any resolvers already configured by ngx_http_core's `resolver` directive."	hrak@…
1.11.x	1188	"Send ""immutable"" keyword in Cache-Control when ""expires max"""	nginx-module	1.11.x	enhancement		new	2017-01-26T19:19:18Z	2020-11-23T14:02:27Z	"Per the documentation for ngx_http_headers_module:

   The max parameter sets “Expires” to the value “Thu, 31 Dec 2037 23:55:55 GMT”,
   and “Cache-Control” to 10 years.

At Facebook's urging, Firefox implemented an additional keyword to Cache-Control for immutable assets:
https://code.facebook.com/posts/557147474482256/this-browser-tweak-saved-60-of-requests-to-facebook/
https://hacks.mozilla.org/2017/01/using-immutable-caching-to-speed-up-the-web/

Adding this ""immutable"" keyword to the Cache-Control header for ""expires max"" requests would improve cache effectiveness for Firefox users.

The change itself is trivial:

 {{{
--- src/http/modules/ngx_http_headers_filter_module.c.dist      Thu Jan 26 11:14:29 2017
+++ src/http/modules/ngx_http_headers_filter_module.c   Thu Jan 26 11:15:23 2017
@@ -301,7 +301,7 @@
     if (expires == NGX_HTTP_EXPIRES_MAX) {
         e->value.data = (u_char *) ""Thu, 31 Dec 2037 23:55:55 GMT"";
         /* 10 years */
-        ngx_str_set(&cc->value, ""max-age=315360000"");
+        ngx_str_set(&cc->value, ""max-age=315360000, immutable"");
         return NGX_OK;
     }
 
}}}"	fazalmajid@…
1.11.x	1242	nginx stub_status enhancement	nginx-module	1.11.x	enhancement		new	2017-04-09T14:29:42Z	2017-04-09T14:29:42Z	"The patch adds new functionality: counting occurrences of the various HTTP status codes and reporting them in stub_status. This is very convenient for collecting metrics without having to parse the access log. status_code_def is a counter for status codes not covered by the list of counted codes. A new configure option is added: --with-http_stub_status_extended

Active connections: 2 
server accepts handled requests
 18 18 20 
Reading: 0 Writing: 1 Waiting: 1 
status_code_def: 0
status_code_100: 0
status_code_101: 0
status_code_102: 0
status_code_200: 11
status_code_201: 0
status_code_202: 0
status_code_203: 0
status_code_204: 0
status_code_205: 0
status_code_206: 0
status_code_207: 0
status_code_300: 0
status_code_301: 0
status_code_302: 0
status_code_303: 0
status_code_304: 0
status_code_305: 0
status_code_307: 0
status_code_400: 0
status_code_401: 0
status_code_402: 0
status_code_403: 7
status_code_404: 1
status_code_405: 0
status_code_406: 0
status_code_407: 0
status_code_408: 0
status_code_409: 0
status_code_410: 0
status_code_411: 0
status_code_412: 0
status_code_413: 0
status_code_414: 0
status_code_415: 0
status_code_416: 0
status_code_417: 0
status_code_422: 0
status_code_423: 0
status_code_424: 0
status_code_425: 0
status_code_426: 0
status_code_428: 0
status_code_429: 0
status_code_431: 0
status_code_434: 0
status_code_444: 0
status_code_449: 0
status_code_451: 0
status_code_500: 0
status_code_501: 0
status_code_502: 0
status_code_503: 0
status_code_504: 0
status_code_505: 0
status_code_506: 0
status_code_507: 0
status_code_508: 0
status_code_509: 0
status_code_510: 0
status_code_511: 0
"	sergey.smitienko@…
1.11.x	1279	Implement FIB selection for upstream connections in proxy and stream modules.	nginx-module	1.11.x	enhancement		new	2017-05-25T14:50:30Z	2023-05-15T18:43:33Z	"It is possible to set alternative fib for listening sockets via ""setfib"" option in ""listen"" directives.
It would be convenient to have similar functionality for outgoing connections in ngx_http_proxy_module and ngx_stream_proxy_module."	Sergey Akhmatov
1.11.x	1288	upstream server port defaults to port 80 even for https: proxy_pass	nginx-core	1.11.x	enhancement		new	2017-06-02T15:05:53Z	2017-06-26T10:21:14Z	"Hi

I'm building a multi-tenant caching reverse proxy based on NGINX and I've hit an issue today.

Essentially the issue boils down to the default port of an upstream server being 80 even when the proxy_pass to the upstream is https. It would seem reasonable to me to make the default port 80 for http proxy_pass and 443 for https proxy_pass. Users would still be free to define their own port manually. I also think this would fit the principle of least surprise - I believe most users would expect the upstream server to use :443 if they specify https in their proxy_pass.

The other option would be to use a variable, e.g. $server_port but NGINX doesn't seem to support variables in upstreams right now - that would be useful for other reasons too.

I'd be keen to hear any workarounds if they exist, and I would love to hear your thoughts on this. Happy to explain my use case in more detail if needed; I avoided writing it down just to keep the ticket succinct.
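To be clear, specifying the port explicitly works today; the ask is only about the default (hostname below is illustrative):

{{{
# works today, but has to be remembered for every https upstream:
proxy_pass https://upstream.example.com:443;
}}}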

Cheers
Neil"	Neil Craig
1.12.x	1383	Error if using proxy_pass with variable and limit_except	nginx-core	1.12.x	defect		accepted	2017-09-18T11:28:35Z	2022-02-20T21:17:02Z	"Hi nginx guys,

I use nginx in front of a Varnish server.
I purge my Varnish cache via the PURGE method.

Nginx uses the following VHost config:
{{{
server {
    listen       *:80 default_server;

    location / {
        limit_except GET POST {
            allow 127.0.0.1/32;
            deny all;
        }

        set $upstream http://127.0.0.1:8080;

        if ($http_user_agent = 'mobile') {
            set $upstream http://127.0.0.1:8080;
        }

        proxy_pass              $upstream;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $remote_addr;
    }
}
}}}

Intended behaviour: from anywhere but localhost, only GET/HEAD/POST can be requested; localhost can do everything.

From remote it works as expected:
{{{
root@test:~# curl -X PURGE -I EXTIP
HTTP/1.1 403 Forbidden
Server: nginx
Date: Mon, 18 Sep 2017 10:39:23 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Vary: Accept-Encoding
}}}

But from localhost: 
{{{
root@test:~# curl -X PURGE -I http://127.0.0.1
HTTP/1.1 500 Internal Server Error
Server: nginx
Date: Mon, 18 Sep 2017 10:39:06 GMT
Content-Type: text/html
Content-Length: 186
Connection: close
}}}

Nginx error log tells me:
{{{
==> /var/log/nginx/error.log <==
2017/09/18 12:39:06 [error] 2483#2483: *2 invalid URL prefix in """", client: 127.0.0.1, server: , request: ""PURGE / HTTP/1.1"", host: ""127.0.0.1""
}}}


Without using variables in the vhost:
{{{
server {
    listen       *:80 default_server;

    location / {
        limit_except GET POST {
            allow 127.0.0.1/32;
            deny all;
        }

        proxy_pass              http://127.0.0.1:8080;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $remote_addr;
    }
}
}}}

Works as expected:
{{{
root@test:~# curl -X PURGE -I http://127.0.0.1
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 18 Sep 2017 10:45:35 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
}}}


Other tests with a variable proxy_pass, e.g. using the GET method instead of PURGE, also fail with the same error.
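Possibly useful while this is open: computing the variable with a map instead of set seems to avoid the error, since map variables are evaluated when referenced rather than during the rewrite phase (untested in this exact setup):

{{{
map $http_user_agent $upstream {
    default   http://127.0.0.1:8080;
    mobile    http://127.0.0.1:8080;
}
}}}

with proxy_pass $upstream; kept unchanged in the location block.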



Please take a look at why nginx fails when combining limit_except with proxy_pass and variables.
Thanks"	Haxe18@…
1.12.x	1458	ngx_http_ssl_module http block config bug	nginx-core	1.12.x	defect		new	2018-01-10T13:26:06Z	2018-01-10T17:04:12Z	"sbin/nginx  -p.
nginx: [emerg] BIO_new_file(""./conf/./conf/server.crt"") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('./conf/./conf/server.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)

Nginx does not find the correct certificate file when there are two https servers in the config and the certificate file is specified with a relative path at the http level rather than at the server level.

The function ngx_conf_full_name changes the name's data to the new value; since the config value is inherited from the previous level, the prefix is added once for the first server and then added again for the second server.

config:

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    # HTTPS server
    #
    ssl_certificate      server.crt;
    ssl_certificate_key  server.key;
    server {
        listen       8443 ssl;
        server_name  localhost;

        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;

        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
    server {
        listen       8444 ssl;
        server_name  localhost;

        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;

        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}

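A workaround until this is fixed: use absolute paths at the http level, which ngx_conf_full_name leaves untouched (paths below are illustrative):

{{{
ssl_certificate      /usr/local/nginx/conf/server.crt;
ssl_certificate_key  /usr/local/nginx/conf/server.key;
}}}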
I have a patch to attach to this ticket: use a local variable when calling ngx_conf_full_name in ngx_event_openssl.c.

Gao Yan
China Baidu
Thx"	crasyangel.lhy@…
1.12.x	1437	Optimize locality for listening sockets with the help of SO_INCOMING_CPU	nginx-core	1.12.x	enhancement		new	2017-11-24T15:05:32Z	2017-11-24T15:05:32Z	"To achieve the best cpu locality for listening sockets with SO_REUSEPORT we can set socket option SO_INCOMING_CPU with cpu number taken from worker_affinity.
The included patch works with Linux kernel 4.4+ and takes the first active CPU from the worker's affinity mask in case multiple CPUs are set for the worker."	vadimjunk@…
1.12.x	1505	Milliseconds and dynamic time support for *_cache_valid	other	1.12.x	enhancement		new	2018-03-19T14:01:00Z	2022-08-03T18:57:56Z	"Hello,

At the moment the cache doesn't work with milliseconds, e.g.:
proxy_cache_valid 200 200ms;
nginx: [emerg] invalid time value ""200ms""

It would also be useful to set the time via a variable:
set $valid_time ""200ms"";
proxy_cache_valid 200 $valid_time;

Any chance for it? :)"	avkarenow@…
1.13.x	1463	Build in --builddir throws error on nginx.h	nginx-core	1.13.x	defect		accepted	2018-01-18T15:30:17Z	2022-02-18T09:04:13Z	"When building with --builddir, an error is thrown during compilation.
{{{
> [...]
> Running Mkbootstrap for nginx ()
> chmod 644 ""nginx.bs""
> ""/foo/bar/perl5/bin/perl"" -MExtUtils::Command::MM -e 'cp_nonempty' -- nginx.bs blib/arch/auto/nginx/nginx.bs 644
> gmake[2]: *** No rule to make target `../../../../../src/core/nginx.h', needed by `nginx.c'. Stop.
> gmake[2]: Leaving directory `/home/user/build/src/http/modules/perl'
> gmake[1]: *** [/home/user/build//src/http/modules/perl/blib/arch/auto/nginx/nginx.so] Error 2
> gmake[1]: Leaving directory `/home/user/nginx-1.13.8'
> gmake: *** [build] Errror 2
}}}


gmake --version
GNU Make 3.81
Copyright (C) 2006 Free Software Foundation, Inc.

gcc --version
gcc (GCC) 5.3.0

cpp --version
cpp (GCC) 5.3.0

"	asymetrixs@…
1.13.x	1329	Blocking STALE requests when using fastcgi_cache_background_update	other	1.13.x	defect		new	2017-07-21T09:05:43Z	2017-07-21T20:09:33Z	"When using fastcgi_cache_background_update, nginx immediately sends the STALE response to the client but keeps the client connection open until the fastcgi upstream responds. The effect is that the client waits until the upstream closes its connection.

I first thought it was due to ticket #1249, but apparently this is another problem: before version 1.13.1 the client received the STALE response (headers+body) after the upstream closed the connection, resulting in a high TTFB and a fast content download time. Since version 1.13.1 the client receives the STALE response (headers+body) immediately, but the connection is closed only after the upstream closes its own connection, resulting in a low TTFB but a high content download time.

It is easy to reproduce with a PHP script like this:

{{{
<?php

echo date(""c"");

sleep(5);
}}}

I'm also using nginx within a Docker container and bridged networking.

Attached you will find the debug log for the request."	nixelsolutions@…
1.13.x	1348	proxy_cache_background_update has problem with slice module	other	1.13.x	defect		new	2017-08-04T07:35:10Z	2023-10-08T12:31:06Z	"When slice is not enabled and proxy_cache_background_update is enabled, the $upstream_cache_status variable goes back to ""HIT"" after revalidation.

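For reference, the combination in question is along these lines (a sketch based on the documented slice setup, not my actual config):

{{{
location / {
    slice              1m;
    proxy_cache        cache;
    proxy_cache_key    $uri$is_args$args$slice_range;
    proxy_set_header   Range $slice_range;
    proxy_cache_valid  200 206 1h;
    proxy_cache_background_update on;
    proxy_pass         http://backend;
}
}}}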
But when we enable both proxy_cache_background_update and slice, the $upstream_cache_status variable will always be ""STALE"", no matter whether revalidation happens."	RocFang@…
1.13.x	1402	Cache not invalidated if fastcgi_cache_background_update is on	nginx-core	1.13.x	defect		new	2017-10-24T15:22:34Z	2017-10-24T15:22:34Z	"The virtual host config is:

{{{
log_format service_json '{ '
'\""request\"": \""$request\"", '
'\""status\"": \""$status\"", '
'\""body_bytes_sent\"": \""$body_bytes_sent\"", '
'\""http_referer\"": \""$http_referer\"", '
'\""uri\"": \""$uri\"", '
'\""args\"": \""$args\"", '
'\""upstream_cache_status\"": \""$upstream_cache_status\"", '
'\""upstream_response_time\"": \""$upstream_response_time\"", '
'\""request_time\"": \""$request_time\"", '
'\""http_user_agent\"": \""$http_user_agent\"", '
'\""cs\"": \""$upstream_cache_status\"" }';

fastcgi_cache_path /var/www/owox-bot-finder/cache levels=1:2 keys_zone=fastcgi_cache:64m max_size=2G inactive=2m;
fastcgi_cache_background_update on;


server {
    listen 80 default_server;
    server_name finder;
    root /var/www/finder;

    location / {

        fastcgi_pass   127.0.0.1:9000;

        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        fastcgi_param HTTPS off;


		fastcgi_cache fastcgi_cache;
		fastcgi_cache_key ""$request_method|$scheme|$host|$uri"";
		fastcgi_cache_use_stale updating error timeout invalid_header http_500 http_503;

		fastcgi_cache_valid 200 304 404 30s;
		fastcgi_cache_valid 301 30s;

		fastcgi_cache_lock on;
		fastcgi_cache_lock_timeout 25s;

	}

    #return 404 for all php files as we do have a front controller
    location ~ \.php$ {
        return 404;
    }

    error_log /var/log/nginx/project_error.log;
    access_log /var/log/nginx/project_access.log service_json;
}
}}}

Case:
1) Make {{{curl ""127.0.0.1/test/?return=200""}}} 
backend response {{{200 ok}}} 
nginx response {{{200 ok}}} AND {{{$upstream_cache_status=MISS}}}

2) Make {{{curl ""127.0.0.1/test/?return=200""}}} 
backend response {{{-}}}
nginx response {{{200 ok}}} AND {{{$upstream_cache_status=HIT}}}

3) Wait until cache expired

4) Make {{{curl ""127.0.0.1/test/?return=302""}}} 
nginx response {{{200 ok}}} AND {{{$upstream_cache_status=STALE}}}
backend response {{{302 moved temporarily}}} 

5) Make {{{curl ""127.0.0.1/test/?return=302""}}} 
nginx response {{{200 ok}}} AND {{{$upstream_cache_status=STALE}}}
backend response {{{302 moved temporarily}}} 

6) Make {{{curl ""127.0.0.1/test/?return=301""}}} 
nginx response {{{200 ok}}} AND {{{$upstream_cache_status=STALE}}}
backend response {{{301 moved permanently}}} 

7) Make {{{curl ""127.0.0.1/test/?return=301""}}} 
nginx response {{{301 moved permanently}}} AND {{{$upstream_cache_status=HIT}}}
backend response {{{-}}} 

So, if fastcgi_cache_background_update is on, the cache has expired, and the background upstream response does not match the caching rules, the cache is never invalidated.

"	artemsuv@…
1.13.x	1465	configure: use -iquote for $ngx_module_incs	other	1.13.x	defect		new	2018-01-21T00:37:56Z	2018-01-27T21:17:19Z	"- nginx-1.13.8
- Darwin pro.local 16.7.0 Darwin Kernel Version 16.7.0: Mon Nov 13 21:56:25 PST 2017; root:xnu-3789.72.11~1/RELEASE_X86_64 x86_64

I found a problem when trying to compile ngx_brotli on macOS/OSX
which might be rooted in nginx/configure (or at least needs some help there)

see also:
- [https://github.com/google/ngx_brotli/issues/64]
- [https://github.com/phusion/passenger/issues/2017]
- [https://github.com/nginx/nginx/blob/master/auto/module]

{{{
./configure --add-module=../ngx_brotli
make
}}}


---

in ngx_brotli/src/ngx_http_brotli_filter_module.c:

{{{
#include <brotli/encode.h>
}}}

pulls in </opt/local/include/brotli/encode.h>
instead of ""../ngx_brotli/deps/brotli/include/brotli/encode.h""
(I have installed ""brotli"" from macports which installs an incompatible version of brotli/encode.h)

This clearly is a bug in ngx_brotli, which should read:
{{{
#include ""brotli/encode.h""
}}}


But for this to have the desired effect, the CFLAGS have to be changed from:

{{{
-I ../ngx_brotli/deps/brotli/include
}}}
to

{{{
-iquote ../ngx_brotli/deps/brotli/include
}}}

As a first guess, the include path flag comes from
ngx_brotli/config:
{{{
ngx_module_incs=""$brotli/include""
}}}

(please excuse me, I haven't yet dug that deep into the nginx module config API)

I think this should read something like:
{{{
ngx_module_incs_quote=""$brotli/include""
}}}

which should generate CFLAGS
{{{
-iquote ../ngx_brotli/deps/brotli/include
}}}



I patched ngx_brotli/src/ngx_http_brotli_filter_module.c

from:
{{{
#include <brotli/encode.h>
}}}

to (angle brackets replaced by quotes):
{{{
#include ""brotli/encode.h""
}}}

I then manually compiled the file,
first using the CFLAGS as given, i.e.:

{{{
 -I ../ngx_brotli/deps/brotli/include
}}}

then modifying CFLAGS to -iquote

{{{
-iquote ../ngx_brotli/deps/brotli/include
}}}


first compile fails:

{{{
nginx-1.13.8 $ sh -x c-I
+ /usr/bin/cc -c -pipe -O -Wall -Wextra -Wpointer-arith -Wconditional-uninitialized -Wno-unused-parameter -Wno-deprecated-declarations -Werror -g -Wno-error -Wno-deprecated-declarations -I src/core -I src/event -I src/event/modules -I src/os/unix -I /opt/local/include -I /usr/local/src/openssl-1.1.0f/.openssl/include -I objs -I src/http -I src/http/modules -I src/http/v2 -I ../ngx_brotli/deps/brotli/include -I /opt/local/lib/ruby2.5/gems/2.5.0/gems/passenger-5.1.12/src -o objs/addon/src/ngx_http_brotli_filter_module.o ../ngx_brotli/src/ngx_http_brotli_filter_module.c
../ngx_brotli/src/ngx_http_brotli_filter_module.c:272:28: warning: implicit
      declaration of function 'BrotliEncoderInputBlockSize' is invalid in C99
      [-Wimplicit-function-declaration]
        ctx->brotli_ring = BrotliEncoderInputBlockSize(ctx->encoder);
}}}


my manually modified compile works (replaced -I by -iquote)

{{{
nginx-1.13.8 $ sh -x c-iquote 
+ /usr/bin/cc -c -pipe -O -Wall -Wextra -Wpointer-arith -Wconditional-uninitialized -Wno-unused-parameter -Wno-deprecated-declarations -Werror -g -Wno-error -Wno-deprecated-declarations -I src/core -I src/event -I src/event/modules -I src/os/unix -I /opt/local/include -I /usr/local/src/openssl-1.1.0f/.openssl/include -I objs -I src/http -I src/http/modules -I src/http/v2 -iquote ../ngx_brotli/deps/brotli/include -I /opt/local/lib/ruby2.5/gems/2.5.0/gems/passenger-5.1.12/src -o objs/addon/src/ngx_http_brotli_filter_module.o ../ngx_brotli/src/ngx_http_brotli_filter_module.c
}}}



---


I set this to major priority (you might even want to raise it to critical)
because pulling in the wrong header files can lead to really bad and hard-to-diagnose bugs.

"	foonlyboy@…
1.13.x	1467	Problem of location matching with a given request	documentation	1.13.x	defect	Yaroslav Zhuravlev	accepted	2018-01-24T07:17:47Z	2020-08-14T18:26:52Z	"Hi, guys. I've got a problem with location matching and a regexp, because nginx is not finding the match as described here: https://nginx.ru/en/docs/http/ngx_http_core_module.html#location

My request is: 

http://localhost:8080/catalog/css/asdftail

My conf is:
{{{
server {
    listen 8080;

    location ~ ^/catalog/(js|css|i)/(.*)$
    {
            return 405;
    }
    location / {
            location ~ ^.+tail$ {
                    return 403;
            }
            return 402;
    }
}
}}}
My problem is:
With my request, my conf should return a 405 error, but it returns a 403 error, because nginx starts checking regexp locations from ""the location with the longest matching prefix is selected and remembered."", not from the top of the config - ""Then regular expressions are checked, in the order of their appearance in the configuration file.""

If my conf likes this:

{{{
server {
    listen 8080;

    location ~ ^/catalog/(js|css|i)/(.*)$
    {
            return 405;
    }
    
    location ~ ^.+tail$ {
            return 403;
    }

    location / {

            return 402;
    }
}
}}}
or this:
{{{
server {
    listen 8080;

    location catalog/ {
        location ~ ^/catalog/(js|css|i)/(.*)$
        {
                return 405;
        }
    }
    location / {
            location ~ ^.+tail$ {
                    return 403;
            }
            return 402;
    }
}
}}}
Then everything works as described in the manual."	samilko.ka@…
1.13.x	1607	mirror + limit_req = writing connections	nginx-core	1.13.x	defect		accepted	2018-08-11T14:50:53Z	2018-08-12T21:19:38Z	"Hello,
Nginx seems to have a bug with mirror+limit_req
Configuration:

All servers could be the same for testing purposes (127.0.0.1)
Frontend server

{{{
limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;
location = /url1
{
  mirror /url2;
  proxy_pass http://127.0.0.1/test;
}
location = /url2
{
  internal;
  limit_req zone=one burst=10;
  proxy_pass http://127.0.0.1/test2;
}
location = /status { stub_status on; }
}}}


Backend server 

{{{
location = /test { return 200; }
}}}


Mirror server

{{{
location = /test2 { return 200; }
}}}

Now run:

{{{
# for i in {1..1000}; do curl http://127.0.0.1/url1 >/dev/null & sleep 0.05; done
}}}
Wait for completion of all requests and see writing connections:

{{{
# curl http://127.0.0.1/status
Active connections: 271 
server accepts handled requests
 2001 2001 2001 
Reading: 0 Writing: 271 Waiting: 0
# sleep 120
# netstat -atn | grep 127.0.0.1:80 | grep -v CLOSE_WAIT | wc -l
270
# service nginx reload
# pgrep -f shutting
# netstat -atn | grep 127.0.0.1:80 | grep -v CLOSE_WAIT | wc -l
0
# curl http://127.0.0.1/status
Active connections: 271 
server accepts handled requests
 2002 2002 2002 
Reading: 0 Writing: 271 Waiting: 0 
}}}

When /url1 doesn't have limit_req but /url2 does, the number of writing connections from stub_status begins to grow. Watching netstat, I can also see CLOSE_WAIT connections growing. I didn't find any impact on request processing, at least when the number of connections is quite low. Actually, after reloading nginx there seem to be no real (writing) connections, but this breaks nginx monitoring; only a restart of nginx resets the writing connections number.
If both /url1 and /url2 have limit_req, or /url1 only has limit_req - all is OK.

We use amd64 debian stretch, with the nginx-extras package from debian buster (rebuilt on stretch)."	urusha@…
1.13.x	1294	Add version-information resource	other	1.13.x	enhancement		new	2017-06-15T09:19:23Z	2017-06-15T09:19:23Z	"In the Windows version the executable has no details like product name or version:

[https://msdn.microsoft.com/en-us/library/windows/desktop/aa381058(v=vs.85).aspx VERSIONINFO] "	Sataur@…
1.13.x	1302	New variables $ssl_client_sha256_fingerprint and/or $ssl_client_sha512_fingerprint for ngx_http_ssl_module	nginx-module	1.13.x	enhancement		new	2017-06-27T13:19:05Z	2017-06-27T13:19:05Z	"$ssl_client_fingerprint value is too weak (sha1) for us.

$ssl_client_sha256_fingerprint and/or $ssl_client_sha512_fingerprint would be better."	dmitry.rudenko.moneta.ru@…
1.13.x	1353	"http and stream on the same ""listen"" should conflict"	other	1.13.x	enhancement		new	2017-08-08T08:53:58Z	2022-11-23T22:38:13Z	"
here's sample config
{{{
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {

    default_type  application/octet-stream;
    access_log  off;

        server {
                listen       80;
                server_name  localhost;

                location / {
                        return 200;
                }

        }

}

stream {

        server {
                listen  80;
                proxy_pass  127.0.0.1:8080;
        }


}

}}}


it does not fail on ""nginx -t""

{{{
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# 

}}}


however, nginx refuses to start

{{{
# nginx
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)

}}}


I think it would be nice to catch that error during a ""nginx -t"" run"	chipitsine@…
1.13.x	1360	enhancement: auto-reload map includes	nginx-module	1.13.x	enhancement		new	2017-08-19T19:52:26Z	2017-08-19T19:52:26Z	"When using an include file with the map directive, it would be ideal if nginx automatically reloaded its configuration when the include file is modified.

Use-case: currently I use the map functionality to rewrite a 'current' URL for downloads to the latest version as produced by buildbot. As nginx is running in a docker container, there isn't any way to reload the nginx configuration without root privileges (at least that I'm aware of).

E.g. buildbot runs a script to generate `currentImages.map.nginx`:
{{{
/nightly-images/x86_64/current-anyboot /nightly-images/x86_64/haiku-nightly-hrev51366-x86_64-anyboot.zip;
}}}

If instead, I use a symlink in `/nightly-images/x86_64`, `ln -s haiku-nightly-hrev51366-x86_64-anyboot.zip current-anyboot`, then when downloading the file using the current link, the downloaded file has the name of the symlink, rather than the target.

My configuration is along the lines of:
{{{
http {
  map $uri $new {
    include /haiku-files/nightly-images/currentImages.map.nginx;
  }
  server {
    if ($new) {
      return 302 $new;
    }
  }
}
}}}

With Apache2 and [https://httpd.apache.org/docs/current/rewrite/rewritemap.html RewriteMap], it automatically reloads the map configuration when the include file is modified. I can't see any way to achieve the same with nginx, which will hamper moving away from Apache2."	jessicah@…
1.13.x	1369	Add proxy_detect_mime setting	nginx-core	1.13.x	enhancement		new	2017-08-28T13:37:25Z	2017-08-28T13:52:13Z	"Right now, nginx using proxy_pass will not touch the Content-Type; it is the upstream's responsibility to send the correct Content-Type.

The problem is that we do not always control the upstream (external apps), deployment cycles to get the proper fix into the upstream can be very long (complex upstream apps, external teams, etc.), or the upstream already has LOTS of old data that can't easily be fixed (huge S3 bucket, old DB without a proper type column).

A workaround could be to simply do ""proxy_hide_header Content-Type;"" and then add ""proxy_set_header content-type $mime"" or ""default_type $mime;"", with a map like:

{{{
map $request_uri  $mime {
 default  application/octet-stream;
 ~*\.jpe?g  image/jpeg;
 ~*\.gif  image/gif;
 ~*\.png  image/png;
}
}}}

I didn't test this solution, as it feels dirty and amounts to cloning the nginx mime-type detection in map regexes ... so why not expand nginx a little to cover this as well?

Adding a new proxy_detect_mime directive (off by default) would do that automatically: drop any upstream Content-Type header and apply MIME detection to the URI.
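Proposed usage could look like this (the directive does not exist today; this is just the suggested shape):

{{{
location /assets/ {
    proxy_pass         http://backend;
    # proposed: drop the upstream Content-Type and re-detect from the URI
    proxy_detect_mime  on;
}
}}}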

Most people will not use it, but it might be a life saver for people who need it"	https://stackoverflow.com/users/1100117/higuita
1.13.x	1388	Implement TLS Dynamic Record Sizing (CloudFlare patch ready)	other	1.13.x	enhancement		new	2017-09-26T09:21:19Z	2021-08-24T19:33:09Z	"Hi,

Instead of iterating everything CloudFlare has already written, their blog post is here:

https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency/

And the patch is here:

https://raw.githubusercontent.com/cloudflare/sslconfig/master/patches/nginx__1.11.5_dynamic_tls_records.patch

In short, instead of having just a static ssl_buffer_size, it enables nginx to fit the initial requests into the fewest TCP segments possible, then size up to 3x the segment size, and on to the full ssl_buffer_size later. This can greatly speed up TLS.

Daniël ""FinalX"" Mostertman"	Eihrister@…
1.13.x	1393	please add ngx_google_perftools_module to centos 7 rpm	nginx-package	1.13.x	enhancement		new	2017-10-08T17:21:16Z	2017-12-06T16:39:25Z		chipitsine@…
1.13.x	1407	Should application/javascript be text/javascript in mime.types	other	1.13.x	enhancement		reopened	2017-10-27T10:53:53Z	2023-11-20T01:18:24Z	"I think the 'content-type' header for js files should have media type 'text/javascript' (not 'application/javascript')

nginx version: nginx/1.12.1"	henrik.gemal.dk@…
1.13.x	1417	Nginx won't start if hostname isn't valid	other	1.13.x	enhancement		new	2017-11-04T14:40:11Z	2023-05-16T22:55:07Z	"This is the exact same issue as described in #1040

We use nginx in a container to route web services to various internal container services. As nginx currently works, if a proxy hostname is unknown, nginx refuses to start.

nginx: [emerg] host not found in upstream ""gerrit_cgit_1"" in /etc/nginx/conf.d/cgit.conf:12

{{{
server {
    listen 80;
    listen [::]:80 ipv6only=on;

    server_name cgit.haiku-os.org git.haiku-os.org;
    access_log off;
    error_log off;
    return      301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl ipv6only=on;

    server_name cgit.haiku-os.org git.haiku-os.org;
    client_max_body_size 100m;
    ssl_certificate /etc/letsencrypt/live/cgit.haiku-os.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cgit.haiku-os.org/privkey.pem;
    location / {
        proxy_bind $server_addr;
        proxy_pass http://gerrit_cgit_1:80;
    }
}
}}}

In a container environment, if the gerrit_cgit_1 container isn't running, the host won't resolve.

Instead of ""refusing to start"" due to one vhost not working, ideally, nginx should offer up HTTP 503 for service unavailable.

There is a larger story around re-checking hostnames, but this change would mean that ""some services are functional"" (other vhosts that do resolve) vs ""everything is broken until you modify your config files because a container is not running""
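As a workaround in the meantime, resolution can be deferred to request time by putting the upstream in a variable together with a resolver (127.0.0.11 below is Docker's embedded DNS; adjust for your environment):

{{{
    location / {
        resolver 127.0.0.11 valid=10s;
        set $upstream http://gerrit_cgit_1:80;
        proxy_pass $upstream;    # resolved per request, so nginx starts even
                                 # if the container is down at startup
    }
}}}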

If nginx doesn't want this to be the standard configuration, maybe add a ""proxy_unavailable warn"" vs ""proxy_unavailable error"" or something?"	kallisti5@…
1.13.x	1421	worker_rlimit_nofile description is not clear	documentation	1.13.x	enhancement		new	2017-11-07T11:54:38Z	2017-11-09T10:03:18Z	"Currently documentation for ""worker_rlimit_nofile"" configuration parameter says 
{{{
Changes the limit on the maximum number of open files
(RLIMIT_NOFILE) for worker processes.
}}}
and correspondingly 
{{{
Изменяет ограничение на максимальное число открытых файлов 
(RLIMIT_NOFILE) для рабочих процессов.
}}}

It's not clear whether the limit is applied for each worker separately, or it's the maximum number of files which can be opened by all workers in total.
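For what it's worth, RLIMIT_NOFILE is a per-process limit, so the directive applies to each worker process separately; a clarified docs example might read:

{{{
worker_processes 4;
worker_rlimit_nofile 65535;   # each of the 4 workers may hold up to
                              # 65535 open files (per-process limit)
}}}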

The ""worker_rlimit_core"" parameter has the same issue."	vsg@…
1.13.x	1422	Support IPv6 zone identifiers in URLs, e.g. for proxy_pass	nginx-core	1.13.x	enhancement		new	2017-11-07T20:34:49Z	2023-06-11T22:12:17Z	"Now that RFC 6874 has matured and is a proposed standard, I'd like to reinstate the idea of implementing the support for IPv6 zone identifiers, e.g. for the proxy_pass directive. This would allow using IPv6 link-local upstreams.
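For illustration, under RFC 6874 the percent sign introducing the zone identifier is itself percent-encoded, so a link-local upstream would look like this (address and interface are hypothetical):

{{{
    location / {
        proxy_pass http://[fe80::1%25eth0]:8080;   # %25 encodes the % before
                                                   # the eth0 zone identifier
    }
}}}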
In https://trac.nginx.org/nginx/ticket/623 this was rejected because the RFC was too young. Now that there are only four rejected errata and no movement for a while, I suggest revisiting this decision."	https://stackoverflow.com/users/1047642/hendrik-m-halkow
1.13.x	1459	Can't vary on request headers set by proxy_set_header (rev. proxy mode)	nginx-core	1.13.x	enhancement		accepted	2018-01-15T14:42:54Z	2018-01-16T12:39:31Z	"Hi

We're using NGINX in reverse proxy mode for an internal traffic management service, and I noticed that NGINX doesn't vary the cached object on request headers which we calculate and add in NGINX itself via proxy_set_header. This causes a major problem for our service, as it's multi-tenant. I think it'd be logical and expected if NGINX ''did'' vary on request headers set by proxy_set_header. I have also tested setting the headers via more_set_input_headers and setting the variable directly (including in Lua), but these don't work either, sadly.

I have included a reduced test case which hopefully illustrates the situation (a few comments help explain). Output from testing (against local/Docker) is:

{{{
# curl -k https://127.0.0.1:8443/a\?vv1\=1 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:33:54 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 1
Edge-Cache-Status: EXPIRED
Origin-Response-Status: 200
Origin-IP: 127.0.0.1:9000

2018-01-15T14:33:54+00:00%                                                                                                                                                               

# curl -k https://127.0.0.1:8443/a\?vv1\=1 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:33:55 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 1
Edge-Cache-Status: HIT

2018-01-15T14:33:54+00:00%                                                                                                                                                               

# curl -k https://127.0.0.1:8443/a\?vv1\=2 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:33:58 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 1
Edge-Cache-Status: HIT

2018-01-15T14:33:54+00:00%
}}}

I'd expect a cache miss on the final response because the query string argument ""vv1"" has changed, which means proxy_set_header would set a different value for the ""vvrh1"" request header. To illustrate that this mechanism works, once the cached object has expired, we see:

{{{
# curl -k https://127.0.0.1:8443/a\?vv1\=2 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:39:12 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 2
Edge-Cache-Status: val EXPIRED
Origin-Response-Status: 200
Origin-IP: 127.0.0.1:9000

2018-01-15T14:39:12+00:00%                                                                                                                                                               

# curl -k https://127.0.0.1:8443/a\?vv1\=2 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:39:15 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 2
Edge-Cache-Status: val HIT

2018-01-15T14:39:12+00:00%
}}}
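A possible interim workaround (variable name hypothetical, mirroring the test case) is to fold the computed value into the cache key instead of relying on Vary:

{{{
    # $vvrh1_val holds the value we also send via proxy_set_header
    proxy_cache_key $scheme$proxy_host$request_uri$vvrh1_val;
}}}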

Might this be something which could be fixed? If not, is there a workaround you can think of, or have I made a mistake?

Cheers"	Neil Craig
1.13.x	1480	Automatic Let's Encrypt certificate provisioning and renewal	other	1.13.x	enhancement		new	2018-02-13T23:14:01Z	2020-06-01T03:05:00Z	"I would like to request built-in support for automatic provisioning and renewal of Let's Encrypt certificates.

Currently, setting up and auto-renewing Let's Encrypt's free certificates for nginx using certbot or other tools is somewhat complex and difficult. It would be helpful for admins if this process could be made as simple as possible, by providing a one-line, built-in declarative command in the nginx.conf file.

In the longer term, a future version of nginx could provide automatic https setup by default for all new websites, and even automatically enable encryption on existing websites. That would support the goal of making the Web 100% HTTPS.

(Currently, caddy supports HTTPS out-of-the-box, and Apache supports HTTPS with the built-in https://httpd.apache.org/docs/trunk/mod/mod_md.html plugin.)"	arthuredelstein@…
1.13.x	1483	client_max_body_size vs. auth_request unexpected behaviour	nginx-module	1.13.x	enhancement		new	2018-02-16T16:48:35Z	2018-02-16T20:32:18Z	"Hi there,

I configured an upload location (to use the client_body_in_file_only feature). Additionally, I am using auth_request for that location to authorize uploads. When configuring client_max_body_size for the upload location, I noticed that I have to repeat it in the internal auth location for it to take effect: uploads exceeding the default of 1MB would fail, because the size of the original (but removed) request body is checked against the limit of the auth location during the auth subrequest.

I don't know whether this is a bug. For me, it was at least unexpected behaviour because the request body for the auth request is empty.

Kind regards,
Christoph


{{{
server {
	listen 80 default_server;
	listen [::]:80 default_server;

	listen 443 ssl default_server;
	listen [::]:443 ssl default_server;

	ssl_certificate /etc/nginx/ssl/cert.pem;
	ssl_certificate_key /etc/nginx/ssl/key.pem;

	root /var/www/html;

	index index.html index.htm;


    location /upload {
        auth_request /auth;
        limit_except POST { deny all; }

        client_body_temp_path /dev/shm/upload;
        client_body_in_file_only on;

        client_max_body_size 1000M;

        proxy_set_header Request-Body-File $request_body_file;
        proxy_set_header Content-Length """";

        proxy_set_body """";

        proxy_pass http://localhost:8080/upload;
    }

    location = /auth {
        internal;

        client_max_body_size 1000M;

        proxy_set_header Content-Length """";
        proxy_set_header X-Original-URI $request_uri;

        proxy_set_body """";

        proxy_pass http://localhost:8080/auth;
    }

    location / {
        proxy_pass   http://localhost:8080;
    }
}
}}}"	chschmitt@…
1.13.x	1500	ngx_hash_t can have only lower case key	other	1.13.x	enhancement		accepted	2018-03-07T00:47:25Z	2018-03-07T18:14:21Z	"ngx_hash_init converts all the keys to lower case, so ngx_hash_find returns null when looking up a mixed-case key.
Below is the relevant line in ngx_hash.c:
 key = ngx_hash(key, ngx_tolower(data[i]));
I think this could be made generic so that case-sensitive keys are also supported."	lazylad91@…
1.13.x	1506	bind() in configuration test is too cautious	nginx-core	1.13.x	enhancement		new	2018-03-19T18:08:56Z	2020-04-25T23:11:37Z	"Currently, nginx does too much when testing configuration (`nginx -t`). While in some cases it's a good idea to test everything that can be tested, there are a lot of use cases where it's not:

1. I want to test configuration file syntax on different machine
2. I want to run `nginx -t` as a non-root user
3. I don't want `nginx -t` to clobber a socket when it may run simultaneously with the nginx daemon being started by another subsystem (the chance is quite low, but I'm still not sure why binding is needed at all)

So it would be nice if there were a flag that checks configuration file syntax but doesn't rely on running on the same system: don't bind sockets, don't try to open logs and pid files, and maybe other things I'm not aware of yet.

(tested on 1.12.2, sorry if it's fixed in master)

What do you think?"	tailhook@…
1.13.x	1530	Origin frame (RFC 8336) support?	nginx-module	1.13.x	enhancement		new	2018-04-16T10:50:00Z	2018-04-16T10:50:00Z	"Hi

I just wanted to check if nginx is likely to support the http/2 ORIGIN frame in future (now that the RFC, 8336, has been ratified). RFC: https://tools.ietf.org/html/rfc8336

We have had use cases in the past where some origins that would be coalesced (same IP/cert) have known problems with h2 and thus must not be used as coalesced origins. We had to implement workarounds to break the same-IP/cert conditions, which involved significant re-engineering and cost. The ORIGIN frame would have prevented that work, so I'm keen to know if nginx may support it at some stage.

Cheers
Neil"	Neil Craig
1.13.x	1535	proxy_bind and resolver IP version mismatch	other	1.13.x	enhancement		new	2018-04-19T11:56:06Z	2022-01-18T22:32:44Z	"If proxy_bind is used and we're proxying to a hostname which lists both IPv4 and IPv6 addresses, the request can randomly fail depending on what address the resolver decides to pick.
This can be confusing to diagnose (and makes proxy_bind look like it's broken) since the proxy can appear to work, but rather unreliably.

I found the following which states there is no workaround to the problem: http://nginx.2469901.n2.nabble.com/How-Nginx-behaves-with-quot-proxy-bind-quot-and-DNS-resolver-with-non-matching-ip-versions-between-b-td7592529.html

I'm still seeing this problem on v1.13.3 so it doesn't seem to have been resolved yet.

Most domains which have an IPv6 address will also list an IPv4 address.  This is particularly problematic if we're binding on an IPv6 address because there's also no way to force the resolver to only give IPv6: https://forum.nginx.org/read.php?10,270086

I can't think of any reason why you'd ever want the bind and upstream to not be on the same IP version (otherwise it's guaranteed to fail), so it makes a lot of sense if this could be addressed.


----


I also did try forcing everything to IPv4, but it didn't seem to work for me - maybe I've made a mistake somewhere?  Config looks like:


{{{
stream { server {
  listen 5555;
  resolver 1.1.1.1 ipv6=off;
  proxy_pass example.com:80;
  proxy_bind 0.0.0.0;
}}
}}}


Executing `nc 0 5555`, every now and then, it fails and I get the following in the error log:

2018/04/19 13:51:08 [crit] 18368#18368: *330 bind(0.0.0.0) failed (22: Invalid argument) while connecting to upstream, client: 127.0.0.1, server: 0.0.0.0:5555, upstream: ""[2606:2800:220:1:248:1893:25c8:1946]:80"", bytes from/to client:0/0, bytes from/to upstream:0/0
"	zingaburga@…
1.13.x	1536	grpc-web (grpc for browsers)	nginx-module	1.13.x	enhancement		new	2018-04-20T19:00:26Z	2018-10-24T11:34:23Z	"Now that gRPC support is built in, could the nginx team collaborate with the grpc and kubernetes teams to implement grpc-web (gRPC for browsers) directly in nginx, please?

https://github.com/grpc/grpc-web/blob/master/net/grpc/gateway/nginx/grpc_gateway_module.c

https://github.com/grpc/grpc-web/issues/177

https://github.com/kubernetes/ingress-nginx/issues/2391"	gertcuykens@…
1.13.x	1573	adding text/css to the default list for the charset_types directive	nginx-module	1.13.x	enhancement		new	2018-06-13T17:23:15Z	2018-06-13T17:23:15Z	"I would like to suggest adding text/css to the default list for the charset_types directive.

Since the W3C is now introducing the custom element feature, there may be a growing trend of non-ASCII markup in CSS files as well as HTML files, so things like this will be seen in the future:

x-測
{
}
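Until the default changes, a site can opt in explicitly by listing text/css in the directive:

{{{
charset utf-8;
charset_types text/html text/xml text/plain text/css;   # adds text/css to the set
}}}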

So the encoding of CSS files is no longer unimportant. Although adding an @charset rule at the beginning of a CSS file works for HTML documents, a user who directly visits a non-ASCII CSS file will probably see garbled text. So I suggest adding text/css to the default list for the charset_types directive."	Zhang-Junzhi@…
1.13.x	1617	preread data ignored when SSL is terminated	other	1.13.x	enhancement		new	2018-08-23T07:14:06Z	2018-08-28T04:42:14Z	"Using e.g. the ssl_preread module in combination with a listen directive that terminates SSL results in discarding the preread data.

I've attached a patch that fixes this by chaining an OpenSSL BIO that first returns any data in the ngx_connection_t's buffer field."	James Callahan
1.13.x	1619	test configuration ignoring certificates and keys	other	1.13.x	enhancement		new	2018-08-23T12:06:03Z	2018-08-23T12:06:03Z	"When testing a configuration file like this:

{{{
nginx -t -c ""/nginx.conf""
}}}

it would be most useful to ignore the certificate and key files.

What I suggest is an option that allows ignoring those files.

The reason is that the test fails when the files cannot be found - for example during continuous integration, when the specific files are not present yet.

I would like to just use it to check the syntax of nginx.conf."	normoes@…
1.14.x	1579	Mirror subrequests ignore the keepalive flag	other	1.14.x	defect		new	2018-06-18T11:22:43Z	2022-10-11T21:38:08Z	"I'm attempting to use the mirror directive to soak test a new backend service, and am experiencing issues due to the upstream connections being closed after each mirror subrequest. This makes the test impossible, as we hit port exhaustion in < 60s.

I've built a test case using a simple Go server and ab for load testing, attached as test-site.conf and run-test.sh. This test case demonstrates that keepalive functionality works fine for the upstream, except in the case of mirror (and auth_request) subrequests.

I've also done some source diving, and discovered that this can be worked around by changing ngx_http_mirror_module.c:171 to set sr->discard_body = 1 instead of sr->header_only = 1, though I have no idea what side effects this might cause."	predatory.kangaroo@…
1.14.x	1965	$request_time less than $upstream_response_time	nginx-core	1.14.x	defect		accepted	2020-04-29T05:33:25Z	2023-08-03T19:37:58Z	"nginx logformat:
log_format main escape=json '{ ""http_x_forwarded_for"": ""[$http_x_forwarded_for]"", '
'""remote_addr"": ""$remote_addr"", '
'""remote_user"": ""$remote_user"", '
'""time_local"": ""[$time_local]"", '
'""request_method"": ""$request_method"", '
'""request_host"": ""$scheme://$host"", '
'""request_host_1"": ""$host"", '
'""service_line"": ""itservice.api"", '
'""request_uri"": ""$uri"", '
'""query_string"": ""$query_string"", '
'""server_protocol"": ""$server_protocol"", '
'""status"": ""$status"", '
'""body_bytes_sent"": ""$body_bytes_sent"", '
'""http_referer"": ""$http_referer"", '
'""http_user_agent"": ""$http_user_agent"",'
'""request_time"": ""$request_time"", '
'""upstream_addr"": ""[$upstream_addr]"", '
'""req_id"": ""$request_id"", '
'""upstream_response_time"": ""$upstream_response_time"" '
' }';


nginx log:
{ ""http_x_forwarded_for"": ""[]"", ""remote_addr"": ""192.168.11.130"", ""remote_user"": """", ""time_local"": ""[29/Apr/2020:01:11:33 +0800]"", ""request_method"": ""GET"", ""request_host"": ""https://xxx.abc.com"", ""request_host_1"": ""xxx.abc.com"", ""service_line"": ""itservice.api"", ""request_uri"": ""/api/v1/sensitive-info/batch/getUserInfo"", ""query_string"": ""batchNumber=xxx&userId=xxx&dataType=1"", ""server_protocol"": ""HTTP/1.1"", ""status"": ""200"", ""body_bytes_sent"": ""113"", ""http_referer"": """", ""http_user_agent"": ""Apache-HttpClient/4.5.10 (Java/1.8.0_211)"",""request_time"": ""0.011"", ""upstream_addr"": ""[192.168.10.182:80]"", ""req_id"": ""6bdcc5ce837247323599d37aaceba33c"", ""upstream_response_time"": ""0.012""  }

issue:
upstream_response_time: 0.012
request_time: 0.011
In this log, the request_time is less than the upstream_response_time. Why does this happen?"	learn0208@…
1.14.x	2625	nginx proxy_pass variable DNS resolution not updated when there is another proxy_pass with same domain and without variable	nginx-module	1.14.x	defect		new	2024-04-04T14:30:38Z	2024-04-09T16:25:11Z	"According to the documentation of nginx DNS discovery, when using proxy_pass with a variable for the domain, the variable should be re-resolved when the domain's IP changes in DNS.
However, when there are two proxy_pass locations for the same domain name, one with a variable and one without, the proxy_pass with the variable does not re-resolve DNS changes and keeps using the IP obtained at startup, even after DNS has changed the domain's IP.

**To reproduce on Redhat 8**
1. install nginx: sudo yum install nginx
2. start local dns that can read /etc/hosts entries: sudo systemctl start systemd-resolved
3. modify the nginx configuration file /etc/nginx/nginx.conf

  3a. add to the log_format $upstream_addr e.g.

{{{
    log_format  main  '$remote_addr - $remote_user [$time_local] ""$request"" '
                      '$status $body_bytes_sent ""$http_referer"" '
                      '""$http_user_agent"" ""$http_x_forwarded_for"" ""$upstream_addr""';
}}}
  3b. add two proxy_pass, one with variable resolver of 127.0.0.53 with valid of 1 second and the other without variable

{{{
        location /test1 {
            resolver 127.0.0.53 valid=1s;
            set $testbackend https://test.local;
            proxy_pass $testbackend;
        }

        location /test2 {
            proxy_pass https://test.local;
        }
}}}
4. add to /etc/hosts an entry for test.local pointing to 127.0.0.200
127.0.0.200 test.local
5. use dig to check the DNS resolution of test.local: dig @127.0.0.53 test.local
check IP is resolved to 127.0.0.200
6. restart nginx for configuration to take effect: sudo systemctl start nginx
7. use curl to connect to the test1 location: curl localhost:80/test1
8. check nginx access log last lines, there should be upstream address of 127.0.0.200:443
tail -20 /var/log/nginx/access.log
...
::1 - - [04/Apr/2024:15:07:46 +0300] ""GET /test1 HTTP/1.1"" 502 4020 ""-"" ""curl/7.61.1"" ""-"" ""**127.0.0.200**:443""
upstream IP is 127.0.0.200
9. change the /etc/hosts entry for test.local to have a different IP 127.0.0.201 instead of the previous 127.0.0.200
127.0.0.201 test.local
10. use dig to check the DNS resolution of test.local: dig @127.0.0.53 test.local
check IP is resolved to 127.0.0.201
11. wait for 30 seconds so the valid time for the DNS entry should expire for the resolver
12. use curl to connect to the test1 location: curl localhost:80/test1
13. check nginx access log last lines, the upstream address should change to 127.0.0.201:443 but it will remain as before
tail -20 /var/log/nginx/access.log
...
::1 - - [04/Apr/2024:15:17:27 +0300] ""GET /test1 HTTP/1.1"" 502 4020 ""-"" ""curl/7.61.1"" ""-"" ""**127.0.0.200**:443"""	lkgendev@…
1.14.x	1697	mail proxy: ManageSieve protocol support	nginx-module	1.14.x	enhancement		new	2018-12-30T20:57:15Z	2022-08-04T20:45:23Z	"I'm representing the small FOSS community of [https://github.com/Mailu/Mailu Mailu]. Our software uses Nginx as reverse proxy for our dockerized mail services. We utilize the nginx proxy as the single authentication front-end.

In this setup we would also like to expose the [https://tools.ietf.org/html/rfc5804 ManageSieve] protocol so that server side filter scripts can be enabled by mail clients such as Thunderbird. It would be a nice addition to the mail proxy protocols.

Note that the mentioned uname -a is not applicable, since we are running inside a docker container. (Alpine based)"	muhlemmer@…
1.14.x	1710	ngx_http_dav_module: Allow to configure some anti-overwrite	nginx-module	1.14.x	enhancement		new	2019-01-16T10:20:17Z	2019-01-16T10:20:17Z	"Maybe I just didn't find an existing way to do this... I'd like to protect the directory exposed via ngx_http_dav_module against overwrites, but wasn't able to.
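One possible stopgap, sketched and untested - note the if() applies to every request method in the location, so in this naive form GETs of existing files would be rejected too:

{{{
location /dav/ {
    dav_methods PUT;
    # refuse overwrites: 403 whenever the target path already exists
    if (-e $request_filename) {
        return 403;
    }
}
}}}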

I even tried using a minutely cron job to change the file owner to root (yes..), but nginx does not care and re-changes the owner and permissions on PUT :)"	JulienPalard@…
1.14.x	1883	nginx -t doesn‘t complain about wrongly formatted server_name directive	other	1.14.x	enhancement		new	2019-11-04T08:19:23Z	2019-11-04T08:19:23Z	"Recently I was setting up a server and I made a mistake:

{{{

server_name:  www.example.com, test.example.com;

}}}

The comma is wrong, but I didn't notice. I ran nginx -t and it reported ""syntax is ok"".
Afterwards I spent some time finding out why it didn't work.
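For reference, the intended space-separated form is:

{{{
server_name www.example.com test.example.com;
}}}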

Suggestion:
If the value of a directive appears to be a comma-separated list, nginx -t should throw a warning."	nirtuj@…
1.14.x	2182	Nginx doesn't delete temp cache files after a crash	nginx-module	1.14.x	enhancement		new	2021-05-13T17:27:47Z	2021-05-18T01:36:45Z	"When the Nginx cache module downloads an object from an upstream, it saves it into a temp file, and once the download is finished, Nginx renames the temp file to a cache file. Nginx does not track these temp files in any persistent storage, nor does the cache manager detect them. Thus, if Nginx crashes or gets killed while downloading objects, these temp files stay on disk and nothing cleans them up. Note that killing is the default behavior on most modern systemd-based Linux systems: if a process does not finish within 60 sec after receiving TERM, systemd sends KILL.

There should be some housekeeping mechanism cleaning these stale files up.

Example of these files, captured on May 12:
-rw------- 1 nginx nginx 52035584 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935649
-rw------- 1 nginx nginx 52117504 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935663
-rw------- 1 nginx nginx 52592640 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935679
-rw------- 1 nginx nginx 52248576 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935687
-rw------- 1 nginx nginx 41664512 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935688
-rw------- 1 nginx nginx 45858816 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935701
-rw------- 1 nginx nginx 41746432 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935705
-rw------- 1 nginx nginx 42139648 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935708
-rw------- 1 nginx nginx 37421056 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935714
-rw------- 1 nginx nginx 36159488 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935720
-rw------- 1 nginx nginx 36683776 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935730
-rw------- 1 nginx nginx 36847616 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935734
-rw------- 1 nginx nginx 40304640 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935739
-rw------- 1 nginx nginx 24592384 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935743
-rw------- 1 nginx nginx 27017216 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935746
-rw------- 1 nginx nginx 30097408 May 10 12:06 d43aaa174b966c213a8b65a8d53e8c01.0000935754"	ifel@…
1.15.x	1598	Windows Path Length Limitation issue	nginx-core	1.15.x	defect		accepted	2018-07-23T11:09:30Z	2023-01-11T20:15:44Z	"By default, Windows limits path length to 255 characters. When accessing a file whose path exceeds 255 characters, nginx throws an error saying ""The system cannot find the file specified"".

CreateFile() ""C:\nginx-1.13.12/client-data/patch-resources/linux/redhat/offline-meta/7/7Client/x86_64/extras/os/repodata/245f964e315fa121c203b924ce7328cd704e600b6150c4b7cd951c8707a70394f/245f964e315fa121c203b924ce7328cd704e600b6150c4b7cd951c8707a70394f-primary.sqlite.bz2"" failed (3: The system cannot find the path specified)

Refer : https://docs.microsoft.com/en-us/windows/desktop/fileio/naming-a-file"	bharathi355@…
1.15.x	1716	http2 ssl verify certificate failed should close tcp connection	other	1.15.x	defect		new	2019-02-01T03:11:24Z	2019-02-01T03:11:24Z	"When HTTP/2 SSL certificate verification fails, nginx sends a 400 error response in the ngx_http_process_request function instead of closing the connection. The client can keep sending requests on the connection, and every request will receive an error response.
By contrast, HTTP/1.1 with keepalive closes the connection when certificate verification fails."	xujunHW@…
1.15.x	1738	NGINX Not Honoring proxy_cache_background_update with Cache-Control: stale-while-revalidate Header	other	1.15.x	defect		reopened	2019-03-02T12:36:42Z	2020-06-17T12:23:48Z	"We are running NGINX in front of our backend server.

We are attempting to enable the proxy_cache_background_update feature to allow NGINX to async updates to the cache and serve STALE content while it does this.

However, we are noticing that it still delivers STALE content slowly as if it's not serving from the cache. The time it takes to deliver a response to the client after an item expires is very slow and clearly not served from cache - you can tell it's going to the backend server, getting an update '''and''' serving the client in the same request.

Here is our configuration from NGINX:


{{{
proxy_ignore_headers Expires;
proxy_cache_background_update   on;
}}}


Our backend server is delivering the following headers:


{{{
HTTP/1.1 200 OK
Date: Thu, 28 Feb 2019 21:07:09 GMT
Server: Apache
Cache-Control: max-age=1800, stale-while-revalidate=604800
Content-Type: text/html; charset=UTF-8
}}}

When attempting an expired page fetch we do correctly notice a STALE response in the header:


{{{
X-Cache: STALE
}}}

However, when providing this response it is very slow as if it's contacted the backend server and done it in real-time.

NGINX version:

{{{
$ nginx -v
nginx version: nginx/1.15.9
}}}

It seems that nginx is honoring serving stale content (as we have tested) but it also updates the cache from the backend on the same request/thread thus causing the slow response time to the client. I.e. it seems to be totally ignoring the ''proxy_cache_background_update   on;'' directive and not updating in the background on a separate subrequest (async).

We have also tried with 
{{{
proxy_cache_use_stale updating;
}}}

However, the same behavior happens. As far as I'm aware, there is also no need to use ''proxy_cache_use_stale updating;'' when the backend sets a Cache Control: stale-while-revalidate header. The issue seems to be that it honors serving STALE content but it is also updating the cache on the same thread as the request comes in - i.e. it's simply ignoring proxy_cache_background_update on;
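For reference, a minimal sketch of the intended setup (cache zone and backend names hypothetical):

{{{
proxy_cache_path /var/cache/nginx keys_zone=appcache:10m;

server {
    location / {
        proxy_cache appcache;
        proxy_cache_background_update on;          # refresh stale entries async
        proxy_cache_use_stale updating error timeout;
        proxy_ignore_headers Expires;
        proxy_pass http://backend;
    }
}
}}}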
"	Ian Stephens
1.15.x	2060	Nginx doesn't count http_502 as an unsuccessful attempt in ngx_http_grpc_module	nginx-module	1.15.x	defect		accepted	2020-10-16T10:00:37Z	2020-10-19T17:24:25Z	"Per the nginx documentation [http://nginx.org/en/docs/http/ngx_http_grpc_module.html], the syntax ""grpc_next_upstream error timeout http_502;"" is valid, and http_502 responses should be counted as unsuccessful attempts. However, Nginx does not in fact count http_502 as an unsuccessful attempt.

Below is an example. A gRPC client sent a request to the nginx server every second, and nginx kept distributing requests round-robin between the upstream server which returned 502 and the other one. Nginx didn't count the http_502 responses as unsuccessful attempts.

nginx config file:

{{{
upstream testserver {
  server 10.46.46.161:9999 max_fails=1 fail_timeout=60; # another nginx server which can return responses with error code 502.
  server 10.46.46.160:9999; # a server which can return normal responses with status code 200.
}

server {
  listen 8888 http2;
  location /com.company.test {
    grpc_pass grpc://testserver;
    grpc_next_upstream error timeout http_504 http_502 non_idempotent;
  }
}
}}}



access log file:
{{{
[11:24:40 +0000]|| ""POST /com.company.test/order HTTP/2.0""|| 502|| 150|| ""-""|| grpc-java-netty/1.17.2|| -|| 0.000|| 0.001|| 10.46.46.161:9999|| 502
[11:24:41 +0000]|| ""POST /com.company.test/order HTTP/2.0""|| 200|| 28|| ""-""|| grpc-java-netty/1.17.2|| -|| 0.003|| 0.003|| 10.46.46.160:9999|| 200
[11:24:42 +0000]|| ""POST /com.company.test/order HTTP/2.0""|| 502|| 150|| ""-""|| grpc-java-netty/1.17.2|| -|| 0.000|| 0.001|| 10.46.46.161:9999|| 502
[11:24:43 +0000]|| ""POST /com.company.test/order HTTP/2.0""|| 200|| 28|| ""-""|| grpc-java-netty/1.17.2|| -|| 0.005|| 0.005|| 10.46.46.160:9999|| 200
[11:24:44 +0000]|| ""POST /com.company.test/order HTTP/2.0""|| 502|| 150|| ""-""|| grpc-java-netty/1.17.2|| -|| 0.001|| 0.000|| 10.46.46.161:9999|| 502
[11:24:45 +0000]|| ""POST /com.company.test/order HTTP/2.0""|| 200|| 28|| ""-""|| grpc-java-netty/1.17.2|| -|| 0.005|| 0.004|| 10.46.46.160:9999|| 200
[11:24:46 +0000]|| ""POST /com.company.test/order HTTP/2.0""|| 502|| 150|| ""-""|| grpc-java-netty/1.17.2|| -|| 0.000|| 0.001|| 10.46.46.161:9999|| 502
[11:24:47 +0000]|| ""POST /com.company.test/order HTTP/2.0""|| 200|| 28|| ""-""|| grpc-java-netty/1.17.2|| -|| 0.003|| 0.003|| 10.46.46.160:9999|| 200
}}}
"	AbacusHu@…
1.15.x	1624	support json return type in stub_status	other	1.15.x	enhancement		new	2018-08-29T02:04:23Z	2018-08-29T02:04:23Z	support json return type in stub_status for easier parsing.	gnowxilef@…
1.15.x	1629	use variable in for proxy_ssl in stream module	other	1.15.x	enhancement		new	2018-09-05T23:12:32Z	2018-10-09T08:47:34Z	I'd like to be able to use a variable to decide on proxy_ssl when using the stream module.	James Callahan
1.15.x	1631	feature request: support ALTSVC frame	nginx-core	1.15.x	enhancement		new	2018-09-08T15:37:16Z	2018-09-08T15:37:16Z	"HTTP/2 has an ALTSVC frame feature that can tell the client to send further requests to another server.
https://tools.ietf.org/html/rfc7838#section-4
whereas the Alt-Svc header only takes effect after the first response.

nginx doesn't support the ALTSVC frame yet; I'm requesting support for that feature."	wan-qy@…
1.15.x	1639	Add support for writing PROXY protocol v2 to upstream	nginx-core	1.15.x	enhancement		new	2018-09-19T16:29:22Z	2024-12-10T12:07:43Z	"With nginx 1.13.11, support for ''reading'' version 2 of the PROXY protocol (the binary variant) was added. nginx also allows ''writing'' the PROXY protocol to a TCP upstream with the ""proxy_protocol on;"" setting in a server block; however, it seems this is always version 1. (Implemented as ngx_proxy_protocol_write in ngx_proxy_protocol.c.)
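For context, the write side is configured in the stream module like this today (port and backend name hypothetical):

{{{
stream {
    server {
        listen 12345;
        proxy_pass backend.example.com:12345;
        proxy_protocol on;   # currently always emits PROXY protocol v1
    }
}
}}}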

It would be great if version 2 were supported as well, perhaps configurable by specifying an integer, e.g. ""proxy_protocol 2;"". Are there any plans to implement this?"	ko.cloudflare.com@…
1.15.x	1651	client_body_in_file_only/client_body_temp_path file permissions	nginx-core	1.15.x	enhancement		new	2018-10-10T14:04:29Z	2024-09-01T05:17:36Z	"Hello,
I have a setup where I have the Nginx server and application server running on different user accounts. I want to be able to use client_body_in_file_only/client_body_temp_path in order to save the file to disk and forward the $request_body_file to the application. However the file permissions are always 0600 making the application unable to read the file at all.
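For reference, a minimal sketch of this kind of setup (paths and socket name are illustrative):
{{{
location /upload {
    client_body_in_file_only  on;
    client_body_temp_path     /var/spool/nginx/client_bodies;
    # pass the path instead of the body; the app then reads the file itself
    fastcgi_pass_request_body off;
    fastcgi_param REQUEST_BODY_FILE $request_body_file;
    fastcgi_pass unix:/run/app.sock;
}
}}}
With the app running under a different user, the 0600 permissions on the temp file make that last read fail.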
Looking in the Nginx source code - this is currently an unsupported scenario: https://github.com/nginx/nginx/blob/1305b8414d22610b0820f6df5841418bf98fc370/src/http/ngx_http_request_body.c#L468 creates the temporary file with the default permissions which get later expanded to 0600 (unless request_body_file_group_access is set - but unfortunately that property is not settable)."	mrarm.slack@…
1.15.x	1666	Add MSG_ZEROCOPY support	nginx-core	1.15.x	enhancement		new	2018-10-26T11:54:17Z	2018-10-26T11:54:17Z	"Please would you add support for zero-copy networking (MSG_ZEROCOPY/SO_ZEROCOPY).

https://www.kernel.org/doc/html/latest/networking/msg_zerocopy.html

Apologies, providing a patch is beyond my level of code fluency in this case."	craigt
1.15.x	1668	Channel-Bound Cookies Implementation in nginx	other	1.15.x	enhancement		new	2018-11-05T10:27:54Z	2018-11-07T17:03:52Z	"Hi,
I've just had a look at this post about a Chrome security vulnerability that allows cookies to be stolen. Since a possible mitigation to this technique would be TLS Channel-Bound Cookies (http://www.browserauth.net/channel-bound-cookies), I was wondering if there is any plan to implement this feature in nginx.

It would be particularly useful in a reverse-proxy configuration, so that nginx could validate the cookie before sending it to the backend app."	aleroot@…
1.15.x	1675	OCSP stapling not working in stream area	other	1.15.x	enhancement		new	2018-11-17T18:30:39Z	2021-02-19T10:56:28Z	"I have set up a mail-proxy setup and wanted to use OCSP stapling for the public certificates, which is not working.
Whatever I tried, I just get the following error:
nginx: [emerg] ""ssl_stapling_file"" directive is not allowed here


stream {
  log_format basic '$remote_addr [$time_local] '
                   '$protocol $status $bytes_sent $bytes_received '
                   '$session_time $ssl_cipher' ;
  access_log /var/log/nginx/stream.log basic buffer=32k;
  
  map $ssl_preread_server_name $ssl_multiplexer {
      ~smtp 		127.0.0.1:8040;
      ~imap 		127.0.0.1:8042;
      ~pop3 		127.0.0.1:8043;
      default		127.0.0.1:8042;
  }
  server {
    listen          192.168.0.99:443;
    ssl_preread on;
    proxy_pass $ssl_multiplexer;
    proxy_protocol on;
  }
  server {
    listen 127.0.0.1:8040 ssl proxy_protocol;
    ssl_certificate      smtp.chain.pem;
    ssl_certificate_key  smtp.privkey.pem;
    ssl_protocols        TLSv1.2 TLSv1.3;
    ssl_ciphers TLS-CHACHA20-POLY1305-SHA256:TLS-AES-256-GCM-SHA384:TLS-AES-128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;
    ssl_session_cache		shared:Stream:10m;
    ssl_trusted_certificate	/etc/ssl/certs/ca-certificates.crt;
    proxy_ssl on;
    proxy_pass 192.168.0.99:465;
    proxy_ssl_name smtp.example.com;
    proxy_ssl_server_name on;
    proxy_ssl_protocols TLSv1.3;
  }
  server {
    listen 127.0.0.1:8042 ssl proxy_protocol;
    ssl_certificate      imap.chain.pem;
    ssl_certificate_key  imap.privkey.pem;
    ssl_protocols        TLSv1.2 TLSv1.3;
    ssl_ciphers TLS-CHACHA20-POLY1305-SHA256:TLS-AES-256-GCM-SHA384:TLS-AES-128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;
    ssl_session_cache		shared:Stream:10m;
    ssl_trusted_certificate	/etc/ssl/certs/ca-certificates.crt;
    proxy_ssl on;
    proxy_pass 192.168.0.99:993;
    proxy_ssl_name imap.example.com;
    proxy_ssl_server_name on;
    proxy_ssl_protocols TLSv1.3;
  }
  server {
    listen 127.0.0.1:8043 ssl proxy_protocol;
    ssl_certificate      pop3.chain.pem;
    ssl_certificate_key  pop3.privkey.pem;
    ssl_protocols        TLSv1.3 TLSv1.2;
    ssl_ciphers TLS-CHACHA20-POLY1305-SHA256:TLS-AES-256-GCM-SHA384:TLS-AES-128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;
    ssl_session_cache		shared:Stream:10m; 
    ssl_trusted_certificate	/etc/ssl/certs/ca-certificates.crt;
    proxy_ssl on;
    proxy_pass 192.168.0.99:995;
    proxy_ssl_name pop3.example.com;
    proxy_ssl_server_name on;
    proxy_ssl_protocols TLSv1.3;
  }
}

How can I enable OCSP stapling for this Stream-Servers?
I wanted to use OCSP Must-Staple certificates, which currently do not work.
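For comparison, this is how stapling is configured in the http context, where these directives are accepted (certificate paths are placeholders):
{{{
server {
    listen 443 ssl;
    ssl_certificate         smtp.chain.pem;
    ssl_certificate_key     smtp.privkey.pem;
    ssl_stapling            on;
    ssl_stapling_verify     on;
    ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
}
}}}
The same directives in a stream server block are rejected with the [emerg] error shown above.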
"	Tributh@…
1.15.x	1719	Enhance proxy_cache_min_uses directive	other	1.15.x	enhancement		new	2019-02-06T08:37:53Z	2020-06-07T04:27:03Z	"When using nginx for caching large, high-traffic media it may make sense to use the proxy_cache_min_uses directive. But currently it accepts only a numeric constant, so its functionality is limited.
There are valid use cases requiring some smart values for proxy_cache_min_uses.
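For reference, the directive today takes only a literal count (zone name and sizes are illustrative):
{{{
proxy_cache_path /var/cache/nginx keys_zone=media:100m max_size=50g;
proxy_cache_min_uses 3;   # only a numeric constant is accepted
}}}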

Let's assume we have an nginx proxy with caching on SSD. These kinds of drives tend to wear out quickly when used in high-write applications. To save IO it might be beneficial to cache only really frequent requests and keep the cache as hot as possible.

From my point of view, setting this directive automatically to a value '''above''' the least-recently-used entry score stored in the appropriate cache zone will do the trick. As a result it will reduce cache writes for infrequent requests and keep the cache red-hot for its size."	vadim.lazovskiy@…
1.15.x	1732	Warn for large request bodies	other	1.15.x	enhancement		new	2019-02-25T20:30:28Z	2019-02-25T20:30:28Z	"Currently, this warning floods our log files:

`2019/02/25 11:17:46 [warn] 73578#101263: *13497288 a client request body is buffered to a temporary file /var/tmp/nginx/client_body_temp/0000000957, `[snip]

We keep `client_body_buffer_size` low and `client_max_body_size` high so we can accept large uploads with minimal memory usage and let the kernel's disk cache sort out what to keep in memory. As a result, this warning describes an event that is perfectly normal and reasonable. The noise obscures more important messages. 

On the other hand, there's nothing to indicate danger if clients are sending request bodies that are near the limit specified by `client_max_body_size` without unnecessarily committing a lot of memory to uploads.

What I'd suggest is, first take out this warning completely. Then add a new setting called (for example) `client_warn_body_size`. When the body size reaches this setting, emit a new warning such as ""a client request body is larger than client_warn_body_size (%d bytes)"". That way the sys admin can see when the clients are approaching `client_max_body_size` and either raise `client_max_body_size` or adjust the client before the limit becomes apparent to users. Perhaps my proposed `client_warn_body_size` could default to match whatever `client_body_buffer_size` is set to."	Leif Pedersen
1.15.x	1737	HTTP/2 HPACK full encoding support	other	1.15.x	enhancement		new	2019-03-01T15:23:15Z	2019-03-01T15:23:15Z	"It would be really appreciated if Nginx implemented HTTP/2 HPACK full encoding. 

If this has been implemented then I'm sorry; I could find no mention of it being implemented, except a few email and Twitter threads (https://twitter.com/nginxorg/status/918181204234526721) saying it's not implemented."	Avamander
1.15.x	1763	HTTP/2 prioritization is intermittent and often ineffective	other	1.15.x	enhancement		new	2019-04-11T14:51:37Z	2021-04-09T15:16:13Z	"The core support for prioritization for HTTP/2 is solid and attempts to prioritize but it appears that the data flow through Nginx itself prevents it from actually prioritizing quite often.

For prioritization to be effective, the downstream (browser-facing) part of the connection has to have minimal buffering beyond the HTTP/2 prioritization logic and the upstream (origin/files/data source) needs to buffer enough data for every stream to be able to always fill the downstream connection with data from the highest current-priority request (or balance as weighting defines).

Chrome builds an exclusive dependency list so there is only ever 1 request that is at the top of the tree and it is requested to get 100% of the bandwidth. At times higher priority requests will come in and be inserted at the front of the queue (every stream has the exclusive flag set). That makes it reasonably easy to test.  

There is a test page [https://github.com/pmeenan/http2priorities/tree/master/stand-alone here] that exercises Chrome's prioritization by warming up the connection with a few serialized requests, queuing 30 low-priority requests, waiting a bit and then queuing 2 high-priority requests serially. When prioritization is working well, the 2 high-priority requests will interrupt the existing data flow and complete quickly (optimally starting within 1 RTT if all of the buffering is perfect). All of the requests will use 100% of the bandwidth and be downloading exclusively unless interrupted by a higher-priority request (no interleaving of data across requests). When prioritization is not working well, the high-priority requests will be delayed (one or both) and you may also see interleaving across requests.

The waterfalls below are from WebPageTest using Chrome (data from the raw netlog on the client side). The light parts of the bars are when the stream is idle and the dark parts of the bars are when header or data frames are flowing.

[https://www.webpagetest.org/result/190410_5G_881addc1aad96d3fc1f804cd3e017450/ Here] is what it looks like with h2o which has well-functioning prioritization out of the box with no server tuning:
[[Image(https://www.webpagetest.org/waterfall.php?test=190410_5G_881addc1aad96d3fc1f804cd3e017450&run=1&cached=&step=1&cpu=0&bw=0&width=700)]]

Since Nginx doesn't natively support pacing the downstream connection like h2o, it requires a bit of server tuning to minimize the downstream buffering. Specifically, BBR congestion control needs to be used to eliminate bufferbloat, and tcp_notsent_lowat needs to be configured to reduce TCP send-buffer bloat. More details on why are available [https://blog.cloudflare.com/http-2-prioritization-with-nginx/ here].
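The tuning referred to above boils down to two kernel settings (the lowat value is illustrative; the right number depends on the link):
{{{
# /etc/sysctl.d/99-h2-prioritization.conf
net.ipv4.tcp_congestion_control = bbr
net.ipv4.tcp_notsent_lowat = 16384
}}}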

Even with the system configured to minimize downstream buffering, the results with Nginx are [https://www.webpagetest.org/result/190410_P7_505118a003e1402b566c4b702c189a5b/ inconsistent] and sometimes it [https://www.webpagetest.org/result/190410_P7_505118a003e1402b566c4b702c189a5b/2/details/#waterfall_view_step1 works] as expected but [https://www.webpagetest.org/result/190410_P7_505118a003e1402b566c4b702c189a5b/1/details/#waterfall_view_step1 fails] often:

[[Image(https://www.webpagetest.org/waterfall.php?test=190410_P7_505118a003e1402b566c4b702c189a5b&run=1&cached=&step=1&cpu=0&bw=0&width=700)]]

In this test case the image is served from local disk (SSD) and epoll is not enabled. We have seen situations where the results differ based on if the data is coming from disk, proxy to a TCP connection or proxy to a local unix domain socket as well as if epoll is enabled or not. Sometimes the interleaving across requests is a lot more visible.

In this specific example, it is clear that the responses are all available very quickly with a very thin line near the beginning of each request for the HEADERS frame with the responses but the actual DATA frames are not being prioritized well. The exclusive streams are being interleaved even though the response data is available on the server MUCH faster than downstream consumes it and the ordering of the high-priority streams intermittently gets delayed behind the low priority streams. 

We have seen the same issue going back to 1.14.x and see it in production on a lot of large Nginx deployments."	patmeenan@…
1.15.x	1768	Request for documentation: `--with-http_degradation_module`	documentation	1.15.x	enhancement		new	2019-04-16T09:07:05Z	2019-04-23T14:37:24Z	"http://nginx.org/en/docs/configure.html

The `--with-http_degradation_module` compile flag does not currently have a linked document, whereas the majority of other `ngx_*` modules have a linked document.

Please consider an appropriate document.

Thank you in advance."	petecooper@…
1.15.x	1775	Allow $hostname as part of name in server_name	nginx-core	1.15.x	enhancement		new	2019-04-30T10:58:23Z	2019-05-21T14:50:13Z	"From https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name : 
>> If the directive’s parameter is set to “$hostname” (0.9.4), the machine’s hostname is inserted.

Since nginx's $hostname variable is known at launch time, can we use it to compose the full server name? For example:
{{{
server_name static.$hostname
server_name $hostname.example.com
}}}

"	bes.internal@…
1.15.x	1785	Support access to environment variables in config file	other	1.15.x	enhancement		new	2019-05-30T19:50:23Z	2020-06-19T07:24:38Z	"Currently nginx doesn't directly support access to environment variables in the config file. This makes it difficult to use it in a 12-factor-app style setup, where the docker container / VM / machine image use environment variables for configuring things such as static assets.

One way to work around this is to use `envsubst` to generate the config file on the fly. However, that means the target config file needs to be writable.
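A typical envsubst invocation for this workaround looks something like the following (paths and the variable name are illustrative):
{{{
export MY_STATIC_ASSETS_PATH=/srv/static
envsubst '$MY_STATIC_ASSETS_PATH' \
    < /etc/nginx/nginx.conf.template \
    > /etc/nginx/nginx.conf
}}}
Restricting the variable list keeps nginx's own `$variables` in the template from being substituted away.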

Another one would be to use something like ngx_http_lua_module. The process is two-step: first, prevent the variable from being filtered out:

{{{
env MY_STATIC_ASSETS_PATH;
}}}

Then, set a variable using nginx syntax:

{{{
http {
...
  server {
    location /static {
      # set var using Lua
      set_by_lua $static_assets 'return os.getenv(""MY_STATIC_ASSETS_PATH"")';
      alias ""$static_assets""
      ...
    }
  }
}
}}}

Since using the lua module seems heavy-handed for something like this, I'd like to propose a set_from_env directive that does exactly that. The result would be:

{{{
http {
...
  server {
    location /static {
      # set var using built in directive
      set_from_env $static_assets MY_STATIC_ASSETS_PATH;
      alias ""$static_assets""
      ...
    }
  }
}
}}}"	spion-h4@…
1.15.x	1788	stream proxy_pass ipv6 first	nginx-module	1.15.x	enhancement		new	2019-06-01T15:13:44Z	2022-01-27T14:20:25Z	"When both A and AAAA records are returned, any one of them is taken in round-robin fashion. I hope a parameter could be added here to explicitly set IPv6 or IPv4 priority.

{{{
server {
    ssl_preread on;
    resolver 1.1.1.1;
    proxy_pass $ssl_preread_server_name;
}
}}}"	pyrrhudite@…
1.15.x	1809	Allow stream with `ssl_preread on` to forward to http without leaving nginx	other	1.15.x	enhancement		new	2019-07-12T23:38:42Z	2019-07-12T23:38:42Z	"Currently, having multiple services on the same port that depend on ALPN means that every incoming connection creates another internal TCP connection, which means sockets get used up faster than they need to and every packet gets sent a second time through the kernel.

Instead, if nginx could transfer ownership of the socket from the stream module to the http module without proxying, this would speed up this use case and reduce its resource usage."	ben.lubar@…
1.16.x	1841	Dynamic access log and rewrites	nginx-core	1.16.x	defect		new	2019-08-27T21:54:38Z	2019-09-02T09:26:08Z	"This is a follow-up to #1051, which I can still reproduce in a current version.

When using dynamic access logs (that is, variables in the file name) together with any sort of rewrites (in this case custom error pages), no access log is written when requesting something nonexistent, and errors like the following appear:

{{{
2019/08/27 23:17:45 [error] 8713#8713: *3 testing file ""/opt/sits/nginx/nginx116/html/"" existence failed (2: No such file or directory) while logging request, client: 127.0.0.1, server: localhost, request: ""GET /plgr HTTP/1.0""
}}}

A minimal nginx config to reproduce is attached, as well as the debug output of a sample request.

After adding some debug messages it turned out that the error message is misleading and not actually caused by something being inaccessible. Instead, an empty path name is passed internally due to an extra loop iteration (when there is nothing more to check), causing the error.

Here is the relevant (custom) debug output:

{{{
2019/08/27 23:17:45 [debug] 8713#8713: *3 http log handler
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_http_log_script_write(): enter
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_http_log_script_write(): ngx_http_map_uri_to_path(): path=""/opt/sits/nginx/nginx116/html/50x.html""(len:39), root=len:30
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_http_log_script_write(): after_truncate: path=""/opt/sits/nginx/nginx116/html/""(len:39)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_open_file_wrapper(): enter
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): enter
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): ngx_openat_file(name=""opt"", fd=#12, mode=67584, create=0, access=0)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_open_file_wrapper(): for_loop_end: p=""sits/nginx/nginx116/html/""(addr:0000000001CC0DE5), cp=""/sits/nginx/nginx116/html/""(addr:0000000001CC0DE4), end=""""(addr:0000000001CC0E07)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): enter
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): ngx_openat_file(name=""sits"", fd=#14, mode=67584, create=0, access=0)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_open_file_wrapper(): for_loop_end: p=""nginx/nginx116/html/""(addr:0000000001CC0DEA), cp=""/nginx/nginx116/html/""(addr:0000000001CC0DE9), end=""""(addr:0000000001CC0E07)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): enter
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): ngx_openat_file(name=""nginx"", fd=#12, mode=67584, create=0, access=0)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_open_file_wrapper(): for_loop_end: p=""nginx116/html/""(addr:0000000001CC0DF0), cp=""/nginx116/html/""(addr:0000000001CC0DEF), end=""""(addr:0000000001CC0E07)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): enter
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): ngx_openat_file(name=""nginx116"", fd=#14, mode=67584, create=0, access=0)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_open_file_wrapper(): for_loop_end: p=""html/""(addr:0000000001CC0DF9), cp=""/html/""(addr:0000000001CC0DF8), end=""""(addr:0000000001CC0E07)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): enter
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): ngx_openat_file(name=""html"", fd=#12, mode=67584, create=0, access=0)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_open_file_wrapper(): for_loop_end: p=""""(addr:0000000001CC0DFE), cp=""/""(addr:0000000001CC0DFD), end=""""(addr:0000000001CC0E07)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): enter
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): ngx_openat_file(name="""", fd=#14, mode=2048, create=0, access=0)
2019/08/27 23:17:45 [debug] 8713#8713: *3 ngx_openat_file_owner(): ngx_openat_file(): failed (invalid file)
2019/08/27 23:17:45 [error] 8713#8713: *3 testing file ""/opt/sits/nginx/nginx116/html/"" existence failed (2: No such file or directory) while logging request, client: 127.0.0.1, server: localhost, request: ""GET /plgr HTTP/1.0""
2019/08/27 23:17:45 [error] 8713#8713: *3 testing file ""/opt/sits/nginx/nginx116/html/"" existence failed (2: No such file or directory) while logging request, client: 127.0.0.1, server: localhost, request: ""GET /plgr HTTP/1.0""
2019/08/27 23:17:45 [debug] 8713#8713: *3 run cleanup: 0000000001CCF878
}}}

Please note that this only happens with some sort of rewrites, not when for example accessing htdocs directly.

Why this access check is executed at all during access log writing is another question, though."	Peter Pramberger
1.16.x	1808	Inconsistent encoding in rewrites	other	1.16.x	defect		new	2019-07-11T13:21:37Z	2020-05-22T11:18:55Z	"Some characters get decoded in rewrites and can cause some trouble in other places like in proxy_pass.

== Demo setup ==
{{{
location / {
   rewrite /foo/(.*) /bar/$1 redirect;
}

}}}


== Example data ==
The URL http://localhost/foo/%5B%5D%7B%7D%C2%A7%24%25%26%3F will return the following location header from the webserver:
Location: http://localhost/bar/[]{}%C2%A7$%25&?

As you can see, some chars (like []) got decoded, but not all of them.
It seems that only the reserved characters of RFC 3986 are decoded.
"	Timo Hoffmann
1.16.x	1850	Content of the variable $sent_http_connection is incorrect	other	1.16.x	defect		accepted	2019-09-15T22:49:59Z	2023-04-04T15:39:39Z	"There is a suspicion that the content of the variable $sent_http_connection is incorrect.

Example
Expected: keep-alive
Actually: close

Host: anyhost
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0
Accept: image/webp,*/*
Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
Referer: http://anyhost/catalog/page/
Cookie: PHPSESSID=vkgt1iiofoav3u24o54et46oc7
Pragma: no-cache

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 15 Sep 2019 22:28:53 GMT
Content-Type: image/jpeg
Content-Length: 21576
Last-Modified: Wed, 06 Dec 2017 15:38:23 GMT
Connection: keep-alive
ETag: ""5a280eef-5448""
X-Content-Type-Options: nosniff
Accept-Ranges: bytes

log_format test
	'$remote_addr - $remote_user [$time_local] '
	'$status $bytes_sent $request_time $pipe $connection $connection_requests $http_connection $sent_http_connection '
	'""$request"" '
	'""$http_referer"" ""$http_user_agent"" '
	'""$gzip_ratio""';

123.123.123.123 - - [16/Sep/2019:01:28:53 +0300] 200 21844 0.000 . 13117169 3 keep-alive close ""GET /images/anypicture.jpg HTTP/1.0"" ""http://anyhost/catalog/page/"" ""Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0"" ""-"""	pug40@…
1.16.x	2148	proxy_ssl_verify does not support iPAddress subjectAlternativeName	nginx-module	1.16.x	enhancement		accepted	2021-03-10T07:28:19Z	2021-03-31T14:45:35Z	"Module ngx_http_proxy_module proxy_ssl_trusted_certificate ignores x509 extension ipAddress

location config:
proxy_pass https://10.10.10.10:8443;
proxy_ssl_certificate  /nginx/certs/chain.pem;
proxy_ssl_certificate_key /nginx/certs/client.key;
proxy_ssl_trusted_certificate /nginx/certs/proxied_server.pem;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;


When one specifies
proxy_pass https://10.10.10.10:8443;
there is an error in error.log and 502 Bad gateway in curl

2021/03/09 23:22:34 [error] 18566#0: *1 upstream SSL certificate does not match ""10.10.10.10"" while SSL handshaking to upstream, client: 127.0.0.1, server: localhost, request: ""GET / HTTP/1.1"", upstream: ""https://10.10.10.10:8443/"", host: ""localhost""


but when one specifies
proxy_pass https://somehost:8443;
then it works

certificate:
$> openssl x509 -text -in /nginx/certs/proxied_server.pem
...
X509v3 Subject Alternative Name:
  DNS:somehost, IP Address:10.10.10.10
..."	gavriluk@…
1.17.x	1857	libmaxminddb / geoip2 implementation as nginx essential modules	nginx-module	1.17.x	enhancement		new	2019-09-26T09:21:22Z	2023-08-03T15:43:28Z	"It seems GeoIP legacy was deprecated some time ago:
https://support.maxmind.com/geolite-legacy-discontinuation-notice/

A 3rd party has already implemented it, and I think nginx should adopt it as the new GeoIP module for the open-source community.

https://github.com/leev/ngx_http_geoip2_module

Thanks"	semnanweb@…
1.17.x	1902	Can not use ssl_trusted_certificate to verify Clients	other	1.17.x	defect		new	2019-12-13T16:13:58Z	2019-12-13T16:13:58Z	"In my config, I set the following to validate client certificates
        ssl_verify_client               on;
        ssl_trusted_certificate         /usr/local/nginx/ssl/ca.crt;
        ssl_crl                         /usr/local/nginx/ssl/crl.pem;

The server fails to start with error: nginx: [emerg] no ssl_client_certificate for ssl_verify_client

If I change the configuration to the following, the server starts.
        ssl_verify_client               on;
        ssl_client_certificate          /usr/local/nginx/ssl/ca.crt;
        ssl_crl                         /usr/local/nginx/ssl/crl.pem;

I am not using OSCP or stapling, just verification against a CA/CRL.

Reading through the docs, the description for both of the options 'ssl_trusted_certificate' and 'ssl_client_certificate' is the same: ""Specifies a file with trusted CA certificates in the PEM format used to verify client certificates and OCSP responses if ssl_stapling is enabled."" The only difference is whether the list of certificates is sent to the client.
"	jkman340@…
1.17.x	1904	sendfile with io-threads - nginx mistakenly considers premature client connection close if client sends FIN at response end	nginx-core	1.17.x	defect		accepted	2019-12-17T18:59:16Z	2020-10-06T16:02:38Z	"Hi,
The scenario is as follows:

1. Nginx is configured to work with sendfile and io-threads.
2. Client sends a request, and after receiving the entire content it sends a FIN-ACK, closing the connection.
3. Nginx occasionally considers the transaction as prematurely closed by the client even though the FIN-ACK packet acks the entire content.

The effect I've seen is that ""$body_bytes_sent"" holds partial data (up to the last ""successful"" sendfile call) and ""$request_completion"" is empty. I guess there are other effects, though these are the ones I'm using, so they are the ones that popped up.

From what I've managed to understand from the code, it looks like the read_event_handler ""ngx_http_test_reading"" is called before the completed task from the io-thread is handled by the main thread, effectively making Nginx think the client connection close happened earlier than it did.

I've managed to reproduce it on latest nginx with a rather simple config, but it's timing-sensitive, so it doesn't happen on every transaction. I saw that using a bigger file with a rate limit increases the chances.

Config:
{{{
worker_processes  1;

events {
    worker_connections 1024;
}

http {
    keepalive_timeout 120s;
    keepalive_requests 1000;

    log_format main ""$status\t$sent_http_content_length\t$body_bytes_sent\t$request_completion"";
    access_log  logs/access.log  main;
    error_log  logs/error.log  info;

    aio threads;
    sendfile on;
    limit_rate 10m;

    server {
        listen 0.0.0.0:1234 reuseport;

        location = /test-sendfile-close {
            alias files/10mb;
        }
    }
}
}}}

* files/10mb is a file of size 10MB, created with ""dd"" (dd if=/dev/zero of=files/10mb bs=10M  count=1)

I then tail -F the access log and the error log file, and send these requests from the same machine:
{{{
while true; do wget -q ""http://10.1.1.1:1234/test-sendfile-close""; done
}}}

The output i get in error log and access log (in this order) in case of a good transaction is:
{{{
2019/12/17 14:52:34 [info] 137444#137444: *1 client 10.1.1.1 closed keepalive connection
200	10485760	10485760	OK
}}}

But every few transactions i get this output instead:
{{{
2019/12/17 14:52:38 [info] 137444#137444: *7 client prematurely closed connection while sending response to client, client: 10.1.1.1, server: , request: ""GET /test-sendfile-close HTTP/1.1"", host: ""10.1.1.1:1234""
200	10485760	3810520	
}}}
As you can see, the reported sent bytes is lower than the actual value, and the request_completion is empty.

I understand that the closer the client is to Nginx the higher chances this could happen, but it's not just a lab issue - we've seen this in a field trial with clients in a distance of ~30ms RTT, with higher load of course.

If there is need for any other information, or anything else - i'll be glad to provide it.
I appreciate the help, and in general - this great product you've built!

Thank you,
Shmulik Biran"	Shmulik Biran
1.17.x	1861	Feature Request: Support `error_log off`	other	1.17.x	enhancement		new	2019-10-03T09:40:35Z	2019-10-03T09:40:35Z	"Since everybody is using it wrong [1], I thought it might be better to make it a feature. This would also avoid relying on the existence of /dev/null (e.g. on Windows). Also, `off` is already supported for access_log!
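A sketch of the current workaround next to the proposed form (the latter is hypothetical syntax):
{{{
error_log /dev/null crit;   # today's idiom, needs /dev/null to exist
error_log off;              # proposed, mirroring ""access_log off;""
}}}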

[1] https://github.com/search?q=%22error_log+off%3B%22&type=Code"	albertvaka@…
1.17.x	1879	RHEL / CentOS 8 repository	other	1.17.x	enhancement	thresh	assigned	2019-10-22T07:30:15Z	2022-01-20T09:35:55Z	"I notice nginx repository for RHEL / CentOS 8 is available.

On EL-8 nginx is provided as a module.
8.0 has nginx:1.14
8.1 will also have nginx 1.16

Some other modules have dependencies on this module (at least the PHP module),
which makes using the nginx repository impossible.

It would be nice to also provide it as a modular repository, not as a simple repository.
"	remicollet@…
1.17.x	1824	Bypassing cache if worker failed to allocate node in cache keys zone ?	nginx-module	1.17.x	enhancement		new	2019-07-30T08:10:10Z	2020-01-21T06:53:37Z	"Using the http file cache.
It seems that if a worker fails to allocate a node in the cache keys zone,
it forcibly expires a cache entry and tries to allocate the node again.
But if, at the same time, another worker takes the cache entry first,
the worker will return a 500 error with the log below:

""could not allocate node in cache keys zone""

I think it would be better to just bypass the cache and read data from upstream, instead of returning a 500 error.

I tested the simple patch below.
I'm not sure whether it's the proper approach,
but it seems to mitigate the symptom.

{{{
diff -r e7181cfe9212 src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c    Tue Jul 23 15:01:47 2019 +0300
+++ b/src/http/ngx_http_file_cache.c    Tue Jul 30 17:07:51 2019 +0900
@@ -299,11 +299,7 @@
     ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                    ""http file cache exists: %i e:%d"", rc, c->exists);

-    if (rc == NGX_ERROR) {
-        return rc;
-    }
-
-    if (rc == NGX_AGAIN) {
+    if (rc == NGX_AGAIN || rc == NGX_ERROR) {
         return NGX_HTTP_CACHE_SCARCE;
     }
}}}"	keyolk@…
1.18.x	2477	"proxy_redirect is missing feature for HTTP header ""Link"""	nginx-core	1.18.x	enhancement		new	2023-03-29T16:12:23Z	2023-03-29T22:36:27Z	"**PLEASE NOTE:** please ignore space chars in urls in following ticket (I had to ""remove"" all external links to be able to post this ticket, here: ""Maximum number of external links per post exceeded"")

**Used configuration**

Given is a reverse proxy configuration like
{{{#!text
location / {
    proxy_pass     https :// upstream/;
    proxy_redirect https :// upstream/ /;
}
}}}

As the documentation at `https :// nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect` explains, `proxy_redirect` rewrites the Location header of a redirect. This is required and it works.

**HTTP Response of upstream server**

My wordpress 6 system now starts to send HTTP responses like:
{{{#!text
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 29 Mar 2023 15:31:02 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Link: <https :// upstream/index.php/wp-json/>; rel=""https://api.w.org/""
Link: <https :// upstream/index.php/wp-json/wp/v2/pages/69>; rel=""alternate""; type=""application/json""
Link: <https :// upstream/>; rel=shortlink
Vary: Accept-Encoding
Content-Encoding: gzip
}}}

**HTTP protocol standard**

Since I've never seen this type of 'Link' header before, this was a real surprise to me. But it is a feature that has existed for many years now: ` https :// www.w3.org/wiki/LinkHeader `

**Problem description**

The problem is that the many Link headers remain unchanged and are delivered as-is to the client. Of course, the client machine doesn't know anything about my upstream server and can't access it to load these linked URLs.

The result is that my nginx reverse proxy response to the client contains unchanged Link headers:
{{{#!text
Link: <https :// upstream/index.php/wp-json/>; rel=""https://api.w.org/""
Link: <https :// upstream/index.php/wp-json/wp/v2/pages/69>; rel=""alternate""; type=""application/json""
Link: <https :// upstream/>; rel=shortlink
}}}

**Feature request**

So, `proxy_redirect` should (by default) perform substitution in these headers:
* `Location`
* `Link` [array]

Optionally, the substitution of Link headers could be controlled with another switch (e.g. `proxy_redirect_links`) set to on|off|default.
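A minimal sketch of how such a switch might look in the configuration above (the `proxy_redirect_links` directive is hypothetical, it is the enhancement proposed by this ticket):

{{{#!text
location / {
    proxy_pass     https :// upstream/;
    proxy_redirect https :// upstream/ /;
    # hypothetical new directive proposed here:
    proxy_redirect_links on;
}
}}}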

"	jochenwezel@…
1.18.x	2284	Support RFC5424 log records	nginx-module	1.18.x	enhancement		new	2021-11-23T16:58:43Z	2021-11-23T17:46:06Z	"The `access_log` directive from the ngx_http_log_module component only supports emitting RFC3164 log records when logging to syslog.  RFC5424 has been the standard for syslog logging for twelve years now and should be supported.

I've found this on 1.18.x but can't find anything in the documentation to suggest it's changed since."	tom.cook.lovemyev.com@…
1.18.x	2012	Wrong header Connection, when keepalive is disabled	nginx-core	1.18.x	defect		accepted	2020-07-04T22:10:30Z	2020-07-09T14:13:58Z	"I disabled keepalive with directives keepalive_timeout 0 and keepalive_requests 0, but nginx continues to return header Connection: keep-alive.
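A minimal server block reproducing the setup described (server name and port are placeholders):

{{{
server {
    listen 80;
    server_name mydomain.org;
    # keepalive disabled: responses should carry "Connection: close"
    keepalive_timeout  0;
    keepalive_requests 0;
}
}}}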

Steps to reproduce
{{{
curl -v http://mydomain.org
}}}

Expected response
{{{
Server: nginx
Date: Sat, 04 Jul 2020 21:52:23 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 225
Connection: close
my-trace: myhost-abcd
}}}

Actual response
{{{
Server: nginx
Date: Sat, 04 Jul 2020 21:52:23 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 225
Connection: keep-alive
my-trace: myhost-abcd
}}}

My nginx -T output is in attached file.
"	seregaizsbera@…
1.18.x	2109	Content-Type header is dropped when HTTP2 is used (HTTP status 204 only)	nginx-core	1.18.x	defect		accepted	2020-12-14T08:59:42Z	2020-12-14T20:21:15Z	"The backend server returns HTTP status 204 with a Content-Type header.

The Content-Type header is set and sent correctly when using HTTP/1.1 (over plain-text HTTP or over HTTPS).

However, when making a request using HTTP/2 (over TLS), that header is not sent.

I can't see a reason for this and would guess that this is a bug in Nginx. Or am I missing something?"	ms2008vip@…
1.18.x	2219	Space escaping in unquoted strings	nginx-core	1.18.x	defect		new	2021-07-17T22:00:38Z	2021-07-19T09:59:25Z	"Hi,

I was toying with unquoted strings in nginx configuration files when I stumbled upon the following behaviour:

{{{
server_name hack03;
location /test/ {
    add_header X-Value omelette\ du\ fromage always;
}
}}}

I was expecting:
{{{
X-Value: omelette du fromage
}}}

I got:
{{{
$ nginx -t && nginx -s reload && curl -sI http://hack03/test/ | grep X-Value
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
X-Value: omelette\ du\ fromage
}}}

Is this the expected behaviour? It looks like a parsing bug that was never reported because anyone stumbling upon this would immediately switch to a quoted string and forget about it.
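For reference, the quoted-string form that produces the expected header:

{{{
add_header X-Value "omelette du fromage" always;
}}}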
"	xavierog@…
1.18.x	2542	ssl_ecdh_curve is sometimes ignored in server blocks	nginx-module	1.18.x	defect		new	2023-09-06T17:52:56Z	2023-09-15T14:59:13Z	"Consider a scenario when a single IP `x.y.z.q` has two server blocks. Both server blocks listen on the same port and support TLS. One of those blocks is marked `default_server` and handles the non-SNI requests.

If both blocks define `ssl_ecdh_curve`, the directive has no effect in the non-`default_server` block. This happens without any warning.
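A minimal sketch of the affected layout (addresses, names, and curve lists here are illustrative):

{{{
server {
    listen x.y.z.q:443 ssl default_server;
    ssl_ecdh_curve X25519:prime256v1;  # takes effect
    ...
}
server {
    listen x.y.z.q:443 ssl;
    server_name example.net;
    ssl_ecdh_curve secp384r1;          # silently ignored
    ...
}
}}}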

One of the possible implications of this is that a more secure configuration is silently ignored. (I stumbled upon this when trying to enable post-quantum key exchange algorithms.)

Understandably, nginx can't currently respect the directive in both blocks (even though server_name could be read before key exchanges come into play), but in that case the ignored directive should produce a non-critical warning. Plus, this behaviour could be better documented."	Avamander
1.18.x	2666	"""Content-Length: 1\t\r\n"" is not treated as a valid Content-Length"	documentation	1.18.x	defect		new	2024-07-10T17:20:43Z	2024-09-04T15:50:13Z	"When visiting nginx with ""Content-Length: 2\t\r\n"", it returns 400 [1] with this error log:
2024/07/10 10:08:36 [info] 91882#91882: *111 client sent invalid ""Content-Length"" header while reading client request headers, client: 127.0.0.1, server: ub122.lidaobing.com, request: ""POST / HTTP/1.1"", host: ""127.0.0.1""


When visiting nginx with ""Content-Length: 2 \r\n"", it works as expected [2].

RFC 7230 and RFC 9112 say that SP and HTAB are both valid whitespace [3].


[1]
{{{
$ echo -ne ""POST / HTTP/1.1\r\nHost: 127.0.0.1\r\nContent-Length: 2\t\r\n\r\n{}"" | nc 127.0.0.1 80
HTTP/1.1 400 Bad Request
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 10 Jul 2024 17:20:27 GMT
Content-Type: text/html
Content-Length: 166
Connection: close

<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.18.0 (Ubuntu)</center>
</body>
</html>
}}}

[2]
{{{
$ echo -ne ""POST / HTTP/1.1\r\nHost: 127.0.0.1\r\nContent-Length: 2 \r\n\r\n{}"" | nc 127.0.0.1 80
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 10 Jul 2024 17:13:02 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive

6
'None'
0
}}}

[3] 
https://www.rfc-editor.org/rfc/rfc7230#appendix-B

OWS = *( SP / HTAB )
header-field = field-name "":"" OWS field-value OWS

https://www.rfc-editor.org/rfc/rfc9112#name-field-syntax

field-line   = field-name "":"" OWS field-value OWS
OWS = <OWS, see [HTTP], Section 5.6.3>
(in RFC 9110, Section 5.6.3)
OWS            = *( SP / HTAB ) 
                 ; optional whitespace
"	LI Daobing
1.18.x	2090	proxy_pass cannot have URI part in some situations	documentation	1.18.x	enhancement		new	2020-11-11T03:45:16Z	2020-11-11T03:45:16Z	"The ""proxy_pass"" directive cannot have a URI part in some situations, but at present the documentation (https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass) records only a few of these cases, i.e. ""When location is specified using a regular expression, and also inside named locations. In these cases, proxy_pass should be specified without a URI.""

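For illustration, the documented regex-location case (backend name hypothetical):

{{{
location ~ \.php$ {
    # URI part is not allowed here; "proxy_pass http://backend/app;"
    # would be rejected at configuration test
    proxy_pass http://backend;
}
}}}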
Besides the situations mentioned above, ""proxy_pass"" also must not have a URI part inside an ""if"" statement or a ""limit_except"" block. It would be good to add these cases to the documentation."	zhoushulin1992@…
1.18.x	2199	Online documentation: converting rewrite rules typo	documentation	1.18.x	enhancement		new	2021-06-04T13:36:36Z	2022-05-24T14:29:06Z	"In https://nginx.org/en/docs/http/converting_rewrite_rules.html
is a typo in the converted rewrite rules:

**Another example. Instead of the “upside-down” logic “all that is not example.com and is not www.example.com”:**


{{{
    RewriteCond  %{HTTP_HOST}  !example.com
    RewriteCond  %{HTTP_HOST}  !www.example.com
    RewriteRule  (.*)          http://www.example.com$1
}}}


one should simply define example.com, www.example.com, and “everything else”:


{{{
    server {
        listen       80;
        server_name  example.com www.example.com;
        ...
    }

    server {
        listen       80 default_server;
        server_name  _;
        return       301 http://example.com$request_uri;
    }

}}}

The last above bit ...


{{{
 return       301 http://example.com$request_uri;
}}}


... should be ...


{{{
 return       301 http://www.example.com$request_uri;
}}}


... as that is what is requested in the apache rewrite rule ...


{{{
    RewriteRule  (.*)          http://www.example.com$1
}}}




Same applies to the old version part...

**On versions prior to 0.9.1, redirects can be made with:**


{{{
        rewrite      ^ http://example.com$request_uri?;
}}}


... should be ...


{{{
        rewrite      ^ http://www.example.com$request_uri?;
}}}

OR change the Apache rewrite rule into:


{{{
    RewriteRule  (.*)          http://example.com$1
}}}


"	bwakkie@…
1.18.x	2258	add_header directive: A colon added after the header name passes Nginx syntax validation and breaks the website once applied	nginx-core	1.18.x	enhancement		new	2021-10-15T14:47:19Z	2023-01-28T20:10:50Z	"This morning, I added the following directive to one of my proxy configurations, without noticing the colon (:) character produced by the Permissions-Policy generator I did use.

{{{
add_header
	Permissions-Policy:
	""accelerometer=(), autoplay=(), battery=(), camera=(), cross-origin-isolated=(self), eexecution-while-not-rendered=(self), execution-while-out-of-viewport=(self), fullscreen=(self), geolocation=(self), gyroscope=(), magnetometer=(), navigation-override=(self), payment=(), screen-wake-lock=(self), sync-xhr=(self), usb=(), web-share=(self), clipboard-read=(self), clipboard-write=(self), idle-detection=(self)""
	always;
}}}

As always, I verified the NGINX configuration before applying it; ''nginx -t'' reported no issues. Then I applied it, again with no issues. Afterwards, the website was reported to be unavailable (Chrome reporting an ERR_HTTP2_PROTOCOL_ERROR error and cURL reporting ""curl: (92) HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)"").

I finally managed to find the culprit (the colon) and restore website availability.
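For reference, the corrected directive simply omits the colon after the header name (value shortened here):

{{{
add_header
	Permissions-Policy
	"accelerometer=(), camera=(), geolocation=(self)"
	always;
}}}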

The incorrectly positioned colon should make the syntax validation fail."	guyr.evision.ca@…
1.18.x	2391	bad parsing of Content-Type (sub_filter_types)	nginx-module	1.18.x	enhancement		new	2022-09-13T15:56:51Z	2022-09-14T02:46:33Z	"Hello,

I think we found a bug in the way the Content-Type is parsed in order to match sub_filter_types.

The problem was that an href line wasn't rewritten.
After analysing a lot of the HTML traffic,
we found that the Content-Type line is as follows:

  Content-Type: text/html; charset:UTF-8; charset=UTF-8

It seems that nginx parses such lines as (.*); *charset=(.*)
because we had to put ""text/html; charset:UTF-8"" as the type in sub_filter_types to make it work, e.g.:

    sub_filter_types ""text/html; charset:UTF-8"" script/js  text/html text/css text/xml ;

It's not an urgent issue, as we found a solution (an ugly one), but maybe you would be interested to know about the issue and improve it.

Thanks


"	ticket.mmisolution.be@…
1.18.x	2567	sub_filter and gzipped payload should trigger warning	nginx-core	1.18.x	enhancement		new	2023-11-23T11:48:19Z	2023-11-25T01:26:10Z	"Hi.

Reporting an issue that bit me quite some time ago. I don't have the setting to reproduce it because I solved it on my own and moved on. But here is the lesson learned:

I was busy reverse-proxying a website with a path prefix. (The kind of thing you do because you have only one machine, no control over DNS, and want to reuse the same port.) I encountered a Heisenbug because of default headers being passed around by the various tools I was using. But, bottom line:

The reverse proxying didn't work because I had a sub_filter declaration to correct the payloads to account for the prefix. However, the backend returned gzipped data, and sub_filtering on compressed data is... doomed to failure.
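A common way to avoid this (a sketch, not from my original setup; names and prefixes are placeholders) is to ask the backend for uncompressed data so sub_filter can see the payload:

{{{
location /app/ {
    proxy_pass http://backend/;
    # backend must not gzip the response, or sub_filter cannot match anything
    proxy_set_header Accept-Encoding "";
    sub_filter '/old-prefix/' '/app/';
    sub_filter_once off;
}
}}}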

So, for people encountering the same situation, and dealing with the same kind of Heisenbugs: please provide a warning in the logs for people attempting sub_filter on compressed data.

That would have helped me diagnose the problem MUCH MUCH MUCH faster, without having to dissect implicit behaviours related to compression."	gl-yziquel@…
1.18.x	2608	Request to Add Documentation Link on Trailing Slash Behavior in Reverse Proxy Setup	documentation	1.18.x	enhancement		new	2024-02-24T20:49:32Z	2024-02-24T20:49:32Z	"Hello! I've noticed an inconsistency that seems counterintuitive related to the behavior of proxy formatting based on the presence or absence of a trailing slash in the `proxy_pass` directive. Although this detail is documented, it is not immediately apparent or accessible on the initial pages that one might consult when setting up a reverse proxy.

Furthermore, the documentation page (https://nginx.org/en/docs/http/ngx_http_proxy_module.html#:~:text=If%20the%20proxy_pass,in%20some%20cases) that discusses this behavior, despite being relevant to reverse proxies, does not have any direct links from the main reverse proxy documentation page (https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/). I encountered a problem related to a trailing slash that took me two days to resolve, and my solution came from a blog post that referenced the detailed documentation. While I understand there may not be changes to the logic governing this behavior, could a direct link to the detailed page be added to the main reverse proxy documentation? This could significantly aid others in troubleshooting and understanding configuration nuances."	leerenix@…
1.19.x	2213	The get_handler of ngx_http_variable_t is overwritten by ngx_http_regex_compile if existing	nginx-module	1.19.x	defect		new	2021-06-29T06:23:33Z	2022-01-21T03:27:10Z	"I'm developing a dynamic NGINX module. I added a variable at the preconfiguration stage and printed its `get_handler` at the postconfiguration stage. Because I added a named capture group in a regex location directive of nginx.conf, i.e. the variable would be set (set access) at configuration stage, I used the `NGX_HTTP_VAR_CHANGEABLE` flag. However, I found this variable was empty when used in a subrequest location block. Then I checked its `get_handler` at the postconfiguration stage and found it had been overridden.

So I think we should check the value of `get_handler` before setting it.

The key C++ code snippet (some omitted for brevity),


{{{
#define NGX_HTTP_VAR_sp_resid ""sp_resid""
static const ngx_str_t sp_resid_name = ngx_string(NGX_HTTP_VAR_sp_resid);
#define NGX_HTTP_EM_VAR_NAME(name) const_cast<ngx_str_t*>(&::name##_name)

/* The module context. */
static ngx_http_module_t ngx_http_em_module_ctx = {
	ngx_http_em_preconfiguration, /* preconfiguration */
	ngx_http_em_postconfiguration,	/* postconfiguration */

	ngx_http_em_create_main_conf,	/* create main configuration */
	ngx_http_em_init_main_conf,		/* init main configuration */

	NULL, /* create server configuration */
	NULL, /* merge server configuration */

	ngx_http_em_create_loc_conf,	/* create location-specific configuration */
	ngx_http_em_merge_loc_conf,	/* merge location configuration */
};


static ngx_int_t ngx_http_sp_resid_variable(ngx_http_request_t *req, ngx_http_variable_value_t *vv, uintptr_t data) {
	const auto ctx = ngx_http_em_get_module_ctx(req->main);
	logdf(""ctx@%p, req=%.*s?%.*s, spResId=%.*s"", ctx, ARGS_NGX_STR(req->uri), ARGS_NGX_STR(req->args), ARGS_NGX_STR(ctx->spResId));
	if (ctx == NULL) {
		vv->not_found = 1;
		return NGX_OK;
	}
	vv->valid = 1;
	vv->no_cacheable = 0;
	vv->not_found = 0;
	vv->data = ctx->spResId.data;
	vv->len = ctx->spResId.len;
	return NGX_OK;
}

ngx_int_t ngx_http_em_preconfiguration(ngx_conf_t *cf) {
	ngx_http_variable_t  *var;
	var = ngx_http_add_variable(cf, NGX_HTTP_EM_VAR_NAME(sp_resid), NGX_HTTP_VAR_CHANGEABLE | NGX_HTTP_VAR_NOCACHEABLE);
	if (var == NULL) {
		return NGX_ERROR;
	}
	var->get_handler = ngx_http_sp_resid_variable;
	return NGX_OK;
}

ngx_int_t ngx_http_em_postconfiguration(ngx_conf_t *cf) {
	// Restore var->get_handler, which ngx_http_regex_compile()
	// (src/http/ngx_http_variables.c) resets to ngx_http_variable_not_found
	auto cmcf = static_cast<ngx_http_core_main_conf_t*>(
		ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module));
	auto vars_keys = cmcf->variables_keys->keys;
	auto keys = static_cast<ngx_hash_key_t*>(vars_keys.elts);
	auto keys = static_cast<ngx_hash_key_t*>(vars_keys.elts);
	for (auto idx = vars_keys.nelts; idx > 0;) {
		const auto& var_entry = keys[--idx]; // ngx_strncasecmp
		if (sizeof(NGX_HTTP_VAR_sp_resid) - 1 == var_entry.key.len &&
			0 == ngx_strncasecmp(PUChar(NGX_HTTP_VAR_sp_resid), var_entry.key.data, sizeof(NGX_HTTP_VAR_sp_resid) - 1)) {
			const auto var = static_cast<ngx_http_variable_t*>(var_entry.value);
			logdf(""var->get_handler=%p, ngx_http_sp_resid_variable=%p"", var->get_handler, ngx_http_sp_resid_variable);
			var->get_handler = ngx_http_sp_resid_variable; // workaround
		}
	}
	return NGX_OK;
}
}}}

nginx.conf

{{{

location / {
	proxy_pass $scheme://$host;
}

location ~ ""^/_api/(?<sp_resid>[[:xdigit:]]{8}(?:-[[:xdigit:]]{4}){3}-[[:xdigit:]]{12})/driveItem$"" {
	proxy_pass $scheme://$host;
}

location = /GetList {
	internal;
	subrequest_output_buffer_size 128k;
	proxy_set_header Content-Length """";
	proxy_set_header Accept-Encoding """";
	proxy_set_header Accept ""application/json;odata=nometadata"";
	proxy_pass $scheme://$host/_api/web/GetList(@a1)?@a1='$sp_resid'&%24expand=RootFolder;
}
}}}
"	sansanvang@…
1.19.x	2482	* is not evaluated to ::	nginx-core	1.19.x	enhancement		new	2023-04-13T13:51:53Z	2023-04-19T00:32:37Z	"An asterisk (*) should be a wildcard for both IPv6 [::] and IPv4 0.0.0.0, not IPv4 only.
This is especially important since you stopped adhering to the global configuration and instead default to ipv6only=on.
So users configure the system for automatic dual stack, write a dual-stack address, and nginx binds IPv4 only.
This broke the dual-stack support of more or less all software that bundles nginx, like GitLab, and a lot of smaller web servers.
People, especially on Linux, expect dual stack to work out of the box without additional configuration. And since there is a fallback to IPv4, many do not even recognize that they have a problem."	wanneut@…
1.19.x	2010	Proxy protocol headers from stream module reported as broken by http module	nginx-core	1.19.x	defect		new	2020-06-30T14:45:55Z	2020-07-01T06:59:02Z	"Using a NGINX configuration where:
- a stream server listens on 127.0.0.1:8080, enables proxy protocol, and proxies to socket temp.sock;
- a stream server listens on socket temp.sock with proxy protocol enabled, keeps proxy protocol enabled and proxies to 127.0.0.2:8080;
- a http server listens on port 127.0.0.2:8080 with proxy protocol enabled, serving a static website with a single index.html file at the root;
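A sketch of the configuration described above (reconstructed from the description; the socket path and document root are placeholders):

{{{
stream {
    server {
        listen 127.0.0.1:8080;
        proxy_protocol on;
        proxy_pass unix:/path/redacted/temp.sock;
    }
    server {
        listen unix:/path/redacted/temp.sock proxy_protocol;
        proxy_protocol on;
        proxy_pass 127.0.0.2:8080;
    }
}

http {
    server {
        listen 127.0.0.2:8080 proxy_protocol;
        root /var/www/static;
    }
}
}}}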

Attempts to fetch ""http://127.0.0.1:8080/"" result in the connection getting closed without any response, and the error log indicating the following:

----------------------------
2020/06/30 16:40:50 [error] 5688#5688: *15 broken header: ""PROXY TCP4 127.0.0.1 unix:/path/redacted/temp.sock 52726 0
GET / HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: HTTPie/0.9.8
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive

"" while reading PROXY protocol, client: 127.0.0.1, server: 127.0.0.2:8080
----------------------------
 
Tested with the NGINX git tree, commits be932e81 (works as expected) and 20389850 (breaks as above) as well as tags release-1.16.1 (works), release-1.80.0 (breaks) and release-1.19.0 (breaks)."	aaribaud@…
1.19.x	2048	Document that 'proxy_buffering off' disables caching	documentation	1.19.x	defect	Yaroslav Zhuravlev	assigned	2020-09-22T12:08:58Z	2020-11-03T18:06:25Z	"If we disable proxy_buffering, it also disables caching in nginx. Why is that? I spent an enormous amount of time debugging this problem, while there is no mention of it anywhere in the documentation. Only after I managed to localize this option (by enabling/disabling random options) did I realise what causes it, and found that it was documented in
https://trac.nginx.org/nginx/ticket/849#comment:1
and
https://stackoverflow.com/questions/9230812/nginx-as-cache-proxy-not-caching-anything

Still, I think such information should be mentioned in the documentation.

TIA.

"	peter.volkov@…
1.19.x	2127	ngx_http_realip_module changes $remote_addr which leads to wrong ips in X-Forwarded-For received by upstream service	nginx-module	1.19.x	defect		accepted	2021-01-25T08:20:07Z	2021-01-25T16:17:59Z	"I have a webapp under NGinx and another frontal load balancer, something like below (x.x.x.x = IP address):

Client(a.a.a.a) -> LB (b.b.b.b) -> NGX (c.c.c.c) -> WEBAPP (d.d.d.d)

Here is a snippet of my NGinx configuration:

location / {
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header  X-Real-IP       $remote_addr;
    real_ip_header    X-Forwarded-For;
    set_real_ip_from  b.b.b.b;
    real_ip_recursive on;
}
The load balancer adds an X-Forwarded-For field with the client IP:
X-Forwarded-For = a.a.a.a
NGinx searches for the client's real IP in the X-Forwarded-For header by omitting the LB IP (b.b.b.b) and changes $remote_addr from b.b.b.b to a.a.a.a, so proxy_set_header X-Real-IP $remote_addr becomes correct (OK, that's what I want!).
BUT NGinx also completes the X-Forwarded-For header with the a.a.a.a IP instead of b.b.b.b.
WEBAPP receives the following headers:
X-Forwarded-For = a.a.a.a, a.a.a.a
X-Real-IP = a.a.a.a
-> X-Forwarded-For should be a.a.a.a, b.b.b.b

So here I am losing info about my load balancer.

Right now, to get proper IPs in my webapp, I need to use a workaround, setting X-Forwarded-For as:
proxy_set_header  X-Forwarded-For ""$http_x_forwarded_for, $realip_remote_addr"";

What I need is the ability to first set proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for and only then search for the real IP and replace the $remote_addr value. Or maybe another variable, similar to $proxy_add_x_forwarded_for, which retains the load balancer IP.
"	anveshagarwal@…
1.19.x	2268	http2 client set both host and :authority header, server throws 400 bad request error	nginx-module	1.19.x	defect		accepted	2021-10-29T11:32:44Z	2023-06-14T13:41:33Z	"When using an http2 client, we set both the host and :authority headers, and the nginx server throws 400 Bad Request. The error log is:
*1 client sent duplicate host header: ""host: xxx"", previous value: ""host: 127.0.0.1:27710"" while reading client request headers, client: 127.0.0.1, server: _, host: ""127.0.0.1:27710""

This is very confusing; some help would be appreciated."	xbkaishui@…
1.19.x	2291	Regex plus variable in Nginx `proxy_redirect`	documentation	1.19.x	defect		accepted	2021-12-02T19:07:07Z	2021-12-02T19:40:16Z	"It is not currently documented or apparent if it is possible to use a regex that also includes Nginx variables in `proxy_redirect`.

For example, none of these work:


{{{
proxy_redirect ~*https?://\\$proxy_host/(.*)$ /app1/$1
proxy_redirect ~*https?://\$proxy_host/(.*)$ /app1/$1
proxy_redirect ~*https?://$proxy_host/(.*)$ /app1/$1
}}}


This is described here in further detail: https://stackoverflow.com/q/70205048/7954504

The use-case for this is the scenario where one only wants to change Location header when the redirect location is for the internal app, not for an external redirect."	brsolomon-deloitte@…
1.19.x	2376	GRPC: upstream rejected request with error while reading response header from upstream	nginx-module	1.19.x	defect		reopened	2022-08-11T10:48:41Z	2022-08-16T16:51:47Z	"Hello,

Reporting an issue with gRPC where nginx does not correctly forward the response and RST_STREAM
to the client when the server aborts the HTTP/2 stream early.

My setup:

[client] === grpc stream (with ssl) ===> [nginx] === grpc stream (cleartext) ===> [backend]


Description:

When the client establishes a gRPC (client/unidirectional) stream, it may happen that the backend closes/resets the HTTP/2 stream before exhausting the client's input stream, for example due to a timeout.

In this case, the backend sends back the response and resets the stream:
{{{

grpc-gateway DEBUG Connection{peer=Server}: h2::codec::framed_write: send frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
grpc-gateway DEBUG Connection{peer=Server}: h2::codec::framed_write: send frame=Data { stream_id: StreamId(1) }
grpc-gateway DEBUG Connection{peer=Server}: h2::codec::framed_write: send frame=Headers { stream_id: StreamId(1), flags: (0x5: END_HEADERS | END_STREAM) }
grpc-gateway DEBUG Connection{peer=Server}: h2::codec::framed_write: send frame=Reset { stream_id: StreamId(1), error_code: CANCEL }
}}}


My issue is that the client never receives the response nor the HTTP/2 RST_STREAM from nginx.
The client discovers that the stream has been reset only after the next client-side HTTP/2 PING frame, or after the client tries to send some new data.

{{{
DEBUG Connection{peer=Client}: h2::codec::framed_write: send frame=Ping { ack: false, payload: [59, 124, 219, 122, 11, 135, 22, 180] }
DEBUG status_request{request_id=117db9f2-707c-4ccd-a824-f0f8d36c9fa3}:Connection{peer=Client}: h2::codec::framed_read: received frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
DEBUG status_request{request_id=117db9f2-707c-4ccd-a824-f0f8d36c9fa3}:Connection{peer=Client}: h2::codec::framed_read: received frame=Reset { stream_id: StreamId(1), error_code: INTERNAL_ERROR }
DEBUG hyper::proto::h2::client: client request body error: error writing a body to connection: send stream capacity unexpectedly closed
}}}


The only thing I see in the nginx log is these messages
(error 8 is the HTTP/2 CANCEL error code):

{{{
[error] 855#855: *73638 upstream rejected request with error 8 while reading response header from upstream, client: 10.0.33.7, server: grpc.qovery.com, request: ""POST /agent.Agent/AgentResponsePublish HTTP/2.0"", upstream: ""grpc://10.0.20.240:8081"", host: ""grpc.qovery.com:443""
10.0.33.7 - - [10/Aug/2022:21:48:29 +0000] ""POST /agent.Agent/AgentResponsePublish HTTP/2.0"" 200 0 ""-"" ""tonic/0.8.0"" 566 60.024 [qovery-prod-grpc-gateway-grpc] [] 10.0.20.240:8081 68 60.025 200 e0ee79d5fd24a5bbf39556e159cb3840  
}}}


If I remove nginx from my setup everything is working as expected.

The issue seems to be a duplicate of https://trac.nginx.org/nginx/ticket/1792, but as that ticket has been closed, I opened a new one with detailed information.
 

"	erebe@…
1.19.x	2475	access_log with if does not work when variable name starts with a number	nginx-core	1.19.x	defect		new	2023-03-22T21:01:40Z	2023-03-23T04:17:33Z	"The nginx.conf for reproducing looks like this:

{{{
error_log /dev/stdout info;
pid /tmp/nginx/nginx.pid;

events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    client_body_temp_path /tmp/nginx/;
    fastcgi_temp_path /tmp/nginx/fastcgi/;
    uwsgi_temp_path /tmp/nginx/uwsgi/;
    scgi_temp_path /tmp/nginx/scgi/;
    server {
        listen 8000 default_server;
        listen [::]:8000 default_server ipv6only=on;
        root /tmp/;
        server_name localhost;
        client_max_body_size 100m;
        set $123_test 0;
        access_log /tmp/access.log combined if=$123_test;
        #set $test_123 0;
        #access_log /tmp/access.log combined if=$test_123;
    }
}
}}}

When `$123_test` is set to 0 in the server block, access_log still writes to /tmp/access.log, as if the variable did not evaluate to ""0"". However, after changing its name to `$test_123`, nothing is logged.

It seems that the `if` in access_log only looks at the first character of the variable name in this case. When the variable's name is changed to `$0123_test`, it also logs nothing, no matter its evaluation result.

I have read the documentation of ngx_http_log_module, and I think this looks like a bug, as variables starting with digits work well in other places, such as in an `if` block."	taoky@…
1.19.x	1977	Implement TLS 1.3 random record padding to mitigate BREACH	nginx-module	1.19.x	enhancement		new	2020-05-14T00:56:14Z	2022-08-31T00:15:40Z	"The TLS specification (RFC 8446) section 5.4 defines optional Record Padding: https://tools.ietf.org/html/rfc8446#section-5.4

As a security improvement, I suggest that nginx implement random record padding.

Random record padding would mitigate the BREACH attack (and other similar) vulnerabilities.

In OpenSSL, this is done using SSL_CTX_set_record_padding_callback: https://www.openssl.org/docs/man1.1.1/man3/SSL_set_block_padding.html"	Craig Andrews
1.19.x	1992	Websocket over HTTP/2 support	nginx-module	1.19.x	enhancement		new	2020-06-02T11:15:40Z	2024-05-12T08:19:06Z	"Hello, 

Will NGINX ever support Websocket over HTTP/2?

https://tools.ietf.org/html/rfc8441"	ckmichael8@…
1.19.x	2032	Odd image_filter behavior on site behind HTTP authentication	nginx-module	1.19.x	enhancement		new	2020-08-22T16:05:56Z	2020-08-31T01:21:21Z	"I've got a client that has a test site behind HTTP authentication and I'm noticing some odd behavior with images which pass through image_filter. As best as I can tell, if the client does not send the authentication credentials with its request, core nginx properly sends a 401 page, but then image_filter tries to filter that page as if it were an image, fails, and returns a 415 response. I've attached the full output of a debug log on such a request, but I'll highlight the lines I find interesting here:


{{{
2020/08/22 17:55:41 [info] 324862#324862: *23621 no user/password was provided for basic authentication, client: 172.68.141.20, server: live-test.gamerguides.com, request: ""GET /assets/guides/resize140x-/200/Bannerlord_Companion_Recruit_1.jpg HTTP/1.1"", host: ""live-test.gamerguides.com""
2020/08/22 17:55:41 [debug] 324862#324862: *23621 http finalize request: 401, ""/assets/guides/resize140x-/200/Bannerlord_Companion_Recruit_1.jpg?"" a:1, c:1
2020/08/22 17:55:41 [debug] 324862#324862: *23621 image filter
2020/08/22 17:55:41 [debug] 324862#324862: *23621 image filter: ""<h""
2020/08/22 17:55:41 [debug] 324862#324862: *23621 http special response: 415, ""/assets/guides/resize140x-/200/Bannerlord_Companion_Recruit_1.jpg?""
}}}

According to https://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_image_filter_module.c#L434 , image_filter logs the first two characters of the upstream file. It sure looks like those two characters are the start of an HTML page explaining the 401 error. And since the 401 status doesn't get to my browser, it doesn't retry the request with credentials attached.

So I think what needs to happen is: if the upstream response is not a 200, the original response should continue to be sent to the client. Otherwise image_filter can try to process the response body.
1.19.x	2119	Add support for Maxmind's GeoIP2	nginx-module	1.19.x	enhancement		new	2021-01-11T16:18:19Z	2021-01-11T16:18:19Z	"Since Maxmind has announced plans to retire its ""legacy"" GeoIP in May 2022[1], it would be great if nginx could add support to GeoIP2 databases using either Maxmind's libmaxminddb[2] or via another supported library.

[1] https://dev.maxmind.com/geoip/legacy/downloadable/
[2] https://github.com/maxmind/libmaxminddb/"	HQuest@…
1.19.x	2120	Add Support for IP2Location and IP2Proxy BIN Database	nginx-module	1.19.x	enhancement		new	2021-01-13T01:51:51Z	2021-04-20T15:05:20Z	"Nginx supports a single vendor for IP geolocation right now.

It is good if we can support IP2Location and IP2Proxy BIN database as well. They are from the same company and provide free LITE database for public.

IP2Proxy provides accurate proxy detection that we would like to use in the nginx level.

You can visit [https://lite.ip2location.com] and [https://www.ip2location.com] for more information."	topakeris@…
1.19.x	2131	nginx needs root cert in the chain for client validation	nginx-core	1.19.x	enhancement		new	2021-01-29T14:38:10Z	2021-01-29T14:38:10Z	"In a CA hierarchy, the sub-CA/issuing CA issues certificates to two devices that need to perform mutual authentication. The sub-CA/issuing CA won't issue the root CA public cert.
CA-offline------> Sub-ca-------> ngnix server
                           |
                           V
                         client
Both the client and the nginx server get their certificates from the sub-CA. When the client sends a request to authenticate, it fails with 400 Bad Request. As soon as I add the root CA public cert, it authenticates.

nginx used in an IoT platform might require a manual process to copy the root CA public cert to millions of devices. There should be a way to authenticate the client and server with the sub-CA certificate itself."	vasu767@…
1.19.x	2132	ssl_ocsp / ssl_stapling for ngx_mail_ssl_module	nginx-module	1.19.x	enhancement		new	2021-02-02T12:22:50Z	2021-02-02T12:22:50Z	"`ngx_mail_ssl_module` is missing some features compared to `ngx_http_ssl_module`, particularly `ssl_ocsp` and `ssl_stapling`. Would it be possible to port these to `ngx_mail_ssl_module`?

Same goes for `ngx_stream_ssl_module` (but this is not my use case).

"	Geert Hendrickx
1.19.x	2161	Allow accessing arbitrary cookies.	nginx-core	1.19.x	enhancement		new	2021-04-11T15:09:51Z	2022-01-24T15:23:49Z	"This is a duplicate of https://trac.nginx.org/nginx/ticket/707, however after 6 years I think it deserves rethinking.

1. There are standard cookies whose names don't match `[a-zA-Z_]`, such as the `__Secure-` and `__Host-` prefixes (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#attributes).
2. Parsing cookies is non-trivial. Cookies may or may not be quoted, and these are generally treated the same. Furthermore, it is easy to match values inside of cookies instead of the cookies themselves (e.g. `foo=bar=baz; bar=hello` and a regex `bar=foo`). Ad-hoc regexes are not an appropriate tool for cookie parsing.
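For illustration, the closest approximation today is an ad-hoc map over $http_cookie, which has exactly the problems described above (sketch; the cookie name is made up):

{{{
map $http_cookie $secure_foo {
    default  """";
    # fragile: can match a substring of another cookie's value
    ""~(?:^|;\s*)__Secure-foo=(?<val>[^;]+)""  $val;
}
}}}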

I think it would be good to encourage robust parsing of cookies and the use of the `__Secure-` and `__Host-` prefixes by supporting them natively.

Obviously it isn't perfectly clear how to integrate this into nginx configs. Maybe there could be a specific `map` option for parsing cookies with arbitrary names? Or a new directive for similar. A bigger change to the language would be supporting arbitrarily-named variables. Perhaps a syntax such as `${cookie___Secure-foo}` or `$""cookie___Secure-foo""`."	Kevin Cox
1.19.x	2167	variable support in proxy_protocol stream module	nginx-module	1.19.x	enhancement		new	2021-04-22T13:18:25Z	2021-04-22T13:18:25Z	"Considering a use case where ngx_stream_ssl_preread_module (ssl_preread on) is used to detect and separate TLS from non-TLS connections, it would be great to have the ability to set proxy_protocol from map values that depend on whether the parsed incoming connection is TLS or non-TLS.

This would allow passing the original client IP to an HTTP backend that listens with proxy_protocol enabled.
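A sketch of what this could look like (the variable argument to proxy_protocol is the hypothetical part; today the directive only takes on/off):

{{{
stream {
    map $ssl_preread_protocol $use_pp {
        default  off;
        ~TLSv    on;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_protocol $use_pp;   # requested: map/variable support here
        proxy_pass 127.0.0.1:8080;
    }
}
}}}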

Effectively this would allow terminating TLS and non-TLS connections on a single port without losing the original client IP in logs."	s3rj1k
1.19.x	2216	Add .mjs to known JS MIME types	other	1.19.x	enhancement		new	2021-07-13T17:56:00Z	2025-01-03T00:54:21Z	"I was told to make a ticket here per the mailing list thread: http://mailman.nginx.org/pipermail/nginx-devel/2021-July/014176.html

.mjs is supported by Node.js, appears in JS specification examples (e.g. https://tc39.es/ecma262/#sec-hostresolveimportedmodule) and WHATWG specification examples (https://html.spec.whatwg.org/multipage/webappapis.html#integration-with-the-javascript-module-system), and has an upcoming MIME update per IETF (https://datatracker.ietf.org/doc/draft-ietf-dispatch-javascript-mjs/). It is also supported by various existing MIME databases and serves properly on various hosts like GitHub Pages without custom configuration."	Bradley Meck
1.19.x	2222	add_after_body concatenates (upstream proxied) gzipped content with uncompressed local data	nginx-module	1.19.x	enhancement		new	2021-07-27T13:22:01Z	2021-08-02T17:40:40Z	"I've now spent several days debugging to find out that, with Nginx's `add_after_body` directive, Nginx doesn't decompress and recompress gzipped upstream proxied responses when appending the result of the sub request (which yields uncompressed data).

I'd like to think that this should either produce a warning (as clients tend to discard the uncompressed data appended to the compressed data) or should transparently (maybe optionally) decompress, append and then recompress the http stream to produce output that is actually understandable by clients.

It may also be possible to compress the response from `add_after_body` and concatenate the compressed response to the compressed upstream servers response (not quite sure about my understanding of gzip here).

In any case, I'd like to suggest to either emit a warning (none was given, even with `error_log ... debug;`) or form correct (as in readable by clients) responses.
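A related workaround sketch (untested): ask the upstream for uncompressed responses so that nginx can append the sub request result and then compress the combined response itself:

{{{
location / {
    proxy_pass http://host.docker.internal:8001;
    # keep the upstream response uncompressed so add_after_body sees plain text
    proxy_set_header Accept-Encoding """";
    # recompress the combined response for the client
    gzip on;
}
}}}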

Some excerpts from my config:

{{{
include conf.d/error_logging.include;
include conf.d/workers.include;

http {
    # Disabling both gzip and enabling gunzip works around the problem that nginx tries to concatenate 
    # the add_after_body content to gziped upstream responses, which leads to the appended uncompressed
    # content being ignored/discarded by the client.
    # gzip off;
    # gunzip on;
    
    include conf.d/access_logging.include;
    include conf.d/mime_types.include;

    server {
        listen 8000;
        server_name localhost;

        location /proxy_static {
            # returns uncompressed responses
            alias /srv/www/static;
            add_after_body """"; # disable add after body to these locations
        }
        add_after_body /proxy_static/custom_css_and_js.html;
        
        location / {
            # returns gzip compressed responses
            proxy_pass http://host.docker.internal:8001;
        }
    }
    
}
}}}"	dwt@…
1.19.x	2224	HTTP/2 in nginx does not use double-GOAWAY for graceful connection shutdown	nginx-module	1.19.x	enhancement		reopened	2021-07-30T20:51:53Z	2021-08-03T01:33:01Z	"As defined in RFC 7540 §6.8:

> A server that is attempting to gracefully shut down a connection SHOULD send an initial GOAWAY frame with the last stream identifier set to 2^31^-1 and a NO_ERROR code. This signals to the client that a shutdown is imminent and that initiating further requests is prohibited. After allowing time for any in-flight stream creation (at least one round-trip time), the server can send another GOAWAY frame with an updated last stream identifier. This ensures that a connection can be cleanly shut down without losing requests.

I see multiple nginx tickets where clients are blamed for not retrying. But I saw no mention of the RFC recommendation nor the latency impact caused by nginx's behavior. Statements like ""It does not seem to be possible to resolve this on nginx side"" seem inaccurate.
https://trac.nginx.org/nginx/ticket/1250
https://trac.nginx.org/nginx/ticket/1590
https://trac.nginx.org/nginx/ticket/2155

I've seen users having trouble with this when interacting with grpc-java in the past, but only now chose to file an issue. Historically it seems users have increased keepalive_requests to reduce the rate of failures. It is becoming a bit more noticeable now because grpc-java has improved its error reporting to distinguish the case where a failure was caused by abrupt GOAWAY, so it is easier to notice poorly-behaved servers. This came up this time as part of https://github.com/grpc/grpc-java/issues/8310, but I have a resolution available for that issue.

I understand that nginx would need to put some limits on the number of additional RPCs and the length to allow for additional RPCs. I also understand that nginx doing graceful GOAWAY does not remove the need for client-side retries."	ejona.google.com@…
1.19.x	2233	Packages for Debian Bullseye should include 32-bit x86 binaries	nginx-package	1.19.x	enhancement		new	2021-08-21T04:23:12Z	2023-05-06T02:17:52Z	"[http://nginx.org/packages/mainline/debian/dists/bullseye/nginx/ Binary packages are not being built for 32-bit x86 Bullseye], while [http://nginx.org/packages/mainline/debian/dists/buster/nginx/ they are for Buster]. It'd be nice if they were.

We use the 32-bit distribution for improved memory efficiency when tracking large numbers of proxy_cached thumbnails on small (~1GB RAM) SSD-based VMs (64-bit is ~40% larger; this impacts nginx keys_zone as well as inode cache).

Unlike #1777, [https://www.debian.org/releases/bullseye/ i386 (or rather i686) is a supported platform for Debian Bullseye]. While slowly declining in use vs. amd64, [https://popcon.debian.org/ popcon shows i386 as being 36 times more popular than arm64], which is a supported nginx binary platform.

#2217 didn't mention 32-bit. It did mention AMI availability. However, it seems [https://wiki.debian.org/Cloud/AmazonEC2Image/Buster these weren't available for 32-bit Buster], yet Buster still has builds by Sergey (sb). Perhaps he has [https://wiki.debian.org/CrossCompiling a multiarch setup] which could be duplicated without much trouble?

Attempting to use a Bullseye-specific package results in it offering the distribution's own version. This may be problematic if it triggers when newer versions of nginx arrive, especially if the user is upgrading to Bullseye and misses the notices.

{{{
# apt update
...
Get:7 http://nginx.org/packages/mainline/debian bullseye InRelease [2,857 B]
...
N: Repository 'http://mirror.us.leaseweb.net/debian bullseye InRelease' changed its 'Version' value from '' to '11.0'
N: Repository 'http://mirror.us.leaseweb.net/debian bullseye InRelease' changed its 'Suite' value from 'testing' to 'stable'
...
N: Skipping acquire of configured file 'nginx/binary-i386/Packages' as repository 'http://nginx.org/packages/mainline/debian bullseye InRelease' doesn't support architecture 'i386'

# apt-get reinstall nginx/bullseye
...
Selected version '1.18.0-6.1' (Debian:11.0/stable [all]) for 'nginx'
Selected version '1.18.0-6.1' (Debian:11.0/stable [i386]) for 'nginx-core' because of 'nginx'
...

The following additional packages will be installed:
  geoip-database libgeoip1 libnginx-mod-http-geoip libnginx-mod-http-image-filter libnginx-mod-http-xslt-filter libnginx-mod-mail libnginx-mod-stream libnginx-mod-stream-geoip libxslt1.1 nginx-common nginx-core
Suggested packages:
  geoip-bin fcgiwrap nginx-doc ssl-cert
The following NEW packages will be installed:
  geoip-database libgeoip1 libnginx-mod-http-geoip libnginx-mod-http-image-filter libnginx-mod-http-xslt-filter libnginx-mod-mail libnginx-mod-stream libnginx-mod-stream-geoip libxslt1.1 nginx-common nginx-core
The following packages will be DOWNGRADED:
  nginx
0 upgraded, 11 newly installed, 1 downgraded, 0 to remove and 0 not upgraded.
Need to get 4,844 kB of archives.
After this operation, 10.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] n
}}}"	Laurence 'GreenReaper' Parry
1.19.x	2254	cache loader ignores reopen signal	nginx-core	1.19.x	enhancement		new	2021-10-07T15:10:00Z	2021-11-12T12:36:07Z	"Hi,
in our environment we use logrotate to rotate the logs regularly. We call `nginx -s reopen` after the logrotate, and most of the time it works fine. The only exception is right after nginx has been restarted, while the cache loader is reading the information about cached data from disk.

While the cache loader is running, it ignores `nginx -s reopen` and keeps the file descriptor of the original log open. When the rotated log file is deleted, the disk space can't be released, which may lead to a full log partition.

Steps to reproduce:

{{{
# cat /etc/logrotate.d/cdn_nginx_access
/var/log/nginx/access.log {
  create 0640 www-data adm
  daily
  missingok
  nocopytruncate
  notifempty
  rotate 0
  sharedscripts
  postrotate
    nginx -s reopen>/dev/null 2>&1
  endscript
  prerotate
      if [ -d /etc/logrotate.d/httpd-prerotate ]; then     run-parts /etc/logrotate.d/httpd-prerotate;   fi

  endscript
}

# cat /etc/logrotate.d/cdn_nginx_other

/var/log/nginx/error.log {
  create 0640 www-data adm
  daily
  missingok
  nocompress
  nocopytruncate
  notifempty
  rotate 0
  sharedscripts
  postrotate
    nginx -s reopen>/dev/null 2>&1
  endscript
  prerotate
      if [ -d /etc/logrotate.d/httpd-prerotate ]; then     run-parts /etc/logrotate.d/httpd-prerotate;   fi

  endscript
}

# systemctl restart nginx
# lsof /var/log | grep delet
# logrotate -f /etc/logrotate.d/cdn_nginx_access
# logrotate -f /etc/logrotate.d/cdn_nginx_other
# lsof /var/log | grep delet
nginx     26823 www-data    2w   REG    9,2      711665  1835015 /var/log/nginx/error.log.1 (deleted)
nginx     26823 www-data    4w   REG    9,2  5831379042  1835014 /var/log/nginx/access.log.1 (deleted)
nginx     26823 www-data    5w   REG    9,2      711665  1835015 /var/log/nginx/error.log.1 (deleted)

# ps -efww | grep 26823
www-data 26823 26626  0 13:16 ?        00:00:01 nginx: cache loader process
}}}"	mirek.chocholous.showmax.com@…
1.19.x	2275	Support Encrypted Client Hello	nginx-module	1.19.x	enhancement		new	2021-11-08T00:22:12Z	2023-11-20T23:40:07Z	"Current specification: https://datatracker.ietf.org/doc/draft-ietf-tls-esni/

Encrypted Client Hello removes a major source of information leakage when using TLS: the hostname. When combined with OCSP Must-Staple, the only information leaked over a TLS connection will be the source/dest IPs and traffic sizing (the latter of which can be mitigated with TLS 1.3's random padding). This offers a significant privacy improvement.

ECH has been implemented by Firefox, Cloudflare, H2O (in quictls), NSS, and BoringSSL.

ECH was previously known as ESNI (Encrypted SNI). The EFF promoted it in its Deeplinks blog: https://www.eff.org/deeplinks/2018/09/esni-privacy-protecting-upgrade-https"	Seirdy
1.19.x	2300	Link variable index from map module docs	documentation	1.19.x	enhancement		new	2022-01-05T12:53:21Z	2022-01-05T12:53:21Z	"It would be nice if there was a link from http://nginx.org/en/docs/http/ngx_http_map_module.html to http://nginx.org/en/docs/varindex.html, e.g. on the first ""variables"" text."	cweiske
1.19.x	2301	Add examples for core variables	documentation	1.19.x	enhancement		new	2022-01-05T13:39:55Z	2022-01-05T13:39:55Z	"I was searching the variable list (http://nginx.org/en/docs/varindex.html) and then core variables (http://nginx.org/en/docs/http/ngx_http_core_module.html#variables) and got confused because the textual descriptions do not always help.

My goal was to find the variable containing the file path requested by the browser (""/path/to/file"" when the browser accesses ""http://example.org/path/to/file?foo=bar"").

It would have helped me if the variables would have had examples. Please add them to the docs.

$request_filename: /var/www/webapp/public/icons/external.svg

$request_uri: /icons/external.svg?foo=bar
"	cweiske
1.19.x	2332	Include $request_id in error.log messages	nginx-core	1.19.x	enhancement		new	2022-03-10T11:54:40Z	2023-03-08T16:45:10Z	In order to make debugging easier it might make sense to include the $request_id variable in error.log messages when these are associated with incoming requests.	heikojansen@…
1.19.x	2350	Option to have set_real_ip_from use the proxied client ip when using proxy protocol.	documentation	1.19.x	enhancement		new	2022-05-09T23:14:40Z	2024-01-11T04:28:52Z	"I'm running nginx on kubernetes in the following configuration:

client -> cloudflare -> load balancer -> nginx ingress -> service

My load balancer runs the proxy protocol and sends traffic to nginx, which is on a private network. I'd like to trust the X-Forwarded-For header from Cloudflare, but I can't configure that because ""set_real_ip_from"" refers to the IP of the incoming connection to nginx from my load balancer. When I set ""set_real_ip_from"" to my private network, which the load balancer is on, ngx_http_realip_module trusts the X-Forwarded-For headers sent to it by my load balancer, which could be coming from anywhere, so it's very easily spoofable.
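To illustrate, a sketch with a made-up directive name for the proposed behavior:

{{{
real_ip_header X-Forwarded-For;
set_real_ip_recursive on;

# what can be expressed today: trust the TCP peer (the load balancer's network)
set_real_ip_from 10.0.0.0/8;

# proposed (hypothetical directive): validate set_real_ip_from against the
# client address carried in the PROXY protocol header, not the TCP peer
# real_ip_source proxy_protocol;
}}}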

I'd like the option for ""set_real_ip_from"" to check the client IP carried in the proxy protocol header of the request forwarded to nginx."	circlingthesun@…
1.19.x	2401	Deployment on Heroku: add options to handle SIGTERM	documentation	1.19.x	enhancement		new	2022-10-21T10:19:13Z	2022-10-21T10:19:37Z	"I am using Nginx on Heroku in Docker container along with Gunicorn/Supervisor.

When Heroku scales down, it first sends SIGTERM to all processes in the container and after 30 seconds it sends SIGKILL. I don't think I can change this behavior.

After receiving SIGTERM, nginx terminates ungracefully, not waiting for in-flight requests.
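On platforms where the container stop signal is configurable, the usual workaround is to send SIGQUIT, nginx's graceful-shutdown signal, instead, e.g. via a Dockerfile directive; as far as I can tell this is not possible on Heroku:

{{{
STOPSIGNAL SIGQUIT
}}}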

I wonder if the possibility to terminate gracefully on SIGTERM signal could be added."	PetrDlouhy@…
1.19.x	2410	Add a doctype to autoindex HTML output	nginx-module	1.19.x	enhancement		accepted	2022-11-16T14:35:05Z	2023-07-20T15:03:12Z	"Currently the output of a directory by the autoindex module looks like this:

{{{
<html>
<head><title>Index of /jquery/3.6.0/</title></head>
<body>
<h1>Index of /jquery/3.6.0/</h1><hr><pre><a href=""../"">../</a>
<a href=""dist/"">dist/</a>                                              30-May-2022 11:52                   -
<a href=""external/"">external/</a>                                          30-May-2022 11:52                   -
<a href=""src/"">src/</a>                                               30-May-2022 11:52                   -
<a href=""AUTHORS.txt"">AUTHORS.txt</a>                                        30-May-2022 11:52               12448
<a href=""LICENSE.txt"">LICENSE.txt</a>                                        30-May-2022 11:52                1097
<a href=""README.md"">README.md</a>                                          30-May-2022 11:52                1996
<a href=""package.json"">package.json</a>                                       30-May-2022 11:52                3027
</pre><hr></body>
</html>
}}}

Would it be possible to update the output to be proper HTML5 (including a DOCTYPE)?
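That is, the same listing, just opening with something like:

{{{
<!DOCTYPE html>
<html>
<head><title>Index of /jquery/3.6.0/</title></head>
}}}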

The reason for this that we're using a autoindex directory for an additional download source for Python packages. But using PIP 22.0.2 gives the following warning:

> DEPRECATION: The HTML index page being used (http://www.example.org/python-downloads/) is not a proper HTML 5 document. This is in violation of PEP 503 which requires these pages to be well-formed HTML 5 documents. Please reach out to the owners of this index page, and ask them to update this index page to a valid HTML 5 document. pip 22.2 will enforce this behaviour change. Discussion can be found at https://github.com/pypa/pip/issues/10825"	doerwalter@…
1.19.x	2434	Support dark mode in error pages	other	1.19.x	enhancement		new	2023-01-04T16:06:39Z	2023-01-04T16:06:39Z	"Currently the default `index.html` and `500.html` pages support dark mode. It would be good for accessibility if this is extended to other builtin pages, such as error pages and redirect pages in https://trac.nginx.org/nginx/browser/nginx/src/http/ngx_http_special_response.c.

Currently this is handled using CSS where it’s supported

{{{
html { color-scheme: light dark; }
}}}

It can also be supported using an HTML meta tag

{{{
<meta name=""color-scheme"" content=""light dark"" />
}}}"	Remco Haszing
1.19.x	2448	Restrict Request Response Cycle Length	nginx-core	1.19.x	enhancement		new	2023-02-03T03:36:28Z	2023-02-03T03:36:28Z	"Hi

I'd like to suggest an option to restrict request-response cycle lengths (in time) to explicit values.

Currently we only have options to restrict the 'time between reads or writes' from/to a client or upstream, but not the total time (something matching against the $request_time variable).

"	sootie12.googlemail.com@…
1.1.x	146	Age header for proxy_http_version 1.1	nginx-core	1.1.x	enhancement	somebody	new	2012-04-07T16:12:23Z	2016-05-12T13:37:06Z	As far as I understand RFC 2616, an HTTP/1.1 proxy server must send an 'Age' header for responses generated from its own cache (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.6). However, nginx 1.1.18 does not seem to send this header when it serves a reply from its cache and 'proxy_http_version' is set to '1.1'.	Christian Bönning
1.1.x	407	Cache X-Accel-Redirect responses (from fastcgi)	nginx-module	1.1.x	enhancement		new	2013-09-05T23:20:05Z	2022-01-17T15:44:06Z	"I've got a FastCGI (PHP-FPM) backend-server which will do some preprocessing and return an X-Accel-Redirect header to tell nginx to serve the file. I would like nginx to cache this response; not the actual response to the client but just the mapping from $request_uri to the headers returned by FPM. On subsequent requests nginx could get that reply from the cache without bothering FPM and resolve the X-Accel-Redirect and serve the file.

(In my case I would like to use X-Accel-Expires to tell nginx it can cache the request.)"	Jille Timmermans
1.21.x	2389	"""proxy_cache_background_update on"" ignored using subrequest (more exactly: nested subrequest)"	nginx-core	1.21.x	defect		reopened	2022-09-08T07:11:32Z	2023-10-24T09:26:55Z	"The ticket was originally opened for nginx/njs, but it is a problem with the Nginx subrequest mechanism.
See: https://github.com/nginx/njs/issues/573


=== Text from the original issue:

Requesting, through an njs subrequest, a location that has caching enabled with proxy_cache_background_update on and proxy_cache_use_stale updating will never return stale data.

That seems to be a bug because, when requesting the same location directly, the cache does return stale data.

In the provided example there are three locations:

{{{/longRunningRequest/}}}:
Has a js handler delaying the result for two seconds

{{{/longRunningRequestCached/}}}:
Has a cache that is valid for one second, allowing stale data while updating and proxy passes to /longRunningRequest/

{{{/longRunningRequestThroughNjs/}}}:
Has a js handler making a subrequest to /longRunningRequestCached/

Expected behavior:
{{{/longRunningRequestCached/}}} and {{{/longRunningRequestThroughNjs/}}} will always return immediately (except for the first request), serving stale data if needed.

What actually happens:
{{{/longRunningRequestCached/}}} works as described above
{{{/longRunningRequestThroughNjs/}}} is blocking every time the cache entry has to be renewed.





=== Issue explanation by Dmitry Volyntsev:

>What you see, is the limitation of an nginx subrequest mechanism. r.subrequest() creates an internal nginx subrequest with NGX_HTTP_SUBREQUEST_BACKGROUND and NGX_HTTP_SUBREQUEST_IN_MEMORY flags.
>
>The proxy_cache_background_update directive also relies on subrequests. The main problem is: nginx does not support nested subrequests at this point.
>
>Specifically, the code of the cache update below checks the r->background flag, which is 1 for r.subrequest() and does not try to make a background update


=== example:

nginx config:
{{{
    js_import myjs.js;

    server {

        location = /longRunningRequestThroughNjs/ {
                js_content myjs.requestToCachedLongRunningRequest;
        }

        location = /longRunningRequest/ {
                js_content myjs.simulateLongRunningRequest;
        }

        location = /longRunningRequestCached/ {
                proxy_connect_timeout 1s;
                proxy_read_timeout 12s;
                proxy_set_header Content-Length """";
                proxy_pass http://127.0.0.1/longRunningRequest/;
                proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;
                proxy_hide_header Set-Cookie;
                proxy_buffering on;

                proxy_cache_use_stale http_404 error timeout invalid_header updating
                                        http_500 http_502 http_503 http_504;

                proxy_cache_background_update on;
                proxy_cache_lock on;
                proxy_cache_lock_age 50s;
                proxy_cache_lock_timeout 60s;
                proxy_cache_revalidate on;
                proxy_cache MYCACHE;
                proxy_cache_valid 200 1s;
                proxy_cache_key $http_accept_encoding$uri;

                add_header x-cache-status $upstream_cache_status;
        }
    }
}}}

myjs.js:
{{{
function delayed(r) {
  r.return(200);
}

function simulateLongRunningRequest(r) {
  setTimeout(delayed, 2000, r);
}

function requestToCachedLongRunningRequest(r) {
  r.subrequest('/longRunningRequestCached/')
    .then(reply => {
      r.return(200);
    })
    .catch(_ => r.return(401));
}

export default { simulateLongRunningRequest, requestToCachedLongRunningRequest };
}}}


"	dfex55@…
1.21.x	2639	Embedded trailer variables have no value	nginx-module	1.21.x	defect		new	2024-05-09T20:24:07Z	2024-05-13T10:38:13Z	"Embedded trailer variables ($sent_trailer_ & $upstream_trailer_) for ngx_http_upstream_module and ngx_http_headers_module do not have any value.

Using the directive: 
`add_trailer X-TestTrailer $sent_trailer_x_testtrailer`
OR
`add_trailer X-TestTrailer $upstream_trailer_x_testtrailer`

does not work when the upstream server sends an HTTP trailer with the name `X-TestTrailer`.

In testing we've found that add_trailer is working, but the embedded variables do not have values from the upstream response trailers.

**Reproducible POC**: 

Spin up a simple server that serves a trailer in every response. A simple node app like so works just fine:
{{{
var http = require('http');
var fs = require('fs');

var server = http.createServer(function(req, res) {
  console.log('request was made: ' + req.url);
  // declaring the Trailer header is required for Node to emit trailers
  res.writeHead(200, {'Content-Type': 'application/json', 'Trailer': 'X-TestTrailer'});
  res.addTrailers({'X-TestTrailer': 'This is a trailer message'});
  res.end('End of Response');
});

server.listen(3000, '127.0.0.1');
console.log('listening on port 3000');
}}}

In the nginx.conf file under your server (should point to the sample app), add: 
`add_trailer X-TestTrailer $sent_trailer_x_testtrailer;`
AND
`add_trailer X-UpstreamTestTrailer $upstream_trailer_x_testtrailer;`

Start nginx 

Make a request directly to the app server ( you can use curl with the --raw argument to see trailers ) note the trailers are received.

Make the same request to the nginx proxy and note the trailers are not received."	kylesimon3@…
1.21.x	2668	client_body_buffer_size - Body Larger Than Buffer Size Is Omitted	other	1.21.x	defect		new	2024-07-14T11:42:26Z	2024-07-14T12:41:56Z	"I set client_body_buffer_size to 4MB and client_max_body_size to 150MB.
When I send a request with a 6 MB body (6MB sized argument), in access_by_lua_block, the body appears as empty.

{{{
    access_by_lua_block {

        ngx.req.read_body()
        local body_data = ngx.req.get_body_data()

        if body_data then
            ngx.log(ngx.ERR, ""Request body: "", body_data)
        else
            ngx.log(ngx.ERR, ""No request body found"")
        end
     }
}}}

I see the log ""No request body found"", even though the body was clearly not empty.

client_body_in_single_buffer is default (wasn't set).

When I increased client_body_buffer_size to 10MB, I had no issues with the 6MB body.
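For reference, a workaround sketch based on lua-nginx-module's documented behavior: bodies larger than client_body_buffer_size are spilled to a temporary file, in which case ngx.req.get_body_data() returns nil and ngx.req.get_body_file() returns the file path:

{{{
    access_by_lua_block {
        ngx.req.read_body()
        local body_data = ngx.req.get_body_data()
        if not body_data then
            -- body was buffered to a temp file instead of memory
            local path = ngx.req.get_body_file()
            if path then
                local f = io.open(path, 'rb')
                body_data = f:read('*a')
                f:close()
            end
        end
    }
}}}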

Why is that so? Why does the buffer size cause the body to appear as empty?"	shalhevetm@…
1.21.x	2419	new variable is needed (the local port nginx uses when sending the request to the back-end)	nginx-module	1.21.x	enhancement		new	2022-11-29T10:00:05Z	2022-11-29T10:00:05Z	"Sometimes $upstream_response_time in the nginx access.log shows that the back-end took a long time, but the back-end's own log shows that only a little time was spent.

So I need to take a tcpdump to judge who is right. But when I capture packets while there are many concurrent requests, it is hard to find the packet that matches a specific access log entry; something is needed to help with this filtering.

The local port of the nginx-side socket used to send the request to the back-end would be a good filter if it were recorded in access.log. I tried to find it in the nginx documentation, but it does not seem to exist.
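For example (the $upstream_local_port name is made up; no such variable exists today):

{{{
log_format upstream_dbg '$remote_addr -> :$upstream_local_port -> $upstream_addr '
                        'urt=$upstream_response_time rt=$request_time';
}}}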

So a new embedded variable representing this port is needed."	gentle-king@…
1.22.x	2618	HTTP session is finalized early	nginx-module	1.22.x	defect		new	2024-03-15T08:04:11Z	2024-08-27T16:23:35Z	"Reproduce conditions:
1. HTTP POST with big data. It needs several write-socket operations.
2. HTTP response is received before the whole HTTP content is sent out.

Current behavior:
The HTTP session will be finalized before the HTTP request is sent out completely.

Expected behavior:
Nginx should not stop sending the HTTP request payload until all HTTP payload is delivered to the backend server, even if a response is received early.

"	xinyanglbeijing@…
1.22.x	2661	ssl_verify_client can't be configured with the result of a map operation	nginx-module	1.22.x	defect		new	2024-06-26T20:54:51Z	2024-08-23T16:02:23Z	"Environment: Debian, kernel 6.1.90-1 (PREEMPT_DYNAMIC)
=== Using nginx to terminate TLS connections for different domains ===
I have the following configuration, to serve different domains with individual TLS configurations:
{{{
stream {
  map $ssl_preread_server_name $name {
    default           server_one;
    two.example.org   server_two;
    three.example.org server_three;
  }  
  
  map $ssl_preread_server_name $authen_cert {
    default           ""/etc/nginx/public.cert"";
    two.example.org   ""None"";
    three.example.org ""None"";
  }  
  
  map $ssl_preread_server_name $svc {
    default           ""on"";
    two.example.org   ""off"";
    three.example.org ""off"";
  }

  server {
    listen 127.0.0.1:3128 ssl;
    ssl_protocols TLSv1.2 TLSv1.3; 
    ssl_certificate /etc/letsencrypt/live/$name/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$name/privkey.pem;
    ssl_client_certificate $authen_cert ;
    ssl_verify_client $svc ; <==== error happens here
    proxy_pass 127.0.0.2:3128;
  }    
}
}}}

I receive the error-message: **//invalid value ""$svc"" in /etc/nginx/sites-enabled/default:27//**.

So I can't use the associated configuration for **ssl_client_certificate**, which renders my configuration unusable for targets needing cert auth.

Unfortunately, if-conditions also do not work here, even though the documentation says that if-conditions are available in server blocks.
"	scio-nescio@…
1.22.x	2676	Nginx over Docker goes to endless loop and never replies	nginx-core	1.22.x	defect		new	2024-08-06T18:29:25Z	2024-08-06T18:29:25Z	"Hi,
I am using nginx in a Docker container on Windows. From time to time, nginx becomes unresponsive: there are no logs, and the only sign I can see for now is the nginx.exe process stuck at 25% CPU.

nginx version is 1.22.4

What could be done to provide some logs?
"	jeremy.carnus@…
1.22.x	2615	Don't proxy connection-specific headers by default	nginx-module	1.22.x	enhancement		new	2024-03-06T13:59:03Z	2024-03-06T13:59:03Z	"== Problem ==

If you set up a proxy to another server, the proxy connection can only be done with HTTP 1.0/1.1. In most cases, a browser will then connect to nginx with HTTP/2.0.

The proxy server will then likely respond with an Upgrade header. Nginx then passes this header to the client. But, in HTTP/2 these connection-specific headers are invalid and some clients will reject the message (according to the specs, all clients should be rejecting the message).
https://www.rfc-editor.org/rfc/rfc9113#section-8.2.2-1

== Solution ==

Even if the HTTP version is the same, these connection-specific headers apply to the connection between nginx and the proxy only. They should not have any relation to the client's connection and therefore should not be passed through.
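
Until the default changes, a per-location workaround sketch using the standard `proxy_hide_header` directive (the upstream name `backend` is illustrative):

{{{
location / {
    proxy_pass http://backend;
    # drop the connection-specific header from the proxied response
    proxy_hide_header Upgrade;
}
}}}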

Disabling passing of these headers by default will make configuration easier and result in users experiencing fewer issues when trying to create a proxy."	Sam Bull
1.22.x	2426	Nginx repository for debian doesn't have 1.22.1 deb for Buster	documentation	1.22.x	defect		new	2022-12-14T07:20:52Z	2022-12-15T05:31:03Z	"Hi, nginx 1.22.1 (current stable) was released on 19-Oct-2022 and contains a fix for a CVE (https://nginx.org/en/CHANGES-1.22) but is not available to Debian Buster users.

nginx.org's package repository for Debian contains a build for Debian Bullseye, but not for Debian Buster. Since Debian Buster is still supported as a LTS release until June 30, 2024 (https://wiki.debian.org/LTS), please release a .deb file for buster for v1.22.1 (and all future releases until June 30, 2024).

This link should not 404 after the build is complete and released:

https://nginx.org/packages/debian/pool/nginx/n/nginx/nginx_1.22.1-1~buster_amd64.deb

Thank you."	parkr@…
1.22.x	2429	Ship FHS compliant packages (/var/run > /run)	nginx-package	1.22.x	defect		new	2022-12-20T01:46:48Z	2023-04-06T11:52:39Z	"In 2015 Linux FHS was updated.
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html

And recently systemd started complaining about this on newer distro releases like RHEL 9.X.

https://access.redhat.com/solutions/4154291

I inspected official stable and mainline RPM package and found that it still uses /var/run. It should switch to /run for Linux distro packages. This is not actually a nginx issue but rather packaging issue. So pardon me if I mislabeled any components."	istiak101@…
1.22.x	2617	nginx 1.22 - sending GOAWAY to client after 60s	documentation	1.22.x	defect		new	2024-03-07T07:29:08Z	2024-03-07T07:59:07Z	"Hi nginx community,
I wanted to ask about the changes that happened from nginx 1.18 to 1.20.

https://github.com/nginx/nginx/commit/0f5d0c5798eacb60407bcf0a76fc0b2c39e356bb
""With this change, behaviour of HTTP/2 becomes even closer to HTTP/1.x,
and client_header_timeout instead of keepalive_timeout is used before
the first request is received.""

Usually 5G NFs establish two HTTP/2 connections: one used for requests and the other as a standby that is used once max streams is reached, so on the standby only HTTP/2 keepalive pings are sent.

On older nginx (say 1.18), the above was achieved using client_header_timeout and keepalive_timeout.

Now, with the changes in 1.20: can keepalive_timeout be used to maintain the former behaviour of nginx 1.18, i.e. keep the connection open while idle (no control or data frames, hence no HTTP/2 ping)?
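
As a sketch (values are illustrative, and whether this fully restores the 1.18 behaviour is an assumption): raising the keepalive timers in the server block keeps idle HTTP/2 connections open longer under the 1.20+ semantics:

{{{
server {
    listen 449 ssl http2;
    keepalive_timeout 3600s;  # idle time between requests before GOAWAY
    keepalive_time 24h;       # total connection lifetime (since 1.19.10)
}
}}}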


{{{
# h2c connect https://<IP>:449;while true;do sleep 1;h2c ping;done
[2024-02-14 15:35:13] -> SETTINGS(0)
[2024-02-14 15:35:13] <- SETTINGS(0)
[2024-02-14 15:35:13] <- WINDOW_UPDATE(0)
[2024-02-14 15:35:13] -> SETTINGS(0)
[2024-02-14 15:35:13] <- SETTINGS(0)
[2024-02-14 15:35:14] -> PING(0)
[2024-02-14 15:35:14] <- PING(0)
[2024-02-14 15:35:15] -> PING(0)
[2024-02-14 15:35:15] <- PING(0)
[2024-02-14 15:35:16] -> PING(0)
[2024-02-14 15:35:16] <- PING(0)
[2024-02-14 15:35:17] -> PING(0)
[2024-02-14 15:35:17] <- PING(0)
[2024-02-14 15:35:18] -> PING(0)
[2024-02-14 15:35:18] <- PING(0)
[2024-02-14 15:35:19] -> PING(0)
[2024-02-14 15:35:19] <- PING(0)
[2024-02-14 15:35:20] -> PING(0)
[2024-02-14 15:35:20] <- PING(0)
[2024-02-14 15:35:21] -> PING(0)
[2024-02-14 15:35:21] <- PING(0)
[2024-02-14 15:35:22] -> PING(0)
[2024-02-14 15:35:22] <- PING(0)
[2024-02-14 15:35:23] -> PING(0)
[2024-02-14 15:35:23] <- PING(0)
[2024-02-14 15:35:24] -> PING(0)
[2024-02-14 15:35:24] <- PING(0)
[2024-02-14 15:35:25] -> PING(0)
[2024-02-14 15:35:25] <- PING(0)
[2024-02-14 15:35:26] -> PING(0)
[2024-02-14 15:35:26] <- PING(0)
[2024-02-14 15:35:27] -> PING(0)
[2024-02-14 15:35:27] <- PING(0)
[2024-02-14 15:35:28] -> PING(0)
[2024-02-14 15:35:28] <- PING(0)
[2024-02-14 15:35:29] -> PING(0)
[2024-02-14 15:35:29] <- PING(0)
[2024-02-14 15:35:30] -> PING(0)
[2024-02-14 15:35:30] <- PING(0)
[2024-02-14 15:35:31] -> PING(0)
[2024-02-14 15:35:31] <- PING(0)
[2024-02-14 15:35:32] -> PING(0)
[2024-02-14 15:35:32] <- PING(0)
[2024-02-14 15:35:33] -> PING(0)
[2024-02-14 15:35:33] <- PING(0)
[2024-02-14 15:35:34] -> PING(0)
[2024-02-14 15:35:34] <- PING(0)
Error while reading next frame: EOF
[2024-02-14 15:35:35] <- GOAWAY(0) << exactly 22s 

 

The issue is observed in both nginx 1.24 and nginx 1.22.

TEST 2 (with nginx 1.18)

$ /usr/sbin/nginx -v
nginx version: nginx/1.18.0 (1.18.0-1.el8)
grep client_header_timeout /etc/nginx/nginx.conf
        client_header_timeout           60s;
h2c connect https://<IP>:8447;while true;do sleep 60;h2c ping;done /* sending ping every 60s */
[2024-02-14 15:46:18] -> SETTINGS(0)
[2024-02-14 15:46:18] <- SETTINGS(0)
[2024-02-14 15:46:18] <- WINDOW_UPDATE(0)
[2024-02-14 15:46:18] -> SETTINGS(0)
[2024-02-14 15:46:18] <- SETTINGS(0)
[2024-02-14 15:47:18] -> PING(0)
[2024-02-14 15:47:18] <- PING(0)
[2024-02-14 15:48:18] -> PING(0)
[2024-02-14 15:48:18] <- PING(0)
[2024-02-14 15:49:18] -> PING(0)
[2024-02-14 15:49:18] <- PING(0)
[2024-02-14 15:50:19] -> PING(0)
[2024-02-14 15:50:19] <- PING(0)
[2024-02-14 15:51:19] -> PING(0)
[2024-02-14 15:51:19] <- PING(0)
[2024-02-14 15:52:19] -> PING(0)
[2024-02-14 15:52:19] <- PING(0)

In nginx 1.18, after 60s the connection is not broken.
}}}


Thanks"	drawte786@…
1.22.x	2642	proxy_cache_revalidate seems to prevent the cache manager to remove inactive cache objects	nginx-module	1.22.x	defect		new	2024-05-23T15:13:31Z	2024-05-23T16:48:22Z	"My config:

{{{
http {
...
proxy_cache_path /srv/cache/nginx_cache levels=1:2 keys_zone=cache:128m max_size=2048m inactive=10m;
...
server {
location / {
...
proxy_cache_valid 200 1m;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_lock on;
proxy_cache_background_update on;
proxy_cache_revalidate on;
...
}
...
}
...
}
}}}

Everything is working as expected, except that no files are removed from the disk cache.
The backend is not actually capable of honouring the revalidate mechanism just yet but I added it in there to be ready when needed.

Since I clearly remembered that the cache manager does actually remove the old, inactive files for other sites, I fiddled with the config enough times to conclude that the `proxy_cache_revalidate` option is the one stopping the removal of the old inactive cache objects.

My issue is that I could not find:
1. how often the cache manager goes through the files to remove the old inactive objects (or clear up some space if min_free or max_size are used)
2. any link in any documentation between `proxy_cache_revalidate` and the cache manager process.
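
Regarding point 1: `proxy_cache_path` exposes cache manager tunables (since nginx 1.11.5); the values below are the documented defaults, added to the path from the config above as a sketch:

{{{
proxy_cache_path /srv/cache/nginx_cache levels=1:2 keys_zone=cache:128m
                 max_size=2048m inactive=10m
                 manager_files=100 manager_sleep=50ms manager_threshold=200ms;
}}}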

The technical issue is solved, but I would appreciate a pointer to documentation that explains this better.

Thank you!"	teodorescuserban@…
1.23.x	2465	Execute system commands with njs (JavaScript)	other	1.23.x	enhancement		new	2023-03-08T07:08:05Z	2023-12-11T04:36:05Z	"Executing system commands in njs scripts could be very useful. Immediate use cases are:
1. Executing a media codec command to return the byte position within a media file at a specific time (milliseconds).
1. Executing an image resizing command to create smaller images (thumbnails) when original, full-size images are uploaded.

Additionally, providing input/output to system commands would provide us with a simple API to extend functionality beyond what njs natively provides.

Currently, Lua module [https://github.com/openresty/lua-resty-shell lua-resty-shell] provides a feature to execute commands. It would be great to have this in njs too."	anthumchris@…
1.23.x	2454	image_filter resize is not working correctly with some PNG files.(nginx is changing background color)	nginx-module	1.23.x	defect		new	2023-02-20T18:08:35Z	2023-12-19T15:51:35Z	"Hello, this is my configuration for image resizing 



{{{
location ~ (?<width>\d+)\/storage\/(?<folder>.+)\/(?<image>.+)$ {
    alias /var/www/html/web/storage/app/public/$folder/$image;
    image_filter resize $width $width;
    image_filter_jpeg_quality 80;
    image_filter_buffer 20M;
}
}}}
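
In case it is related: the GD library behind image_filter only preserves transparency for GIF and palette-based PNG images, controlled by `image_filter_transparency` (on by default); a sketch (whether it affects this particular PNG is an assumption):

{{{
location ~ (?<width>\d+)\/storage\/(?<folder>.+)\/(?<image>.+)$ {
    alias /var/www/html/web/storage/app/public/$folder/$image;
    image_filter resize $width $width;
    image_filter_transparency on;  # default; turning it off trades alpha for quality
    image_filter_jpeg_quality 80;
    image_filter_buffer 20M;
}
}}}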

----

before resize : 
[[Image(https://preview.redd.it/pysc1bturyfa1.png?width=600&format=png&auto=webp&v=enabled&s=40849fda3d0835f3969028d240820061e668c52d)]]

After resize:
[[Image(https://preview.redd.it/w2ku9mvzryfa1.png?width=150&format=png&auto=webp&v=enabled&s=5b4406c6a180f50a1032355d13ac7f68be28c793)]]"	walidlaggoune159@…
1.23.x	2484	When reuseport is not present with listen 443 quic, only HTTP/2 works not HTTP/3	http/3	1.23.x	defect		reopened	2023-04-14T10:03:39Z	2024-09-30T10:17:19Z	"listen 443 quic;
If reuseport is not present with the above listen line, then only HTTP/2 works not HTTP/3. Add reuseport then it works fine. (chrome/firefox).

debug log:

{{{
2023/04/14 17:16:24 [debug] 188209#188209: *1 free: 0000AAAAFFA50830, unused: 2888
2023/04/14 17:16:24 [debug] 188209#188209: *1 free: 0000AAAAFFAB8130
2023/04/14 17:16:24 [debug] 188209#188209: quic recvmsg on 0.0.0.0:443, ready: 1
2023/04/14 17:16:24 [debug] 188209#188209: posix_memalign: 0000AAAAFF9D3E00:512 @16
2023/04/14 17:16:24 [debug] 188209#188209: malloc: 0000AAAAFFA50830:1250
2023/04/14 17:16:24 [debug] 188209#188209: *3 quic recvmsg: 220.233.6.16:13281 fd:10 n:1250
2023/04/14 17:16:24 [debug] 188209#188209: *3 http3 init session
2023/04/14 17:16:24 [debug] 188209#188209: *3 posix_memalign: 0000AAAAFFA50D20:512 @16
2023/04/14 17:16:24 [debug] 188209#188209: *3 add cleanup: 0000AAAAFFA50EE0
2023/04/14 17:16:24 [debug] 188209#188209: *3 event timer add: 10: 180000:11113282
2023/04/14 17:16:24 [debug] 188209#188209: *3 quic run
2023/04/14 17:16:24 [debug] 188209#188209: *3 quic packet rx long flags:c4 version:1
2023/04/14 17:16:24 [debug] 188209#188209: *3 quic packet rx init len:1232
2023/04/14 17:16:24 [debug] 188209#188209: *3 quic packet rx dcid len:8 80be092b143c86cd
2023/04/14 17:16:24 [debug] 188209#188209: *3 quic packet rx scid len:0 
2023/04/14 17:16:24 [debug] 188209#188209: *3 quic address validation token len:0 
2023/04/14 17:16:24 [debug] 188209#188209: *3 sendmsg: 107 of 107
2023/04/14 17:16:24 [debug] 188209#188209: *3 quic retry packet sent to 
2023/04/14 17:16:24 [debug] 188209#188209: *3 quic packet done rc:-4 level:init decr:0 pn:0 perr:0
2023/04/14 17:16:24 [debug] 188209#188209: *3 quic packet rejected rc:-4, cleanup connection
2023/04/14 17:16:24 [debug] 188209#188209: *3 reusable connection: 0
2023/04/14 17:16:24 [debug] 188209#188209: *3 run cleanup: 0000AAAAFFA50EE0
2023/04/14 17:16:24 [debug] 188209#188209: *3 event timer del: -1: 11113282
2023/04/14 17:16:24 [debug] 188209#188209: *3 free: 0000AAAAFFA50830
2023/04/14 17:16:24 [debug] 188209#188209: *3 free: 0000AAAAFF9D3E00, unused: 8
2023/04/14 17:16:24 [debug] 188209#188209: *3 free: 0000AAAAFFA50D20, unused: 40
2023/04/14 17:16:24 [debug] 188209#188209: quic recvmsg() not ready (11: Resource temporarily unavailable)
2023/04/14 17:16:24 [debug] 188209#188209: *1 http2 idle handler
2023/04/14 17:16:24 [debug] 188209#188209: *1 reusable connection: 0
2023/04/14 17:16:24 [debug] 188209#188209: *1 posix_memalign: 0000AAAAFF9CA1E0:4096 @16
2023/04/14 17:16:24 [debug] 188209#188209: *1 http2 read handler
2023/04/14 17:16:24 [debug] 188209#188209: *1 SSL_read: 158
2023/04/14 17:16:24 [debug] 188209#188209: *1 SSL_read: -1
2023/04/14 17:16:24 [debug] 188209#188209: *1 SSL_get_error: 2
}}}
"	skygunner@…
1.23.x	2490	the backup upstream response inherits the response value of the previous upstream that failed.	nginx-module	1.23.x	defect		new	2023-05-02T06:21:49Z	2023-09-22T22:05:37Z	"When an upstream configuration defining primary and backup servers is set up as follows,
if the primary server returns a response with a status code listed in proxy_next_upstream and a ""Cache-Control: max-age=XX"" header, the response from the backup server will be cached even though it does not have a ""Cache-Control"" header.


{{{
upstream upstream_http {
    server unix:/run/nginx_1.sock max_fails=1 fail_timeout=10s;
    server unix:/run/nginx_2.sock max_fails=1 fail_timeout=10s backup;
}
}}}

primary upstream server's response:

{{{
HTTP/1.1 500 Internal Server Error
Server: -
Date: -
Content-Type: text/html
Content-Length: 174
Connection: keep-alive
Cache-Control: max-age=15
}}}

backup upstream server's response:
{{{
HTTP/1.1 200 OK
Server: -
Date: -
Content-Type: application/octet-stream
Content-Length: 30
Connection: keep-alive
}}}

Based on the debug log, it appears that when receiving the response from the backup server, it is marked as ""http cacheable: 1"", and is cached for the amount of time specified by the ""Cache-Control: max-age=15"" header on the primary server.

{{{
[debug] 8278#0: *1 http write filter: l:0 f:0 s:184
[debug] 8278#0: *1 http file cache set header
[debug] 8278#0: *1 http cacheable: 1
[debug] 8278#0: *1 http proxy filter init s:200 h:0 c:0 l:30
[debug] 8278#0: *1 http upstream process upstream
}}}

It seems that the initialization is insufficient when the upstream transitions, because applying this patch prevents the backup response from being cached.


{{{
diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c
index 9cc202c9..1487e9ca 100644
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -1626,6 +1626,9 @@ ngx_http_proxy_reinit_request(ngx_http_request_t *r)
     r->upstream->pipe->input_filter = ngx_http_proxy_copy_filter;
     r->upstream->input_filter = ngx_http_proxy_non_buffered_copy_filter;
     r->state = 0;
+    if (r->cache != NULL) {
+        r->cache->valid_sec = 0;
+    }
 
     return NGX_OK;
 }
}}}


Is there a better way to initialize to prevent each server in the upstream from affecting the response of the other servers?
※ I understand that it is not common for a status code 500 response to have a ""Cache-Control: max-age=XX"" header. However, I sometimes receive such responses in my nginx reverse proxy and I want to cache them as a so-called negative cache.

I am attaching the configuration and debug log.

conf.

{{{
worker_processes  1;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    upstream upstream_http {
        server unix:/run/nginx_1.sock max_fails=1 fail_timeout=10s;
        server unix:/run/nginx_2.sock max_fails=1 fail_timeout=10s backup;
    }

    server {
        listen unix:/run/nginx_1.sock;

        access_log /var/log/nginx/access_y.log;
        error_log  /var/log/nginx/error_1.log debug;

        location / {
            resolver 127.0.0.53;
            resolver_timeout 5s;
            proxy_http_version 1.1;
            proxy_pass ht tp://fail.example.com/$request_uri;
            proxy_set_header Connection """";
            proxy_set_header Host ""fail.example.com"";
            proxy_pass_header x-accel-expires;
        }
    }

    server {
        listen unix:/run/nginx_2.sock;

        access_log /var/log/nginx/access_y.log;
        error_log  /var/log/nginx/error_2.log debug;

        location / {
            resolver 127.0.0.53;
            resolver_timeout 5s;
            proxy_http_version 1.1;
            proxy_pass ht tp://success.example.com/$request_uri;
            proxy_set_header Connection """";
            proxy_pass_header x-accel-expires;
        }
    }
    proxy_cache_path /var/data/cache/ levels=1:2 use_temp_path=off keys_zone=cache_all:365516 inactive=720m max_size=539553;

    server {
        listen 80;
        server_name  localhost;
        access_log /var/log/nginx/access_y.log;
        error_log  /var/log/nginx/error_x.log debug;

        proxy_cache_lock on;
        proxy_cache_lock_timeout 10s;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 1;
        proxy_set_header Connection """";
        proxy_http_version 1.1;
        proxy_next_upstream http_500 http_502 http_503 http_504 http_429 timeout;

        location / {
            proxy_cache cache_all;
            proxy_pass ht tp://upstream_http;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
}}}
"	soukichi@…
1.23.x	2395	`proxy_pass https://example` if `upstream example { server example.com; }` is defined, uses port `80`, not `443` per `https`	nginx-module	1.23.x	enhancement		new	2022-09-29T19:45:13Z	2022-09-29T20:18:45Z	"Hi all,

I would think that, in the absence of a port number in the `server` directive of an `upstream` block, when the upstream is used in a URL (as with the `proxy_pass` directive) the presence of `https` as the URL scheme should absolutely imply the port associated with HTTPS -- //443//. Instead, Nginx attempts to connect to the upstream on port //80//.

Is this an issue that should be addressed? Shouldn't Nginx use port 443 when the upstream is used in a URL with the `https` scheme, like in the value of `proxy_pass` et al?

The current mitigation, in my humble opinion, impairs readability and is unnecessarily confusing, not to mention may surprise the reader:

{{{
upstream example { server example.com:443; } # Reader: Isn't HTTPS 443 by default? Why is `:443` necessary here? (Nginx: because by default it's 80, my dear chap)
proxy_pass https://example;
}}}"	amn@…
1.23.x	2421	proxy_next_upstream_tries might be ignored with upstream keepalive	nginx-core	1.23.x	enhancement		accepted	2022-12-01T15:53:07Z	2024-06-21T09:53:17Z	"there is a bug with proxy_next_upstream_tries, which is ignored if Nginx is under load and the upstream server close the connection prematurely.
It seems to occur only when the connection is closed.
If too many requests produce this error, it is possible to bring all the upstreams down for a certain time.

Here is a reproducer on Docker, but the issue was also noticed on Debian 9/11, with nginx versions from 1.10 up to 1.23.0.

https://github.com/tibz7/nginx_next_upstream_retries_bug

"	fischerthiebaut@…
1.23.x	2449	"Allow using OpenSSL 3.0 ""provider"" API instead of deprecated ""engine"" API"	nginx-core	1.23.x	enhancement		new	2023-02-06T18:53:53Z	2023-12-12T23:19:59Z	"I would like to use hardware encryption to protect my webserver's TLS privkey from theft.  My server has a TPM 2.0 chip which supports this, but it's unnecessarily difficult to configure it in nginx.  I'm proposing a change to make the process more user-friendly.

For background: old versions of OpenSSL supported HSMs and hardware offloading through the ENGINE API.  In nginx this is enabled through the `ssl_engine` directive: hxxps://github.com/Infineon/optiga-tpm-cheatsheet#nginx--curl

(Please s/hxxps/https/ due to Trac spam filter)

In OpenSSL 3.0 the authors introduced a ""Provider API"" which is intended to replace the old ENGINE API.  It works in a similar way.  When using the CLI to sign or create TPM-backed keys, you add `-provider tpm2 -provider base` to the arguments: hxxps://github.com/Infineon/optiga-tpm-cheatsheet#pem-encoded-key-object-2

I'm running Ubuntu 22.04 LTS on this webserver.  This Linux distribution has a tpm2-openssl package ( hxxps://github.com/tpm2-software/tpm2-openssl ) which conforms to the new OpenSSL 3.0 Provider API.  The CLI examples on the optiga-tpm-cheatsheet work right out of the box, with no extra configuration.  This makes it quick and easy to set up hardware-backed keys, just by installing a single package.  For instance, anyone running this distro on an x86 Linux PC can do:

    sudo apt install tpm2-openssl
    openssl genpkey -provider tpm2 -algorithm EC -pkeyopt ec_paramgen_curve:P-384 -out testkey.priv
    echo ""test"" | openssl pkeyutl -provider tpm2 -provider base -digest sha256 -inkey testkey.priv -sign -rawin -hexdump

But unfortunately, at this time nginx only supports the ENGINE API, not the Provider API.  In order to use hardware backed keys with nginx, users would need to compile, install, and configure the legacy tpm2tss ENGINE implementation, and keep it up to date themselves without help from the Debian/Ubuntu package maintainers.

I believe that with a small tweak to nginx, it would be possible for users to specify e.g.

    ssl_provider tpm2,base

to tell OpenSSL 3.x to use the tpm2-openssl Provider to support hardware-backed private keys in nginx."	nickrbogdanov@…
1.24.x	2610	Nginx returning 502 gateway error repeatedly	other	1.24.x	defect		new	2024-02-26T12:06:16Z	2024-02-26T12:06:16Z	"We are facing an HTTP 502 gateway error repeatedly while rendering a page. When we are fetching the page to be rendered from our presentation app with Nginx as a proxy, the response status is 502, but when we directly hit the presentation app without Nginx we get a response of 200.

We have tried the below troubleshooting steps but to no avail:

1. chunked_transfer_encoding off; in nginx.conf.template
2. proxy_set_header        Transfer-Encoding     """"; in domains.conf.template and server.conf
3. Tried putting chunked_transfer_encoding off; and proxy_set_header        Transfer-Encoding     """"; in server.conf, domains.conf.template, and nginx.conf.template
4. Tried version Nginx: 1.24.0 with lua-nginx-module: 0.10.26
5. Tried Nginx: 1.25.1 with lua-nginx-module: 0.10.26"	varun.khulbey@…
1.24.x	2660	SSI is included twice when empty response is loaded from proxy cache	nginx-core	1.24.x	defect		new	2024-06-19T12:33:53Z	2025-01-14T17:22:52Z	"Nginx includes a stub for an SSI twice when an empty response from the upstream is loaded from proxy cache and there is at least one other SSI.

Consider the following example:

{{{
<html>
    <head><title>test</title></head>
    <body>
        <!--# include virtual=""/non_empty/"" -->

        <!--# block name=""fallback"" -->
            <div>I am a fallback</div>
        <!--# endblock -->

        <!-- START -->
        <!--# include virtual=""/empty/"" stub=""fallback"" -->
    </body>
</html>
}}}

* The response of `/non_empty/` will be 200 OK with non-empty response body
* The response of `/empty/` will be 200 OK with empty response body (`Content-Length: 0`)
* The response of `/empty/` is cached in a local `proxy_cache`

Then, the first response will be

{{{
<html>
    <head><title>test</title></head>
    <body>
        <div>i am not empty</div>



        <!-- START -->

            <div>I am a fallback</div>

    </body>
</html>
}}}

and the second response (using the proxy_cache) will be

{{{
<html>
    <head><title>test</title></head>
    <body>
        <div>i am not empty</div>



        <!-- START -->

            <div>I am a fallback</div>

            <div>I am a fallback</div>

    </body>
</html>
}}}

* Adding `wait=""yes""` to the include directives solves the issue.
* Removing the proxy_cache solves the issue.

The nginx debug log is attached. You can see that the stub is included two times.

{{{
2024/06/19 09:16:00 [debug] 7#7: *5 ssi stub output: ""/empty/?""
2024/06/19 09:16:00 [debug] 7#7: *5 ssi stub output: ""/empty/?""
}}}

The issue seems to be introduced with 1.23.4 and is still present in 1.27.0. Version 1.23.3 does not have this issue. 

The following project shows a complete example of the issue https://github.com/0x4a616e/nginx_ssi_issue/"	0x4a616e@…
1.24.x	2638	nginx fails to restart after upgrade or reinstall of nginx.org RPM package via dnf	documentation	1.24.x	defect		new	2024-05-04T19:28:27Z	2024-05-11T01:11:14Z	"This was observed during the upgrade from 1.24 to 1.26 on Rocky Linux 9.3, using the nginx.org nginx-stable repo.

During `dnf upgrade nginx` or `dnf reinstall nginx`, the command `/sbin/service nginx upgrade` is run by the postuninstall RPM script. It fails, as can be seen from dnf printing ""Binary upgrade failed, please check nginx's error.log"".

nginx's error.log contains this:

`[alert] 10140#10140: execve() failed while executing new binary process ""/usr/sbin/nginx"" (13: Permission denied)`
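
For reference, a sketch of a `%postun` scriptlet using `systemctl try-restart` (the actual contents of nginx.spec are an assumption; `$1 -ge 1` selects upgrades rather than erases):

{{{
%postun
if [ $1 -ge 1 ]; then
    /usr/bin/systemctl try-restart nginx.service >/dev/null 2>&1 || :
fi
}}}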

It seems to me that the %postun snippet in nginx.spec should use `systemctl try-restart nginx` instead?"	CEbhNwPM@…
1.24.x	2685	stub_status counter leak when killing old workers	nginx-core	1.24.x	defect		new	2024-09-02T21:51:55Z	2024-09-06T21:46:56Z	"Hello,

we've started killing (sending `SIGTERM`) ""old"" nginx workers (`nginx: worker process is shutting down`) as we have regular configuration changes and a lot of websocket connections.

Since we do this, the counters from `stub_status` are incorrect.

=== server status ===
{{{
> curl localhost/nginx_status
Active connections: 65369
server accepts handled requests
 1042173178 1042173178 5035465167
Reading: 0 Writing: 31968 Waiting: 5356
}}}

Adding up `Writing` and `Waiting` it's just `37324`. But even this count is too high. The correct number should be around this:

{{{
> ss | grep https | wc -l
7327
}}}

It's reproducible, e.g. using `echo.websocket.org`:

=== nginx configuration ===

{{{
map $http_upgrade $connection_upgrade {
	default upgrade;
	''      close;
}

server {
	listen 1234;
	location / {
		proxy_set_header  Host echo.websocket.org;
		proxy_ssl_server_name on;
		proxy_ssl_name echo.websocket.org;
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection $connection_upgrade;

		proxy_pass https://echo.websocket.org:443;
	}
	location = /basic_status {
		    stub_status;
	}
}
}}}

=== Test ===

1. Open 2 tabs in the browser with the URL: `http://localhost:1234/.ws`
2. Now there are 3 active connections (one connection is the request to `/basic_status`):

{{{
# curl localhost:1234/basic_status
Active connections: 3
server accepts handled requests
 4 4 4
Reading: 0 Writing: 3 Waiting: 0
}}}

3. After that, reload the nginx process and you can see a `nginx: worker process is shutting down` process:

{{{
# systemctl reload nginx

# ps aux | grep [n]ginx
root     2233850  0.0  0.0  55372  5652 ?        Ss   22:29   0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 2233851  0.0  0.0  55896  6044 ?        S    22:29   0:00 nginx: worker process is shutting down
www-data 2233940  0.0  0.0  55868  5400 ?        S    22:30   0:00 nginx: worker process
www-data 2233941  0.0  0.0  55868  5240 ?        S    22:30   0:00 nginx: worker process

# kill 2233851
}}}

4. After we've killed the old process, the websocket client in the browser reconnects. After that we get two additional connections in `stub_status` even though the old connections are gone:

{{{
# curl localhost:1234/basic_status
Active connections: 5
server accepts handled requests
 7 7 7
Reading: 0 Writing: 5 Waiting: 0
}}}"	mookie-@…
1.25.x	2619	Issues with HTTP/3 Configuration and listen 443 quic reuseport; Directive Affecting Server Block Functionality	documentation	1.25.x	defect		new	2024-03-15T13:41:32Z	2024-03-31T13:07:09Z	"Title: Routing and Configuration Issues with HTTP/3 and listen 443 quic reuseport; in Nginx 1.25.4

Environment:

Nginx Version: 1.25.4
Operating System: Ubuntu 22.04.4 LTS
OpenSSL Version: OpenSSL 3.0.2
Description:
I'm experiencing challenges with setting up HTTP/3 on multiple server blocks within the same Nginx instance. The primary issue revolves around the listen 443 quic reuseport; directive, which seems to be the only way to enable HTTP/3 support. Utilizing this directive across multiple server blocks leads to misrouting and other server blocks not functioning correctly under HTTP/3.

Problems:

Limited HTTP/3 Activation Method: The current setup requires listen 443 quic reuseport; for HTTP/3 activation. This limitation restricts how HTTP/3 can be enabled across various server blocks.

Single Block Restriction for QUIC: Incorporating listen 443 quic reuseport; in more than one server block results in configuration issues, limiting HTTP/3 support to a single server block.

Incorrect Subdomain Routing with QUIC: Activating listen 443 quic reuseport; in any server block leads to a significant bug. Nginx does not serve the requested subdomain; instead, it serves content from an incorrect server block, which suggests a routing or server block selection problem with QUIC enabled.

Steps to Reproduce:

Configure multiple server blocks on Nginx 1.25.4, with each serving different subdomains. Ensure each has SSL enabled and aims to support HTTP/3.
In the configurations, enable HTTP/3 by using the directives http3 on;, http2 on;, and listen 443 quic reuseport; for each server block.
Access the subdomains using a client that supports HTTP/3.
Expected Behavior:
Each request to a subdomain should correctly serve content from its corresponding server block configuration, with HTTP/3 enabled for clients that support it.

Actual Behavior:

Only one server block can successfully use listen 443 quic reuseport; without encountering configuration errors.
When listen 443 quic reuseport; is enabled in any server block, Nginx does not serve the content from the requested server block for specific subdomains. Instead, it serves content from a different block, indicating an issue with routing or server block selection when QUIC is enabled.

Configuration:
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name gallery.myapp.app;

    http3 on;
    http2 on;

    quic_retry on;
    ssl_early_data on;
    add_header Alt-Svc 'h3="":$server_port""; ma=86400';

    proxy_intercept_errors on;

    ssl_certificate /etc/letsencrypt/live/myapp.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.app/privkey.pem;

    location / {
        root /home/usr/Ecosystem-App/gallery-server/public/;
        index  index.html;
        try_files $uri $uri.html /index.html =404;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}

server {
    listen 443 ssl;
    server_name billing.myapp.app;

    http3 on;
    http2 on;

    quic_retry on;
    ssl_early_data on;
    add_header Alt-Svc 'h3="":$server_port""; ma=86400';

    proxy_intercept_errors on;

    ssl_certificate /etc/letsencrypt/live/myapp.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.app/privkey.pem;

    location / {
        root /home/usr/Ecosystem-App/billing-server/public/;
        index  index.html;
        try_files $uri $uri.html /index.html =404;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}

server {
    listen 443 ssl;
    server_name dev.myapp.app;

    http3 on;
    http2 on;

    quic_retry on;
    ssl_early_data on;
    add_header Alt-Svc 'h3="":$server_port""; ma=86400';

    proxy_intercept_errors on;

    ssl_certificate /etc/letsencrypt/live/myapp.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.app/privkey.pem;

    location / {
        root /home/usr/Ecosystem-App/main-server/public/;
        index  index.html;
        try_files $uri $uri.html /index.html =404;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}

server {
    listen 443 ssl;
    listen 443 quic reuseport;
    server_name myapp.app www.myapp.app app.myapp.app;

    http3 on;
    http2 on;

    quic_retry on;
    ssl_early_data on;
    add_header Alt-Svc 'h3="":$server_port""; ma=86400';

    proxy_intercept_errors on;

    ssl_certificate /etc/letsencrypt/live/myapp.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.app/privkey.pem;

    location / {
        root /home/usr/Ecosystem-App/main-server/public/;
        index  index.html;
        try_files $uri $uri.html /index.html =404;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}

server {
    listen 443 ssl;
    server_name *.myapp.app;

    http3 on;
    http2 on;

    quic_retry on;
    ssl_early_data on;
    add_header Alt-Svc 'h3="":$server_port""; ma=86400';

    proxy_intercept_errors on;

    ssl_certificate /etc/letsencrypt/live/myapp.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.app/privkey.pem;

    location / {
        root /home/usr/Ecosystem-App/d2c-server/public/;
        index  index.html;
        try_files $uri $uri.html /index.html =404;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}
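For context, `reuseport` may appear only once for a given address:port pair; a hedged sketch of the layout that might be expected to work (assuming the other virtual servers can share the QUIC socket by declaring a plain quic listener) is:

{{{
server {
    listen 443 ssl;
    listen 443 quic reuseport;   # reuseport on exactly one listen per port
    server_name myapp.app www.myapp.app app.myapp.app;
    http3 on;
    ...
}

server {
    listen 443 ssl;
    listen 443 quic;             # other virtual servers reuse the QUIC socket
    server_name gallery.myapp.app;
    http3 on;
    ...
}
}}}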

Additional Information:

No workaround has been found that enables HTTP/3 across multiple server blocks without triggering the described issues.
This significantly limits the use of HTTP/3 for enhanced performance across multiple services hosted on the same nginx instance.
Note: This report has been formatted considering WikiFormatting support for better readability."	desaisoftwaree@…
1.25.x	2656	The following signatures were invalid: EXPKEYSIG ABF5BD827BD9BF62 nginx signing key <signing-key@nginx.com>	documentation	1.25.x	defect		new	2024-06-15T23:32:13Z	2025-05-16T15:15:10Z	"Getting the message:
The following signatures were invalid: EXPKEYSIG ABF5BD827BD9BF62 nginx signing key <signing-key@nginx.com>

when calling $ sudo apt update

Tried this:
https://www.f5.com/company/blog/nginx/updating-gpg-key-nginx-products

Also tried this:
https://forum.hestiacp.com/t/fix-nginx-expired-key-expkeysig-abf5bd827bd9bf62/14754

but it did not change a thing.
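For reference, the variant I tried, following the nginx packages page (keyring path as documented for Debian/Ubuntu), was roughly:

{{{
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
gpg --dry-run --quiet --no-keyring --import --import-options import-show \
    /usr/share/keyrings/nginx-archive-keyring.gpg
sudo apt update
}}}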

"	aski71@…
1.25.x	2664	broken header while reading PROXY protocol in nginx stream with pass module	nginx-core	1.25.x	defect		new	2024-07-03T12:09:57Z	2024-07-03T18:24:27Z	"The provided configuration should accept a PROXY protocol header encapsulated in an SSL connection, but it doesn't work.
When the server block tries to read and parse the PROXY protocol header, it fails with the error:
broken header: ""A%��м�P���S�V]Җƨ���Tp����@o$HB����~F��r�K3��8^Q������W�M�"" while reading PROXY protocol, client: ..., server: unix:/run/nginx/uds_pp.sock

I tried prereading the stream with njs, and it starts with the correct PROXY protocol header.

I also noticed that if no SSL is involved (server unix:/run/nginx/uds.sock without ssl), everything works as expected.


relevant part of nginx.conf
{{{
stream {
    ...
    map $ssl_preread_server_name $us {
        ""example.com"" unix:/run/nginx/example.sock;
        default unix:/run/nginx/uds.sock;
    }
    server {
        listen 8443;
        ssl_preread on;
        pass $us;
    }
    server {
        listen unix:/run/nginx/uds.sock ssl;
        ssl_certificate /etc/nginx/self-signed.pem;
        ssl_certificate_key /etc/nginx/self-signed.key;
        ssl_session_timeout 1d;
        ssl_session_cache shared:MozSSL:10m;
        ssl_session_tickets off;
        ssl_protocols TLSv1.3;
        ssl_prefer_server_ciphers off;
        pass unix:/run/nginx/uds_pp.sock;
    }
    server {
        listen unix:/run/nginx/uds_pp.sock proxy_protocol;
        proxy_pass ...;
        ...
    }
}
}}}
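As a possible workaround sketch (untested): instead of chaining through `pass` to a second ssl server, the TLS-terminating server could add the PROXY protocol header itself with the stream proxy module's `proxy_protocol` directive:

{{{
server {
    listen unix:/run/nginx/uds.sock ssl;
    ssl_certificate /etc/nginx/self-signed.pem;
    ssl_certificate_key /etc/nginx/self-signed.key;
    proxy_protocol on;                       # emit a PROXY header upstream
    proxy_pass unix:/run/nginx/uds_pp.sock;
}
}}}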
"	sad3rasd@…
1.25.x	2671	Nginx Mail Proxy TLS Problem On Postfix	nginx-core	1.25.x	defect		new	2024-07-24T19:57:02Z	2024-09-04T15:48:12Z	"Hello,

I am making multiple Postfix servers addressable through a single address using an NGINX mail proxy.
However, I am experiencing SSL/TLS issues on ports 465 and 587.

My configuration and the log outputs are as follows.

How can I resolve this issue?

/var/log/mail.log:

Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: connect from unknown[my_proxy_server_ip]
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: smtp_stream_setup: maxtime=300 enable_deadline=0
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: name_mask: chunking
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: > unknown[my_proxy_server_ip]: 220 mail.mydomain.com ESMTP Postfix
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: watchdog_pat: 0x55b7b1a018f0
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: < unknown[my_proxy_server_ip]: EHLO mail.mydomain.com
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: dict_pcre_lookup: /etc/postfix/command_filter.pcre: EHLO mail.mydomain.com
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: discarding EHLO keywords: CHUNKING
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: match_list_match: unknown: no match
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: match_list_match: my_proxy_server_ip: no match
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: > unknown[my_proxy_server_ip]: 250-mail.mydomain.com
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: > unknown[my_proxy_server_ip]: 250-PIPELINING
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: > unknown[my_proxy_server_ip]: 250-SIZE 15728640
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: > unknown[my_proxy_server_ip]: 250-ETRN
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: > unknown[my_proxy_server_ip]: 250-STARTTLS
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: > unknown[my_proxy_server_ip]: 250-XCLIENT NAME ADDR PROTO HELO REVERSE_NAME PORT LOGIN DESTADDR DESTPORT
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: > unknown[my_proxy_server_ip]: 250-ENHANCEDSTATUSCODES
Jul 12 22:05:34 mail postfix/submission/smtpd[54599]: > unknown[my_proxy_server_ip]: 250-8BITMIME
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: > unknown[my_proxy_server_ip]: 250 DSN
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: watchdog_pat: 0x55b7b1a018f0
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: < unknown[my_proxy_server_ip]: XCLIENT ADDR=my_agent_ip NAME=[UNAVAILABLE]
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: dict_pcre_lookup: /etc/postfix/command_filter.pcre: XCLIENT ADDR=my_agent_ip NAME=[UNAVAILABLE]
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostname: smtpd_authorized_xclient_hosts: unknown ~? my_proxy_server_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostaddr: smtpd_authorized_xclient_hosts: my_agent_ip ~? my_proxy_server_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_list_match: unknown: no match
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_list_match: my_agent_ip: no match
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostname: smtpd_client_event_limit_exceptions: unknown ~? 127.0.0.1
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostaddr: smtpd_client_event_limit_exceptions: my_agent_ip ~? 127.0.0.1
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostname: smtpd_client_event_limit_exceptions: unknown ~? [::1]
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostaddr: smtpd_client_event_limit_exceptions: my_agent_ip ~? [::1]
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostname: smtpd_client_event_limit_exceptions: unknown ~? my_proxy_server_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostaddr: smtpd_client_event_limit_exceptions: my_agent_ip ~? my_proxy_server_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostname: smtpd_client_event_limit_exceptions: unknown ~? my_home_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostaddr: smtpd_client_event_limit_exceptions: my_agent_ip ~? my_home_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostname: smtpd_client_event_limit_exceptions: unknown ~? my_agent_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostaddr: smtpd_client_event_limit_exceptions: my_agent_ip ~? my_agent_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: name_mask: chunking
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: > unknown[my_agent_ip]: 220 mail.mydomain.com ESMTP Postfix
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: watchdog_pat: 0x55b7b1a018f0
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: < unknown[my_agent_ip]: EHLO mail.mydomain.com
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: dict_pcre_lookup: /etc/postfix/command_filter.pcre: EHLO mail.mydomain.com
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: discarding EHLO keywords: CHUNKING
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_list_match: unknown: no match
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_list_match: my_agent_ip: no match
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: > unknown[my_agent_ip]: 250-mail.mydomain.com
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: > unknown[my_agent_ip]: 250-PIPELINING
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: > unknown[my_agent_ip]: 250-SIZE 15728640
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: > unknown[my_agent_ip]: 250-ETRN
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: > unknown[my_agent_ip]: 250-STARTTLS
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: > unknown[my_agent_ip]: 250-ENHANCEDSTATUSCODES
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: > unknown[my_agent_ip]: 250-8BITMIME
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: > unknown[my_agent_ip]: 250 DSN
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: watchdog_pat: 0x55b7b1a018f0
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: < unknown[my_agent_ip]: AUTH PLAIN 77+9c2VuZGVybWFpbEBteWRvbWFpbi5jb23vv71hc2QxMjMzMjEtLQ==
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: dict_pcre_lookup: /etc/postfix/command_filter.pcre: AUTH PLAIN 77+9c2VuZGVybWFpbEBteWRvbWFpbi5jb23vv71hc2QxMjMzMjEtLQ==
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: > unknown[my_agent_ip]: 530 5.7.0 Must issue a STARTTLS command first
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: watchdog_pat: 0x55b7b1a018f0
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: smtp_get: EOF
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostname: smtpd_client_event_limit_exceptions: unknown ~? 127.0.0.1
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostaddr: smtpd_client_event_limit_exceptions: my_agent_ip ~? 127.0.0.1
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostname: smtpd_client_event_limit_exceptions: unknown ~? [::1]
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostaddr: smtpd_client_event_limit_exceptions: my_agent_ip ~? [::1]
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostname: smtpd_client_event_limit_exceptions: unknown ~? my_proxy_server_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostaddr: smtpd_client_event_limit_exceptions: my_agent_ip ~? my_proxy_server_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostname: smtpd_client_event_limit_exceptions: unknown ~? my_home_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostaddr: smtpd_client_event_limit_exceptions: my_agent_ip ~? my_home_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostname: smtpd_client_event_limit_exceptions: unknown ~? my_agent_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: match_hostaddr: smtpd_client_event_limit_exceptions: my_agent_ip ~? my_agent_ip
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: lost connection after EHLO from unknown[my_agent_ip]
Jul 12 22:05:35 mail postfix/submission/smtpd[54599]: disconnect from unknown[my_agent_ip] ehlo=2 xclient=0/1 auth=0/1 commands=2/4


nginx mail proxy configuration:

{{{
mail {
    server_name mail.mydomain.com;
    auth_http localhost/auth/auth.php;
    pop3_capabilities ""TOP"" ""USER"" ""UIDL"" ""PIPELINING"" ""SASL"";
    imap_capabilities ""IMAP4rev1"" ""UIDPLUS"" ""IDLE"" ""LITERAL+"" ""QUOTA"";
    smtp_capabilities ""SIZE 53477376"" ""8BITMIME"" ""ENHANCEDSTATUSCODES"" ""PIPELINING"" ""DSN"";

    proxy_smtp_auth on;
    proxy on;
    proxy_pass_error_message on;
    proxy_timeout 300s;

    starttls on;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!ADH:!MD5:@STRENGTH;
    ssl_session_cache shared:TLSSL:16m;
    ssl_session_timeout 10m;
    ssl_certificate /etc/letsencrypt/live/mail.mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mail.mydomain.com/privkey.pem;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;

    server {
        listen 25;
        listen [::]:25;
        protocol smtp;
        smtp_auth none;
        starttls only;
        auth_http_header PORT 25;
    }

    server {
        listen 465 ssl;
        listen [::]:465 ssl;
        protocol smtp;
        smtp_auth login plain;
        auth_http_header PORT 465;
    }

    server {
        listen 587;
        listen [::]:587;
        protocol smtp;
        smtp_auth login plain;
        starttls only;
        auth_http_header PORT 587;
    }

    server {
        listen 110;
        listen [::]:110;
        protocol pop3;
        starttls only;
    }

    server {
        listen 995 ssl;
        listen [::]:995 ssl;
        protocol pop3;
    }

    server {
        listen 143;
        listen [::]:143;
        protocol imap;
        starttls only;
    }

    server {
        listen 993 ssl;
        listen [::]:993 ssl;
        protocol imap;
    }
}
}}}
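The ""Must issue a STARTTLS command first"" rejection in the log suggests Postfix requires TLS before AUTH, while the nginx mail proxy speaks plain SMTP on its hop to the backend. A sketch of a possible Postfix-side relaxation (service name and option placement assumed, untested):

{{{
# /etc/postfix/master.cf — submission service (sketch)
submission inet n - y - - smtpd
    -o smtpd_tls_security_level=may
    -o smtpd_tls_auth_only=no
}}}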


auth.php code:

{{{
header(""HTTP/1.0 200 OK"");
header(""Auth-Status: OK"");
header(""Auth-Server: $server"");
header(""Auth-Port: 587"");
exit();
}}}"	enescantas@…
1.25.x	2600	https://nginx.org/packages down?	nginx-package	1.25.x	defect		reopened	2024-02-09T12:20:29Z	2024-04-18T07:14:25Z	"https://nginx.org/packages currently results in a 404 error and if one follows the steps on https://nginx.org/en/linux_packages.html#Ubuntu, the `apt update` step yields the following error:

{{{
E: The repository 'https://nginx.org/packages/ubuntu jammy Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
}}}"	silverwind@…
1.25.x	2554	Some of the requests getting stuck after reload.	http/3	1.25.x	defect		new	2023-10-31T09:55:18Z	2023-10-31T11:07:00Z	"I suspected these requests were routed to shutting-down workers, but it keeps happening even hours after the last old worker is gone; only a restart can fix it.

Oracle Linux 8.8, tested on 4.18.0-477.27.1.el8_8.x86_64 and 5.15.0-105.125.6.2.2.el8uek.x86_64 with quic_bpf on; (the bpf rules are created and verified with bpftool prog), with no difference; the same happens with the official nginx build from the repositories.

Packets are correctly received according to tcpdump, but there is no entry in the nginx error log even at debug verbosity.

I tried removing everything unnecessary from the configuration and experimenting with the http/3 module options, but without success.
1.25.x	2579	OCSP stapling vs. $ssl_server_name	other	1.25.x	defect		reopened	2023-12-29T15:16:38Z	2024-01-10T20:12:50Z	"Sorry if you prefer to be contacted via discord first, but I never got the invitation link there...

I was investigating why OCSP stapling doesn't work in my configuration, and while experimenting I discovered that if you use the statements

    ssl_certificate /etc/letsencrypt/live/$ssl_server_name/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$ssl_server_name/privkey.pem;

instead of the real name of the server, OCSP stapling via 

    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/some.real.dom/chain.pem;
 
is ignored.

During my experiments I also discovered that using $ssl_server_name within ssl_trusted_certificate causes an error. As I am using ""generic"" configurations for a whole bunch of websites, I would definitely consider support for it a feature.
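For comparison, stapling does work when the certificate path is literal; a sketch of that shape (paths as above) is:

{{{
server {
    server_name some.real.dom;
    ssl_certificate     /etc/letsencrypt/live/some.real.dom/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/some.real.dom/privkey.pem;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/some.real.dom/chain.pem;
}
}}}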

Not sure what component this is related to, please adjust if necessary."	joachimlindenberg@…
1.25.x	2582	HTTP3 working with curl but not in Browser	http/3	1.25.x	defect		new	2024-01-07T16:34:11Z	2024-04-24T09:10:49Z	"Hello,

I configured HTTP/3 for nginx 1.25. It works when I call my website via curl, but https://http3check.net/ reports that no connection could be established.

This is a clip of my conf file:
{{{
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    listen 443 quic reuseport;
    server_name ...

    http2 on;
    http3 on;
    http3_hq on;
    quic_retry on;
    ssl_early_data on;
    quic_gso on;
    quic_host_key /etc/ssl/..._2023.key;
    add_header X-protocol $server_protocol always;

    ...
}
}}}
The curl response shows:
{{{
HTTP/3 200
x-protocol: HTTP/3.0
}}}

But the browser check on https://http3check.net/ says that it's not working.
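One difference between the two clients worth noting: curl --http3 forces QUIC directly, while browsers typically discover HTTP/3 through an Alt-Svc header over TCP and also need UDP port 443 reachable. A hedged sketch of what the server block might additionally advertise (header value assumed):

{{{
listen [::]:443 quic reuseport;                    # IPv6 QUIC listener too
add_header Alt-Svc 'h3="":443""; ma=86400' always;   # advertise HTTP/3 to browsers
}}}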

Any idea for that?

Thank you!
Florian"	florian.s.senf@…
1.25.x	2626	cannot use mTLS on nginx via http3 protocol	http/3	1.25.x	defect		new	2024-04-04T15:39:08Z	2024-04-24T09:10:49Z	"I can't use client certificates over the HTTP/3 protocol.
Whenever I enable the quic protocol in the nginx config, the $ssl_client_verify nginx variable always shows false.

When I disable http3 and revert to http2, client authentication works without errors.

The HTTP/3 tests were done with Mozilla Firefox 124 and a Docker-built curl with HTTP/3 support; example command line: docker run -it -v ./:/testcert --rm ymuski/curl-http3 curl -vvv -I --http3 --cert-type P12 --cert ""/testcert/usercert.pfx:mypass"" https://<mysite.com>

Without mTLS enabled, HTTP/3 works normally."	terem42@…
1.25.x	2648	Nginx will disable ocsp stapling over all domains even if one is bogus	documentation	1.25.x	defect		new	2024-06-09T14:13:03Z	2024-06-09T14:13:03Z	"Hi,

I have configured nginx for SNI-based vhosting for several known subdomains. The default certificate is not meant to be used, so it is set to a bogus, snakeoil certificate.


When starting, nginx complains about the snakeoil certificate being incompatible with OCSP stapling and then proceeds to disable OCSP stapling for all domains, including ones with valid certificates.

Jun 09 13:38:11 dev-redacted-gil nginx[1124]: nginx: [warn] ""ssl_stapling"" ignored, issuer certificate not found for certificate ""/etc/ssl/certs/ssl-cert-snakeoil.pem""

Expected behaviour: OCSP stapling should be disabled only for the invalid certificate.
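A possible per-vhost sketch (paths are the Debian defaults, untested) that might keep the warning from affecting other servers, since ssl_stapling is settable per server block:

{{{
server {
    listen 443 ssl default_server;
    ssl_certificate     /etc/ssl/certs/ssl-cert-snakeoil.pem;
    ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
    ssl_stapling off;   # don't even try stapling for the bogus certificate
}
}}}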


steps to reproduce:

1. create an nginx configuration with sni vhosting.
2. add a default_server snakeoil SSL configuration
3. add a valid vhost with valid TLS certificates
4. turn on OCSP stapling"	bahat.gil@…
1.25.x	2649	"ngx_mail_ssl_module ""starttls only"" issue if without smtp authentication"	nginx-module	1.25.x	defect		new	2024-06-09T16:46:46Z	2024-06-16T03:24:06Z	"Hi,

With ""starttls only"" set in the mail config block,

I tested sending email to my SMTP server (via the nginx proxy) from mail.qq.com; in this process, no SMTP authentication is required to log in to my SMTP server.

The issue is that nginx did not force a TLS upgrade for the SMTP connection.

# nginx.conf
{{{
mail {
    server_name     smtp.xxx.com;
    auth_http       http://127.0.0.1:80/mail/auth;
    smtp_auth       none;
    ssl_protocols   TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers     HIGH:!aNULL:!MD5;
    ssl_certificate        /etc/nginx/cert/smtp.xxx.com.pem;
    ssl_certificate_key    /etc/nginx/cert/smtp.xxx.com.key;
  
    server {
        listen 25; 
        protocol smtp;
        starttls only; 
    }   
}
}}}


I checked in ngx_mail_ssl_module source code at here: 

https://github.com/nginx/nginx/blob/master/src/mail/ngx_mail_smtp_handler.c#L664

it checks and forces a TLS connection only in the ngx_mail_smtp_auth() stage.
For the case where there is no SMTP username/password authentication stage, nginx still allows connecting without TLS.
Do you think this is a defect? And could we add one more call to ngx_mail_starttls_only() in the ngx_mail_smtp_mail() or ngx_mail_smtp_rcpt() stage?

{{{
ngx_int_t
ngx_mail_auth_parse(ngx_mail_session_t *s, ngx_connection_t *c)
{
    ngx_str_t                 *arg;

#if (NGX_MAIL_SSL)
    if (ngx_mail_starttls_only(s, c)) {
        return NGX_MAIL_PARSE_INVALID_COMMAND;
    }
#endif
}}}

"	zeroleo12345
1.25.x	2674	Unable to set `proxy_max_temp_file_size` greater than 1024m	nginx-module	1.25.x	defect		new	2024-07-31T16:14:40Z	2024-08-27T16:23:25Z	"I am using NGINX on Windows as a reverse proxy to a library endpoint. I want to increase the value in the `proxy_max_temp_file_size` directive to serve some of my larger libraries.

However, setting `proxy_max_temp_file_size` to a value above 1024m gives me the following error:
```
nginx: [emerg] ""proxy_max_temp_file_size"" directive invalid value in path/to/nginx.conf:178
```

Here is the full nginx.conf
```nginx
#user  nobody;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    # upstream endpoint
    upstream pypi-gpu {
        server localhost:9500;
    }

    server {
        listen 9000 ssl;
        server_name pythonlibs.mydomain.com;

        ssl_certificate ""path/to/cert.crt"";
        ssl_certificate_key ""path/to/private.key"";
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;

        location / {
            proxy_max_temp_file_size 3072m;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://pypi-gpu;
        }
    }
}
```

Note that setting `proxy_max_temp_file_size 0` allows me to serve large files but I'd prefer to keep this bounded if possible.

nginx version: nginx/1.27.0"	PyroGenesis@…
1.25.x	2680	Error Signal 11 on reload if any dynamic module is loaded	nginx-core	1.25.x	defect		new	2024-08-14T22:35:38Z	2024-11-13T02:04:26Z	"I get a pair of signal 11 errors every time nginx is closed or reloaded:
2024/08/14 16:20:41 [alert] 6935#6935: worker process 6936 exited on signal 11 (core dumped)
2024/08/14 16:20:42 [alert] 6935#6935: worker process 6937 exited on signal 11 (core dumped)
2024/08/14 16:21:06 [alert] 7352#7352: worker process 7353 exited on signal 11 (core dumped)
2024/08/14 16:21:06 [alert] 7352#7352: worker process 7354 exited on signal 11 (core dumped)

This only happens when a dynamic module is loaded, regardless of whether it is actually called or used. The code used to load it in nginx.conf:
load_module /etc/nginx/modules/ngx_http_hello_world_module.so;

In this particular test I used module code that I knew should be error-free: the empty gif module with all ""empty_gif"" occurrences replaced with ""hello_world"". When hello_world is called, it properly returns the empty gif.
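One thing that may be worth ruling out: a dynamic module must be built against the same nginx version and ./configure flags as the running binary (or at least with --with-compat). A sketch of the rebuild (module path assumed):

{{{
# from the source tree matching the running nginx binary
./configure --with-compat --add-dynamic-module=/path/to/ngx_http_hello_world_module
make modules
}}}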

"	jeffglitchless@…
1.25.x	2580	Full native WebDAV support	nginx-module	1.25.x	enhancement		new	2024-01-06T13:27:40Z	2024-01-06T13:27:40Z	"WebDAV support is one feature where Nginx falls behind Apache. Not only does the native module implement just five methods, but even the supplementary nginx-dav-ext module fails to implement the important `PROPPATCH` method, which the native Windows WebDAV client requires to function.

I propose all WebDAV methods are included in ngx_http_dav_module, giving Nginx full WebDAV support and eliminating the need for the nginx-dav-ext module."	lzqhwo@…
1.25.x	2653	MIME type image/jpeg for filename extension .jfif	nginx-core	1.25.x	enhancement		new	2024-06-14T09:43:37Z	2024-12-10T02:07:03Z	"Can you please, add jfif filename extension to default nginx conf/mime.types ?

{{{#!diff
-   image/jpeg   jpeg jpg;
+   image/jpeg   jpeg jpg jfif;
}}}"	Hennadii Makhomed
1.25.x	2526	wrong gpg key for nginx-stable repo	documentation	1.25.x	defect		new	2023-08-03T11:45:44Z	2023-09-06T10:56:09Z	"Hi

The GPG key is invalid for the nginx stable repo; it works only for nginx-mainline.
Documentation: http://nginx.org/en/linux_packages.html#SLES

uname -a
Linux nginx02 5.3.18-150300.59.115-default #1 SMP Fri Mar 10 07:48:20 UTC 2023 (0398b56) x86_64 x86_64 x86_64 GNU/Linux

{{{
xxx:/home/ccloud # zypper addrepo --gpgcheck --type yum --refresh --check \
>     'http://nginx.org/packages/mainline/sles/$releasever_major' nginx-mainline
Warning: Legacy commandline option --type detected. This option is ignored.
Adding repository 'nginx-mainline' ...............................................................................................................................................[done]Repository 'nginx-mainline' successfully added

URI         : http://nginx.org/packages/mainline/sles/15
Enabled     : Yes
GPG Check   : Yes
Autorefresh : Yes
Priority    : 99 (default priority)

Repository priorities in effect:                                                                                                                        (See 'zypper lr -P' for details)
      98 (raised priority)  :  1 repository
      99 (default priority) : 24 repositories
xxx:/home/ccloud # curl -o /tmp/nginx_signing.key https://nginx.org/keys/nginx_signing.key
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1561  100  1561    0     0   8672      0 --:--:-- --:--:-- --:--:--  8672



xxx:/home/ccloud # **gpg --with-fingerprint** /tmp/nginx_signing.key
**gpg: WARNING: no command supplied.  Trying to guess what you mean ...**
pub   rsa2048 2011-08-19 [SC] [expires: 2024-06-14]
uid           nginx signing key <signing-key@nginx.com>
xxx:/home/ccloud #
}}}
"	nadkamo@…
1.25.x	2528	nginx reload with quic reuseport: quic packet rejected rc:-1	http/3	1.25.x	defect		new	2023-08-10T03:00:03Z	2024-02-23T20:46:53Z	"I observed client timeout issues in HTTP/3 requests when nginx recycles its workers (`nginx -s reload`).
== Requirements to reproduce the issue

1. You need a worker in `shutting down` status;
   a. We need a big dummy file on the server;
   b. We will use a slow `curl` request to stall one worker;
2. A SIGHUP signal (`nginx -s reload`);
3. An http3 client, see [#point1 (This is the http3 client that I used)];

== Debug log evidence
{{{
2023/08/09 01:57:58 [debug] 22714#0: quic recvmsg on 0.0.0.0:443, ready: 0
2023/08/09 01:57:58 [debug] 22714#0: posix_memalign: 000055C8A015DFC0:512 @16
2023/08/09 01:57:58 [debug] 22714#0: malloc: 000055C8A015E1D0:1252
2023/08/09 01:57:58 [debug] 22714#0: *52 quic recvmsg: 34.95.175.91:52394 fd:10 n:1252
2023/08/09 01:57:58 [debug] 22714#0: *52 http3 init session
2023/08/09 01:57:58 [debug] 22714#0: *52 posix_memalign: 000055C8A015E6C0:512 @16
2023/08/09 01:57:58 [debug] 22714#0: *52 add cleanup: 000055C8A015E848
2023/08/09 01:57:58 [debug] 22714#0: *52 event timer add: 10: 60000:5817468822
2023/08/09 01:57:58 [debug] 22714#0: *52 quic run
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet rx long flags:c3 version:1
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet rx init len:1219
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet rx dcid len:19 206bd91956f15bbff8467934f240527fce6522
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet rx scid len:4 65133304
2023/08/09 01:57:58 [debug] 22714#0: *52 quic address validation token len:0 
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet done rc:-1 level:init decr:0 pn:0 perr:0
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet rejected rc:-1, cleanup connection
2023/08/09 01:57:58 [debug] 22714#0: *52 reusable connection: 0
2023/08/09 01:57:58 [debug] 22714#0: *52 run cleanup: 000055C8A015E848
2023/08/09 01:57:58 [debug] 22714#0: *52 event timer del: -1: 5817468822
2023/08/09 01:57:58 [debug] 22714#0: *52 free: 000055C8A015E1D0
2023/08/09 01:57:58 [debug] 22714#0: *52 free: 000055C8A015DFC0, unused: 16
2023/08/09 01:57:58 [debug] 22714#0: *52 free: 000055C8A015E6C0, unused: 96

}}}
"	murilo.b.andrade@…
1.25.x	2529	Can the pto timeout check be removed from ngx_quic_pto_handler?	documentation	1.25.x	defect		new	2023-08-10T09:38:33Z	2023-08-10T12:33:55Z	"I wonder whether the code in ''ngx_quic_pto_handler'' can be removed.

{{{
if ((ngx_msec_int_t) (f->last + (ngx_quic_pto(c, ctx) << qc->pto_count)
                              - now) > 0)
        {
            continue;
        }
}}}
I notice that ''ngx_quic_pto_handler'' is only set by ''ngx_quic_set_lost_timer'', and
''ngx_quic_set_lost_timer'' is called every time packets are acked (''ngx_quic_detect_lost'') or sent (''ngx_quic_output'').

So the timer would be updated constantly. I think under this condition the ''continue'' would never be executed, which means that when the pto timer fires, we can directly send a probe and don't need to check whether the pto time has expired.

I don't know if my understanding is wrong, please correct me if so."	wojxhr@…
1.25.x	2530	ACK of packet containing PATH_RESPONSE frame can't update rtt state	nginx-core	1.25.x	defect		accepted	2023-08-14T06:28:38Z	2023-08-14T09:44:05Z	The packet sent by calling ngx_quic_frame_sendto will not be inserted into qc->sent. This causes the rtt state to miss some updates, because ngx_quic_handle_ack_frame_range can't know the send_time of max_pn.	pl080516@…
1.25.x	2548	Worker infinite loop in ngx_http_do_read_client_request_body()	documentation	1.25.x	defect		new	2023-09-18T13:22:59Z	2023-09-18T13:22:59Z	"Consider the following location configuration:

{{{
        location = @grpcweb {
                rewrite ^ $orig_grpc_uri break;
                proxy_pass http://127.0.0.1:82;
                proxy_redirect off;
                proxy_buffering off;
                proxy_request_buffering off;
                allow all;
                proxy_http_version 1.1;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Forwarded-Proto $scheme;
                client_body_buffer_size 0;
                client_max_body_size 0;
                proxy_max_temp_file_size 0;
                proxy_read_timeout 18000;
                proxy_send_timeout 18000;
                gzip off;
        }
}}}

I figured out that in my case ngx_http_do_read_client_request_body() is called with a zero-sized buffer (rb->buf->start == rb->buf->end) and as a result it loops endlessly, logging the ""http client request body rest 3"" message thousands of times a second.

The zero-sized buffer is created earlier in ngx_http_read_client_request_body():

{{{
    /* TODO: honor r->request_body_in_single_buf */

    if (!r->headers_in.chunked && rb->rest < size) {
        size = (ssize_t) rb->rest;

        if (r->request_body_in_single_buf) {
            size += preread;
        }

        if (size == 0) {
            size++;
        }

    } else {
        size = clcf->client_body_buffer_size; // zero-sized buffer created here
    }

    rb->buf = ngx_create_temp_buf(r->pool, size);
    if (rb->buf == NULL) {
        rc = NGX_HTTP_INTERNAL_SERVER_ERROR;
        goto done;
    }
    
    r->read_event_handler = ngx_http_read_client_request_body_handler;
    r->write_event_handler = ngx_http_request_empty_handler;

    rc = ngx_http_do_read_client_request_body(r);
}}}

So, as I understand it, in my case the whole request didn't fit into r->headers_in by 3 bytes, and as a result rb->rest was 3.

When I removed ""client_body_buffer_size 0;"" from configuration the bug disappeared.

I blindly copied the set of proxied location directives from some page on stackoverflow and didn't realize that ""client_body_buffer_size 0;"" would cause an endless loop in a worker process.
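For anyone hitting the same symptom before a fix lands, the workaround is simply a non-zero buffer size. A minimal sketch of the relevant part of the location (the sizes here are illustrative, not my exact values):

{{{
location / {
    proxy_pass http://127.0.0.1:82;
    proxy_request_buffering off;
    # any non-zero value avoids the zero-sized buffer path
    client_body_buffer_size 16k;
    client_max_body_size 0;
}
}}}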

"	anight@…
1.25.x	2604	Error handling when streaming	documentation	1.25.x	defect		new	2024-02-14T17:38:44Z	2024-02-19T16:44:22Z	"Hello!

I have an application which streams data using http1.1 and chunked transfer-encoding.

nginx is used as an http1.1/http2 proxy; http1.1 with proxy_buffering off is used to communicate with the upstream.
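A minimal proxy configuration along these lines reproduces the setup (the listen port and upstream address are illustrative assumptions, not the exact config used):

{{{
server {
    listen 8082;
    http2 on;

    location / {
        proxy_http_version 1.1;
        proxy_buffering off;
        # assumed upstream address, for illustration only
        proxy_pass http://127.0.0.1:9000;
    }
}
}}}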

When the application aborts streaming, the expected behavior is that the client gets an error.

That works well with http1.1

{{{
$ curl --http1.1 -i localhost:8083
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Wed, 14 Feb 2024 17:25:56 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive

Random Line 1: 2y1a0XrIc5
Random Line 2: KwIrtX5B5h
[...]
Random Line 442: FDo6cv5f4h
Random Line 443:curl: (18) transfer closed with outstanding read data remaining
}}}

but behaves unpredictably with http2: curl may hang, exit with a zero code, or exit with an error

{{{
$ curl --http2-prior-knowledge -i localhost:8082
HTTP/2 200
server: nginx/1.25.3
date: Wed, 14 Feb 2024 17:28:12 GMT
content-type: text/plain; charset=utf-8

Random Line 1: 9L02Z5qokm
Random Line 2: gR8emalzhl
[...]
Random Line 441: CbmbnKY2EC
Random Line 442: JsiYbH9Bwp
curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
}}}

Also tested with haproxy 2.9.4, which behaves consistently:
{{{
~$ curl --http2-prior-knowledge -i localhost:8081
HTTP/2 200
date: Wed, 14 Feb 2024 17:30:54 GMT
content-type: text/plain; charset=utf-8

Random Line 1: CclkQfagoE
Random Line 2: 9TPlR0U51t
[...]
Random Line 441: JRMeZwNvoW
Random Line 442: 6m0PZcOe8C
curl: (92) HTTP/2 stream 0 was not closed cleanly: CANCEL (err 8)
}}}

Earlier versions of haproxy have issues too.

Please fix if possible"	inbox.artembokhan.com@…
1.25.x	2607	How to link custom library to nginx	documentation	1.25.x	defect		new	2024-02-20T10:33:36Z	2024-03-02T17:14:18Z	I am using the openresty package for my web server needs. While trying to link a custom library (xyz.so) to nginx, I am facing a linker error (undefined reference) for xyz.so. I tried the --add-module option, which failed; with --with-ld-opt I also could not add the library. Can you please help me with this?	gayathri.shirahatti@…
1.25.x	2609	Custom 413 Error Page Not Displayed for Oversized Uploads	nginx-core	1.25.x	defect		new	2024-02-25T12:37:34Z	2024-04-17T12:36:32Z	"I have configured Nginx to display a custom 413 error page when the client uploads a file that exceeds the allowed size limit. Despite the configuration, Nginx defaults to its built-in error page instead of displaying the specified custom error page.

Here is the relevant part of my Nginx configuration:


{{{
http {
 ...
 client_max_body_size 		15m; 
 ...
 server {
  ...
  error_page 413 /custom_413.html;
  location = /custom_413.html {
    root /home/xxx/error_page;
    internal;
  }
 }
}
}}}

Expected Behavior: When a file larger than the allowed size is uploaded, Nginx should display the custom error page located at /home/xxx/error_page/custom_413.html.

Actual Behavior: Nginx displays its default error page for 413 errors, ignoring the custom error page configuration.
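One way to rule out the browser rendering its own error page is to inspect the raw response with curl (the upload path and test file below are hypothetical):

{{{
dd if=/dev/zero of=big.bin bs=1M count=20
curl -i --data-binary @big.bin http://localhost/upload
}}}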

This issue persists even after ensuring that the client_max_body_size directive is properly set and the custom error page exists at the specified location."	Manager24live@…
1.25.x	2613	How to make openresty to wait on dependent library to be built before openresty	documentation	1.25.x	defect		new	2024-03-01T15:44:39Z	2024-03-02T16:45:41Z	"Hello Team,

I am using the openresty package for my web server needs. I have an xyz.so library which, by default, gets built at a later stage. I want to add a dependency so that the openresty makefile looks for xyz.so and proceeds with the openresty build only once xyz.so has been built.

Is there a way to achieve this in the openresty makefile? Can you please help me out here?

Also, if there is an issue in the openresty makefile, which log do I need to check for build errors? (I cannot find autoconf.err in some cases, and the build reports success even though it is not successful.)"	gayathri.shirahatti@…
1.25.x	2620	IPv6 with HTTP/3 / QUIC doesn't work	http/3	1.25.x	defect		new	2024-03-18T23:40:16Z	2024-04-24T09:10:49Z	"**Listener Config**
{{{
# HTTP/3 / QUIC Listener
listen 443 quic;
# HTTP/2 Fallback
listen 443 ssl;

listen [::]:443 quic;
# HTTP/2 Fallback
listen [::]:443 ssl;

http2 on;
http3 on;
http3_hq on;

# SSL Settings
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;
quic_retry on;
quic_gso on;

# enable 0-RTT
ssl_early_data on;
ssl_session_tickets off;

# Redirect HTTP/3
add_header alt-svc 'h3="":$server_port""; ma=86400, h3-29="":$server_port""; ma=86400, h3-28="":$server_port""; ma=86400, h3-27="":$server_port""; ma=86400';
add_header Strict-Transport-Security
""max-age=31536000; includeSubDomains""
always;
add_header quic-status $http3 always;
add_header x-quic 'h3' always;

}}}

**Netstat**

{{{
[root@router ~]# netstat -tulpen | grep nginx
tcp        0      0 0.0.0.0:853             0.0.0.0:*               LISTEN      0          22839291   148259/nginx: maste
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      0          22839295   148259/nginx: maste
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      0          22839293   148259/nginx: maste
tcp6       0      0 :::853                  :::*                    LISTEN      0          22839292   148259/nginx: maste
tcp6       0      0 :::443                  :::*                    LISTEN      0          22839296   148259/nginx: maste
tcp6       0      0 :::80                   :::*                    LISTEN      0          22839294   148259/nginx: maste
udp        0      0 0.0.0.0:443             0.0.0.0:*                           0          22839297   148259/nginx: maste
udp6       0      0 :::443                  :::*                                0          22839298   148259/nginx: maste

}}}


HTTP/3 works fine when the name resolves to IPv4 but not with IPv6: HTTP/2 answers the client (Google Chrome) instead.

What am I doing wrong? I use the latest source code based on the last release, with
{{{
OpenSSL 3.2.1 30 Jan 2024 (Library: OpenSSL 3.2.1 30 Jan 2024)
}}}
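Forcing the address family in curl makes the difference easy to reproduce (assuming a curl built with HTTP/3 support; the hostname is a placeholder):

{{{
# HTTP/3 over IPv4 - works
curl -4 --http3-only -I https://example.com/
# HTTP/3 over IPv6 - fails, the client falls back to HTTP/2
curl -6 --http3-only -I https://example.com/
}}}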



"	DoM1niC@…
1.25.x	2624	Challenges Configuring HTTP/3 for Multiple Domains with Distinct SSL Certificates in Nginx 1.25.4	http/3	1.25.x	defect		new	2024-04-04T05:53:53Z	2024-04-04T05:53:53Z	"Environment:

Nginx Version: 1.25.4
Operating System: Ubuntu 22.04.4 LTS
OpenSSL Version: OpenSSL 3.0.2
Description:
Encountering configuration challenges when attempting to set up HTTP/3 for two separate domains (myapp.app and myapptwo.app), each with its own SSL certificate. Issues arise with the listen 443 quic reuseport; directive, leading to misrouting or incorrect content delivery when accessed via HTTP/3.

Steps to Reproduce:

1. Set up multiple server blocks to serve different domains with the configurations mentioned.
2. Include listen 443 quic reuseport; for the primary domain and listen 443 quic; for additional domains.
3. Access the domains using a client that supports HTTP/3.

server {
    listen 443 ssl;
    listen 443 quic reuseport;
    server_name myapp.app www.myapp.app app.myapp.app;

    http3 on;
    http2 on;

    quic_retry on;
    ssl_early_data on;
    add_header Alt-Svc 'h3="":$server_port""; ma=86400';
    proxy_intercept_errors on;

    ssl_certificate /etc/letsencrypt/live/myapp.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.app/privkey.pem;

    location / {
        root /home/usr/Ecosystem-App/main-server/public/;
        index index.html;
        try_files $uri $uri.html /index.html =404;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

server {
    listen 443 quic;
    server_name *.myapp.app;

    http3 on;
    http2 on;

    quic_retry on;
    ssl_early_data on;
    add_header Alt-Svc 'h3="":$server_port""; ma=86400';
    proxy_intercept_errors on;

    ssl_certificate /etc/letsencrypt/live/myapp.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.app/privkey.pem;

    location / {
        root /home/usr/Ecosystem-App/d2c-server/public/;
        index index.html;
        try_files $uri $uri.html /index.html =404;
    }
    
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

server {
    listen 443 quic;
    server_name myapptwo.app www.myapptwo.app app.myapptwo.app;

    http3 on;
    http2 on;

    quic_retry on;
    ssl_early_data on;
    add_header Alt-Svc 'h3="":$server_port""; ma=86400';
    proxy_intercept_errors on;

    ssl_certificate /etc/letsencrypt/live/myapptwo.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapptwo.app/privkey.pem;

    location / {
        root /home/usr/Ecosystem-App/main-server/public/;
        index index.html;
        try_files $uri $uri.html /index.html =404;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

server {
    listen 443 quic;
    server_name *.myapptwo.app;

    http3 on;
    http2 on;

    quic_retry on;
    ssl_early_data on;
    add_header Alt-Svc 'h3="":$server_port""; ma=86400';
    proxy_intercept_errors on;

    ssl_certificate /etc/letsencrypt/live/myapptwo.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapptwo.app/privkey.pem;

    location / {
        root /home/usr/Ecosystem-App/d2c-server/public/;
        index index.html;
        try_files $uri $uri.html /index.html =404;
    }
    
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

Expected Behavior:
Each domain should serve its corresponding content correctly over HTTP/3, utilizing its designated SSL certificate.

Actual Behavior:
Configuration limitations or misinterpretations cause only one domain to properly support HTTP/3 or result in incorrect domain content delivery.
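For reference, the pattern I understand to be intended (an untested sketch, not a confirmed fix) is a single reuseport per address:port, with every HTTP/3 server also carrying a TLS listener so SNI-based certificate selection can work:

{{{
server {
    listen 443 ssl;
    listen 443 quic reuseport;   # reuseport on exactly one server per port
    server_name myapp.app;
    ...
}

server {
    listen 443 ssl;              # TLS listener needed here as well
    listen 443 quic;             # no reuseport
    server_name myapptwo.app;
    ...
}
}}}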

Additional Information:
This illustrates the importance of being able to enable HTTP/3 across multiple server blocks, each with unique SSL certificates, for improved security and performance on Nginx."	desaisoftwaree@…
1.25.x	2627	different nginx behavior as v4 and v6	nginx-module	1.25.x	defect		new	2024-04-05T20:53:18Z	2024-04-05T20:56:20Z	"While a client is connected via IPv4, nginx will offer ""OCSP stapling"" and a set of cipher suites in the order defined in the configuration.
While a client is connected via IPv6, nginx will not offer ""OCSP stapling"" and will change the cipher suites order defined in the configuration.
Behavior first noted on nginx/1.25.3 and present on nginx/1.25.4.
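Comparing the two address families directly shows the difference (assuming an OpenSSL new enough to support the -4/-6 flags; the hostname is a placeholder):

{{{
openssl s_client -connect example.com:443 -4 -status </dev/null
openssl s_client -connect example.com:443 -6 -status </dev/null
}}}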

Cipher configuration:
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256;"	HQuest@…
1.25.x	2630	Unable to remove Cookie from request header	documentation	1.25.x	defect		new	2024-04-22T09:11:38Z	2024-04-22T09:11:38Z	"I am using an nginx reverse proxy server to run my Angular application. I am unable to remove the 'Cookie' attribute from the request header even after adding the proxy settings below to the nginx.conf file:

{{{
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
proxy_set_header Cookie """";
}}}
"	stan75j@…
1.25.x	2637	Documentation for server_name does not mention special case of underscore	documentation	1.25.x	defect		new	2024-05-02T22:53:03Z	2024-05-07T17:03:34Z	"The standard nginx install produces a sites-available/default file that contains the directive:

server_name _

However, the documentation for the server_name directive does not describe that special case."	aathan@…
1.25.x	2643	ssl_reject_handshake not working as expected	documentation	1.25.x	defect		new	2024-05-23T21:31:07Z	2024-05-23T21:41:26Z	"Hi Team 

Trying to set up a TCP load balancer on nginx.

I get the errors below when trying the various combinations present in the document:

https://nginx.org/en/docs/stream/ngx_stream_ssl_module.html#ssl_reject_handshake

Error 1 :

May 23 20:53:03 ip-10-223-203-59.ec2.internal nginx[92054]: nginx: [emerg] the invalid ""default_server"" parameter in /etc/nginx/nginx.conf


and if I remove the default_server parameter I get:

May 23 21:07:25 ip-10-223-203-59.ec2.internal nginx[92134]: nginx: [emerg] ""ssl_reject_handshake"" directive is not allowed>
May 23 21:07:25 ip-10-223-203-59.ec2.internal nginx[92134]: nginx: configuration file /etc/nginx/nginx.conf test failed
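For reference, the minimal stream-context setup I understand the docs to describe is the following sketch (hedged: ssl_reject_handshake in the stream module requires nginx 1.25.5 or later, and the stream listen directive may not accept a default_server parameter at all):

{{{
stream {
    server {
        listen 443 ssl;
        # no certificate is needed in this server
        ssl_reject_handshake on;
    }
}
}}}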

Please assist.
"	srikanthvpai@…
1.25.x	2644	Different User-Agent detection at nginx and PHP via FastCGI	nginx-core	1.25.x	defect		new	2024-05-29T08:37:18Z	2024-05-31T07:58:27Z	"Hello. 
For part of the incoming requests, nginx sees the ""User-Agent"" header as ""okhttp/4.9.2"", but PHP via FastCGI sees it as ""Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36"".

There are no UA-specific config settings at all.
How is this possible?
Thank you.
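One way to narrow this down would be to log the User-Agent nginx received for each request and compare it with what PHP reports, e.g. with a dedicated log format (the format name and log path below are only examples):

{{{
log_format ua_debug '$remote_addr [$http_user_agent]';
access_log /var/log/nginx/ua_debug.log ua_debug;
}}}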

---
PHP 8.2.15 (cli) (built: Jan 20 2024 14:14:18) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.2.15, Copyright (c) Zend Technologies
    with Zend OPcache v8.2.15, Copyright (c), by Zend Technologies

[HTTP_USER_AGENT] => Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36"	kostroma.tvil.ru@…
1.25.x	2650	Uncovered edge case in host header validation	nginx-core	1.25.x	defect		new	2024-06-13T18:14:40Z	2024-12-10T02:06:45Z	"Hello to maintainers, developers and anybody interested!

I suppose there is an uncovered edge case in the host header validation procedure. It is likely caused by the code in `ngx_http_validate_host`.

Consider the following `nginx.conf` file:
{{{
events {}

http {
    server {
        listen 8012 default_server;
        server_name _;

        return 200 ""default_server, host=$host, server_name=$server_name\n"";
    }
    server {
        listen 8012;
        server_name example.com;

        return 200 ""example.com, host=$host, server_name=$server_name\n"";
    }
    server {
        listen 8012;
        server_name example.com.;

        return 200 ""example.com. (with dot at the end), host=$host, server_name=$server_name\n"";
    }
}
}}}

... and responses of the server running the config:
{{{
$ curl localhost:8012 -H 'Host: example.com'
example.com, host=example.com, server_name=example.com

# As expected: dot-ended domains are unified
$ curl localhost:8012 -H 'Host: example.com.'
example.com, host=example.com, server_name=example.com

# Unexpected: request with dot-ended domain will not be processed in usual virtual host
$ curl localhost:8012 -H 'Host: example.com.:1234.'
example.com. (with dot at the end), host=example.com., server_name=example.com.
}}}

There are unexpected header validation results in cases when a port part of the host header contains a dot.

Although the last request is probably wrong, I suppose Nginx should provide consistent behavior in this case too, so that users can rely on it.

You may evolve these requests to a more strange one (within the same server with the same config):
{{{
# As expected: '.' is incorrect host
$ curl localhost:8012 -H 'Host: .'
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.27.0</center>
</body>
</html>

# Unexpected: the server handles and processes a bad request successfully
$ curl localhost:8012 -H 'Host: .:12.34'
default_server, host=., server_name=_
}}}

In this case a request with a single-dot host header, which is usually forbidden, now succeeds.

This may have some negative impact on configurations with an authorization based on use of `$host` variable. Consider another `nginx.conf` file:
{{{
events {}

http {
    server {
        listen 8013 default_server;
        server_name _;

        root ""html/$host"";
    }
    server {
        listen 8013;
        server_name secret.example.com;

        root ""html/secret.example.com"";
        return 401 ""Unauthorized\n"";
    }
}
}}}
... and responses of the server running this config:
{{{
$ cat html/secret.example.com/secret 
SomeSecret

# As expected: an access is unauthorized
$ curl localhost:8013/secret -H 'Host: secret.example.com'
Unauthorized

# Unexpected: one may gain an unauthorized access
$ curl localhost:8013/secret.example.com/secret -H 'Host: .:.1234'
SomeSecret
}}}

Using this configuration looks like a bad approach for an authorization, nevertheless, I think the issue may cause some other unexpected cases in other applications.
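Until the validation is fixed, a defensive check along these lines could reduce the exposure (an untested sketch; the regex accepts only dot-separated alphanumeric labels, so a bare '.' host is rejected):

{{{
server {
    listen 8013 default_server;
    server_name _;

    # sketch: refuse hosts that are not dot-separated labels
    if ($host !~ ^([A-Za-z0-9-]+\.)*[A-Za-z0-9-]+$) {
        return 400;
    }

    root html/$host;
}
}}}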

I hope you consider the issue important enough to acknowledge. To help get it fixed, I will try to prepare a patch. Hope this helps.

Thank you all a lot!"	Daniil Lemenkov
1.25.x	2658	proxy_set_body	nginx-module	1.25.x	defect		new	2024-06-17T09:33:57Z	2024-06-19T11:11:39Z	"We've been using NGINX Plus successfully for years at our company. We've recently enabled Cloudflare proxy https://developers.cloudflare.com/dns/manage-dns-records/reference/proxied-dns-records/ on our domains and noticed a bug/race condition possibly within NGINX's `proxy_set_body`.

We're using the `ngx_http_realip_module` https://nginx.org/en/docs/http/ngx_http_realip_module.html to set the `x-real-ip` header within NGINX for requests coming via Cloudflare https://www.cloudflare.com/ips/ . The request coming from Cloudflare hits our NGINX proxy which then forwards the requests to an upstream containing K8S cluster worker nodes.

Without using the Cloudflare proxy, the below `location` block works as expected, reaching our upstream service with the correct body set with `proxy_set_body`. However, when enabling the Cloudflare proxy, the request times out after 1 minute with a `504` status code.

{{{
location ~ ^/cep/data-feed/normal-hints/(\d+) {
  include /etc/nginx/restrictaccess.conf;

  limit_req zone=cep_data_feed_normal_hints burst=10;
  
  set $k8s_service ""cep-hub-notification-api"";
  set $parameters ""/graphql"";
    
  proxy_method POST;  
  proxy_set_body '{ ""query"":"" query userNotifiableHintsDataFeed { userNotifiableHints(userId: $1, feedType: NORMAL_HINT) {  hintId firstHint isPedigree hintType  givenNames surnames hintCount oldRelevance rating familyTreeId familyTreeRef sourceCountry sourceCategory familyTreeTitle dateCreated nodeId ahnenNumber rootNodeId hintReference imageReference hintPlace hintYear pedigreeRelevance weightedRelevance searchRecencyRelevance relevance } }"" }';
  include /etc/nginx/services/fmp/conf/k8s-service.conf;

  add_header 'Access-Control-Allow-Credentials' 'true';
}
}}}

With Cloudflare proxy mode enabled, as a workaround we assign the body to a variable and call `proxy_set_body` with it, and then it starts working:

{{{
  set $proxy_body '{ ""query"":"" query userNotifiableHintsDataFeed { userNotifiableHints(userId: $1, feedType: TREE_HINT) {  hintId firstHint isPedigree hintType  givenNames surnames hintCount oldRelevance rating familyTreeId familyTreeRef sourceCountry sourceCategory familyTreeTitle dateCreated nodeId ahnenNumber rootNodeId hintReference imageReference hintPlace hintYear pedigreeRelevance weightedRelevance searchRecencyRelevance relevance } }"" }'; 
  proxy_set_body $proxy_body;
}}}

Looking at our distributed tracing span (the target service is written in TypeScript using Apollo GraphQL), it's reporting the request failed after 10 seconds with:
{{{
event	exception
exception.message	request aborted
exception.stacktrace	
BadRequestError: request aborted
    at IncomingMessage.onAborted (/usr/src/app/node_modules/express/node_modules/raw-body/index.js:245:10)
    at /otel-auto-instrumentation-nodejs/node_modules/@opentelemetry/context-async-hooks/build/src/AbstractAsyncHooksContextManager.js:50:55
    at AsyncLocalStorage.run (node:async_hooks:346:14)
    at AsyncLocalStorageContextManager.with (/otel-auto-instrumentation-nodejs/node_modules/@opentelemetry/context-async-hooks/build/src/AsyncLocalStorageContextManager.js:33:40)
    at IncomingMessage.contextWrapper (/otel-auto-instrumentation-nodejs/node_modules/@opentelemetry/context-async-hooks/build/src/AbstractAsyncHooksContextManager.js:50:32)
    at IncomingMessage.clsBind (/usr/src/app/node_modules/cls-hooked/context.js:172:17)
    at IncomingMessage.emit (node:events:518:28)
    at IncomingMessage.emitted (/usr/src/app/node_modules/emitter-listener/listener.js:134:21)
    at IncomingMessage._destroy (node:_http_incoming:224:10)
    at _destroy (node:internal/streams/destroy:121:10)
exception.type	ECONNABORTED
}}}

Why would `proxy_set_body` work when its contents are specified inline (without the extra variable) and the Cloudflare proxy is disabled, but result in aborted connections when the Cloudflare proxy is enabled?
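One guess (not verified) is that the positional capture $1 is clobbered by regex matching performed later in request processing, while a named capture survives. A sketch of the same location using a named capture:

{{{
location ~ ^/cep/data-feed/normal-hints/(?<hint_id>\d+) {
  ...
  proxy_method POST;
  # same body as above, with $1 replaced by $hint_id
  proxy_set_body '{ ... userId: $hint_id ... }';
}
}}}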

We've updated our known `proxy_set_body` directives to use a variable with the desired body, but there is no documentation or issue describing the problem, so it may come up again in the future for us or others."	amolnar@…
1.25.x	2667	Ubuntu repository documentation: keyring may need permissions set	documentation	1.25.x	defect		accepted	2024-07-11T17:27:12Z	2024-07-11T17:33:23Z	"The documentation describes installing the GPG public keyring and source file for the Ubuntu repository. 

https://nginx.org/en/linux_packages.html#Ubuntu

However, apt requires that the keyring file be readable by non-privileged users. This is unintuitive, but even when run as root, apt uses a non-privileged user to read the keyring file (see: https://askubuntu.com/a/1401911/4512 ).

Depending on the system's umask defaults, the keyring may be created as unreadable by a non-privileged user, and in this case apt will not tell the user there is a permissions issue, rather it gives the following ambiguous error:

{{{
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY ABF5BD827BD9BF62
}}}

Thus, to avoid confusion, it may be helpful to add a line of code similar to the following in the documentation after the curl command:

{{{
sudo chmod 644 /usr/share/keyrings/nginx-archive-keyring.gpg
}}}
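For context, the keyring at that path is created by the import command currently shown on the same page:

{{{
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
}}}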

"	imackinnon@…
1.25.x	2669	Problems with using the $ sign in third-party modules in regexp templates	documentation	1.25.x	defect		new	2024-07-16T05:18:19Z	2024-07-16T05:18:19Z	"Previously, the configuration in nginx 1.22.0 was used with the lines:

subs_filter '?t=$Time$' '' g;
subs_filter '^#EXT-X-MEDIA.+TYPE=SUBTITLES.+\n$' '' rg;

but when switching to nginx version 1.26.1, errors began to appear as follows:
nginx: [emerg] invalid variable name in /etc/nginx/...
and
nginx: [emerg] match part cannot contain variable during regex made in /etc/nginx/...

In the process of studying the problem, it turned out that the $ symbol was to blame, but no attempt at escaping it succeeded.

Only after changing regexp templates:

subs_filter '\?t=.Time.' '' rg;
subs_filter '^#EXT-X-MEDIA.+TYPE=SUBTITLES.+\n' '' rg;

the configuration was started successfully.
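Another workaround sometimes used for literal dollar signs in nginx configs (untested with subs_filter specifically) is to define a variable holding the character via geo and interpolate it:

{{{
geo $dollar {
    default ""$"";
}
...
subs_filter '?t=${dollar}Time${dollar}' '' g;
}}}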

Please comment on this behavior: will it be possible to use $ in regexp templates in third-party modules in the future?"	volga.leoking@…
1.25.x	2673	ngx_http_limit_req_module race condition that can potentially result in wrong delay calculation	nginx-module	1.25.x	defect		new	2024-07-31T09:49:10Z	2024-08-09T09:10:05Z	"If I am not mistaken, there is a potential race condition between the lookup method and the delay calculation method that can result in the delay being calculated from wrong values (from another tree node, with a different key).

Neither the nginx version nor the `uname -a` output is important, since I believe this is a bug in the source code.

I believe the race condition is on the field `ngx_http_limit_req_ctx_t.node`.

Which is first calculated and set here: https://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_limit_req_module.c#L246-L251

But the shared memory mutex is unlocked immediately after each ""zone""/""limit"" is looked at.

And then the value is used here: https://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_limit_req_module.c#L557

but since the mutex is unlocked before the code gets there, the pointer could be overwritten by another request with a different key. Therefore the delay would be calculated based on the node for the other request's key.
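To spell out the suspected interleaving (pseudocode only; whether two requests can actually interleave this way inside nginx's worker model is exactly what needs checking):

{{{
/* request A: lookup for key K1 */
lock(shared mutex);
ctx->node = node_for_K1;
unlock(shared mutex);

/* request B: lookup for key K2 on the same ctx */
ctx->node = node_for_K2;    /* overwrites the pointer set for A */

/* request A: delay calculation now reads ctx->node == node_for_K2 */
}}}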



Sorry if I am mistaken and wasting your time.
Thanks for looking into it if you do."	dev-null-undefined@…
1.25.x	2679	Low throughput with HTTP/3	http/3	1.25.x	defect		new	2024-08-13T19:25:39Z	2024-08-13T19:25:39Z	"Hi there,

I am experiencing a significant difference in throughput performance when downloading files using HTTP/3 compared to HTTP/2 on the same server. Below are the results from my tests:

**HTTP2:**
{{{
# curl -k --http2 -o /dev/null 'https://marlon-test-jnb.tempurl.host/ubuntu_100m.iso'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  100M  100  100M    0     0  87.5M      0  0:00:01  0:00:01 --:--:-- 87.5M

}}}

**HTTP/3:**
{{{
# curl -k --http3-only -o /dev/null 'https://marlon-test-jnb.tempurl.host/ubuntu_100m.iso'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  100M  100  100M    0     0  6258k      0  0:00:16  0:00:16 --:--:-- 6395k
}}}

All using the default settings. Raising http3_stream_buffer_size from the default 64k to 128k or 256k helps, but never achieves throughput as good as HTTP/2.
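Expressed as a config fragment, the only change that noticeably helped was (value from my tests):

{{{
http3_stream_buffer_size 256k;
}}}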

I've tested with stable 1.26.1 (compiled) and also mainline 1.27.0 (from nginx.org). Also tested the download with Google Chrome and the speeds matched curl.

I've attached the error.log with debug enabled during the download and the nginx.conf.

Are there any additional parameters or configurations that I can adjust to improve HTTP/3 throughput to be on par with HTTP/2? Any guidance would be greatly appreciated.

Thank you in advance,
Marlon"	marlonanjos@…
1.25.x	2546	Support RFC 8879: certificate compression	other	1.25.x	enhancement		new	2023-09-12T01:59:48Z	2023-09-12T15:38:38Z	[https://www.rfc-editor.org/rfc/rfc8879 RFC 8879 TLS Certificate Compression] has been supported in BoringSSL for a while, and [https://www.openssl.org/blog/blog/2023/09/07/ossl32a1/ recently made it to OpenSSL]. It allows us to more safely compress TLS certificates. OpenSSL supports zlib, zstd, and Brotli; BoringSSL supports zlib and Brotli.	Seirdy
1.25.x	2547	Support Partitioned Cookies for load balancing according to CHIPS	other	1.25.x	enhancement		new	2023-09-18T12:25:33Z	2023-11-03T14:20:29Z	"All browsers will restrict (or already have restricted) their 3rd-party cookie handling to prevent user tracking. You are affected when using nginx load-balancer functionality in the following use case:

* Your application is integrated in a 3rd-Party context
* You are tied to a local state on a certain deployment unit
* You use a sticky session cookie for load balancing

In such cases, the 3rd-Party session cookie will be blocked by the browser and your application will probably not work correctly. Safari blocks them already, Chrome and Firefox will do so, starting in mid 2024. To support such a use case as mentioned, CHIPS was introduced (https://github.com/privacycg/CHIPS). CHIPS will be supported by

* Chrome
* Firefox
* Safari/Webkit seems to be undecided yet
* Microsoft Edge might follow Chrome, since it's quite the same basis

Technically, CHIPS defines the cookie attribute ""Partitioned"", which the browser handles in a separate jar for the 3rd-party context within each 1st-party context, so tracking across multiple sites is not possible.

nginx should support Partitioned Cookies. The existing configuration could be extended as follows


{{{
upstream backend {
    server backend1.example.com route=a;
    server backend2.example.com route=b;

    sticky cookie srv_id expires=1h domain=.example.com samesite=none secure path=/ partitioned;
}
}}}


that results in an HTTP-Response Header value like this:


{{{
Set-Cookie: __Host-SID=31d4d96e407aad42; SameSite=None; Secure; Path=/; Partitioned;
}}}

(example is copied from the CHIPS proposal site)"	schnieders@…
1.25.x	2552	Correct xsl and xslt mimetypes missing from nginx mime.types file	nginx-core	1.25.x	enhancement		new	2023-10-25T02:32:41Z	2023-10-27T01:57:03Z	"Since the default mimetype configured in nginx.conf is `application/octet-stream`, and there is no entry in mime.types to override this, xsl and xslt files are served with 


{{{
content-type: application/octet-stream
}}}


This is not the correct mimetype. According to W3 docs <https://www.w3.org/TR/xslt20/#media-type-registration>, xsl and xslt files should now be served as 

{{{
content-type: application/xslt+xml 
}}}
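Concretely, the patch would add a line like the following to conf/mime.types:

{{{
    application/xslt+xml                  xsl xslt;
}}}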

I would be happy to contribute a patch if there is agreement."	apotek@…
1.25.x	2560	"Inclusive language: rename default branch of official GitHub tracker repo nginx/nginx from ""master"" to ""main""?"	documentation	1.25.x	enhancement		new	2023-11-08T14:03:56Z	2023-11-08T14:03:56Z	"Could we look at following the suggestions in:

https://github.com/github/renaming

to rename the GitHub.com/nginx/nginx default branch to ""main""?

I believe the upstream mercurial repo calls its main branch ""default"" so in theory there shouldn't be any more or less difference than today?
"	michaelmaguire@…
1.25.x	2568	Introduce send_timeout and proxy_send_timeout in the stream module	nginx-module	1.25.x	enhancement		new	2023-11-24T15:19:05Z	2023-12-04T22:50:25Z	"myF5 Case # 00508180

Hello,
We would like to request an enhancement to NGINX regarding stale sockets.

On payment networks, the clients connect to the network and the network sends authorization for the clients to approve.

We have observed that when the client becomes unresponsive (no TCP ACK) or the client host is too slow to process data, the socket send buffer starts queuing up until it becomes full. NGINX never closes the socket: it becomes stale and NGINX just tries forever to send the queued data.

The problem here is that the payload is time sensitive, some authorization requests will expire after some seconds

While we can address TCP Retransmission timeout and retries, there is no option to handle TCP window size = 0 on client side, causing NGINX send buffer to fill up.

Specifically, we want NGINX to close the socket if the buffer becomes full. Before NGINX, on our processor we handled this situation as flow control: whenever the socket was full and writes failed with EWOULDBLOCK, we closed the socket and signed the client off from authorization requests.

We are looking for flow control in NGINX.

proxy_timeout doesn't work in this scenario because these are long-lived TCP sockets, they stay open for months waiting for authorizations to come through

We observe the Send-Q and Receive-Q going up in netstat when the issue happens, but NGINX doesn't close the socket.
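A sketch of how this could look in configuration (hypothetical directive, mirroring the http module's proxy_send_timeout; it does not exist in the stream module today):

{{{
stream {
    server {
        listen 10.156.35.71:6007;
        proxy_pass 127.0.0.1:6007;
        # hypothetical: close the connection if queued data cannot be sent in time
        proxy_send_timeout 30s;
    }
}
}}}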

Here are the configuration and the logs:

{{{
stream {
    server {
        listen 10.156.35.71:6007;
        listen 10.156.35.71:6003;
        listen 10.156.35.71:6005;
        listen 10.156.35.71:6006;
        listen 10.156.35.71:6070;
        proxy_pass 127.0.0.1:$server_port;
        proxy_protocol on;
        proxy_buffer_size 8k;
    }
}
}}}

{{{
2023/11/09 14:02:03 [debug] 65169#65169: *1365 write new buf t:1 f:0 0000000000000000, pos 0000557F1640E4D0, size: 306 file: 0, size: 0
2023/11/09 14:02:03 [debug] 65169#65169: *1365 stream write filter: l:0 f:1 s:306
2023/11/09 14:02:03 [debug] 65169#65169: *1365 writev: 306 of 306
2023/11/09 14:02:03 [debug] 65169#65169: *1365 stream write filter 0000000000000000
2023/11/09 14:02:03 [debug] 65169#65169: *1365 event timer del: 3: 31701382574
2023/11/09 14:02:03 [debug] 65169#65169: *1365 event timer add: 3: 31536000000:31701384196
2023/11/09 14:02:03 [debug] 65169#65169: *1365 event timer: 3, old: 31701384196, new: 31701384196
2023/11/09 14:02:03 [debug] 65169#65169: timer delta: 1622
2023/11/09 14:02:03 [debug] 65169#65169: worker cycle
2023/11/09 14:02:03 [debug] 65169#65169: epoll timer: 31536000000
2023/11/09 14:02:04 [debug] 65169#65169: epoll: fd:3 ev:0001 d:00007F4A3D6765B0
2023/11/09 14:02:04 [debug] 65169#65169: *1365 recv: eof:0, avail:-1
2023/11/09 14:02:04 [debug] 65169#65169: *1365 recv: fd:3 234 of 8192
2023/11/09 14:02:04 [debug] 65169#65169: *1365 write new buf t:1 f:0 0000000000000000, pos 0000557F16416830, size: 234 file: 0, size: 0
2023/11/09 14:02:04 [debug] 65169#65169: *1365 stream write filter: l:0 f:1 s:234
2023/11/09 14:02:04 [debug] 65169#65169: *1365 writev: 234 of 234
2023/11/09 14:02:04 [debug] 65169#65169: *1365 stream write filter 0000000000000000
2023/11/09 14:02:04 [debug] 65169#65169: *1365 event timer del: 3: 31701384196
2023/11/09 14:02:04 [debug] 65169#65169: *1365 event timer add: 3: 31536000000:31701384604
2023/11/09 14:02:04 [debug] 65169#65169: timer delta: 408
2023/11/09 14:02:04 [debug] 65169#65169: worker cycle
2023/11/09 14:02:04 [debug] 65169#65169: epoll timer: 31536000000
2023/11/09 14:02:05 [debug] 65169#65169: epoll: fd:11 ev:0005 d:00007F4A3D6766A0
2023/11/09 14:02:05 [debug] 65169#65169: *1365 recv: eof:0, avail:-1
2023/11/09 14:02:05 [debug] 65169#65169: *1365 recv: fd:11 306 of 8192
}}}


# cat /etc/centos-release

CentOS Linux release 7.9.2009 (Core)"	felipeapolanco@…
1.25.x	2603	RFE: please provide an installable interface to allow building and installing own nginx modules	documentation	1.25.x	enhancement		new	2024-02-14T17:10:02Z	2024-02-14T17:10:02Z	"Currently the only method of building external modules is to use the --add-dynamic-module configure option.
IMO it would be good to provide the possibility to install the necessary headers plus a pkgconfig file, to allow building and installing custom nginx modules without the nginx source tree."	kloczek@…
1.25.x	2640	Rewrite module directives are not inherited into the limit_except block	documentation	1.25.x	task		new	2024-05-14T20:04:02Z	2024-05-14T20:04:02Z	"See https://trac.nginx.org/nginx/ticket/1383

This should be documented here https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_except"	jnewfield@…
1.25.x	2641	Q:Does NGINX QUIC Support KTLS?	http/3	1.25.x	defect		new	2024-05-17T17:41:19Z	2024-05-17T17:50:48Z	"I configured my nginx to port 8443 which supports both quic and h2.
When I tried downloading a file over QUIC, nginx does not seem to be using KTLS, while over h2 it is using KTLS.
Am I missing anything here?"	Karthikdasari0423@…
1.2.x	224	Args Delimiter	nginx-core	1.2.x	enhancement	somebody	new	2012-09-23T20:17:25Z	2022-03-17T08:03:07Z	"

Hi.

I have a PHP application that uses "";"" to delimit GET arguments as in ""/index.php?a=1;b=2"". 

Hence, Nginx args sees this as a single argument ""a"" with a value of ""1;b=2"" instead of two arguments ""a"" and ""b"" with values of ""1"" and ""2"" respectively.

It would be nice to be able to define other delimiters apart from ""&"" as it is with PHP.
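
As a PHP-side workaround (this does not change what nginx itself puts in $args), PHP can be told to accept several input separators via its arg_separator.input ini directive:

{{{
; php.ini -- accept both ";" and "&" as GET argument separators
arg_separator.input = ";&"
}}}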

Thanks
"	Nginxuser
1.2.x	191	literal newlines logged in error log	nginx-module	1.2.x	defect	somebody	accepted	2012-08-01T18:06:16Z	2023-10-20T19:00:15Z	"I noticed that when a %0a exists in the URL, nginx includes a literal
newline in the error_log when logging a file not found:

-----
2012/07/26 17:24:14 [error] 5478#0: *8 ""/var/www/localhost/htdocs/


html/index.html"" is not found (2: No such file or directory), client:
1.2.3.4, server: , request: ""GET /%0a%0a%0ahtml/ HTTP/1.1"", host:
""test.example.com""
-----

This wreaks havoc with my log monitoring utility 8-/.

It seems desirable to escape the newline in the log message? I tested
with the latest 1.2.2. Is there any way with the existing configuration
options to make this not happen, or any interest in updating the logging
module to handle this situation differently?

"	Paul Henson
1.2.x	994	perl_require directive has effect only at first config	other	1.2.x	defect		accepted	2016-06-08T18:31:04Z	2016-12-01T19:55:09Z	"my configs are included as:
  include /etc/nginx/sites-enabled/*.conf;

If I want to use the 'perl_require' directive, I must place it ONLY in the first conf file (in alphabetical order).
If I put the directive into any other conf file, nginx does not even complain if I try to load a nonexistent module"	KES777@…
1.2.x	314	Dynamic document roots, defaults and precedence	nginx-core	1.2.x	enhancement		new	2013-03-08T10:04:51Z	2013-03-26T00:16:32Z	"Hello guys,

Check the following example:

# This is a really cool example of how to have dynamic document roots
# based on the url.
#
# Nginx will find the document root based on the URL of the request.
#
# This, combined with a wildcard in your dns file, will let you create
# directories and make them immediately available for nginx to find.
#
# Very useful on dev environments or ISPs that have their stuff in order.

{{{
server {
    listen 80;
    server_name ~^(?<sub>.+?)\.(?<dom>.+)$;
    root /srv/www/html/$dom/$sub/public;
    index index.html index.htm;
    autoindex on;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        # Some basic cache-control for static files to be sent to the browser
        expires max;
        add_header Pragma public;
        add_header Cache-Control ""public, must-revalidate, proxy-revalidate"";
    }

    location = /robots.txt { access_log off; log_not_found off; }
    location = /favicon.ico { access_log off; log_not_found off; }
    location ~ /\. { access_log off; log_not_found off; allow all; }
    location ~ ~$ { access_log off; log_not_found off; deny all; }
}
}}}


That and wildcard domains (*.example.tld) will give you an awesome configuration for one type of website. For example, regular PHP sites, Wordpress, Zend Framework, etc. This is useful in so many ways:

- Dev environments
  * You don't need to touch your dns nor nginx configuration. You just create the proper dir structure and that's it.

- ISPs
  * There are a lot of people who develop websites and host them. These settings, as long as they keep the dir structure, 
    work fine.

- Owned servers
  * if you keep your own server for many sites, just configure once and just deploy with the proper dir structure. You will not
    have to restart your server (or reload it) or anything of the sorts. Just point the domain to it and upload the files.

- WebAdmins
  * Web admins can benefit from implementing default configs for most common needs, frameworks and software they use. Besides,
    it is a lot easier to maintain a single file for all your drupal websites (... also, more dangerous, in case you mess up)

Some problems remain. For example:

# regular html sites
- example.tld
- example1.tld

- phpexample.tld

- wordpressexample.tld
- wordpressexample1.tld
- wordpressexample2.tld

Let's say I have three configurations. One for each website type: html, php, wordpress.

Now, my directory structure would look like this:


{{{
/srv
  /www
    /html
      /example.tld
        /www
          /public
        /downloads
          /public
      /example1.tld
        /www
          /public
    /php
      /phpexample.tld
        /www
          /public
    /wordpress
      /wordpressexample.tld
        /www
          /public
      /wordpressexample1.tld
        /www
          /public
      /wordpressexample2.tld
        /www
          /public
}}}


# Problems
- How will nginx know where to look for the website?
There should be a filtering statement, defining which domains are available in that configuration maybe.

- What about when it does find the domain but not the subdomain it's looking for?
There should be some kind of __default__ configuration. It should default to some document root in case none matches but the domain does; per configuration. For example, if I look for something.example.tld, it should default to www or some other doc root I define.
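
A partial workaround for this second problem can be sketched with try_files and a named location; the following is untested and purely illustrative, assuming a fallback ""www"" doc root:

{{{
server {
    server_name ~^(?<sub>.+?)\.(?<dom>.+)$;
    root /srv/www/html/$dom/$sub/public;

    location / {
        # serve from the subdomain's doc root if present,
        # otherwise fall back to the default one
        try_files $uri $uri/ @default;
    }

    location @default {
        root /srv/www/html/$dom/www/public;
        try_files $uri $uri/ =404;
    }
}
}}}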


In general, I think this would be very useful. Once configured, you don't have to touch your nginx configuration again. In case of optimization and tweaks, you just tweak once and all get the benefit of it."	Renich Bon Ciric
1.3.x	405	Support for resumable uploads	nginx-core	1.3.x	enhancement		new	2013-09-03T22:37:57Z	2024-01-17T03:35:41Z	"It seems Nginx has a big feature missing: the ability to support resumable uploads.

There are two third-party modules available at the moment, however, both have serious downfalls.

* The most popular seems to be https://github.com/vkholodkov/nginx-upload-module - however, it doesn't support any version above 1.3.8. And the author of the module has no plans to update.
* The other up and coming module is https://github.com/pgaertig/nginx-big-upload - however, there are numerous reports of incompatibilities with SPDY, and it depends on Lua to run.

Thus neither of these modules is suitable for someone running Nginx 1.4+. Nginx needs a solid, supported resumable upload functionality. If not compiled by default, it should be able to compile with a flag, the same way SPDY is added when compiling."	Kieran P
1.3.x	217	"Wrong ""Content-Type"" HTTP response header in certain configuration scenarios"	nginx-core	1.3.x	defect	somebody	accepted	2012-09-12T14:56:02Z	2015-11-14T07:57:36Z	"In certain configuration scenarios the ""Content-Type"" HTTP response header is not of the expected type but rather falls back to the default setting.

I was able to shrink down the configuration to a bare minimum test case which gives some indication that this might happen in conjunction with regex captured in ""location"", ""try_files"" and ""alias"" definitions.

Verified with Nginx 1.3.6 (with patch.spdy-52.txt applied), but it was also reproducible with earlier versions, see
http://mailman.nginx.org/pipermail/nginx/2012-August/034900.html
http://mailman.nginx.org/pipermail/nginx/2012-August/035170.html
(no response was given on those posts)

{{{
# nginx -V
nginx version: nginx/1.3.6
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin/nginx --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --user=nginx --group=nginx --with-openssl=openssl-1.0.1c --with-debug --with-http_stub_status_module --with-http_ssl_module --with-ipv6
}}}
Minimal test configuration for that specific scenario:
{{{
server {
    listen                          80;
    server_name                     t1.example.com;

    root                            /data/web/t1.example.com/htdoc;

    location                        ~ ^/quux(/.*)?$ {
        alias                       /data/web/t1.example.com/htdoc$1;
        try_files                   '' =404;
    }
}
}}}
First test request where Content-Type is being correctly set to ""image/gif"" as expected:
{{{
$ curl -s -o /dev/null -D - -H 'Host: t1.example.com' http://127.0.0.1/foo/bar.gif
HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:09 GMT
Content-Type: image/gif
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: ""501a0a78-44""
Accept-Ranges: bytes
}}}
Second test request where Content-Type is wrong, ""application/octet-stream"" instead of ""image/gif"" (actually matches the value of whatever ""default_type"" is set to):
{{{
$ curl -s -o /dev/null -D - -H 'Host: t1.example.com' http://127.0.0.1/quux/foo/bar.gif
HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:14 GMT
Content-Type: application/octet-stream
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: ""501a0a78-44""
Accept-Ranges: bytes
}}}
Debug log during the first test request:
{{{
2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BDA0C8:672
2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BE3210:1024
2012/09/12 16:20:09 [debug] 15171#0: *1 posix_memalign: 09C0AE10:4096 @16
2012/09/12 16:20:09 [debug] 15171#0: *1 http process request line
2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 178 of 1024
2012/09/12 16:20:09 [debug] 15171#0: *1 http request line: ""GET /foo/bar.gif HTTP/1.1""
2012/09/12 16:20:09 [debug] 15171#0: *1 http uri: ""/foo/bar.gif""
2012/09/12 16:20:09 [debug] 15171#0: *1 http args: """"
2012/09/12 16:20:09 [debug] 15171#0: *1 http exten: ""gif""
2012/09/12 16:20:09 [debug] 15171#0: *1 http process request header line
2012/09/12 16:20:09 [debug] 15171#0: *1 http header: ""User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2""
2012/09/12 16:20:09 [debug] 15171#0: *1 http header: ""Accept: */*""
2012/09/12 16:20:09 [debug] 15171#0: *1 http header: ""Host: t1.example.com""
2012/09/12 16:20:09 [debug] 15171#0: *1 http header done
2012/09/12 16:20:09 [debug] 15171#0: *1 event timer del: 11: 3134905866
2012/09/12 16:20:09 [debug] 15171#0: *1 rewrite phase: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 test location: ~ ""^/quux(/.*)?$""
2012/09/12 16:20:09 [debug] 15171#0: *1 using configuration """"
2012/09/12 16:20:09 [debug] 15171#0: *1 http cl:-1 max:1048576
2012/09/12 16:20:09 [debug] 15171#0: *1 rewrite phase: 2
2012/09/12 16:20:09 [debug] 15171#0: *1 post rewrite phase: 3
2012/09/12 16:20:09 [debug] 15171#0: *1 generic phase: 4
2012/09/12 16:20:09 [debug] 15171#0: *1 generic phase: 5
2012/09/12 16:20:09 [debug] 15171#0: *1 access phase: 6
2012/09/12 16:20:09 [debug] 15171#0: *1 access phase: 7
2012/09/12 16:20:09 [debug] 15171#0: *1 post access phase: 8
2012/09/12 16:20:09 [debug] 15171#0: *1 try files phase: 9
2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 10
2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 11
2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 12
2012/09/12 16:20:09 [debug] 15171#0: *1 http filename: ""/data/web/t1.example.com/htdoc/foo/bar.gif""
2012/09/12 16:20:09 [debug] 15171#0: *1 add cleanup: 09C0B3D8
2012/09/12 16:20:09 [debug] 15171#0: *1 http static fd: 14
2012/09/12 16:20:09 [debug] 15171#0: *1 http set discard body
2012/09/12 16:20:09 [debug] 15171#0: *1 HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:09 GMT
Content-Type: image/gif
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: ""501a0a78-44""
Accept-Ranges: bytes

2012/09/12 16:20:09 [debug] 15171#0: *1 write new buf t:1 f:0 09C0B500, pos 09C0B500, size: 235 file: 0, size: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter: l:0 f:0 s:235
2012/09/12 16:20:09 [debug] 15171#0: *1 http output filter ""/foo/bar.gif?""
2012/09/12 16:20:09 [debug] 15171#0: *1 http copy filter: ""/foo/bar.gif?""
2012/09/12 16:20:09 [debug] 15171#0: *1 read: 14, 09C0B67C, 68, 0
2012/09/12 16:20:09 [debug] 15171#0: *1 http postpone filter ""/foo/bar.gif?"" 09C0B6C0
2012/09/12 16:20:09 [debug] 15171#0: *1 write old buf t:1 f:0 09C0B500, pos 09C0B500, size: 235 file: 0, size: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 write new buf t:1 f:0 09C0B67C, pos 09C0B67C, size: 68 file: 0, size: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter: l:1 f:0 s:303
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter limit 0
2012/09/12 16:20:09 [debug] 15171#0: *1 writev: 303
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter 00000000
2012/09/12 16:20:09 [debug] 15171#0: *1 http copy filter: 0 ""/foo/bar.gif?""
2012/09/12 16:20:09 [debug] 15171#0: *1 http finalize request: 0, ""/foo/bar.gif?"" a:1, c:1
2012/09/12 16:20:09 [debug] 15171#0: *1 set http keepalive handler
2012/09/12 16:20:09 [debug] 15171#0: *1 http close request
2012/09/12 16:20:09 [debug] 15171#0: *1 http log handler
2012/09/12 16:20:09 [debug] 15171#0: *1 run cleanup: 09C0B3D8
2012/09/12 16:20:09 [debug] 15171#0: *1 file cleanup: fd:14
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09C0AE10, unused: 1645
2012/09/12 16:20:09 [debug] 15171#0: *1 event timer add: 11: 75000:3134920866
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BDA0C8
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BE3210
2012/09/12 16:20:09 [debug] 15171#0: *1 hc free: 00000000 0
2012/09/12 16:20:09 [debug] 15171#0: *1 hc busy: 00000000 0
2012/09/12 16:20:09 [debug] 15171#0: *1 tcp_nodelay
2012/09/12 16:20:09 [debug] 15171#0: *1 reusable connection: 1
2012/09/12 16:20:09 [debug] 15171#0: *1 post event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 http keepalive handler
2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BE3210:1024
2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 -1 of 1024
2012/09/12 16:20:09 [debug] 15171#0: *1 recv() not ready (11: Resource temporarily unavailable)
2012/09/12 16:20:09 [debug] 15171#0: posted event 00000000
2012/09/12 16:20:09 [debug] 15171#0: worker cycle
2012/09/12 16:20:09 [debug] 15171#0: accept mutex locked
2012/09/12 16:20:09 [debug] 15171#0: epoll timer: 75000
2012/09/12 16:20:09 [debug] 15171#0: epoll: fd:11 ev:0001 d:09C117C8
2012/09/12 16:20:09 [debug] 15171#0: *1 post event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: timer delta: 2
2012/09/12 16:20:09 [debug] 15171#0: posted events 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 http keepalive handler
2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 0 of 1024
2012/09/12 16:20:09 [info] 15171#0: *1 client 127.0.0.1 closed keepalive connection
2012/09/12 16:20:09 [debug] 15171#0: *1 close http connection: 11
2012/09/12 16:20:09 [debug] 15171#0: *1 event timer del: 11: 3134920866
2012/09/12 16:20:09 [debug] 15171#0: *1 reusable connection: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BE3210
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 00000000
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BD9FC0, unused: 56
}}}
Debug log during the second test request:
{{{
2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BDA0C8:672
2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BE3210:1024
2012/09/12 16:20:14 [debug] 15171#0: *2 posix_memalign: 09C0AE10:4096 @16
2012/09/12 16:20:14 [debug] 15171#0: *2 http process request line
2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 183 of 1024
2012/09/12 16:20:14 [debug] 15171#0: *2 http request line: ""GET /quux/foo/bar.gif HTTP/1.1""
2012/09/12 16:20:14 [debug] 15171#0: *2 http uri: ""/quux/foo/bar.gif""
2012/09/12 16:20:14 [debug] 15171#0: *2 http args: """"
2012/09/12 16:20:14 [debug] 15171#0: *2 http exten: ""gif""
2012/09/12 16:20:14 [debug] 15171#0: *2 http process request header line
2012/09/12 16:20:14 [debug] 15171#0: *2 http header: ""User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2""
2012/09/12 16:20:14 [debug] 15171#0: *2 http header: ""Accept: */*""
2012/09/12 16:20:14 [debug] 15171#0: *2 http header: ""Host: t1.example.com""
2012/09/12 16:20:14 [debug] 15171#0: *2 http header done
2012/09/12 16:20:14 [debug] 15171#0: *2 event timer del: 11: 3134910906
2012/09/12 16:20:14 [debug] 15171#0: *2 rewrite phase: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 test location: ~ ""^/quux(/.*)?$""
2012/09/12 16:20:14 [debug] 15171#0: *2 using configuration ""^/quux(/.*)?$""
2012/09/12 16:20:14 [debug] 15171#0: *2 http cl:-1 max:1048576
2012/09/12 16:20:14 [debug] 15171#0: *2 rewrite phase: 2
2012/09/12 16:20:14 [debug] 15171#0: *2 post rewrite phase: 3
2012/09/12 16:20:14 [debug] 15171#0: *2 generic phase: 4
2012/09/12 16:20:14 [debug] 15171#0: *2 generic phase: 5
2012/09/12 16:20:14 [debug] 15171#0: *2 access phase: 6
2012/09/12 16:20:14 [debug] 15171#0: *2 access phase: 7
2012/09/12 16:20:14 [debug] 15171#0: *2 post access phase: 8
2012/09/12 16:20:14 [debug] 15171#0: *2 try files phase: 9
2012/09/12 16:20:14 [debug] 15171#0: *2 http script copy: ""/data/web/t1.example.com/htdoc""
2012/09/12 16:20:14 [debug] 15171#0: *2 http script capture: ""/foo/bar.gif""
2012/09/12 16:20:14 [debug] 15171#0: *2 trying to use file: """" ""/data/web/t1.example.com/htdoc/foo/bar.gif""
2012/09/12 16:20:14 [debug] 15171#0: *2 try file uri: """"
2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 10
2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 11
2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 12
2012/09/12 16:20:14 [debug] 15171#0: *2 http script copy: ""/data/web/t1.example.com/htdoc""
2012/09/12 16:20:14 [debug] 15171#0: *2 http script capture: ""/foo/bar.gif""
2012/09/12 16:20:14 [debug] 15171#0: *2 http filename: ""/data/web/t1.example.com/htdoc/foo/bar.gif""
2012/09/12 16:20:14 [debug] 15171#0: *2 add cleanup: 09C0B414
2012/09/12 16:20:14 [debug] 15171#0: *2 http static fd: 14
2012/09/12 16:20:14 [debug] 15171#0: *2 http set discard body
2012/09/12 16:20:14 [debug] 15171#0: *2 HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:14 GMT
Content-Type: application/octet-stream
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: ""501a0a78-44""
Accept-Ranges: bytes

2012/09/12 16:20:14 [debug] 15171#0: *2 write new buf t:1 f:0 09C0B53C, pos 09C0B53C, size: 250 file: 0, size: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter: l:0 f:0 s:250
2012/09/12 16:20:14 [debug] 15171#0: *2 http output filter ""?""
2012/09/12 16:20:14 [debug] 15171#0: *2 http copy filter: ""?""
2012/09/12 16:20:14 [debug] 15171#0: *2 read: 14, 09C0B6C4, 68, 0
2012/09/12 16:20:14 [debug] 15171#0: *2 http postpone filter ""?"" 09C0B708
2012/09/12 16:20:14 [debug] 15171#0: *2 write old buf t:1 f:0 09C0B53C, pos 09C0B53C, size: 250 file: 0, size: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 write new buf t:1 f:0 09C0B6C4, pos 09C0B6C4, size: 68 file: 0, size: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter: l:1 f:0 s:318
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter limit 0
2012/09/12 16:20:14 [debug] 15171#0: *2 writev: 318
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter 00000000
2012/09/12 16:20:14 [debug] 15171#0: *2 http copy filter: 0 ""?""
2012/09/12 16:20:14 [debug] 15171#0: *2 http finalize request: 0, ""?"" a:1, c:1
2012/09/12 16:20:14 [debug] 15171#0: *2 set http keepalive handler
2012/09/12 16:20:14 [debug] 15171#0: *2 http close request
2012/09/12 16:20:14 [debug] 15171#0: *2 http log handler
2012/09/12 16:20:14 [debug] 15171#0: *2 run cleanup: 09C0B414
2012/09/12 16:20:14 [debug] 15171#0: *2 file cleanup: fd:14
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09C0AE10, unused: 1568
2012/09/12 16:20:14 [debug] 15171#0: *2 event timer add: 11: 75000:3134925906
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BDA0C8
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BE3210
2012/09/12 16:20:14 [debug] 15171#0: *2 hc free: 00000000 0
2012/09/12 16:20:14 [debug] 15171#0: *2 hc busy: 00000000 0
2012/09/12 16:20:14 [debug] 15171#0: *2 tcp_nodelay
2012/09/12 16:20:14 [debug] 15171#0: *2 reusable connection: 1
2012/09/12 16:20:14 [debug] 15171#0: *2 post event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 http keepalive handler
2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BE3210:1024
2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 -1 of 1024
2012/09/12 16:20:14 [debug] 15171#0: *2 recv() not ready (11: Resource temporarily unavailable)
2012/09/12 16:20:14 [debug] 15171#0: posted event 00000000
2012/09/12 16:20:14 [debug] 15171#0: worker cycle
2012/09/12 16:20:14 [debug] 15171#0: accept mutex locked
2012/09/12 16:20:14 [debug] 15171#0: epoll timer: 75000
2012/09/12 16:20:14 [debug] 15171#0: epoll: fd:11 ev:0001 d:09C117C9
2012/09/12 16:20:14 [debug] 15171#0: *2 post event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: timer delta: 2
2012/09/12 16:20:14 [debug] 15171#0: posted events 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 http keepalive handler
2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 0 of 1024
2012/09/12 16:20:14 [info] 15171#0: *2 client 127.0.0.1 closed keepalive connection
2012/09/12 16:20:14 [debug] 15171#0: *2 close http connection: 11
2012/09/12 16:20:14 [debug] 15171#0: *2 event timer del: 11: 3134925906
2012/09/12 16:20:14 [debug] 15171#0: *2 reusable connection: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BE3210
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 00000000
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BD9FC0, unused: 56
}}}"	cschug.myopenid.com
1.3.x	242	DAV module does not respect if-unmodified-since	nginx-module	1.3.x	defect	somebody	accepted	2012-11-04T19:01:37Z	2016-05-15T03:06:15Z	"I.e. if you PUT or DELETE a resource with an if-unmodified-since header, the overwrite or delete will go through happily even if the header should have prevented it.

(This is a common use case, where you've previously fetched a version of a resource and you know its modified date, and then, when updating it or deleting it, you want to check for race conditions with other clients, and can use if-unmodified-since to get an error back if someone else messed with the resource in the meantime.)

Find a patch for this attached (also at https://gist.github.com/4013062). It's my first Nginx contribution -- feel free to point out style mistakes or general wrong-headedness.

I did not find a clean way to make the existing code in ngx_http_not_modified_filter_module.c handle this. It looks directly at the last-modified header, and, as a header filter, will only run *after* the actions for the request have already been taken.

I also did not add code for if-match, which is analogous, and code for which could probably be added to the ngx_http_test_if_unmodified function I added (which would be renamed in that case). But I don't really understand handling of etags by nginx yet, so I didn't touch that."	Marijn Haverbeke
1.3.x	288	Wrong REQUEST_URI when using PHP with SSI	nginx-module	1.3.x	defect		new	2013-01-28T22:05:59Z	2013-01-28T22:05:59Z	"In the default fastcgi.conf

    fastcgi_param REQUEST_URI $request_uri;

is set. This doesn't work with SSI, because FastCGI will then receive the (parent) request URI for every subrequest.

    fastcgi_param REQUEST_URI $uri;

Applications that rely on REQUEST_URI will end up in endless recursion, because the tag is replaced with the same page, which obviously contains the tag too.

I don't know if ""$uri"" is a good solution, but at least a hint in the ""pitfalls"" section would have saved me 2 days ;)"	Sebastian Krebs
1.3.x	384	trailing dot in server_name	nginx-core	1.3.x	defect		accepted	2013-07-09T12:47:37Z	2016-05-14T23:48:47Z	"nginx should treat server_name values with and without trailing dot as identical to each other. Thus, it shall warn and continue during configuration syntax check for the below snippet due to conflicting server_name.

{{{
    server {
        server_name  localhost;
    }

    server {
        server_name  localhost.;
    }
}}}
"	Sergey Kandaurov
1.3.x	2677	Support for resumable uploads	documentation	1.3.x	defect		new	2024-08-08T03:12:35Z	2024-08-08T03:12:35Z	"It seems Nginx has a big feature missing, the ability to support resumable uploads.
There are two third-party modules available at the moment, however, both have serious downfalls.
The most popular seems to be https://github.com/vkholodkov/nginx-upload-module - however, it doesn't support any version above 1.3.8. And the author of the module has no plans to update.
Thus neither of these modules is suitable for someone running Nginx 1.4+. Nginx needs a solid, supported resumable upload functionality. If not compiled by default, it should be able to compile with a flag, the same way SPDY is added when compiling."	aaronbeandgg@…
1.3.x	221	Feature Request - X-Accel header to signal if another upstream server should be attempted or not	nginx-module	1.3.x	enhancement	somebody	new	2012-09-18T16:15:38Z	2012-09-19T10:32:55Z	"Imagine the following upstream block:

{{{
upstream name {
    server    1.1.1.1;
    server    2.2.2.2 backup;
}
}}}

At the same time, imagine proxy_next_upstream is set as follows:

{{{
proxy_next_upstream http_503;
}}}

It would be VERY useful to have an X-Accel header that one could pass to nginx from a backend to signal if 'another' backend server (from the same upstream) should be attempted or not.
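
For illustration, a backend response using such a header might look like this (the X-Accel-Next-Upstream name is purely hypothetical -- no such header exists today):

{{{
HTTP/1.1 503 Service Unavailable
Retry-After: 3
X-Accel-Next-Upstream: off
}}}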

The reasoning for this is as follows:

We 'manually' return 503 codes whenever something like a database restart occurs. This is a great way to signal to search engines 'please return, this condition is just temporary'. My understanding is that this is a fairly common practice, and googlebot and bingbot both understand HTTP 503 + Retry-After.

So, the database is temporarily down for restart (let's say a downtime of 1-3 seconds), and we send back a 503. In this case, we DO want nginx to try another server in the upstream block, as the database will most likely be restarted by then, and the request can be fulfilled. It should be noted that our backup servers are proxied via haproxy, which handles the ""try connect -> failed? -> wait -> try again"" situation quite well.

But there are situations when it would be VERY useful to signal to nginx: 'nope, this is a serious issue... don't attempt another backend server'.

The above is just one example, but I think it makes a lot of sense to be able to stop attempting other servers via a header sent from the backend."	riddla riddla
1.3.x	237	Add optional systemd socket activation support	nginx-core	1.3.x	enhancement	somebody	reopened	2012-10-27T11:01:44Z	2023-03-19T16:03:15Z	"The systemd project supports socket activation for services, allowing systemd to listen on the socket initially. When the first connection comes in, systemd starts the service and passes in any listening sockets as file descriptors.

The following patch, which is ready for general implementation review, adds support with a --with-system configuration option. When this support is compiled in, nginx can use ""listen fd:3"" (and other numbers) to specify that a given nginx server should use the inherited file descriptor in lieu of opening its own socket.

The implementation initializes nginx's data structures with (mostly) the same information that would generally be derived from the configured listener. For example, nginx will know the listening IP address and port of an INET or INET6 socket.

Why nginx benefits from this:

 * systemd can listen on privileged ports or socket paths and start nginx with privileges already dropped. This is good for shared/multitenant environments where user configuration of nginx is offered.
 * Virtually no memory or CPU is in use until nginx receives the first request. There is a ~20ms overhead for the first request while nginx starts. It is also possible with systemd socket activation to start nginx by default, before any request has come in.
 * Any service in front of nginx doesn't have to wait for nginx to finish starting (or start at all) before sending in the first request. This is useful for setups where something like Varnish is in front of nginx.
 * Major upgrades or reconfigurations of nginx requiring a full restart are possible without having the listening socket(s) ever go away. If nginx does a clean shutdown (without killing any requests), it's possible to have zero requests fail and no service interruption from the nginx restart.

Known TODOs or possible changes needed in the implementation:

 * Obviously, logging and printf calls need cleanup.
 * Unix sockets don't initialize the path as the address name. They list the ""fd:N"" string. This doesn't affect functionality, but it might be nice to fix.
 * No documentation updates made yet."	David Strauss
1.3.x	289	Add support for HTTP Strict Transport Security (HSTS / RFC 6797)	nginx-core	1.3.x	enhancement		accepted	2013-01-29T21:08:22Z	2020-11-04T18:28:14Z	"It would be great if support for HSTS (RFC 6797) would be added to the nginx-core.

Currently HSTS is ""enabled"" like this
(according to https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security):
{{{
add_header Strict-Transport-Security max-age=31536000;
}}}

However this has at least two downsides:
1. The header is only added when the HTTP status code is 200, 204, 301, 302 or 304.
   - It would be great if the header would always be added
2. The header is added on HTTPS '''and''' HTTP responses, but according to RFC 6797 (7.2.) it should not:
   - ''An HSTS Host MUST NOT include the STS header field in HTTP responses conveyed over non-secure transport.''
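
For reference, both downsides can be worked around in later nginx versions (the `always` parameter of `add_header` was added in 1.7.5), though native support would still be cleaner:

{{{
server {
    listen 443 ssl;
    # 'always' adds the header for every status code (downside 1);
    # placing the directive only in the TLS server block keeps it off
    # plain-HTTP responses (downside 2)
    add_header Strict-Transport-Security max-age=31536000 always;
}
}}}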


RFC 6797: https://tools.ietf.org/html/rfc6797"	petermap.myopenid.com
1.3.x	318	Change response behavior when SSL client certificate won't validate	nginx-module	1.3.x	enhancement		new	2013-03-14T01:20:58Z	2017-01-06T13:18:35Z	"Currently if nginx receives an SSL client certificate that is invalid, nginx returns a 400 Bad Request. This also gets triggered if no certificate was submitted.

This is not good because most user-agents (browsers) then won't prompt the user for a certificate again until the user quits the browser. Instead nginx should, like Apache and other web servers, respond with an SSL connection error so that clients know there was a connection error and can then re-prompt for a certificate."	Sebastian Wyder
1.3.x	319	koi-utf koi-win win-utf in conf are artifacts of the past	nginx-core	1.3.x	enhancement		reopened	2013-03-15T10:55:12Z	2017-02-16T12:58:44Z	Charsets maps koi-utf, koi-win, win-utf are bundled with every nginx package out there. It is year 2013 and everyone uses utf-8. Removing these ancient charmaps could save humanity gigabytes of disk space. It won't affect users who upgrade, and the very minority of new users who need them will find them.	Andrian Budantsov
1.3.x	320	nginx should reliably check client connection close with pending data	nginx-core	1.3.x	enhancement		accepted	2013-03-15T13:16:26Z	2022-01-27T15:04:56Z	"To detect if a connection was closed by a client, nginx uses:

- An EV_EOF flag as reported by kqueue.  This only works if you use kqueue, i.e. on FreeBSD and friends.

- The recv(MSG_PEEK) call to test a case when connection was closed.  This works on all platforms, but only if there are no pending data.

Most notably, this doesn't cover Linux and SSL connections, which are usually closed with pending data (a shutdown alert). To improve things, the following should be implemented (in no particular order):

- SSL_peek() for SSL connections instead of recv(MSG_PEEK), likely with additional c->peek() call.
- Support for EPOLLRDHUP, which is believed to be close to EV_EOF provided by kqueue.

References:
http://mailman.nginx.org/pipermail/nginx/2011-June/027669.html
http://mailman.nginx.org/pipermail/nginx/2011-November/030614.html
http://mailman.nginx.org/pipermail/nginx/2013-March/038119.html"	Sergey Kandaurov
1.3.x	327	Add support for animated GIF to HttpImageFilterModule	nginx-module	1.3.x	enhancement		new	2013-03-28T16:09:26Z	2013-03-28T16:09:26Z	"Hello,

please add full support for animated GIF to HttpImageFilterModule. Currently only the first frame is used when resizing.
"	Jakub Trmota
1.3.x	417	ngx_cache_purge	nginx-core	1.3.x	enhancement		new	2013-09-21T13:44:47Z	2016-09-30T13:30:36Z	would be nice to include in core	Steve Weber
1.3.x	454	disable ngx_http_upstream_store for HEAD requests	nginx-module	1.3.x	enhancement		reopened	2013-11-29T20:15:58Z	2013-12-24T16:24:43Z	"it looks like you missed a check for
{{{
r->method == NGX_HTTP_GET
}}}
in 
{{{
ngx_http_upstream_process_request(ngx_http_request_t *r)
}}}

so, if the fastcgi_store directive is used, an empty file will be stored on a HEAD request "	Proforg M
1.3.x	241	Ability to align cropped images in image_filter	nginx-module	1.3.x	enhancement	somebody	new	2012-11-03T10:53:44Z	2017-02-15T05:57:51Z	"I'd like to see this in nginx core :)

Here's my little project to make it happen: https://github.com/bobrik/nginx_image_filter"	Ivan Babrou
1.4.x	644	nginx rewrite $uri not right	nginx-module	1.4.x	defect		reopened	2014-10-20T09:55:43Z	2016-12-24T16:34:24Z	"request is ""/test?a=1""

when
{{{
rewrite ^/test /t.php/test?a=1 last;
}}}
then $uri is ""/t.php/test"" which is right.

when
{{{
rewrite ^/test /t.php$request_uri last;
}}}
then $uri is ""/t.php/test?a=1"" which is not right.
"	caoyu
1.4.x	774	modern_browser // gecko version overwrites msie version	nginx-module	1.4.x	defect		accepted	2015-07-21T23:17:58Z	2015-07-22T22:17:56Z	"I am not sure if this behavior is still the case in the current version, but it occurs in 1.4 on Ubuntu 14.04.

giving the following config:

##########################################
    modern_browser gecko     27.0;
    modern_browser opera     19.0;
    modern_browser safari    8.0;
    modern_browser msie      9.0;
    modern_browser unlisted;

    ancient_browser Links Lynx netscape4;
##########################################

on an IE11 (Win 8) $ancient_browser == 1. I am not sure if it's only me, but this seems wrong in my understanding of how the module should work.
This applies to a 'real' IE11, but not to a spoofed UA (in chromium 46.0.2462.0) of IE10, IE9, IE8 or IE7 - in those cases everything works as expected.
Interestingly though the next config:

##########################################
    modern_browser gecko     9.0;
    modern_browser opera     19.0;
    modern_browser safari    8.0;
    modern_browser msie      9.0;
    modern_browser unlisted;

    ancient_browser Links Lynx netscape4;
##########################################

works as expected (in terms of the IE behavior), meaning $ancient_browser != 1. But then I would be supporting older Firefox versions - and that is not intended.
The following config also gets $ancient_browser to be != 1

##########################################
    modern_browser gecko     9.0;
    modern_browser opera     19.0;
    modern_browser safari    8.0;
    modern_browser msie      12.0;
    modern_browser unlisted;

    ancient_browser Links Lynx netscape4;
##########################################


_Conclusion_: it looks like the gecko version is overwriting the defined msie version. This does not mean that this is exactly what happens internally."	openid.stackexchange.com/user/e58ad7e4-c803-4f2f-899a-9effcfb76f61
1.4.x	523	Information leak with automatic trailing slash redirect	nginx-core	1.4.x	enhancement		new	2014-03-18T11:07:52Z	2014-03-18T16:51:02Z	"Hi,

Under specific circumstances, Nginx leaks information regarding the topology of the underlying infrastructure.

If no server name is specified (`server_name _;`, not uncommon for multi-tenant webapps) and no `Host` header is passed in a `HTTP/1.0` request on a resource that is a directory, then the redirect uses the IP address of the server as the host part of the `Location` header in the response.

This assumes a `location` block like this one:
{{{
location ~* ^/(favicon.ico$|javascripts|assets|images|stylesheets) {
    # Do things
    break;
}
}}}

This might not be a big issue for a server that is directly accessible from the Internet, but in a configuration where the servers are in a VLAN behind a Load-Balancer, the IP leaked is the private network IP, which should never be made public.

I believe that if Nginx isn't able to construct a response without using private information such as port or IP address, then it should refrain from responding and terminate the request. At the least, an option to disable this behaviour would be nice.

Here is a more graphic representation of the problem (this assumes a server with private IP 192.168.1.15 behind a LB):
{{{
$> nc -nv 1.2.3.4 80
found 0 associations
found 1 connections:
     1:	flags=82<CONNECTED,PREFERRED>
	outif en1
	src 192.168.10.15 port 57239
	dst 1.2.3.4 port 80
	rank info not available
	TCP aux info available
Connection to 1.2.3.4 port 80 [tcp/*] succeeded!

HEAD /images HTTP/1.0

HTTP/1.1 301 Moved Permanently
Date: Tue, 18 Mar 2014 10:49:07 GMT
Content-Type: text/html
Content-Length: 178
Location: http://192.168.1.15/images/ <= private IP leaked
Connection: close
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: public, max-age=315360000
}}}"	Vincent Boisard
1.4.x	936	"For security purposes it is necessary to remove or change the ""server"" header"	nginx-core	1.4.x	enhancement		new	2016-03-23T15:26:19Z	2022-10-11T11:08:31Z	"Advertising what server you are using makes a hacker's job easier.  It would be helpful if there was a configuration setting beyond ""server_tokens off"" that would completely suppress the ""server"" header."	jon.strayer@…
1.5.x	508	nginx rewrite URL decoding first encoded character in URI	nginx-core	1.5.x	defect		new	2014-02-19T18:28:11Z	2019-08-06T00:20:29Z	"I have rules like the following:

    location /trk/ {
        if ($args ~ ""url=(.*)"" ) {
            set $url $1;
            rewrite click.gif$ $url? redirect;
            rewrite redirect.gif$ $url? redirect;
        }
    }

problem is if I have a request like:

http://localhost/trk/click.gif?some_foo&url=http://www.foo.com/%3Fmore_foo%3F

it gets rewritten to

http://www.foo.com/?more_foo%3F

and I need the first %3F to not get decoded to ?

It appears to decode the FIRST and only the first encoded character it finds...

I've tried this with several versions of nginx (1.1.13, 1.3.2 and 1.5.10) - the output of nginx -V is below for 1.5.10.

here's the issue:
{{{
ubuntu@Nginx-Test-Zusw1b-S01:~$ GET -Sd ""http://localhost/trk/click.gif?some_foo&url=http://www.foo.com/%3Fmore_foo%3F""
GET http://localhost/trk/click.gif?some_foo&url=http://www.foo.com/%3Fmore_foo%3F --> 302 Moved Temporarily
GET http://www.foo.com/?more_foo%3F --> 200 OK
}}}
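
One possible workaround (untested against this exact case, and only a sketch): capture the parameter from `$request_uri`, which holds the raw, still-encoded request line, and emit the redirect with `return` instead of `rewrite` so the value is sent as-is:

{{{
location /trk/ {
    # $request_uri keeps the original percent-encoding
    if ($request_uri ~ ""url=([^&]*)"") {
        return 302 $1;
    }
}
}}}
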
Maybe this is a ""feature"" but I'm unable to find it documented anywhere, nor have I found a way to ""turn it off"""	Jerry Hoffmeister
1.5.x	525	Max connection limit too low (http_limit_conn_module)	nginx-module	1.5.x	enhancement		new	2014-03-26T14:14:40Z	2014-03-26T15:18:09Z	"Hello,

Trying to set 'per server' connection limit:

http:
  limit_conn_zone $server_name zone=perserver:10m;
location:
  limit_conn perserver 165000;

Getting error:
  nginx: [emerg] connection limit must be less '''65536''' in /etc/nginx/sites-enabled/

The limit seems to be hard-coded: 
  http://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_limit_conn_module.c#L733

nginx serves all content from a single port (80). Is there a reason for this limit? Could it be raised/removed?

PS: Thanks for awesome server!"	Čeněk Zach
1.6.x	960	TCP connection re-use without upstream conf	nginx-core	1.6.x	enhancement		new	2016-04-27T05:27:45Z	2016-04-27T05:27:45Z	"Hi, 

We are using nginx as an HTTP proxy.
We have to connect to only a couple of HTTP servers.
There is high load generated by the clients, and we want to reuse the TCP connections.
This can be achieved by adding upstream servers (as described in https://ma.ttias.be/enable-keepalive-connections-in-nginx-upstream-proxy-configurations/ ).
But the list of HTTP servers can change.
Please let us know if there is some configuration with which we can reuse TCP connections without needing an upstream server configuration. (From https://www.ruby-forum.com/topic/4486861 I understand that dynamic configuration of upstream servers is not possible.)
If it is not supported yet, please enhance nginx so that, with a new configuration option, TCP connections could be reused without needing an upstream server configuration.
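
For reference, the keepalive setup described in the linked article looks roughly like this (addresses are made up):

{{{
upstream backend {
    server 10.0.0.1:80;
    server 10.0.0.2:80;
    keepalive 32;                      # idle connections kept per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection ''; # required for upstream keepalive
    }
}
}}}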
		
With Best Regards,
Diwakar
"	diwakar.jois@…
1.6.x	606	lower log level of ngx_http_access_module forbidden access	nginx-core	1.6.x	enhancement		new	2014-08-17T11:33:51Z	2021-03-10T22:15:54Z	"When using the deny/allow ip directives of the ngx_http_access_module,
nginx logs the denied accesses with level ""error"".
If there are many unauthorized clients, it fills the error log with useless messages,
and changing the log level is not acceptable since it hides legitimate errors.

I propose to set the log_level for ""access forbidden by rule"" messages to info, notice, or warn
instead of error.
"	Jérémy Lal
1.6.x	609	Apply xslt-html-parser patch to http_xslt_module (used by Diazo)	nginx-module	1.6.x	enhancement		new	2014-08-21T15:57:15Z	2016-01-27T23:51:15Z	"Hi,

Please integrate the xslt-html-parser patch to the http_xslt_module. This is a feature notably used by Diazo (diazo.org). Here is the patch: https://github.com/jcu-eresearch/nginx-custom-build/blob/master/nginx-xslt-html-parser.patch. 

The patch applies to src/http/modules/ngx_http_xslt_filter_module.c; I applied it with `patch src/http/modules/ngx_http_xslt_filter_module.c nginx-xslt-html-parser.patch`.

Here is a blog where I described what I did to get this working:
http://en.wordpress.managence.com/?p=15

I just read the patch, and it seems OK. It adds an htmlFreeParserCtxt call if the input is HTML rather than XML. My guess is that this lets XSLT work on input that is not quite well-formed XML but is well-formed enough as HTML (the goal of the HTML parser, I guess).
"	Christopher Mann
1.6.x	632	option to send the access log to stdout	nginx-core	1.6.x	enhancement		new	2014-09-24T09:45:10Z	2014-09-24T09:45:10Z	"When nginx runs in a container or under a process manager that provide own logging facilities or to prevent the nginx process from manipulate the written log it would be useful if the `access_log` directive supported setting the log path to stdout similarly as `error_log` allows to send the log to stderr.

As a workaround one can in some cases use `access_log /dev/stdout` or similar. However, as nginx calls `open()` on the supplied path, this may fail depending on how the receiving end and its permissions are set up. A better workaround is to use mkfifo to create a named pipe and then use `cat /path-to-pipe` to send the log out, but this is messy.
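
A sketch of the mkfifo workaround mentioned above (paths are made up):

{{{
mkfifo /var/log/nginx/access.pipe
cat /var/log/nginx/access.pipe &
# then in nginx.conf:
#     access_log /var/log/nginx/access.pipe;
}}}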
 "	Igor Bukanov
1.7.x	640	enable usage of $ in variable	nginx-core	1.7.x	enhancement		new	2014-10-13T09:30:55Z	2024-11-06T13:30:37Z	"I would like to set this header in a location:

{{{
proxy_set_header X_DBFILTER '^db-main$';
}}}

But it fails:
nginx: [emerg] invalid variable name in /etc/nginx/nginx.conf:121

I tried these, but again fails:
{{{
proxy_set_header X_DBFILTER '^db-main\$';
proxy_set_header X_DBFILTER '^db-main$$';
}}}
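
A well-known workaround (no core change needed) is to define a variable holding a literal dollar sign with `geo`, whose values are not variable-expanded:

{{{
geo $dollar {
    default '$';   # static string, never expanded
}

server {
    location / {
        proxy_set_header X_DBFILTER '^db-main${dollar}';
    }
}
}}}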

Please make this work; it is very basic functionality!
My suggestion is to make the $$ sequence mean a literal $ character."	Csaba Tóth
1.7.x	689	ngx_http_referer_module issue	nginx-module	1.7.x	enhancement		new	2014-12-25T09:28:44Z	2014-12-25T13:47:13Z	"hello,
when I use nginx (any version) as a proxy server,
I found that the ngx_http_referer_module does not work as expected.
I configure it like this:

valid_referers none blocked server_names $host;

I use the $host variable and it does not work as expected.
Is this a BUG or not?
If it is not a BUG, I wish you would add this feature in the next nginx version.
thanks.
Merry Christmas!
"	vvip859@…
1.7.x	633	limit_except causes 404	nginx-core	1.7.x	defect		new	2014-09-29T01:57:38Z	2017-09-18T12:16:02Z	"This does not work:

{{{
    location / {
        limit_except OPTIONS {
            auth_basic ""Access is restricted."";
            auth_basic_user_file /srv/app/www/htpasswd;
        }
        try_files $uri $uri/ @app;
    }

    location @app {
        include uwsgi_params;
        uwsgi_pass unix:///run/uwsgi/app/app/socket;
    }
}}}

$ curl -ski https://localhost -X OPTIONS
HTTP/1.1 200 OK

$ curl -ski -u username:password https://localhost -X GET                                                                    
HTTP/1.1 404 Not Found

error.log:
[error] 23859#0: *25 ""/var/www/site/index.html"" is not found (2: No such file or directory), client: 127.0.0.1, server: localhost, request: ""GET / HTTP/1.1"", host: ""localhost""

But this does work:

{{{
    location / {
        try_files $uri $uri/ @app;
    }

    location @app {
        limit_except OPTIONS {
            auth_basic ""Access is restricted."";
            auth_basic_user_file /srv/app/www/htpasswd;
        }
        include uwsgi_params;
        uwsgi_pass unix:///run/uwsgi/app/app/socket;
    }
}}}

Previously I had:

{{{
    auth_basic ""Access is restricted."";
    auth_basic_user_file /srv/app/www/htpasswd;

    location / {
        try_files $uri $uri/ @app;
    }

    location @app {
        include uwsgi_params;
        uwsgi_pass unix:///run/uwsgi/app/app/socket;
    }
}}}

This works too with the exception that OPTIONS requires a username and password."	Tom Vaughan
1.7.x	738	Describe how to extend mime.types in types docs	documentation	1.7.x	defect		new	2015-03-27T09:48:09Z	2021-10-27T12:48:49Z	"The correct way to extend `mime.types` with `types` is to include it in the same section:

{{{
include mime.types;
types {
    # additional types go here
}
}}}

From here: http://stackoverflow.com/questions/16789494/extending-default-nginx-mime-types-file

But this is not documented in: http://nginx.org/en/docs/http/ngx_http_core_module.html#types"	anatoly techtonik
1.7.x	752	try_files + subrequest + proxy-handler problem	nginx-core	1.7.x	defect		accepted	2015-04-23T14:58:37Z	2015-04-23T15:14:12Z	"When using subrequests with try_files the following behaviour is observed.

{{{
   server {
       listen       8081;
       default_type text/html;

       location /uno {   return 200 ""uno  "";   }
       location /duo {   return 200 ""duo  "";   }
       location /tres {  return 200 ""tres  "";  }
   }


   server {
       listen       8080;

       location / {
           root /tmp;
           try_files /tres =404;
           proxy_pass http://127.0.0.1:8081;
           add_after_body /duo;
       }
   }
}}}


Assuming /tmp/tres exists, a request to

http://127.0.0.1:8080/uno

returns **""uno  tres ""**, not **""uno  duo ""** or  **""tres tres ""**.

I.e., the main request assumes that the request URI is unmodified and passes the original request URI, ""/uno"".
But in a subrequest the URI is modified and nginx uses the modified URI, ""/tres"".

This is believed to be a bug, and one of the following should be done:

- `try_files` should reset the `r->valid_unparsed_uri` flag if it modifies the URI;
- or `try_files` should not modify the URI at all.

See [[http://mailman.nginx.org/pipermail/nginx-ru/2015-April/055769.html|this thread]] (in Russian) for additional details."	openid.yandex.ru/emychlo
1.7.x	586	variable support for client_max_body_size	nginx-core	1.7.x	enhancement		new	2014-06-30T07:04:30Z	2020-06-03T09:37:37Z	"I would like to suggest adding nginx variable support to the client_max_body_size directive. 
This would be quite useful to set this value dynamically."	Tarek Ziade
1.7.x	617	Add secondary groups configuration option in nginx user conf directive	nginx-core	1.7.x	enhancement		new	2014-08-29T00:43:01Z	2014-08-29T10:32:26Z	"Would be great if we could specify secondary groups as 3rd to nth arguments in the user directive of conf files.
This would use setgroups() I suppose."	Vivien Leroy
1.7.x	658	"Implement new type of ""resolver"" -- ""system"" [for Docker usage]"	nginx-core	1.7.x	enhancement		new	2014-11-07T08:51:32Z	2016-05-04T20:54:50Z	"It would be great if I could use ""system"" resolver for converting DNS name into IP. That would make usage of nginx inside Docker way easier.

When you use dynamic resolving (see example below) you have to set your DNS server IP via the ""resolver"" parameter.

It is fine for most cases, but in some setups (e.g. with Docker) you don't have a dedicated DNS server but have resolving configured on the machine (using /etc/hosts, /etc/resolv.conf and so on). Also, my DNS names are not public, so I cannot use 8.8.8.8 or any other public DNS.

That brings us to the idea of using the same nginx code that does static resolving during nginx.conf parsing -- the code that uses the system function to resolve names.

Since you already have dynamic name resolving (so you solved all surrounding issues like non-blocking resolving, etc.) it seems to be a minor change to use another function for resolving.

My current config (not valid!):

{{{
server {
    listen 80;
    server_name ~(?P<project>[^.]+)\.localhost$;
    location / {
        proxy_pass http://$project:8000;
    }
}
}}}

All projects have entry in /etc/hosts generated by Docker on run.

Ideal solution:
{{{
server {
    listen 80;
    server_name ~(?P<project>[^.]+)\.localhost$;
    resolver system; # here is the trick
    location / {
        proxy_pass http://$project:8000;
    }
}
}}}"	Артём Скорецкий
1.7.x	692	Introduce variable to get SSL cipher bits of current connection	nginx-module	1.7.x	enhancement		new	2015-01-08T13:32:44Z	2015-01-08T13:32:44Z	Currently it's only possible to see/use the SSL protocol ($ssl_protocol) and SSL cipher name ($ssl_cipher); the SSL cipher bit count is missing. See the attached patch (which might require a bit of polishing), which introduces an $ssl_cipher_bits variable that returns the number of secret bits used for $ssl_cipher.	Marcin Deranek
1.7.x	697	Couldn't produce multiple error log items from FastCGI	nginx-core	1.7.x	enhancement		new	2015-01-16T11:52:40Z	2016-05-12T14:06:21Z	"When issuing multiple error_log() PHP calls, nginx logs ONE log item in the error log, broken into several lines.

I think it should be possible to produce several log items from multiple error_log() calls (outputting newline-separated stderr to FastCGI), as in Apache.

nginx + FastCGI (unix socket) php-fpm
"	Viktor Szépe
1.7.x	711	Support X-Forwarded-Proto or similar when operating as a backend behind a SSL terminator	nginx-core	1.7.x	enhancement		new	2015-02-03T13:39:25Z	2018-05-23T13:03:26Z	"Currently there is no way to override $scheme and $https variables when operating as a backend server behind a SSL terminator.

Most issues can be worked around by hardcoding https, or by referring to an x-forwarded-proto variable in rewrites etc. instead of $scheme, but this does not work for nginx-initiated redirects (such as when adding a trailing slash), and it also complicates configuration (possibly many lines need changing just to move SSL to a terminating device).
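
For illustration, the usual map-based workaround (variable names are made up), which covers rewrites but, as noted above, not nginx-initiated redirects:

{{{
map $http_x_forwarded_proto $proxied_scheme {
    default $scheme;   # direct requests keep the real scheme
    https   https;     # trust the terminator's header
}
}}}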

There are two ways I can see this being resolved, one by allowing inclusion of the scheme in the server_name option (or as a separate server_scheme option or similar) which is the route Apache takes (see http://httpd.apache.org/docs/2.2/mod/core.html#servername for more info)

The other option would be handling it similar to how the realip module handles setting the client address, but this lacks the option to hardcode it to always be https if no suitable upstream header is available."	Tiernan Messmer
1.8.x	1406	"duplicated ""content-encoding"" while proxy server return a empty content-encoding header"	nginx-core	1.8.x	defect		new	2017-10-27T10:28:37Z	2017-10-27T10:28:37Z	"use nginx as a reverse proxy. when the source site returns 

HTTP/1.1 200 OK
Host: 127.0.0.1:1234
Connection: close
X-Powered-By: PHP/5.6.30
Content-Encoding:
Content-type: text/html; charset=UTF-8

with nginx configured to enable gzip, it then returns

HTTP/1.1 200 OK
Server: nginx/1.8.1
Date: Fri, 27 Oct 2017 10:25:40 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Host: 127.0.0.1:1234
X-Powered-By: PHP/5.6.30
Content-Encoding:
Content-Encoding: gzip"	monstersb@…
1.8.x	1446	gzip_types can't handle types longer than 46 chars	nginx-module	1.8.x	defect		new	2017-12-14T22:31:22Z	2022-09-14T02:40:25Z	"We're using content types like `application/vnd.koordinates.featureQuery1-dojo+gmaps+json` to serve API responses. 

We can't add them to `gzip_types`:

{{{
 * Restarting nginx nginx

nginx: [emerg] could not build the test_types_hash, you should increase test_types_hash_bucket_size: 64
nginx: configuration file /etc/nginx/nginx.conf test failed

}}}

Turns out nginx is rejecting any types over 46 characters.

This was mentioned in #203 but not fixed because the reporter was doing this unintentionally.

The only way to fix this at present appears to be to build nginx ourselves and change the magic number.

I think the appropriate magic number to change might be https://github.com/nginx/nginx/blob/752f66bf7d70fae2bf05fbf5941ff4be52b2b9a5/src/http/ngx_http.c#L2032
Any chance of setting that from a conf directive?


Btw, I'm aware this is an old version of nginx, but from the source code I see no indication that it's been changed in recent releases."	craigds@…
1.8.x	790	Support for send log with GELF (Graylog Extended Log Format)	nginx-core	1.8.x	enhancement		new	2015-09-18T10:17:30Z	2015-09-18T10:17:30Z	https://www.graylog.org/resources/gelf/	ismaelpuerto@…
1.9.x	868	new variable: $remote_addr_anon	nginx-core	1.9.x	enhancement		new	2015-12-23T20:30:10Z	2023-04-03T19:58:38Z	"I'd like to suggest a new feature:

There should be a new variable:
    (I suggest the name: $remote_addr_anon)

That variable should be an anonymized version of the $remote_addr variable.

In the case of IPv4, the last octet should be replaced by '1':
i.e. when $remote_addr is 123.45.67.89
then $remote_addr_anon should be 123.45.67.1

I'm not sure how to achieve the same thing for ipv6,
but maybe replacing the last octet there would be good enough for a start.

I'm from Germany; we are not allowed to store full IP addresses in any log files,
as this is deemed a violation of privacy and is actually forbidden by law.

some solutions have been suggested:
see: http://stackoverflow.com/questions/6477239/anonymize-ip-logging-in-nginx

We still want to retain some part of the ip address,
so that we can still apply geoip.
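
For reference, a config-level approximation of the requested variable can be built with `map` for IPv4 (the $remote_addr_anon name is the one proposed here; IPv6 is not handled):

{{{
map $remote_addr $remote_addr_anon {
    # keep the first three octets, default the last one to 1
    ~^(?P<ip>\d+\.\d+\.\d+)\.\d+$  $ip.1;
    default                        0.0.0.0;
}

log_format anon '$remote_addr_anon - $remote_user [$time_local] $request';
}}}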

To my belief that would then be OK under German law,
as long as we drop the last octet (i.e. default it to 1).

This would really help all of us using nginx in Germany,
and it might also be a welcome privacy enhancement around the world.

Let me put that another way (so why this is a critical enhancement):
Anyone who writes any log files using the standard log facility is breaking German law.

I believe that the $remote_addr is set really deep in the core,
I'd like to suggest that the $remote_addr_anon should be set at the same place, deep in there.

This would really, really be a very welcome feature.

"	eike.inter.net@…
1.9.x	798	Implement http_brotli_static module	nginx-core	1.9.x	enhancement		new	2015-09-23T17:23:57Z	2020-04-25T23:19:02Z	"Today, nginx supports the http_gzip_static module (http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html) for doing ahead-of-time gzip compression of assets; this is very useful because it allows using high-ratio/low-performance compressors like Zopfli.

Web Browsers (Chrome and Firefox, to start) are about to start supporting a new Content-Encoding, named Brotli, which offers MUCH improved compression ratios but with slow performance. http://textslashplain.com/2015/09/10/brotli/ As such, we need nginx to support a similar module (e.g. http_brotli_static) for this new content-encoding.

"	bayden@…
1.9.x	944	Enhance $server_addr to return original IP even after local DNAT	nginx-core	1.9.x	enhancement		new	2016-03-31T16:34:28Z	2016-03-31T16:49:18Z	"I'd like to suggest improving the current $server_addr or introducing a new variable that returns the original IP address used by a client to connect to nginx, even if the connection was diverted by a local DNAT rule.

My use case is the following:

a) I create iptables rules on the host:
      # iptables -t nat -A OUTPUT -p tcp -d 192.168.170.1 --dport 7654 -j DNAT --to-destination 127.0.0.1:11123
      # iptables -t nat -A OUTPUT -p tcp -d 192.168.170.2 --dport 7654 -j DNAT --to-destination 127.0.0.1:11123
b) Run nginx on localhost port 11123
c) Use telnet to hit 192.168.170.1:7654 and 192.168.170.2:7654
d) I need the load balancer to choose different upstreams depending on the
address I specified in step (c)

Please note that HAProxy supports this use case via the ""dst"" acl
https://github.com/haproxy/haproxy/blob/master/src/proto_tcp.c#L600

Thanks."	kvrico@…
1.9.x	1134	CVE-2016-1247	nginx-core	1.9.x	enhancement		new	2016-11-18T15:06:27Z	2016-11-19T17:34:49Z	"Hi!

Recently there was a vulnerability reported against Debian nginx package [1]. It seems to be more general and applicable to different nginx installations on various systems, so it needs to be fixed in nginx itself.

The problem is that if a log file can be replaced with a symbolic link, it allows overwriting files owned by root. The solution is to perform some checks before opening log files. If (a) nginx has not dropped root privileges, (b) the directory where the log file is placed is writable by a non-root user, and (c) the log file is a symbolic link, nginx should decline to open it.

[1]: https://legalhackers.com/advisories/Nginx-Exploit-Deb-Root-PrivEsc-CVE-2016-1247.html"	mikhirev@…
1.9.x	1222	Update doc to mention about HTSP	documentation	1.9.x	enhancement		new	2017-03-17T22:45:01Z	2017-03-18T16:25:27Z	"security.stackexchange.com/questions/154166/httpoxy-what-about-https-proxy-when-dealing-with-httpoxy-vulnerability
"	privacyisright@…
1.9.x	772	No Vary header on 304 Response.	nginx-core	1.9.x	defect	Maxim Dounin	assigned	2015-07-05T06:55:10Z	2017-05-11T17:05:53Z	"Yes, I know it's Tengine, but I'm betting this will be in nginx 1.6.2 as well, as it's better to fix it ""upstream"" so that everyone gets the fix. Tested via RedBot.org.

Here, everything is working on regular 200 Response

    HTTP/1.1 200 OK
    Server: Tengine
    Date: Sun, 05 Jul 2015 06:45:18 GMT
    Content-Type: text/html; charset=UTF-8
    Last-Modified: Sun, 05 Jul 2015 03:15:13 GMT
    Transfer-Encoding: chunked
    Connection: keep-alive
    Vary: Accept-Encoding
    Expires: Sun, 12 Jul 2015 06:45:18 GMT
    Cache-Control: max-age=604800
    Strict-Transport-Security: max-age=63072000; includeSubdomains; preload
    X-Content-Type-Options: nosniff
    Content-Encoding: gzip


Now, the Vary header is gone (and I'm pretty sure it should be there for this response)

    HTTP/1.1 304 Not Modified
    Server: Tengine
    Date: Sun, 05 Jul 2015 06:45:18 GMT
    Last-Modified: Sun, 05 Jul 2015 03:15:13 GMT
    Connection: keep-alive
    ETag: ""5598a141-5eb""
    Expires: Sun, 12 Jul 2015 06:45:18 GMT
    Cache-Control: max-age=604800
    Strict-Transport-Security: max-age=63072000; includeSubdomains; preload
    X-Content-Type-Options: nosniff"	uudruid74.startssl.com
1.9.x	861	Possibility of Inconsistent HPACK Dynamic Table Size in HTTP/2 Implementation	nginx-module	1.9.x	defect		accepted	2015-12-15T20:47:36Z	2015-12-17T11:38:24Z	"The hpack dynamic table is only initialized upon addition of the first entry (see ngx_http_v2_add_header) in http/v2/ngx_http_v2_table.c.

If a dynamic table size update is sent before the first header to be added, the size will be set appropriately. However, once the first header is added, the table size is updated with NGX_HTTP_V2_TABLE_SIZE, resulting in a different size than the client.
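The sequence can be modelled with a toy table; this is a hypothetical sketch of the lazy-initialization hazard described above, not the actual C code:

```python
NGX_HTTP_V2_TABLE_SIZE = 4096  # compile-time default

class LazyHpackTable:
    # Toy model: the table is only set up when the first entry is added.
    def __init__(self):
        self.size = NGX_HTTP_V2_TABLE_SIZE
        self.initialized = False
        self.entries = []

    def size_update(self, new_size):
        # dynamic table size update received from the peer
        self.size = new_size

    def add_header(self, name, value):
        if not self.initialized:
            # the bug: first insertion resets the size to the default,
            # discarding a size update that arrived earlier
            self.size = NGX_HTTP_V2_TABLE_SIZE
            self.initialized = True
        self.entries.append((name, value))

t = LazyHpackTable()
t.size_update(512)                # peer shrinks the table first...
t.add_header(':method', 'GET')    # ...then adds its first header
print(t.size)                     # 4096: the two endpoints now disagree
```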

After a brief reading of the HTTP/2 and HPACK specification, it appears that updating the dynamic table size before adding any headers is allowed."	tim-becker@…
1.9.x	869	open_file_cache with NGX_HAVE_PREAD 0	nginx-core	1.9.x	defect		new	2015-12-23T20:36:17Z	2015-12-30T15:38:52Z	"I'm running nginx on a Unix-like embedded platform that doesn't have pread() implemented, so I have NGX_HAVE_PREAD set to 0 and am using the Unix version of ngx_files.c.

I've been testing out the open file cache and I've found what seems to be an incompatibility with systems not supporting pread.

The sys_offset (from ngx_file_t) used in the non-pread case isn't cached in the open file cache. When a module opens the cached file, the sys_offset in its ngx_file_t starts out at 0 instead of the current position of the file.  This causes issues when the file is not actually at position 0, but both sys_offset and offset are at 0 (thus we don't seek).

Additionally, if two HTTP requests for the same file happen simultaneously, each will have its own sys_offset and won't be updated when the other request moves the file position.

Is the NGX_HAVE_PREAD 0 configuration not used on any POSIX systems nowadays, making it dead code?

One fix that I could think of is to just seek every time since we can't rely on sys_offset in these configurations.
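That seek-every-time fix amounts to emulating pread(); a sketch in Python (the helper name is illustrative):

```python
import os, tempfile

def pread_emulated(fd, count, offset):
    # Seek unconditionally before every read instead of trusting a
    # cached sys_offset, so a shared fd (as in the open file cache)
    # stays correct even if another request moved the file position.
    os.lseek(fd, offset, os.SEEK_SET)
    return os.read(fd, count)

fd = os.open(tempfile.mktemp(), os.O_RDWR | os.O_CREAT)
os.write(fd, b'0123456789')
os.lseek(fd, 7, os.SEEK_SET)     # another request moved the position
print(pread_emulated(fd, 3, 2))  # b'234', unaffected by the stale position
os.close(fd)
```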

If we want to avoid the seek and fix the multiple-requests case, we would need to update a single shared copy of sys_offset, for which I don't believe there is infrastructure in place."	Joel Cunningham
1.9.x	1216	Confusing use of 'URI' when referring to a path in the proxy_pass documentation	documentation	1.9.x	defect		new	2017-03-13T12:01:40Z	2022-05-18T17:44:23Z	"In http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass it states:

----
A request URI is passed to the server as follows:

If the proxy_pass directive is specified with a '''URI''', then when a request is passed to the server, the part of a normalized request URI matching the location is replaced by a URI specified in the directive:

location /name/ {
    proxy_pass http://127.0.0.1/remote/;
}

If proxy_pass is specified without a '''URI''', the request URI is passed to the server in the same form as sent by a client when the original request is processed, or the full normalized request URI is passed when processing the changed URI:

location /some/path/ {
    proxy_pass http://127.0.0.1;
}
----

The highlighted uses of 'URI' should IMVHO be 'path', as in both cases a URI (with a scheme and a host, and the path in the first case) is given.
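The replacement rule being quoted can be illustrated for a simple prefix location; the function below is a hypothetical sketch, not nginx code:

```python
def upstream_uri(request_uri, location_prefix, proxy_pass_path=None):
    # Sketch of the prefix-replacement rule for a plain prefix
    # location (no regex). Names here are illustrative only.
    if proxy_pass_path is None:
        # proxy_pass specified without a path: URI passed through unchanged
        return request_uri
    # the part matching the location is replaced by the directive's path
    return proxy_pass_path + request_uri[len(location_prefix):]

print(upstream_uri('/name/a/b', '/name/', '/remote/'))  # /remote/a/b
print(upstream_uri('/some/path/x', '/some/path/'))      # /some/path/x
```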
"	ankon@…
1.9.x	770	Enable PolarSSL or Botan as a compile-time alternative to OpenSSL	nginx-core	1.9.x	enhancement		new	2015-06-24T04:16:10Z	2015-06-24T04:16:10Z	"Timing attacks have plagued OpenSSL for over a decade. Having more than one choice for a TLS library is likely a good thing.

To my knowledge, no one has attempted to integrate nginx with Botan (http://botan.randombit.net/), however several forks of nginx have enabled mbed TLS (formerly PolarSSL; https://tls.mbed.org/) support:

* https://github.com/Yawning/nginx-polarssl
* https://github.com/alinefr/nginx-polarssl (fork of Yawning's effort)

There are, of course, other options (https://en.wikipedia.org/wiki/Comparison_of_TLS_implementations), but Botan and mbed TLS both show promise. As of this writing, they are the only two libraries to support Curve25519 (which is kind of embarrassing for the rest of the world, but I digress...)."	launchpad.net/~posita
1.9.x	775	Support for more complex satisfy configurations	nginx-core	1.9.x	enhancement		new	2015-07-27T20:13:20Z	2015-07-27T20:13:20Z	"It would be nice to have more expressive satisfy directives so that we can build more complex AuthN and AuthZ combinations. A RedHat-style example of such a pseudo configuration would look like:

{{{
( authenticate via Kerberos
		*and* authorize with different module against FreeIPA )
	or ( authenticate via SAML
		*and* (authorize with different module against FreeIPA 
			or authorize based on static list of groups ))
	or allow access from domain .internal.example.com
}}}
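The nested and/or combination above could be modelled as a small expression tree; a hypothetical sketch (the policy names and the evaluator are purely illustrative, not an existing nginx facility):

```python
def evaluate(policy, results):
    # Recursively evaluate a nested ('and'/'or'/'check', ...) policy
    # tree against the outcomes of individual auth checks.
    op = policy[0]
    if op == 'check':
        return results[policy[1]]
    if op == 'and':
        return all(evaluate(p, results) for p in policy[1:])
    if op == 'or':
        return any(evaluate(p, results) for p in policy[1:])
    raise ValueError(op)

policy = ('or',
          ('and', ('check', 'kerberos'), ('check', 'freeipa')),
          ('and', ('check', 'saml'),
                  ('or', ('check', 'freeipa'), ('check', 'static_groups'))),
          ('check', 'internal_domain'))

print(evaluate(policy, {'kerberos': False, 'freeipa': False,
                        'saml': True, 'static_groups': True,
                        'internal_domain': False}))  # True
```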


At the moment this would require an external daemon to handle the auth."	ahutchings
1.9.x	778	Immediatley expire cached responses	nginx-core	1.9.x	enhancement		new	2015-09-04T18:16:34Z	2015-09-04T18:16:34Z	"I'm using nginx as a reverse proxy in front of my backend servers which do authentication/authorization before executing queries to our databases.

The queries are expensive so I've configured nginx to cache the responses from that part of our API.

Since authorization is done on the backend servers and not in nginx, I've configured nginx to revalidate requests for cached URLs with `proxy_cache_revalidate on`.

nginx only seems to revalidate stale responses so I set `proxy_cache_valid` to the smallest value I could (1s).

This leaves a 1 second window open for any client to request that URL and receive the cached response directly from nginx without it being revalidated by a backend server.

Would it be possible for nginx to allow 0 second durations so that responses become immediately stale and, therefore, require revalidation before being released to any other client?
"	jdiamond@…
1.9.x	782	nginx doesn't check delta CRLs	nginx-core	1.9.x	enhancement		reopened	2015-09-09T11:45:24Z	2017-02-15T20:57:29Z	"Hi,

we are using nginx for certificate authentication. We have multiple trusted certificate authorities (CAs) and related certificate revocation lists (CRLs) in one pem file, which is updated on a daily basis:

ssl_client_certificate /etc/nginx/clientcerts/trustedCAs.pem;
ssl_crl /etc/nginx/clientcerts/revoked_certs.pem;

This works fine so far when a certificate authority has only one corresponding CRL. However, when a CA uses so-called ""Delta CRLs"", a revoked client certificate that is present only in the delta CRL seems not to be read by nginx: the revoked certificate is accepted. If the revoked certificate is inserted directly into the ""main"" CRL, nginx declines the authentication.

Does nginx support ""Delta CRLs""? I believe this is a security issue, because some certificate authorities make use of ""Delta CRLs""; if nginx ignores them, a client certificate is accepted although it is revoked.

"	Niko
1.9.x	812	Fetch OCSP responses on startup, and store across restarts	nginx-core	1.9.x	enhancement		new	2015-10-10T19:30:15Z	2020-06-06T17:23:39Z	"Once TLS Feature (https://datatracker.ietf.org/doc/draft-hallambaker-tlsfeature/?include_text=1, formerly known as OCSP Must Staple) lands, CAs will be able to sign certs with a bit that says ""Do not trust this certificate unless it is accompanied by a stapled OCSP response."" For Nginx users to be able to use such certificates, they need to be able to serve stapled OCSP with high reliability and speed. That means two things:

 - Nginx should prefetch OCSP responses for all configured certificates on startup, and when the responses are nearing their NextUpdate time.
 - Nginx should store OCSP responses in long-term storage, to minimize the cost of startup fetching, and to ensure that if an OCSP responder is temporarily unreachable at startup time, it doesn't prevent correctly serving the relevant site."	jsha@…
1.9.x	853	cache_use_stale updating behaviour when new responses are not cacheable	nginx-core	1.9.x	enhancement		accepted	2015-12-08T15:04:03Z	2023-11-27T03:28:35Z	"The configuration is as follows:
fastcgi_cache_path /var/tmp/nginx/fastcgi_cache levels=1:2 keys_zone=fcgi_cache:16m max_size=1024m inactive=35m;
fastcgi_cache_revalidate on;

fastcgi_cache fcgi_cache;
fastcgi_cache_valid 200 301 302 304 10m;
fastcgi_cache_valid 404 2m;
fastcgi_cache_use_stale updating error timeout invalid_header http_500 http_503;
fastcgi_cache_key ""$request_method|$host|$uri|$args"";
fastcgi_no_cache $cookie_nocache $arg_nocache $cookie_NRGNSID $cookie_NRGNTourSID $cookie_failed_login;
fastcgi_cache_bypass $cookie_nocache $arg_nocache $cookie_NRGNSID $cookie_NRGNTourSID $cookie_failed_login;

Currently the backend responds with 200 and the headers ""Cache-Control: no-store, no-cache, must-revalidate"" and ""Pragma: no-cache"".
But two weeks ago it returned, for a while, a 302 without any caching restrictions, and that response got cached under fastcgi_cache_valid 10m.
Since then, solitary requests get upstream_cache_status EXPIRED and the backend's response, but when several arrive at once, UPDATING kicks in and the two-week-old redirect is served from the cache.
Requests arrive regularly, so eviction via inactive=35m never happens.

The behaviour is fully explained by the mechanics of the cache, but not from the standpoint of human expectations.
I would like a mechanism for invalidating such stale data in the cache other than deleting the entries on the filesystem with an external script.
For example, one more cache_path parameter that sets a maximum lifetime for expired cache entries, even when they are still being accessed.
"	Oleksandr Typlyns'kyi
1.9.x	915	"""Upgrade"" header should not be proxied over h2"	nginx-module	1.9.x	enhancement		new	2016-03-01T16:18:46Z	2020-11-05T12:56:35Z	"When proxying an HTTP/2-enabled webserver with nginx, nginx fetches resources using HTTP/1.1, which the backend server tries to upgrade to HTTP/2 by sending the ""Upgrade: h2"" header.
In the default configuration, this header is then forwarded to the client, which is incorrect.
In the case of nghttp, this is interpreted as an error:


{{{
inflatehd: header emission: upgrade: h2
recv: proclen=10
recv: HTTP error: type=1, id=13, header upgrade: h2
[  0.008] [INVALID; error=Invalid HTTP header field was received] recv HEADERS frame <length=798, flags=0x04, stream_id=13>
          ; END_HEADERS
          (padlen=0)
          ; First response header
recv: [IB_IGN_HEADER_BLOCK]
}}}

cf. https://github.com/curl/curl/issues/674

An example setup would be using httpd-2.4 as a backend.

This is easily remedied with the following nginx proxy config:

{{{
proxy_hide_header      Upgrade;
}}}


But maybe the default behaviour could be improved?"	Guillaume Rossolini
1.9.x	938	Module concept: thumbnails as part of progressive JPEG, PNG	nginx-module	1.9.x	enhancement		new	2016-03-24T17:16:53Z	2017-02-17T10:49:02Z	"Hello. I don't know where best to raise this, so I'm writing here because I don't want the idea to get lost. It may well be unworkable, but I'll describe it just in case.

The proposal is a module that serves progressive images only up to a certain level. Progressively encoded images contain, roughly speaking, preview levels that correspond to particular pixel dimensions. The idea is to send the image exactly up to the level that covers the thumbnail size and then close the connection. The client receives only part of the image, but it renders correctly.
That would make it possible to forget about generating thumbnails and stop spending disk space on them. Some traffic overhead is possible, since the nearest covering level may be 500x500 rather than 200x200; this would have to be checked on a large image collection, and in such cases the previous level could be taken if it is not too far from the desired thumbnail size.
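A rough illustration of locating those levels: in a progressive JPEG each additional scan begins with an SOS marker (FF DA), so truncating after the N-th scan keeps a renderable preview. A hypothetical Python sketch, not a full JPEG parser:

```python
def sos_offsets(data):
    # Byte offsets of SOS (Start of Scan, FF DA) markers. Within
    # entropy-coded data a 0xFF byte is always stuffed with 0x00,
    # so FF DA only appears as a real marker.
    return [i for i in range(len(data) - 1)
            if data[i] == 0xFF and data[i + 1] == 0xDA]

# fabricated byte string with two scan markers, for illustration only
fake = (b'\xff\xd8' + b'\x01' * 4 +
        b'\xff\xda' + b'\x02' * 8 +
        b'\xff\xda' + b'\x03' * 4)
print(sos_offsets(fake))  # [6, 16]
```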

Example config:
location /thumb/ {
	access_log off;
	alias /var/www/site1/img/;
	image_progressive 200 auto -50 300;
}
Different images have different levels corresponding to different sizes, so we specify the desired size. Here 200 is the width and auto is the desired thumbnail height; auto computes the height from the width (preserving the aspect ratio). -50 means that if the nearest level is 150 pixels we take it, provided the next level is 300 or more pixels further on (i.e. 500 rather than 200). But to begin with, this elaborate logic could be left out.

A possibly problematic aspect is client-side caching: do browsers cache by path and file name, or by file content? If it is by content, the file could perhaps be modified slightly on the fly, e.g. by appending a couple of bytes to the end of the slice so that the files differ for the client.

I googled and found a project that solves the problem in its own way:
http://fhtr.org/multires/spif/spif.html
https://github.com/kig/multires
But that's not it: it uses its own format, whereas I'd like to use already existing technologies.

With GIFs it [https://msdn.microsoft.com/en-us/library/windows/desktop/ee720036(v=vs.85).aspx seems to be possible] too, but I have not seen such support in browsers, so for them an additional condition would be needed that redirects to pre-generated thumbnails, or image_filter resize could be used.

"	0x4E69676874466F78@…
1.9.x	969	proxy module does not honour proxy_max_temp_file_size on cacheable responses	nginx-module	1.9.x	enhancement		new	2016-05-03T00:07:03Z	2016-05-04T13:02:38Z	"In the ngx_http_proxy module, there is a directive: `proxy_max_temp_file_size` - it is intended to limit the size of buffered files on disk. There is a caveat:

 > ""This restriction does not apply to responses that will be cached or stored on disk.""

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_max_temp_file_size

Why is this the case? I have an issue when using the slice module: I'm proxying a large request, and nginx sends smaller range requests to the proxied server for ~16MB chunks of the resource. Sometimes the proxied server misbehaves and responds with a ranged response from the offset to the end of the very large file.

Nginx will continue to 'temporarily' buffer this file to disk, resulting in a blocked downstream request and ever-increasing use of the temporary proxy storage. The upstream file is multiple terabytes; nginx should honor this directive and terminate the upstream range response when it exceeds the maximum size for buffered files."	Stealthii@…
1.9.x	971	Clarify $host and $hostname in embedded variables documentation	documentation	1.9.x	enhancement		new	2016-05-04T13:56:01Z	2019-05-07T14:00:45Z	"Hi

I've just been documenting some of my project, which is based on nginx, and did some tests to clarify the values of $host and $hostname, which I feel could be better explained in the docs: http://nginx.org/en/docs/http/ngx_http_core_module.html#variables

I'd suggest adding to $host a line to state that the value is normalised, e.g.:
""in this order of precedence: host name from the request line, or host name from the “Host” request header field, or the server name matching a request. The value of $host is normalised into lower case.""

This is important if, for example, you're using $host in a cache key.
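The precedence and normalisation in the suggested wording can be sketched as follows; a hypothetical helper, not nginx code:

```python
def host_variable(request_line_host, host_header, matched_server_name):
    # $host precedence as described in the suggested wording: host
    # from the request line, else the Host header field, else the
    # server name matching the request -- normalised to lower case.
    for v in (request_line_host, host_header, matched_server_name):
        if v:
            return v.lower()
    return ''

print(host_variable(None, 'Example.COM', 'fallback'))  # example.com
```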

I'd also like to expand on the docs for $hostname, e.g. make it something like:
""The FQDN of the host computer, e.g. the value of 'hostname -f' on *nix systems""

Hope that all makes sense."	Neil Craig
1.9.x	1055	Allow to configure ssl_ciphers in multiple lines	other	1.9.x	enhancement		new	2016-08-20T10:35:16Z	2016-08-20T10:35:16Z	"It would be nice to be able to use multiple lines for ssl_ciphers, for example:

ssl_ciphers ""\
ECDHE-RSA-CHACHA20-POLY1305: \
ECDHE-RSA-AES256-GCM-SHA384: \
ECDHE-ECDSA-AES256-GCM-SHA384 \
"";

Instead of 
ssl_ciphers ""ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384"";

This would be especially handy when a long list of cipher suites is used."	heptathlon@…
1.9.x	1182	"Responses with ""no-cache"" or ""max-age=0"" should be cached"	other	1.9.x	enhancement		new	2017-01-18T00:05:29Z	2023-03-10T16:58:49Z	"I know the summary sounds contradictory, but I believe that caching these responses is allowed, when revalidation is enabled. These responses would always need to be revalidated.

Here's the relevant part of the HTTP spec:
{{{
no-cache
      If the no-cache directive does not specify a field-name, then a
      cache MUST NOT use the response to satisfy a subsequent request
      without successful revalidation with the origin server. This
      allows an origin server to prevent caching even by caches that
      have been configured to return stale responses to client requests.
}}}

So, if ""no-cache"" is present, then nginx is allowed to cache the response, but must treat the cache entry as if it has already expired. The next lookup requires revalidation. I believe that ""max-age=0"" or ""s-maxage=0"" should be treated the same way, but I don't have a specific reference to the spec to justify my opinion.

We have an upstream server that is capable of returning 304 Not Modified much more quickly than it can generate the response body for a 200. We want to be able to use nginx's caching abilities to store the response body (especially to share that response body between users), but we also want to revalidate with the upstream server on every request. Right now we're working around this with ""max-age=1"" so that things expire quickly, but technically we want them to expire right away.

Also see ""Pattern 2"" on the first hit when searching ""HTTP Cache Best Practices"".
https://jakearchibald.com/2016/caching-best-practices/"	geoff.addepar.com@…
1.9.x	1306	ngx_http_geo_module ranges do not support ipv6	nginx-module	1.9.x	enhancement		new	2017-06-29T22:11:22Z	2021-08-04T09:48:43Z	"It appears that while ipv6 is supported via CIDR in the geo module, ranges for ipv6 address are not supported and will return an error claiming an invalid range.

The documentation does not seem to give any indication that only CIDR is supported for ipv6, so I am not sure whether this is just a code path that was not upgraded to support ipv6 or whether the implementation itself cannot check that an ipv6 address is in a range.

I am looking to use geo to identify a mix of ipv4 and ipv6 ranges that do not fit well into CIDR blocks, so expressing them as such is difficult. It would be great if nginx supported ipv6 ranges as well, so that all of the addresses can be defined in the same geo block.

Example:
geo $matcher {
  ranges;
  default 0;
  192.168.0.0-192.168.255.255 US;
  2001::-2001:ffff:ffff:ffff:ffff:ffff:ffff:ffff US;
}
2017/06/29 20:15:52 [emerg] 520#0: invalid range ""2001::-2001:ffff:ffff:ffff:ffff:ffff:ffff:ffff"" in /etc/nginx/conf.d/test.conf:14
nginx: [emerg] invalid range ""2001::-2001:ffff:ffff:ffff:ffff:ffff:ffff:ffff"" in /etc/nginx/conf.d/test.conf:14
nginx: configuration file /etc/nginx/nginx.conf test failed
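For reference, a range membership test that works for both address families is straightforward, e.g. with Python's ipaddress module; a sketch of what a geo ranges lookup would need to do, not nginx code:

```python
import ipaddress

def in_range(addr, lo, hi):
    # Same-family range membership test for IPv4 or IPv6 addresses.
    a = ipaddress.ip_address(addr)
    return ipaddress.ip_address(lo) <= a <= ipaddress.ip_address(hi)

print(in_range('192.168.10.4', '192.168.0.0', '192.168.255.255'))  # True
print(in_range('2001::1', '2001::',
               '2001:ffff:ffff:ffff:ffff:ffff:ffff:ffff'))         # True
```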
"	ajorgensen@…
1.9.x	781	Documentation not clear on auth_basic_user_file	documentation	1.9.x	task		new	2015-09-07T10:45:08Z	2015-10-30T14:22:08Z	"On the page http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html it is not very clear what auth_basic_user_file paths are relative to.

Where exactly does the htpasswd file need to be when it is defined like this:

auth_basic_user_file conf/htpasswd;

is it /etc/nginx/conf/htpasswd or something else?"	r.luthiger@…
1.9.x	838	enable compare operators within if directive	nginx-core	1.9.x	enhancement		new	2015-11-24T17:51:27Z	2015-11-24T17:51:27Z	"It would be nice to have string comparison operators within the if directive, to enable time-based redirects/responses like:

{{{
  location /foo/ {
    if ($time_iso8601 lt $startTime) {
      return 403 ""to early $time_iso8601 $startTime"";
    } 
    if ($time_iso8601 gt $endTime) {
      return 403 ""to late $time_iso8601 $endTime"";
    }
    ...
  }
}}}
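String comparison would indeed suffice here, because ISO 8601 timestamps in a fixed format sort chronologically as plain strings (same timezone offset assumed):

```python
# $time_iso8601-style values compare correctly as plain strings,
# which is why simple lt/gt operators would be enough.
start = '2015-11-24T00:00:00+01:00'
now   = '2015-11-24T17:51:27+01:00'
end   = '2015-11-25T00:00:00+01:00'
print(start < now < end)  # True
```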

"	elektro-wolle@…
	1958	`modern_browser` definition for Safari version is wrong/unexpected	nginx-module		defect		accepted	2020-04-20T07:59:08Z	2020-04-20T15:27:46Z	"http://nginx.org/en/docs/http/ngx_http_browser_module.html

One of the great use cases for `ngx_http_browser_module` is when your website needs to officially support certain browser versions. E.g. ""we support Edge Legacy >= 15, Safari >= 12, and all recent versions of Chrome, Firefox, and Edgium"".

With the current implementation of `modern_browser`, we cannot achieve this, as the version number detected for `safari` is not the release number, but instead the WebKit build number.

The Safari WebKit build number can be the same across different releases (see the example user agent strings below). I am currently working around this using a `map` against the user agent to set a flag variable.
{{{
map $http_user_agent $is_safari_lt_12 {
  ""~ Version/((?:[1-9]|1[0-1])(?:\.\d+)+) (?:Mobile/\w+ )?Safari/(?:\d+(?:\.\d+)*)$"" 1;
  default 0;
}
}}}
and then combining it with `ancient_browser` and `modern_browser` directives.
{{{
  # Redirect requests from IE to the unsupported browser page.
  ancient_browser ""MSIE "";
  ancient_browser ""Trident/"";
  modern_browser unlisted;
  if ($ancient_browser) {
    rewrite ^/.*  /unsupported-browser/ last;
  }
  if ($is_safari_lt_12) {
    rewrite ^/.*  /unsupported-browser/ last;
  }
}}}

It would be much nicer if one could just do
{{{
  modern_browser safari_version 12;
}}}
instead of needing the map and the additional `if` statement.

=== More details

https://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_browser_module.c#L181

The current implementation for `modern_browser` for `safari` is to take the numbers that are after `Safari/` in the User Agent string. This number is _a_ version number, but is not the _expected_ version number when talking about Safari versions.

https://en.wikipedia.org/wiki/Safari_version_history

The number after `Safari/` is the WebKit build number, which is unrelated to the Safari release number. For example, here are some Safari user agent strings:

{{{
Mozilla/5.0 (iPad; CPU iPhone OS 12_1_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1.2 Safari/605.1.15
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.5 Safari/605.1.15
}}}

The commonly referred to version number is the number after `Version/`.
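The difference between the two numbers is easy to see by extracting both from one of the user agent strings above (regexes here are illustrative):

```python
import re

ua = ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) '
      'AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1.2 Safari/605.1.15')

release = re.search(r'Version/([\d.]+)', ua).group(1)  # Safari release number
webkit = re.search(r'Safari/([\d.]+)$', ua).group(1)   # WebKit build number
print(release, webkit)  # 12.1.2 605.1.15
```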

I would like to propose:

1) changing the documentation to make it clearer that the version number one passes to `modern_browser safari` is the WebKit version number and not the release number; and

2) Adding a new named option for `modern_browser` that can be used for the Safari release number; e.g. `safari_release`, `safari2`, etc."	Tim Dawborn
	2242	DNS UDP proxy with UNIX socket is not working	nginx-core		defect		accepted	2021-09-03T12:27:31Z	2021-09-06T15:13:04Z	"Hi,

it so happens that I need to pass DNS traffic from an LXC container to the host system without a real network between them.
I decided to try nginx as a proxy server, passing DNS requests/responses via a shared unix socket that is passed from the host system as a mountpoint.

I've removed LXC container from my scheme to concentrate on the problem itself, as it reproduces on a normal system without containers involved.

I've got two separate unix sockets: one for tcp-originated requests and one for udp, since nginx configures unix sockets as stream or dgram based on the server's configuration (tcp vs udp).

nginx.conf:


{{{
user nginx;
worker_processes 1;
worker_rlimit_nofile 100000;

pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log warn;

events {
    use epoll;
    worker_connections 1024;
    multi_accept on;
}

stream {

    # TCP
    server {
        listen 5353;
        proxy_pass unix:/var/lib/nginx/dns-tcp.sock;
    }

    server {
        listen unix://var/lib/nginx/dns-tcp.sock;
        proxy_pass 10.70.112.1:53;
    }


    # UDP
    server {
        listen 5353 udp;
        proxy_pass unix:/var/lib/nginx/dns-udp.sock;
    }

    server {
        listen unix://var/lib/nginx/dns-udp.sock udp;
        proxy_pass 10.70.112.1:53;
    }
}
}}}


For tcp, DNS traffic works perfectly:

{{{
[root@dev ~]# dig @127.0.0.1 -p 5353 ya.ru +tcp

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> @127.0.0.1 -p 5353 ya.ru +tcp
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59275
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;ya.ru.				IN	A

;; ANSWER SECTION:
ya.ru.			384	IN	A	87.250.250.242

;; Query time: 2 msec
;; SERVER: 127.0.0.1#5353(127.0.0.1)
;; WHEN: Fri Sep 03 15:20:55 MSK 2021
;; MSG SIZE  rcvd: 50
}}}

strace output:

{{{
[root@dev ~]# strace -s 1024 -fp 3876008
strace: Process 3876008 attached
epoll_wait(10, [{EPOLLIN, {u32=1176072208, u64=139720757178384}}], 512, 588295) = 1
accept4(5, {sa_family=AF_INET, sin_port=htons(40085), sin_addr=inet_addr(""127.0.0.1"")}, [16], SOCK_NONBLOCK) = 13
setsockopt(13, SOL_TCP, TCP_NODELAY, [1], 4) = 0
socket(AF_LOCAL, SOCK_STREAM, 0)        = 14
ioctl(14, FIONBIO, [1])                 = 0
epoll_ctl(10, EPOLL_CTL_ADD, 14, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=1176073648, u64=139720757179824}}) = 0
connect(14, {sa_family=AF_LOCAL, sun_path=""/var/lib/nginx/dns-tcp.sock""}, 110) = 0
epoll_ctl(10, EPOLL_CTL_ADD, 13, {EPOLLIN|EPOLLRDHUP|EPOLLET, {u32=1176073408, u64=139720757179584}}) = 0
accept4(5, 0x7fff487cc150, 0x7fff487cc14c, SOCK_NONBLOCK) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(10, [{EPOLLOUT, {u32=1176073648, u64=139720757179824}}, {EPOLLIN, {u32=1176072448, u64=139720757178624}}, {EPOLLIN, {u32=1176073408, u64=139720757179584}}], 512, 583915) = 3
accept4(6, {sa_family=AF_LOCAL, NULL}, [2], SOCK_NONBLOCK) = 15
socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 16
ioctl(16, FIONBIO, [1])                 = 0
epoll_ctl(10, EPOLL_CTL_ADD, 16, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=1176074608, u64=139720757180784}}) = 0
connect(16, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr(""10.70.112.1"")}, 16) = -1 EINPROGRESS (Operation now in progress)
accept4(6, 0x7fff487cc150, 0x7fff487cc14c, SOCK_NONBLOCK) = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(13, ""\0\""\347\213\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0"", 16384, 0, NULL, NULL) = 36
writev(14, [{""\0\""\347\213\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0"", 36}], 1) = 36
epoll_wait(10, [{EPOLLOUT, {u32=1176074608, u64=139720757180784}}], 512, 60000) = 1
getsockopt(16, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
setsockopt(16, SOL_TCP, TCP_NODELAY, [1], 4) = 0
epoll_ctl(10, EPOLL_CTL_ADD, 15, {EPOLLIN|EPOLLRDHUP|EPOLLET, {u32=1176074368, u64=139720757180544}}) = 0
epoll_wait(10, [{EPOLLIN, {u32=1176074368, u64=139720757180544}}], 512, 583913) = 1
recvfrom(15, ""\0\""\347\213\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0"", 16384, 0, NULL, NULL) = 36
writev(16, [{""\0\""\347\213\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0"", 36}], 1) = 36
epoll_wait(10, [{EPOLLOUT, {u32=1176073648, u64=139720757179824}}], 512, 583913) = 1
epoll_wait(10, [{EPOLLIN|EPOLLOUT, {u32=1176074608, u64=139720757180784}}], 512, 583913) = 1
recvfrom(16, ""\0002\347\213\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\1\200\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0"", 16384, 0, NULL, NULL) = 52
writev(15, [{""\0002\347\213\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\1\200\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0"", 52}], 1) = 52
epoll_wait(10, [{EPOLLIN|EPOLLOUT, {u32=1176073648, u64=139720757179824}}], 512, 583912) = 1
recvfrom(14, ""\0002\347\213\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\1\200\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0"", 16384, 0, NULL, NULL) = 52
writev(13, [{""\0002\347\213\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\1\200\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0"", 52}], 1) = 52
epoll_wait(10, [{EPOLLIN|EPOLLRDHUP, {u32=1176073408, u64=139720757179584}}], 512, 583912) = 1
recvfrom(13, """", 16384, 0, NULL, NULL)  = 0
close(14)                               = 0
close(13)                               = 0
epoll_wait(10, [{EPOLLIN|EPOLLHUP|EPOLLRDHUP, {u32=1176074368, u64=139720757180544}}], 512, 583912) = 1
recvfrom(15, """", 16384, 0, NULL, NULL)  = 0
close(16)                               = 0
close(15)                               = 0
epoll_wait(10, ^Cstrace: Process 3876008 detached
 <detached ...>
}}}


But in the UDP case, the nginx process:
1. gets the request from the dgram unix socket
2. sends the request to the configured upstream server
3. gets the response from the configured upstream server
4. tries to send the response to the unix socket, gets an ECONNREFUSED error, and the request hangs.
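The ECONNREFUSED is consistent with the client datagram socket never being bound to a local address (note sun_path=@'' in the strace), so the listener has no return path for the reply. A minimal reproduction of that condition, assuming Linux AF_UNIX datagram semantics:

```python
import os, socket, tempfile

path = os.path.join(tempfile.mkdtemp(), 'dns-udp.sock')
srv = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
srv.bind(path)

# The client connect()s but never bind()s a local address: the
# datagram is delivered, yet the listener receives no usable peer
# address to send a reply back to.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
cli.connect(path)
cli.send(b'query')

data, peer = srv.recvfrom(128)
print(data, repr(peer))  # the request arrives with an empty peer address
cli.close()
srv.close()
```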


{{{
[{EPOLLIN, {u32=1176072688, u64=139720757178864}}], 512, 440326) = 1
recvmsg(7, {msg_name(16)={sa_family=AF_INET, sin_port=htons(55102), sin_addr=inet_addr(""127.0.0.1"")}, msg_iov(1)=[{""\6\261\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0"", 65535}], msg_controllen=32, [{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, {ipi_ifindex=if_nametoindex(""lo""), ipi_spec_dst=inet_addr(""127.0.0.1""), ipi_addr=inet_addr(""127.0.0.1"")}}], msg_flags=0}, 0) = 34
socket(AF_LOCAL, SOCK_DGRAM, 0)         = 13
ioctl(13, FIONBIO, [1])                 = 0
epoll_ctl(10, EPOLL_CTL_ADD, 13, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=1176074609, u64=139720757180785}}) = 0
connect(13, {sa_family=AF_LOCAL, sun_path=""/var/lib/nginx/dns-udp.sock""}, 110) = 0
sendmsg(13, {msg_name(0)=NULL, msg_iov(1)=[{""\6\261\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0"", 34}], msg_controllen=0, msg_flags=0}, 0) = 34
recvmsg(7, 0x7fff487cc010, 0)           = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(10, [{EPOLLOUT, {u32=1176074609, u64=139720757180785}}, {EPOLLIN, {u32=1176072928, u64=139720757179104}}], 512, 438024) = 2
recvmsg(8, {msg_name(0)=0x7fff487cc0a0, msg_iov(1)=[{""\6\261\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0"", 65535}], msg_controllen=0, msg_flags=0}, 0) = 34
socket(AF_INET, SOCK_DGRAM, IPPROTO_IP) = 14
ioctl(14, FIONBIO, [1])                 = 0
epoll_ctl(10, EPOLL_CTL_ADD, 14, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=1176073649, u64=139720757179825}}) = 0
connect(14, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr(""10.70.112.1"")}, 16) = 0
sendmsg(14, {msg_name(0)=NULL, msg_iov(1)=[{""\6\261\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0"", 34}], msg_controllen=0, msg_flags=0}, 0) = 34
recvmsg(8, 0x7fff487cc010, 0)           = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(10, [{EPOLLOUT, {u32=1176074609, u64=139720757180785}}, {EPOLLOUT, {u32=1176073649, u64=139720757179825}}], 512, 438024) = 2
epoll_wait(10, [{EPOLLIN|EPOLLOUT, {u32=1176073649, u64=139720757179825}}], 512, 438023) = 1
recvfrom(14, ""\6\261\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\0\356\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0"", 16384, 0, NULL, NULL) = 50
sendmsg(8, {msg_name(16)={sa_family=AF_LOCAL, sun_path=@""""}, msg_iov(1)=[{""\6\261\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\0\356\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0"", 50}], msg_controllen=0, msg_flags=0}, 0) = -1 ECONNREFUSED (Connection refused)
close(14)                               = 0
epoll_wait(10,
}}}

In tcpdump I see request to upstream server and response:

{{{
[root@dev ~]# tcpdump  -ni eth0 port 53 and host 10.70.112.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:24:00.825725 IP 10.70.112.35.55180 > 10.70.112.1.domain: 23283+ [1au] A? ya.ru. (34)
15:24:00.826905 IP 10.70.112.1.domain > 10.70.112.35.55180: 23283 1/0/1 A 87.250.250.242 (50)
}}}

Please help me understand what could be going wrong and how to fix it.
Feel free to ask any additional information.

Thanks."	Vladislav Odintsov
	2310	Document behaviour for all config statements in nested location blocks	documentation		defect		accepted	2022-01-20T10:44:05Z	2022-01-24T01:13:36Z	"From my understanding, each request is only ever handled in a single top-level location block, and some, but not all, statements are inherited in nested location blocks. Each location may also have exactly one block.

Ideally this could be changed to actually allow for modularity and reduced duplication of statements, but since this system is unlikely to change for backwards-compatibility reasons, it would at least be useful to know which statements need to be duplicated in every nested location block and which are inherited.

I ran into this issue when some location blocks that were only meant to disable password protection for specific domains also disabled the reverse proxy for those locations. Presumably proxy_pass is what this post calls a command-type directive:

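A minimal sketch of the behaviour (all names hypothetical): content-handler directives such as proxy_pass are not inherited by nested location blocks, while most configuration directives such as auth_basic are:

{{{#!nginx
location /app/ {
    proxy_pass http://backend;     # content handler: NOT inherited
    auth_basic ""restricted"";       # inherited by the nested location

    location /app/public/ {
        auth_basic off;            # disables password protection as intended
        proxy_pass http://backend; # must be repeated, otherwise nginx falls
                                   # back to serving static files here
    }
}
}}}
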
https://stackoverflow.com/questions/32104731/directive-inheritance-in-nested-location-blocks"	taladar@…
	2322	client_max_body_size doesn't work in named location	nginx-core		defect		new	2022-02-18T11:37:31Z	2022-06-02T18:25:29Z	"I have the following configuration, and client_max_body_size has no effect in a named location. I get ""413 Request Entity Too Large"" errors in the log files. Everything is OK when I move client_max_body_size from the named locations into server {}.

error log:
client intended to send too large body: 1441104 bytes, client: 10.178.67.87, server: loki, request: ""POST /loki/api/v1/push HTTP/1.1"", host: ""loki""
access log:
10.178.67.87 - eissd [18/Feb/2022:14:37:55 +0500] eissd_dev ""POST /loki/api/v1/push HTTP/1.1"" 413 306 ""Apache-HttpClient/4.5.13 (Java/17.0.2)"" 0.086 -
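
The reporter's config is not included; a minimal sketch of the described setup (all names hypothetical) would be:

{{{#!nginx
server {
    listen 80;
    server_name loki;
    # moving client_max_body_size here reportedly works

    location /loki/api/v1/push {
        try_files /nonexistent @loki;
    }

    location @loki {
        client_max_body_size 10m;   # reportedly ignored here -> 413
        proxy_pass http://loki-backend;
    }
}
}}}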
"	gadskypapa@…
	2441	pkg-oss - build error	nginx-package		defect		accepted	2023-01-24T11:57:32Z	2023-01-27T20:00:42Z	"Hi guys,

Trying to build a module for nginx, but a build error arises:
{{{
===> Building nginx-module-rtmp package
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.dK9eXn
+ umask 022
+ cd /root/rpmbuild/BUILD
+ cd /root/rpmbuild/BUILD
+ rm -rf nginx-plus-module-rtmp-1.17.6
+ /usr/bin/mkdir -p nginx-plus-module-rtmp-1.17.6
+ cd nginx-plus-module-rtmp-1.17.6
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ tar --strip-components=1 -zxf /root/rpmbuild/SOURCES/nginx-1.17.6.tar.gz
tar (child): /root/rpmbuild/SOURCES/nginx-1.17.6.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
error: Bad exit status from /var/tmp/rpm-tmp.dK9eXn (%prep)
    Bad exit status from /var/tmp/rpm-tmp.dK9eXn (%prep)
}}}

How to reproduce:
{{{
docker run --rm rockylinux:8 bash -c 'yum install -y wget && wget https://hg.nginx.org/pkg-oss/raw-file/default/build_module.sh && bash build_module.sh -y -r 20 https://github.com/arut/nginx-rtmp-module.git'
}}}

The same error occurs on different platforms - aarch64 and amd64.
Full output is attached (aarch64).
"	Alexander Kubyshkin
	2657	Special redirect in location does not resolve upstream names	nginx-core		defect		new	2024-06-17T08:30:42Z	2024-12-10T02:07:16Z	"First things first: thank you for such titanic work; holding the internet on your shoulders can't be easy.

So here is a self-explanatory script reproducing my problem step by step.
I use the official nginx image from Docker Hub.
The setup is simple: two nginx instances, one for ingress, the second impersonating some app.

(to pass antispam I had to replace http with h_t_t_p)
{{{
# common preparation
docker network create nginx_test_network

# setup ingress
tee <<CONF > ingress.conf
server {
  listen 81 default;
  server_name _;

  resolver 127.0.0.53      ipv6=off;
  set      \$app_upstream ""h_t_t_p://nginx-test-app:9090"";

  location / {
    proxy_pass \$app_upstream;
  }
}
CONF

docker run \
  --detach \
  --name nginx-test-ingress \
  --network nginx_test_network \
  --publish 81:81 \
  --rm \
  --volume ""$(pwd)/ingress.conf"":/etc/nginx/conf.d/default.conf \
  nginx:1.27

# setup app
tee <<CONF > app.conf
server {
  server_name _;
  listen 9090;

  location / {
    #add_header Content-Type text/plain;
    #return 200 ""Here is an APP"";
    root /usr/share/nginx/html;
  }
}
CONF

docker run \
  --detach \
  --name nginx-test-app \
  --network nginx_test_network \
  --rm \
  --volume ""$(pwd)/app.conf"":/etc/nginx/conf.d/default.conf \
  nginx:1.27

# setup problem
docker exec -i nginx-test-app mkdir /usr/share/nginx/html/css

# taking power nap
sleep 1

# getting troubles
curl -i ""h_t_t_p://nginx-test.localhost:81/css""

# cleaning up
docker stop nginx-test-ingress nginx-test-app
docker network rm nginx_test_network

}}}
At the end you should see the response headers:
{{{
HTTP/1.1 301 Moved Permanently
Server: nginx/1.27.0
Date: Mon, 17 Jun 2024 07:51:08 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Location: h_t_t_p://nginx-test-app:9090/css/
}}}
So nginx issued its special redirect for a location without a trailing slash that is served by proxy_pass, BUT forgot to resolve the upstream defined by the variable; see the Location header.

Maybe a variable is not a good idea at all, I thought, and rewrote it with an upstream block:
{{{
upstream app_upstream {
  server nginx-test-app:9090;
}

server {
  listen 81 default;
  server_name _;

  resolver 127.0.0.11 ipv6=off;

  location / {
    proxy_pass h_t_t_p://app_upstream;
  }
}
}}}

The upstream block did not help; the problem remains in this test case."	AnotherOneAckap@…
	2670	Escaping $ in the configuration of third-party modules	nginx-module		defect		new	2024-07-17T06:36:59Z	2024-07-17T06:36:59Z	"With nginx 1.22.0 we previously used a configuration containing the lines:

subs_filter '?t=$Time$' '' g;
subs_filter '^#EXT-X-MEDIA.+TYPE=SUBTITLES.+\n$' '' rg;

but after upgrading to nginx 1.26.1 we started getting errors such as:
nginx: [emerg] invalid variable name in /etc/nginx/...
and
nginx: [emerg] match part cannot contain variable during regex mode in /etc/nginx/...

While investigating the problem we found that the $ character is to blame, but no attempt at escaping it succeeded.

Only after changing the regexp patterns to:

subs_filter '\?t=.Time.' '' rg;
subs_filter '^#EXT-X-MEDIA.+TYPE=SUBTITLES.+\n' '' rg;

did the configuration start working again.

Please comment on this behaviour: will it be possible to use $ in regexp patterns in third-party modules in the future?
Escaping it as $$ or \$ does not work!"	volga.leoking@…
	1282	Add nginx.repo file to RPM repos	nginx-package		enhancement		new	2017-05-29T17:00:42Z	2017-05-29T17:10:28Z	"There is a common approach of using *.repo files from the actual repositories with ""yum-config-manager --add-repo"" / ""dnf config-manager --add-repo"" / ""zypper addrepo"".

1. Please add ""nginx.repo"" files to RPM repositories for convenience.

2. Besides ""$basearch"" variable please also use ""$releasever"".

3. Please enable ""gpgcheck"" in default nginx.repo and add another ""nginx-nogpg.repo"" to maintain old behavior.

I've noticed a previous ""won't fix"" on the ""gpgcheck=0"" bug report. If another project recommended that I install a repository this way, I'd run as far as I could. There are enough relatively secure ways to retrieve the nginx key, starting from HTTPS, using PGP servers, hardcoding the key, etc.
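
For reference, a sketch of what such a nginx.repo file could contain (the paths follow the current repository layout; treat the exact values as illustrative):

{{{
[nginx]
name=nginx repo
baseurl=https://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
gpgkey=https://nginx.org/keys/nginx_signing.key
enabled=1
}}}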
"	andvgal@…
	1472	Downloads stop after 1GB depending of network	nginx-module		enhancement		accepted	2018-01-26T14:25:37Z	2018-01-29T18:12:45Z	"Hi,
we tried nginx versions 1.6.2 through 1.12.2 and have a problem when nginx is used as a proxy in front of Artifactory. Downloads get interrupted at 1 GB.

This behavior depends on the internal VLAN. On one VLAN it always happens; on another VLAN it never happens. The limit is size-based, not time-based: from one network it stops after 30 seconds, and from another, slower network it stops after 13 minutes.

We made a minimal proxy setup with Apache, and it works on all VLANs. This is why we suspect the problem lies with nginx, or with the combination of nginx and the Linux TCP/IP stack.

In wireshark we see ""TCP Dup ACK"" on the client side sent to the nginx server.

Wget fails with ""connection closed"" at byte 1083793011 but continues the download with partial content. Docker can't handle this, so our customers can't download Docker images with layers greater than 1 GB.

The following text shows two anonymized minimal configs. The nginx config that is problematic and the apache config that works:

{{{
NGINX config:
server {
    listen *:80;
    server_name NAME;
    client_max_body_size 3G;
    access_log /var/log/nginx/NAME.access.log;
    error_log /var/log/nginx/NAME.error.log;

    if ($request_uri !~ /artifactory/) {
        rewrite ^ $scheme://NAME/artifactory/ permanent;
    }

    location /artifactory {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://ARTIFACTORY:PORT;
        proxy_pass_header Server;
        proxy_read_timeout 90s;
    }
}

APACHE config:
<VirtualHost *:80>
    ServerName NAME
    ServerAdmin NAME

    ErrorLog ${APACHE_LOG_DIR}/error.log

    LogLevel warn

    ProxyRequests Off
    <Proxy *>
      Order allow,deny
      Allow from All
    </Proxy>

    ProxyPass / http://ARTIFACTORY:PORT/
    ProxyPassReverse / http://ARTIFACTORY:PORT/
</VirtualHost>

}}}
"	nudgegoonies@…
	1765	configure is fragile in finding system libraries	other		enhancement		new	2019-04-14T09:39:36Z	2019-04-15T16:06:35Z	"configure uses its own homebuilt scripts to detect system libraries. Currently, this will most likely fail on macOS with Xcode10 if the builder doesn't have Macports installed into /opt/local or is using another package manager. Xcode10 does not install headers into /usr/include (only into /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk). So the script in `auto/lib/libxslt/conf` will fail to find libxslt by not finding libxml2.

Alternatively, the build can find headers from the system install, but use the library from a local install (Fink or MacPorts not in /opt/local) because it's not setting the correct -I flags. In the paste below, -I/sw/include/libxml2 is missing (from a Xcode9 install with /usr/include/libxml2 present).

{{{
cc -c -g -O2 -fstack-protector -Wformat -Werror=format-security -fPIE -g -O2 -fstack-protector -Wformat -Werror=format-security -fPIE -D_FORTIFY_SOURCE=2 -MD -I/sw/include -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/include/libxml2 -I objs -I src/http -I src/http/modules -I src/http/v2 -I src/http/modules/perl \
		-o objs/src/http/modules/ngx_http_xslt_filter_module.o \
		src/http/modules/ngx_http_xslt_filter_module.c
}}}
So this build will eventually use headers from /usr/include/libxml2 but link to a different /sw/lib/libxml2.dylib install.

Instead of hardcoding the possible search locations of system libraries, the configure detection subscripts should use pkg-config (or the library included *-config script) whenever possible. Of the detected libraries in auto/lib, geoip, libgd, libxslt, openssl, pcre, and zlib all officially include .pc files from upstream and should be a more foolproof way of detecting their presence and usage flags.
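
An illustrative fragment of how such a detection script could consult pkg-config before falling back to hardcoded paths (CORE_INCS/CORE_LIBS are the existing configure variables; the rest is a sketch, not the actual auto/lib code):

{{{
# e.g. in auto/lib/libxslt/conf, before probing fixed locations
if pkg-config --exists libxslt libexslt; then
    CORE_INCS=""$CORE_INCS $(pkg-config --cflags libxslt libexslt)""
    CORE_LIBS=""$CORE_LIBS $(pkg-config --libs libxslt libexslt)""
fi
}}}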
"	nieder@…
	1893	Support Linux abstract namespace socket?	nginx-core		enhancement		new	2019-11-22T02:48:53Z	2023-02-19T13:36:51Z	"Please support Linux abstract namespace sockets, which are like Unix domain sockets but do not create a file.
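
For context: an abstract-namespace address is an AF_UNIX sockaddr whose sun_path begins with a NUL byte, so no filesystem entry is created. A hypothetical configuration syntax (not currently supported by nginx; ""@"" is the conventional notation for the leading NUL) might look like:

{{{#!nginx
# hypothetical syntax -- nginx does not accept this today
listen unix:@mysite-http.sock;
proxy_pass http://unix:@mysite-ws.sock;
}}}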

My setup is special: we have many websites on the same machine, and each website can have 2 socket files to be connected to by nginx (one for HTTP, one for WebSocket).

Because of the large number of websites, we don't use port numbers, which are difficult to maintain (after a while, we cannot remember which website holds which port), so we use Unix domain sockets, sharing the name with the web app.

A small problem with Unix domain sockets is that creating the socket file requires permissions, and deleting the file is also a problem. So it would be nice if nginx supported Linux's abstract namespace sockets."	ng.hong.quan@…
	2282	Add audio/x-flac to MIME types	nginx-core		enhancement		new	2021-11-21T04:45:23Z	2021-11-27T04:40:36Z	"Example patchset:

{{{#!diff
# HG changeset patch
# User xnaas <me@xnaas.info>
# Date 1637469283 21600
#      Sat Nov 20 22:34:43 2021 -0600
# Node ID 69946fb5438fbcbe5b834543c4b0358bb9bacb57
# Parent  82b750b20c5205d685e59031247fe898f011394e
add FLAC `audio/x-flac` MIME type

diff -r 82b750b20c52 -r 69946fb5438f conf/mime.types
--- a/conf/mime.types   Tue Nov 02 17:49:22 2021 +0300
+++ b/conf/mime.types   Sat Nov 20 22:34:43 2021 -0600
@@ -81,6 +81,7 @@
     audio/midi                                       mid midi kar;
     audio/mpeg                                       mp3;
     audio/ogg                                        ogg;
+    audio/x-flac                                     flac;
     audio/x-m4a                                      m4a;
     audio/x-realaudio                                ra;
}}}

There's precedent for this, as Apache has this MIME type: https://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/conf/mime.types (see line 1516)."	xnaas
	2351	Support reading file ETag from additional sources	nginx-core		enhancement		new	2022-05-10T01:33:24Z	2023-08-21T02:40:50Z	"While the current ETag generation is perfect for most cases, there are some cases in which allowing to control the ETag value of a file would be very valuable (e.g. when the last modified time of the file is unreliable, something that is common when building container images).

To support these cases, it would be ideal to allow users to opt-in to reading the ETag value from different sources. Examples of such sources could include a specific xattr on the file being read, or a metadata file placed either in the same directory of the file being read, or in a location specified in configuration.

In this way, users could very simply generate content-based etags at container build time with something along the lines of:

{{{
find . -type f -exec xattr -s user.nginx_etag -V $(sha256sum '{}' | cut -d "" "" -f 1) '{}' \;
}}}

and nginx would then use it both when adding the ETag response as well as when handling conditional requests."	CAFxX@…
	2486	Documentation for client_max_body_size may contain an error	documentation		enhancement		new	2023-04-18T16:17:06Z	2023-04-28T02:21:56Z	"http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size

contains:
""Please be aware that browsers cannot correctly display this error.""

I have seen this error displayed in Chrome as a simple HTML error document output by nginx, so it seems that some browsers can display this error.

Could you please clarify if there are different outcomes for other browsers? Thank you!"	liam.boxclever.ca@…
	2509	Support IPv6 interface identifiers outside of URLs	nginx-core		enhancement		new	2023-06-11T11:40:36Z	2023-06-11T22:11:33Z	"As of version 1.25.0, nginx doesn't seem to support IPv6 interface identifiers at all, even outside of URLs. For example, when trying to use it with the ""listen"" directive in the ""http server"" context, I'm getting the following error:

2023/06/11 11:01:54 [emerg] 4542#4542: invalid IPv6 address in ""[fe80::123%wg0]:443"" of the ""listen"" directive in /etc/nginx/sites-enabled/test.conf

Even though there might be issues with supporting such address literals within URLs (as stated in #623 and #1422), these addresses would also be useful for other things that don't use URLs, for example for using IPv6 link-local addresses with the ""listen"" and ""set_real_ip_from"" directives in the ""http server"" and ""stream server"" contexts, or when using such addresses with the ""server"" directive in the ""stream upstream"" context.

Side note: It seems that trac doesn't allow setting the 1.24.x or 1.25.x versions when creating a new ticket."	hardfalcon@…
	2562	SSL: use server names from upstream configuration for proxied server's name validation	nginx-core		enhancement		new	2023-11-11T13:20:54Z	2024-08-15T18:44:50Z	"This is a feature request (with a basic implementation).

My scenario requires to validate server names against names found in the {{{server}}} directive in an upstream. For example,

{{{#!nginx
upstream u1 {
    server su1.blah.com;
    server su2.blah.com;
}
}}}

By default, all peers from upstream ''u1'' will be validated against name {{{u1}}} which is what variable {{{$proxy_host}}} contains. I want to validate them dynamically according to which name is bound to the chosen peer (i.e. {{{su1.blah.com}}} or {{{su2.blah.com}}}).

Currently, this seems to be not feasible. However, this can be achieved with a few additions into Nginx code. Basically, the additions include

1. A new no-cacheable variable, say {{{$proxy_peer_host}}}, which will contain the server name of the current peer.
2. Pushing ''server name'' available in the ''round-robin'' peer structure into the ''peer_connection'' structure.

The peer connection data is available at the time of server name validation, therefore {{{proxy_ssl_name $proxy_peer_host;}}} shall work.

I will attach the patch.

Here is an Nginx configuration which I used to test this:

{{{#!nginx
user                    nobody;
worker_processes        1;

events {
    worker_connections  1024;
}

http {
    default_type        application/octet-stream;
    sendfile            on;

    upstream u1 {
        server 127.0.0.1:8080;
        server localhost:8080;
    }

    server {
        listen       8010;
        server_name  main;

        location /u1 {
            proxy_ssl_verify on;
            proxy_ssl_trusted_certificate /etc/ssl/certs/ca-bundle.crt;
            proxy_ssl_name $proxy_peer_host;
            proxy_pass https://u1;
        }
    }

    server {
        listen       8080 ssl;
        server_name  backend;

        ssl_certificate     /home/lyokha/devel/nginx/certs/server/server.crt;
        ssl_certificate_key /home/lyokha/devel/nginx/certs/server/server.key;
        ssl_protocols       TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers         HIGH:!aNULL:!MD5;

        location / {
            echo ""In $server_name"";
        }
    }
}
}}}"	lyokha@…
	2438	Improve fastcgi_cache_key documentation	documentation		task		accepted	2023-01-18T21:59:00Z	2023-01-21T01:44:55Z	"Apologies in advance for not writing in English; it is faster for me to report in my native language.

The documentation for the **fastcgi_cache_key** and **proxy_cache_key** directives does not mention one peculiarity: the key should preferably also include the **$request_method** variable, because the default:

{{{
fastcgi_cache_methods GET HEAD;

proxy_cache_methods GET HEAD;
}}}

means that, when using something like (copied from the docs):
{{{
fastcgi_cache_key localhost:9000$request_uri;

proxy_cache_key $scheme$proxy_host$uri$is_args$args;
}}}

a request with the HEAD method will cache an empty response (without content), which nginx will then also serve for GET requests. (''Provided, of course, that the backend supports HEAD requests and handles them correctly.'')
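
A cache key that avoids the problem simply includes the method as well, e.g.:

{{{
proxy_cache_key $scheme$request_method$proxy_host$uri$is_args$args;
}}}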

I ran into this non-obvious behaviour on my own servers and even dug up someone's note on the subject: https://www.claudiokuenzler.com/blog/705/empty-blank-page-nginx-fastcgi-cache-head-get

I think this peculiarity is worth mentioning in the documentation for the fastcgi_cache_key/proxy_cache_key directives.

PS: may I send a merge request for the documentation? What puts me off is that the merge request apparently needs to include translations into several languages at once."	Denis
	1644	"Educate people about the importance of ""Server"" HTTP header"	other		enhancement		new	2018-09-25T23:05:36Z	2018-09-25T23:05:36Z	"Nginx Inc. should consider increasing the visibility of the problem of stripping the ""Server"" header, in particular bringing forward the consequences for global server market-share analytics and fundraising, and advertising the importance of the header for the sustainability of Nginx as a free and open-source product.


I just learned about the issue from @mdounin and @vbart:

  https://trac.nginx.org/nginx/ticket/1641?replyto=3#comment:3


Here are some examples of what can be done:

 - Add a short, unobtrusive message on a mildly-colored background, explaining the importance of the header, to the docs section for ""server_tokens"":

       https://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens

  That's where I imagine people typically begin their header-stripping journey.
  A good starting point for a message would be the comment from @vbart linked above. Explain that they don't really hide Nginx from hackers or prevent ""fingerprinting"" (bring up other fingerprinting methods, like packet structure analysis or whatever). Most importantly, insist on the fact that they support Nginx by leaving the header public. Make sure their potential shameful act stops right there.

 - Repeat the same message in the FAQ sections for both open-source Nginx and Nginx Plus. Additionally and optionally, it would also be nice to have some more transparency about the financial side of the organization: in a few words, explain where you get cash.

 - Add a source code comment to:

     {{{ src/http/ngx_http_header_filter_module.c:49 }}}

    explaining the importance of not tampering with these strings.
   
   ,,<evil_mode>Also consider making ever-so-slight modifications to these variables so as to break the existing header-stripping patches and scripts. Some people will be upset, but, well, they are not doing things right, these are ""bad"" patches, and we want to let them know about it. Also, you have every right to change anything you want in the upstream code, including introducing breaking changes</evil_mode>,,

 - There are numerous ""Nginx hardening"" guides on the web that advertise stripping or changing the header for additional ""security"". That's how you learn about ""the patch"". Have marketing people contact some of the authors and ask them to add a remark about the importance of the header. Granted, you cannot cover all of the articles, but some authors will surely comply.

 - Pull-request to 
  
  https://github.com/openresty/headers-more-nginx-module

  from the organizational Nginx Inc. account, at the very least removing `more_set_headers 'Server: my-server';` from the very first line of the examples on the front page, and explaining why. Contact OpenResty. They are a commercial operation, they are your downstream, they will understand.

 - There are several stackoverflow/serverfault Q&As on ""Nginx hardening"". Go comment on those. Start with the words ""I'm one of the Nginx core developers..."".

 - Comment on Google Pagespeed resources, e.g. on its Github issues, where people ask to remove the headers. I was so convinced by all these ""hardening"" articles, I even tried to avoid Pagespeed, arguably one of the most useful modules, because it forcibly re-enables the header (and adds its own). For me, avoiding Pagespeed already felt kinda weird, but now it feels just plain stupid.

 - Educate people. Write blog articles, regularly presenting current server market shares and mention (shame) header-strippers. Share on reddit, twitter, whatever. I am pretty sure there are entire companies with policies for stripping server header among other crazy stuff. Educate those folks.

 - In the future, try to refrain from snarky comments, especially on bug reports, especially to newcomers. This turns people off immediately. Provide pure distilled information instead. Have a header-stripping reply ready as a quick copy-paste template. Most people just don't know a thing about server-share analytics and its consequences for fundraising. 5 minutes ago I didn't know it existed and didn't know you do fundraising. I imagined Google pays you because I heard they use it. I thought you swim in money.

If your devs feel offended by the stripped header and it influences the financial side, I think it's in the best interest of the organization (and consequently all of us relying on Nginx) to make the problem more visible, so that people can make an informed decision. Nobody will be digging into your bug tracker, trying to find answers to problems they never knew existed.

I just donated a few bucks to wikipedia, and they bring a large ugly ad frame that covers half of the page. They asked for cash. I love wikipedia, I gave cash. Nginx needs a header? Great, I love Nginx, I'll bring the header back. Gosh, I will go ahead and add a proud ""Powered by Nginx"" on every page. Every free project needs money. No need to be shy here.

Thank you all guys, and please keep up the great job!
"	Ivan Aksamentov
