{5} Accepted, Active Tickets by Owner (Full Description) (51 matches)

Lists accepted tickets, grouped by ticket owner. This report demonstrates the use of full-row display.

(empty) (1 match)

Ticket Summary Component Milestone Type Created
Description
#384 trailing dot in server_name nginx-core defect 07/09/13

nginx should treat server_name values with and without a trailing dot as identical. Thus, it should warn and continue during the configuration syntax check for the snippet below, due to the conflicting server_name.

    server {
        server_name  localhost;
    }

    server {
        server_name  localhost.;
    }
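
A minimal illustration of the requested behaviour: since nginx currently treats the two spellings as distinct names, one workaround (a sketch, untested) is to list both spellings in a single server block so that neither request falls through to a different virtual server:

    server {
        server_name  localhost localhost.;
    }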

somebody (12 matches)

#86 the "if" directive has problems in location context nginx-core defect 01/17/12

To start, I'm doing tricky stuff, so please don't point out the weird things; stay focused on the issue at hand. I'm mixing a userdir configuration with symfony2 (http://wiki.nginx.org/Symfony) for a development environment; PHP runs under php-fpm over a unix socket. The userdir configuration is classic: all files in ~user/public_html/ are accessible through http://server/~user/. On top of that, if you create a folder ~user/public_html/symfony/ and put a symfony project in it (~user/public_html/symfony/project/), it gets the usual symfony configuration applied (rewrites and fastcgi path splitting).

Here is the configuration:

    # match 1:username, 2:project name, 3:the rest
    location ~ ^/~(.+?)/symfony/(.+?)/(.+)$ {
        alias /home/$1/public_html/symfony/$2/web/$3;
        if (-f $request_filename) {
            break;
        }
        # if no app.php or app_dev.php, redirect to app.php (prod)
        rewrite ^/~(.+?)/symfony(/.+?)/(.+)$ /~$1/symfony/$2/app.php/$3 last;
    }

    # match 1:username, 2:project name, 3:env (prod/dev), 4:trailing ('/' or
    # end)
    location ~ ^/~(.+?)/symfony(/.+)/(app|app_dev)\.php(/|$) {
        root /home/$1/public_html/symfony$2/web;
        # fake $request_filename
        set $req_filename /home/$1/public_html/symfony$2/web/$3.php;
        include fastcgi_params;
        fastcgi_split_path_info ^((?U).+\.php)(/?.+)$;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $req_filename;
        fastcgi_pass unix:/tmp/php-fpm.sock;
    }

The second block (PHP backend) works on its own. The first block (direct file access) works on its own.

You can see that I already had a problem with PHP, but worked around it by creating my own variable.

To help understanding, here is a sample symfony project layout (I removed some folders for clarity):

project/
    src/
        [... my php code ...]
    web/
        app_dev.php
        app.php
        favicon.ico

If I try to access http://server/~user/symfony/project/favicon.ico I see this in the logs:

2012/01/17 16:36:25 [error] 27736#0: *1 open() "/home/user/public_html/symfony/project/web/favicon.icoavicon.ico" failed (2: No such file or directory), client: 10.11.60.36, server: server, request: "HEAD /~user/symfony/project/favicon.ico HTTP/1.1", host: "server"

If I remove the block that tests $request_filename, it works, but then I have to remove the rewrite as well.

The server is CentOS 5.7 and nginx comes from the EPEL repository.

Unfortunately my C skills are close to nil, so I can't really provide a better understanding of the problem. I poked around the code, but without much luck.
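
Not a fix, but a possible rework of the first block that avoids the if/break interaction entirely by using try_files for the existence check. This is a sketch only and untested; whether try_files behaves correctly together with alias and regex captures is itself questionable (see #97 and #217 elsewhere in this report):

    # hypothetical rework of the first block, untested
    location ~ ^/~(.+?)/symfony/(.+?)/(.+)$ {
        alias /home/$1/public_html/symfony/$2/web/$3;
        # serve the aliased file if it exists, else go to the front controller
        try_files "" /~$1/symfony/$2/app.php/$3;
    }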


#97 try_files and alias problems nginx-core defect 02/03/12
# bug: request to "/test/x" will try "/tmp/x" (good) and
# "/tmp//test/y" (bad?)
location /test/ {
    alias /tmp/;
    try_files $uri /test/y =404;
}
# bug: request to "/test/x" will fall back to "fallback" instead of "/test/fallback"
location /test/ {
    alias /tmp/;
    try_files $uri /test/fallback?$args;
}
# bug: request to "/test/x" will try "/tmp/x/test/x" instead of "/tmp/x"
location ~ /test/(.*) {
    alias /tmp/$1;
    try_files $uri =403;
}

Or document special case for regexp locations with alias? See 3711bb1336c3.

# bug: request "/foo/test.gif" will try "/tmp//foo/test.gif"
location /foo/ {
    alias /tmp/;
    location ~ gif {
        try_files $uri =405;
    }
}
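
A common way around all of the above, when the location prefix mirrors the filesystem layout, is to use root instead of alias, since try_files resolves its arguments against root in the documented way (a sketch; /srv is a hypothetical prefix):

    # request "/test/x" maps to /srv/test/x; the fallback is resolved
    # against root as well, so no path duplication occurs
    location /test/ {
        root /srv;
        try_files $uri /test/fallback =404;
    }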

#157 cache max_size limit applied incorrectly with xfs nginx-core defect 04/29/12

No matter what I write in the inactive= parameter of the proxy_cache_path directive, it always resolves to 10 minutes.

I tried different formats: inactive=14d, inactive=2w, inactive=336h,

but the result is always the same: 10 minutes.

Checked both by counting files in the cache and by manually running ls -ltr in the cache dir.

This bug exists in 1.0.15 too.

This bug does NOT exist in 0.8.55 (the version we had to roll back to).

Relevant lines:

    proxy_cache_path /ssd/two levels=1:2:2 keys_zone=static:2000m inactive=14d max_size=120000m;
    proxy_temp_path /ssd/temp;

In some server block:

    location /images {
        expires 5d;
        proxy_pass http://static-local.domain:80;
        proxy_cache_valid 2w;
        proxy_cache static;
        proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
    }


#191 literal newlines logged in error log nginx-module defect 08/01/12

I noticed that when a %0a exists in the URL, nginx includes a literal newline in the error_log when logging a file not found:


2012/07/26 17:24:14 [error] 5478#0: *8 "/var/www/localhost/htdocs/

html/index.html" is not found (2: No such file or directory), client: 1.2.3.4, server: , request: "GET /%0a%0a%0ahtml/ HTTP/1.1", host: "test.example.com"


This wreaks havoc with my log monitoring utility 8-/.

It seems desirable to escape the newline in the log message. I tested with the latest 1.2.2. Is there any way, with the existing configuration options, to make this not happen, or any interest in updating the logging module to handle this situation differently?


#196 Inconsistent behavior on uri's with unencoded spaces followed by H nginx-core defect 08/12/12

When requesting files with unencoded spaces, nginx will typically respond with the file requested. But if the filename has a space followed by a capital H, nginx responds with a 400 error.

[foo@bar Downloads]$ nc -vv 127.0.0.1 8000
Ncat: Version 6.01 ( http://nmap.org/ncat )
Ncat: Connected to 127.0.0.1:8000.
GET /t h HTTP/1.1
Host: 127.0.0.1:8000

HTTP/1.1 200 OK
Server: nginx/1.3.4
Date: Sun, 12 Aug 2012 20:22:30 GMT
Content-Type: application/octet-stream
Content-Length: 4
Last-Modified: Sun, 12 Aug 2012 18:30:35 GMT
Connection: keep-alive
ETag: "5027f64b-4"
Accept-Ranges: bytes

bar

[foo@bar Downloads]$ nc -vv 127.0.0.1 8000
Ncat: Version 6.01 ( http://nmap.org/ncat )
Ncat: Connected to 127.0.0.1:8000.
GET /a H HTTP/1.1
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.3.4</center>
</body>
</html>
Ncat: 18 bytes sent, 172 bytes received in 7.29 seconds.
[foo@bar Downloads]$ nc -vv 127.0.0.1 8000
Ncat: Version 6.01 ( http://nmap.org/ncat )
Ncat: Connected to 127.0.0.1:8000.
GET /a%20H HTTP/1.1
Host: 127.0.0.1:8000

HTTP/1.1 200 OK
Server: nginx/1.3.4
Date: Sun, 12 Aug 2012 20:23:32 GMT
Content-Type: application/octet-stream
Content-Length: 4
Last-Modified: Sun, 12 Aug 2012 18:34:44 GMT
Connection: keep-alive
ETag: "5027f744-4"
Accept-Ranges: bytes

bar


#217 Wrong "Content-Type" HTTP response header in certain configuration scenarios nginx-core defect 09/12/12

In certain configuration scenarios the "Content-Type" HTTP response header is not of the expected type but rather falls back to the default setting.

I was able to shrink the configuration down to a bare-minimum test case, which gives some indication that this might happen in conjunction with regex captures in "location", "try_files" and "alias" definitions.

Verified with nginx 1.3.6 (with patch.spdy-52.txt applied), but it was also reproducible with earlier versions; see http://mailman.nginx.org/pipermail/nginx/2012-August/034900.html and http://mailman.nginx.org/pipermail/nginx/2012-August/035170.html (no response was given to those posts).

# nginx -V
nginx version: nginx/1.3.6
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin/nginx --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --user=nginx --group=nginx --with-openssl=openssl-1.0.1c --with-debug --with-http_stub_status_module --with-http_ssl_module --with-ipv6

Minimal test configuration for that specific scenario:

server {
    listen                          80;
    server_name                     t1.example.com;

    root                            /data/web/t1.example.com/htdoc;

    location                        ~ ^/quux(/.*)?$ {
        alias                       /data/web/t1.example.com/htdoc$1;
        try_files                   '' =404;
    }
}

First test request where Content-Type is being correctly set to "image/gif" as expected:

$ curl -s -o /dev/null -D - -H 'Host: t1.example.com' http://127.0.0.1/foo/bar.gif
HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:09 GMT
Content-Type: image/gif
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: "501a0a78-44"
Accept-Ranges: bytes

Second test request where Content-Type is wrong, "application/octet-stream" instead of "image/gif" (actually matches the value of whatever "default_type" is set to):

$ curl -s -o /dev/null -D - -H 'Host: t1.example.com' http://127.0.0.1/quux/foo/bar.gif
HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:14 GMT
Content-Type: application/octet-stream
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: "501a0a78-44"
Accept-Ranges: bytes
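
One workaround sketch (untested) that sidesteps the alias/try_files combination entirely: strip the /quux prefix with a rewrite and let the plain root mapping serve the file, which keeps the extension-based Content-Type detection intact:

    location ~ ^/quux(/.*)?$ {
        # re-run location matching for the captured remainder
        rewrite ^/quux(/.*)?$ $1 last;
    }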

Debug log during the first test request:

2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BDA0C8:672
2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BE3210:1024
2012/09/12 16:20:09 [debug] 15171#0: *1 posix_memalign: 09C0AE10:4096 @16
2012/09/12 16:20:09 [debug] 15171#0: *1 http process request line
2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 178 of 1024
2012/09/12 16:20:09 [debug] 15171#0: *1 http request line: "GET /foo/bar.gif HTTP/1.1"
2012/09/12 16:20:09 [debug] 15171#0: *1 http uri: "/foo/bar.gif"
2012/09/12 16:20:09 [debug] 15171#0: *1 http args: ""
2012/09/12 16:20:09 [debug] 15171#0: *1 http exten: "gif"
2012/09/12 16:20:09 [debug] 15171#0: *1 http process request header line
2012/09/12 16:20:09 [debug] 15171#0: *1 http header: "User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2"
2012/09/12 16:20:09 [debug] 15171#0: *1 http header: "Accept: */*"
2012/09/12 16:20:09 [debug] 15171#0: *1 http header: "Host: t1.example.com"
2012/09/12 16:20:09 [debug] 15171#0: *1 http header done
2012/09/12 16:20:09 [debug] 15171#0: *1 event timer del: 11: 3134905866
2012/09/12 16:20:09 [debug] 15171#0: *1 rewrite phase: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 test location: ~ "^/quux(/.*)?$"
2012/09/12 16:20:09 [debug] 15171#0: *1 using configuration ""
2012/09/12 16:20:09 [debug] 15171#0: *1 http cl:-1 max:1048576
2012/09/12 16:20:09 [debug] 15171#0: *1 rewrite phase: 2
2012/09/12 16:20:09 [debug] 15171#0: *1 post rewrite phase: 3
2012/09/12 16:20:09 [debug] 15171#0: *1 generic phase: 4
2012/09/12 16:20:09 [debug] 15171#0: *1 generic phase: 5
2012/09/12 16:20:09 [debug] 15171#0: *1 access phase: 6
2012/09/12 16:20:09 [debug] 15171#0: *1 access phase: 7
2012/09/12 16:20:09 [debug] 15171#0: *1 post access phase: 8
2012/09/12 16:20:09 [debug] 15171#0: *1 try files phase: 9
2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 10
2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 11
2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 12
2012/09/12 16:20:09 [debug] 15171#0: *1 http filename: "/data/web/t1.example.com/htdoc/foo/bar.gif"
2012/09/12 16:20:09 [debug] 15171#0: *1 add cleanup: 09C0B3D8
2012/09/12 16:20:09 [debug] 15171#0: *1 http static fd: 14
2012/09/12 16:20:09 [debug] 15171#0: *1 http set discard body
2012/09/12 16:20:09 [debug] 15171#0: *1 HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:09 GMT
Content-Type: image/gif
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: "501a0a78-44"
Accept-Ranges: bytes

2012/09/12 16:20:09 [debug] 15171#0: *1 write new buf t:1 f:0 09C0B500, pos 09C0B500, size: 235 file: 0, size: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter: l:0 f:0 s:235
2012/09/12 16:20:09 [debug] 15171#0: *1 http output filter "/foo/bar.gif?"
2012/09/12 16:20:09 [debug] 15171#0: *1 http copy filter: "/foo/bar.gif?"
2012/09/12 16:20:09 [debug] 15171#0: *1 read: 14, 09C0B67C, 68, 0
2012/09/12 16:20:09 [debug] 15171#0: *1 http postpone filter "/foo/bar.gif?" 09C0B6C0
2012/09/12 16:20:09 [debug] 15171#0: *1 write old buf t:1 f:0 09C0B500, pos 09C0B500, size: 235 file: 0, size: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 write new buf t:1 f:0 09C0B67C, pos 09C0B67C, size: 68 file: 0, size: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter: l:1 f:0 s:303
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter limit 0
2012/09/12 16:20:09 [debug] 15171#0: *1 writev: 303
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter 00000000
2012/09/12 16:20:09 [debug] 15171#0: *1 http copy filter: 0 "/foo/bar.gif?"
2012/09/12 16:20:09 [debug] 15171#0: *1 http finalize request: 0, "/foo/bar.gif?" a:1, c:1
2012/09/12 16:20:09 [debug] 15171#0: *1 set http keepalive handler
2012/09/12 16:20:09 [debug] 15171#0: *1 http close request
2012/09/12 16:20:09 [debug] 15171#0: *1 http log handler
2012/09/12 16:20:09 [debug] 15171#0: *1 run cleanup: 09C0B3D8
2012/09/12 16:20:09 [debug] 15171#0: *1 file cleanup: fd:14
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09C0AE10, unused: 1645
2012/09/12 16:20:09 [debug] 15171#0: *1 event timer add: 11: 75000:3134920866
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BDA0C8
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BE3210
2012/09/12 16:20:09 [debug] 15171#0: *1 hc free: 00000000 0
2012/09/12 16:20:09 [debug] 15171#0: *1 hc busy: 00000000 0
2012/09/12 16:20:09 [debug] 15171#0: *1 tcp_nodelay
2012/09/12 16:20:09 [debug] 15171#0: *1 reusable connection: 1
2012/09/12 16:20:09 [debug] 15171#0: *1 post event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 http keepalive handler
2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BE3210:1024
2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 -1 of 1024
2012/09/12 16:20:09 [debug] 15171#0: *1 recv() not ready (11: Resource temporarily unavailable)
2012/09/12 16:20:09 [debug] 15171#0: posted event 00000000
2012/09/12 16:20:09 [debug] 15171#0: worker cycle
2012/09/12 16:20:09 [debug] 15171#0: accept mutex locked
2012/09/12 16:20:09 [debug] 15171#0: epoll timer: 75000
2012/09/12 16:20:09 [debug] 15171#0: epoll: fd:11 ev:0001 d:09C117C8
2012/09/12 16:20:09 [debug] 15171#0: *1 post event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: timer delta: 2
2012/09/12 16:20:09 [debug] 15171#0: posted events 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 http keepalive handler
2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 0 of 1024
2012/09/12 16:20:09 [info] 15171#0: *1 client 127.0.0.1 closed keepalive connection
2012/09/12 16:20:09 [debug] 15171#0: *1 close http connection: 11
2012/09/12 16:20:09 [debug] 15171#0: *1 event timer del: 11: 3134920866
2012/09/12 16:20:09 [debug] 15171#0: *1 reusable connection: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BE3210
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 00000000
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BD9FC0, unused: 56

Debug log during the second test request:

2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BDA0C8:672
2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BE3210:1024
2012/09/12 16:20:14 [debug] 15171#0: *2 posix_memalign: 09C0AE10:4096 @16
2012/09/12 16:20:14 [debug] 15171#0: *2 http process request line
2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 183 of 1024
2012/09/12 16:20:14 [debug] 15171#0: *2 http request line: "GET /quux/foo/bar.gif HTTP/1.1"
2012/09/12 16:20:14 [debug] 15171#0: *2 http uri: "/quux/foo/bar.gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 http args: ""
2012/09/12 16:20:14 [debug] 15171#0: *2 http exten: "gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 http process request header line
2012/09/12 16:20:14 [debug] 15171#0: *2 http header: "User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2"
2012/09/12 16:20:14 [debug] 15171#0: *2 http header: "Accept: */*"
2012/09/12 16:20:14 [debug] 15171#0: *2 http header: "Host: t1.example.com"
2012/09/12 16:20:14 [debug] 15171#0: *2 http header done
2012/09/12 16:20:14 [debug] 15171#0: *2 event timer del: 11: 3134910906
2012/09/12 16:20:14 [debug] 15171#0: *2 rewrite phase: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 test location: ~ "^/quux(/.*)?$"
2012/09/12 16:20:14 [debug] 15171#0: *2 using configuration "^/quux(/.*)?$"
2012/09/12 16:20:14 [debug] 15171#0: *2 http cl:-1 max:1048576
2012/09/12 16:20:14 [debug] 15171#0: *2 rewrite phase: 2
2012/09/12 16:20:14 [debug] 15171#0: *2 post rewrite phase: 3
2012/09/12 16:20:14 [debug] 15171#0: *2 generic phase: 4
2012/09/12 16:20:14 [debug] 15171#0: *2 generic phase: 5
2012/09/12 16:20:14 [debug] 15171#0: *2 access phase: 6
2012/09/12 16:20:14 [debug] 15171#0: *2 access phase: 7
2012/09/12 16:20:14 [debug] 15171#0: *2 post access phase: 8
2012/09/12 16:20:14 [debug] 15171#0: *2 try files phase: 9
2012/09/12 16:20:14 [debug] 15171#0: *2 http script copy: "/data/web/t1.example.com/htdoc"
2012/09/12 16:20:14 [debug] 15171#0: *2 http script capture: "/foo/bar.gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 trying to use file: "" "/data/web/t1.example.com/htdoc/foo/bar.gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 try file uri: ""
2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 10
2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 11
2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 12
2012/09/12 16:20:14 [debug] 15171#0: *2 http script copy: "/data/web/t1.example.com/htdoc"
2012/09/12 16:20:14 [debug] 15171#0: *2 http script capture: "/foo/bar.gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 http filename: "/data/web/t1.example.com/htdoc/foo/bar.gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 add cleanup: 09C0B414
2012/09/12 16:20:14 [debug] 15171#0: *2 http static fd: 14
2012/09/12 16:20:14 [debug] 15171#0: *2 http set discard body
2012/09/12 16:20:14 [debug] 15171#0: *2 HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:14 GMT
Content-Type: application/octet-stream
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: "501a0a78-44"
Accept-Ranges: bytes

2012/09/12 16:20:14 [debug] 15171#0: *2 write new buf t:1 f:0 09C0B53C, pos 09C0B53C, size: 250 file: 0, size: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter: l:0 f:0 s:250
2012/09/12 16:20:14 [debug] 15171#0: *2 http output filter "?"
2012/09/12 16:20:14 [debug] 15171#0: *2 http copy filter: "?"
2012/09/12 16:20:14 [debug] 15171#0: *2 read: 14, 09C0B6C4, 68, 0
2012/09/12 16:20:14 [debug] 15171#0: *2 http postpone filter "?" 09C0B708
2012/09/12 16:20:14 [debug] 15171#0: *2 write old buf t:1 f:0 09C0B53C, pos 09C0B53C, size: 250 file: 0, size: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 write new buf t:1 f:0 09C0B6C4, pos 09C0B6C4, size: 68 file: 0, size: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter: l:1 f:0 s:318
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter limit 0
2012/09/12 16:20:14 [debug] 15171#0: *2 writev: 318
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter 00000000
2012/09/12 16:20:14 [debug] 15171#0: *2 http copy filter: 0 "?"
2012/09/12 16:20:14 [debug] 15171#0: *2 http finalize request: 0, "?" a:1, c:1
2012/09/12 16:20:14 [debug] 15171#0: *2 set http keepalive handler
2012/09/12 16:20:14 [debug] 15171#0: *2 http close request
2012/09/12 16:20:14 [debug] 15171#0: *2 http log handler
2012/09/12 16:20:14 [debug] 15171#0: *2 run cleanup: 09C0B414
2012/09/12 16:20:14 [debug] 15171#0: *2 file cleanup: fd:14
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09C0AE10, unused: 1568
2012/09/12 16:20:14 [debug] 15171#0: *2 event timer add: 11: 75000:3134925906
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BDA0C8
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BE3210
2012/09/12 16:20:14 [debug] 15171#0: *2 hc free: 00000000 0
2012/09/12 16:20:14 [debug] 15171#0: *2 hc busy: 00000000 0
2012/09/12 16:20:14 [debug] 15171#0: *2 tcp_nodelay
2012/09/12 16:20:14 [debug] 15171#0: *2 reusable connection: 1
2012/09/12 16:20:14 [debug] 15171#0: *2 post event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 http keepalive handler
2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BE3210:1024
2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 -1 of 1024
2012/09/12 16:20:14 [debug] 15171#0: *2 recv() not ready (11: Resource temporarily unavailable)
2012/09/12 16:20:14 [debug] 15171#0: posted event 00000000
2012/09/12 16:20:14 [debug] 15171#0: worker cycle
2012/09/12 16:20:14 [debug] 15171#0: accept mutex locked
2012/09/12 16:20:14 [debug] 15171#0: epoll timer: 75000
2012/09/12 16:20:14 [debug] 15171#0: epoll: fd:11 ev:0001 d:09C117C9
2012/09/12 16:20:14 [debug] 15171#0: *2 post event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: timer delta: 2
2012/09/12 16:20:14 [debug] 15171#0: posted events 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 http keepalive handler
2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 0 of 1024
2012/09/12 16:20:14 [info] 15171#0: *2 client 127.0.0.1 closed keepalive connection
2012/09/12 16:20:14 [debug] 15171#0: *2 close http connection: 11
2012/09/12 16:20:14 [debug] 15171#0: *2 event timer del: 11: 3134925906
2012/09/12 16:20:14 [debug] 15171#0: *2 reusable connection: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BE3210
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 00000000
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BD9FC0, unused: 56

#242 DAV module does not respect if-unmodified-since nginx-module defect 11/04/12

That is, if you PUT or DELETE a resource with an If-Unmodified-Since header, the overwrite or delete will go through happily even if the header should have prevented it.

(This is a common use case: you've previously fetched a version of a resource and know its modification date; then, when updating or deleting it, you want to guard against race conditions with other clients, and can use If-Unmodified-Since to get an error back if someone else modified the resource in the meantime.)

Find a patch for this attached (also at https://gist.github.com/4013062). It's my first Nginx contribution -- feel free to point out style mistakes or general wrong-headedness.

I did not find a clean way to make the existing code in ngx_http_not_modified_filter_module.c handle this. It looks directly at the Last-Modified header and, being a header filter, only runs *after* the actions for the request have already been taken.

I also did not add code for If-Match, which is analogous and could probably be added to the ngx_http_test_if_unmodified function I added (which would then be renamed). But I don't really understand nginx's handling of etags yet, so I didn't touch that.


#52 urlencode/urldecode needed in rewrite and other places nginx-module enhancement 11/13/11

If $http_accept contains spaces, they are passed on without encoding:

    rewrite /cgi-bin/index.pl?_requri=$uri&_accept=$http_accept break;
    ...
    proxy_pass http://127.0.0.1:82; # mini-httpd listening
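
Since core nginx has no urlencode function for variables, one workaround sketch (untested) is to pass the raw value in a request header instead of the query string, as header values may contain spaces; X-Orig-Accept is a made-up header name:

    location /cgi-bin/ {
        proxy_set_header X-Orig-Accept $http_accept;
        proxy_pass http://127.0.0.1:82;   # mini-httpd listening
    }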


#165 Nginx worker processes don't seem to have the right group permissions nginx-core enhancement 05/11/12

Package: nginx
Version: 1.2.0-1~squeeze (from the nginx repository, Debian version)

When a UNIX domain socket's permissions are set to allow the primary group of the nginx worker processes to read/write it, the nginx worker processes fail to access it, with a 'permission denied' error logged.

Way to reproduce: bind nginx to a PHP-FPM UNIX domain socket.

PHP-FPM socket configured as follow:

  • User: www-data
  • Group: www-data
  • Mode: 0660

Nginx configured as follow:

  • Worker processes spawned with the user 'nginx'
  • User 'nginx' has 'www-data' as primary group

Details on the configuration can be found here: http://forum.nginx.org/read.php?2,226182

It would also be nice to check that any group of the nginx worker processes can be used for setting access permissions on sockets, not only the primary one.


#195 Close connection if SSL not enabled for vhost nginx-module enhancement 08/11/12

Instead of using the default SSL certificate, nginx should (by default or when configured) close the SSL connection as soon as it realizes that the requested domain has not been configured to be served over HTTPS.

For example,

server {
    listen 80 default_server;
    listen 443 ssl;

    server_name     aaa.example.net;

    ssl_certificate     /etc/ssl/certs/aaa.example.net.pem;
    ssl_certificate_key /etc/ssl/private/aaa.example.net.key;
}

server {
    listen 80;

    server_name     bbb.example.net;
}

If a client starts an HTTPS request for bbb.example.net, it will be greeted with a (correct) error/warning: "This certificate is untrusted, wrong domain". This is correct because nginx is serving the aaa.example.net certificate.

What nginx should do is close the connection as soon as it discovers the domain being requested (after reading the SNI data, I suppose). This will communicate to the client and the user that there is no HTTPS connectivity on bbb.example.net. Also, this solution does not disclose the fact that aaa.example.net is served by the same nginx server.
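
For reference, later nginx versions grew a directive for exactly this. A sketch of the requested behaviour using ssl_reject_handshake (available in nginx 1.19.4 and later):

    server {
        listen 443 ssl default_server;
        # abort the TLS handshake for names not matched by any other server block
        ssl_reject_handshake on;
    }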


#239 Support for large (> 64k) FastCGI requests nginx-module enhancement 10/30/12

Currently, a hardcoded limit produces an '[alert] fastcgi request record is too big:...' message on the error output when requests larger than 64k are sent through nginx.

The improvement would be to handle larger requests based on configuration, if possible. Something similar to the work already done on output buffers would be nice.

The only current workaround is not to use FastCGI, i.e. to revert to something like Apache, which is a huge step backwards...


#55 Opera version is detected incorrectly nginx-module defect 11/19/11

In recent versions of the Opera browser the user-agent looks like this: Opera/9.80 (Windows NT 6.1; U; MRA 5.8 (build 4661); ru) Presto/2.8.131 Version/11.11. That is, the version is reflected by Version/11.11, not by Opera/9.80. In the ngx_http_browser_module it is detected like this:

    { "opera",
      0, sizeof("Opera ") - 1, "Opera"},

Replacing this with

    { "opera",
      sizeof("Opera ") - 1, sizeof("Version/") - 1, "Version/"},

detects new versions correctly, but old versions would then be a problem.


Yaroslav Zhuravlev (1 match)

#1467 Problem of location matching with a given request documentation defect 01/24/18

Hi, guys. I've got a problem with location matching for a request against a regexp, because nginx is not finding the match as described here: https://nginx.ru/en/docs/http/ngx_http_core_module.html#location

My request is:

http://localhost:8080/catalog/css/asdftail

My conf is:

server {
    listen 8080;

    location ~ ^/catalog/(js|css|i)/(.*)$
    {
            return 405;
    }
    location / {
            location ~ ^.+tail$ {
                    return 403;
            }
            return 402;
    }
}

My problem is: with this request, my conf should return a 405 error, but it returns 403, because nginx starts checking regexp locations from "the location with the longest matching prefix is selected and remembered", not from the top of the config ("Then regular expressions are checked, in the order of their appearance in the configuration file").

If my conf looks like this:

server {
    listen 8080;

    location ~ ^/catalog/(js|css|i)/(.*)$
    {
            return 405;
    }
    
    location ~ ^.+tail$ {
            return 403;
    }

    location / {

            return 402;
    }
}

or this:

server {
    listen 8080;

    location catalog/ {
        location ~ ^/catalog/(js|css|i)/(.*)$
        {
                return 405;
        }
    }
    location / {
            location ~ ^.+tail$ {
                    return 403;
            }
            return 402;
    }
}

then everything works as in the manual.


(empty) (37 matches)

#1463 Build in --builddir throws error on nginx.h nginx-core defect 01/18/18

When building with --builddir, an error is thrown during compilation.

> [...]
> Running Mkbootstrap for nginx ()
> chmod 644 "nginx.bs"
> "/foo/bar/perl5/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- nginx.bs blib/arch/auto/nginx/nginx.bs 644
> gmake[2]: *** No rule to make target `../../../../../src/core/nginx.h', needed by `nginx.c'. Stop.
> gmake[2]: Leaving directory `/home/user/build/src/http/modules/perl'
> gmake[1]: *** [/home/user/build//src/http/modules/perl/blib/arch/auto/nginx/nginx.so] Error 2
> gmake[1]: Leaving directory `/home/user/nginx-1.13.8'
> gmake: *** [build] Error 2

gmake --version: GNU Make 3.81, Copyright (C) 2006 Free Software Foundation, Inc.

gcc --version: gcc (GCC) 5.3.0

cpp --version: cpp (GCC) 5.3.0


#621 Could not allocate new session in SSL session shared cache nginx-core defect 09/03/14

Hi,

I'm using nginx as reverse proxy in front of haproxy. I'm using this ssl_session_cache config:

ssl_session_cache   shared:SSL:10m;
ssl_session_timeout 512m;

Now I get from time to time such errors in the log:

2014-09-03T09:51:34+00:00 hostname nginx: 2014/09/03 09:51:34 [alert] 27#0: *144835271 could not allocate new session in SSL session shared cache "SSL" while SSL handshaking, client: a.b.c.d, server: 0.0.0.0:443

Unfortunately that error doesn't say much. Looking at the code shows that it probably failed to allocate via ngx_slab_alloc_locked():

https://github.com/nginx/nginx/blob/master/src/event/ngx_event_openssl.c#L2088

I'll try to raise the session cache and see if it helps, but since it's just a cache I would expect only performance differences.

FWIW: nginx is running in a docker container, to be specific: https://registry.hub.docker.com/u/fish/haproxy/ (although I've raised the cache setting there already).
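
For what it's worth, with ssl_session_timeout far larger than what a 10m shared zone can hold, sessions accumulate until the zone overflows, which appears to be when this alert is logged. A sketch of a more balanced pairing (the values are guesses, not recommendations):

    ssl_session_cache   shared:SSL:50m;
    ssl_session_timeout 4h;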


#1263 Segmentation Fault when SSI is used in sub-request nginx-module defect 05/03/17

Hi,

nginx worker process crashes with segfault when SSI is used in a sub-request.

Config example:

    location /loc1.html {
        add_after_body /loc2.html;
    }

    location /loc2.html {
        ssi on;
    }

The segfault happens only when I access the /loc1.html location. When I access /loc2.html directly, it works fine.

Error log:

==> ../log/error.log <==
2017/05/03 18:47:10 [alert] 14548#23345880: worker process 14566 exited on signal 11
2017/05/03 18:47:10 [alert] 14548#23345880: worker process 14573 exited on signal 11

Just FYI, content of loc1.html:

<p>Hi from location 1 !</p>

content of loc2.html:

<p>Hi from location 2 on <!--#echo var="host" --> !</p>

I tried to debug and fix it, but for lack of time I stopped here, in ngx_http_ssi_filter_module.c:

static ngx_str_t *
ngx_http_ssi_get_variable(ngx_http_request_t *r, ngx_str_t *name,
    ngx_uint_t key)
{
    ngx_uint_t           i;
    ngx_list_part_t     *part;
    ngx_http_ssi_var_t  *var;
    ngx_http_ssi_ctx_t  *ctx;

    ctx = ngx_http_get_module_ctx(r->main, ngx_http_ssi_filter_module);

    ...

ctx is NULL. SSI context is missing when SSI is called in a subrequest.

And then the subsequent code will cause segfault, because ctx is NULL:

    if (ctx->variables == NULL) {
        return NULL;
    }

I added some additional debug logs to the code around the ctx = ngx_http_get_module_ctx(....) line. And this is the output:

2017/05/03 18:47:10 [debug] 16787#8822579: *3 ssi ngx_http_ssi_get_variable r->main: 00007FE3FC006E50
2017/05/03 18:47:10 [debug] 16787#8822579: *3 ssi ngx_http_ssi_get_variable r->main->ctx: 00007FE3FC007770, module.ctx_index: 46
2017/05/03 18:47:10 [debug] 16787#8822579: *3 ssi ngx_http_ssi_get_variable ctx: 0000000000000000

Cheers Peter Magdina
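An untested workaround sketch, on the assumption that the crash stems from the SSI context never being created for the main request: enable SSI on the parent location as well, so the module context exists before the subrequest runs.

```nginx
location /loc1.html {
    ssi on;                  # hypothetical workaround: creates the SSI ctx
                             # for the main request too
    add_after_body /loc2.html;
}

location /loc2.html {
    ssi on;
}
```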


#1330 OCSP stapling non-functional on IPv6-only host nginx-core defect 07/24/17

I have an IPv6-only host running CentOS 7. I have a Lets Encrypt certificate on the host and I've enabled OCSP stapling per the Mozilla preferred SSL stuff. My provider has NAT64 set-up so I've configured their NAT64 resolvers in the resolve entry in nginx.conf.

        # OCSP Stapling ---
        # fetch OCSP records from URL in ssl_certificate and cache them
        ssl_stapling on;
        ssl_stapling_verify on;

        # verify chain of trust of OCSP response using Root CA and Intermediate certs
        ssl_trusted_certificate /etc/dehydrated/certs/flathub.org/chain.pem;

        resolver [2a00:1098:0:80:1000:3b:0:1] [2a00:1098:0:82:1000:3b:0:1];

I see this error:

2017/07/24 14:02:23 [error] 16637#0: connect() to 88.221.134.147:80 failed (101: Network is unreachable) while requesting certificate status, responder: ocsp.int-x3.letsencrypt.org

I believe that it's because this host returns two A and two AAAA results:

[root@front nginx]# host ocsp.int-x3.letsencrypt.org
ocsp.int-x3.letsencrypt.org is an alias for ocsp.int-x3.letsencrypt.org.edgesuite.net.
ocsp.int-x3.letsencrypt.org.edgesuite.net is an alias for a771.dscq.akamai.net.
a771.dscq.akamai.net has address 88.221.134.114
a771.dscq.akamai.net has address 88.221.134.147
a771.dscq.akamai.net has IPv6 address 2a02:26f0:e8::6856:6fb0
a771.dscq.akamai.net has IPv6 address 2a02:26f0:e8::6856:6f88

However the SSL stapling code only attempts to connect the first one: https://github.com/nginx/nginx/blob/9197a3c8741a8832e6f6ed24a72dc5b078d840fd/src/event/ngx_event_openssl_stapling.c#L1028

I've tried to work around with /etc/hosts but that seems unused, and OCSP stapling seems to disable itself if I have no resolver configuration entry. I can't seem to place an IPv6 address in the ssl_stapling_responder either.
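One hedged workaround for IPv6-only hosts is to skip the runtime OCSP fetch entirely and staple a pre-fetched response via the ssl_stapling_file directive. The file path below is an assumption, and the DER-encoded response has to be kept fresh by an external job (e.g. a cron job running openssl ocsp):

```nginx
ssl_stapling on;
# DER-encoded OCSP response, refreshed out of band by an external job
ssl_stapling_file /etc/dehydrated/certs/flathub.org/ocsp.der;
```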


#348 Excessive urlencode in if-set nginx-core defect 05/02/13

Hello,

I had set up Apache with mod_dav_svn behind nginx acting as a front-end proxy, and while committing a copied file with brackets ([]) in its filename into that Subversion repository I found a bug in nginx.

How to reproduce it (configuration file is as simple as possible while still causing the bug):

$ cat nginx.conf 
error_log  stderr debug;
pid nginx.pid;
events {
    worker_connections  1024;
}
http {
    access_log access.log;
    server {
        listen 8000;
        server_name localhost;
        location / {
            set $fixed_destination $http_destination;
            if ( $http_destination ~* ^(.*)$ )
            {
                set $fixed_destination $1;
            }
            proxy_set_header        Destination $fixed_destination;            
            proxy_pass http://127.0.0.1:8010;
        }
    }
}

$ nginx -p $PWD -c nginx.conf -g 'daemon off;'
...

In second terminal window:

$ nc -l 8010

In third terminal window:

$ curl --verbose --header 'Destination: http://localhost:4000/foo%5Bbar%5D.txt' '0:8000/%41.txt'
* About to connect() to 0 port 8000 (#0)
*   Trying 0.0.0.0...
* Adding handle: conn: 0x7fa91b00b600
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fa91b00b600) send_pipe: 1, recv_pipe: 0
* Connected to 0 (0.0.0.0) port 8000 (#0)
> GET /%41.txt HTTP/1.1
> User-Agent: curl/7.30.0
> Host: 0:8000
> Accept: */*
> Destination: http://localhost:4000/foo%5Bbar%5D.txt
> 

Back in the second terminal window:

($ nc -l 8010)
GET /%41.txt HTTP/1.0
Destination: http://localhost:4000/foo%255Bbar%255D.txt
Host: 127.0.0.1:8010
Connection: close
User-Agent: curl/7.30.0
Accept: */*

The problem is that the Destination header was changed from ...foo%5Bbar%5D.txt to ...foo%255Bbar%255D.txt. This happens only when

  • that if ( $http_destination ~* ^(.*)$ ) is processed
  • and URL (HTTP GET URL, not that Destination URL) also contains urlencoded (%41) character(s).

In other cases (URL does not contain urlencoded character or that if is not matched) the Destination header is proxy_passed untouched, which is expected behavior.


Note: why do I need that if ( $http_destination ~* ^(.*)$ )? In this example it is simplified, but for the Subversion setup I mentioned I need to rewrite the Destination header from https to http when nginx proxy_passes from https to Apache over http.

This bug also happens on nginx/0.7.67 in Debian Squeeze.
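For the underlying need (rewriting the Destination scheme from https to http), a map can replace the if block entirely. This is a sketch that sidesteps, rather than fixes, the reported double-encoding:

```nginx
# rewrite only the scheme; leave the rest of the header untouched
map $http_destination $fixed_destination {
    default                  $http_destination;
    ~^https://(?<rest>.*)$   http://$rest;
}
```

with proxy_set_header Destination $fixed_destination; in the location, as before.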


#458 Win32: autoindex module doesn't support Unicode names nginx-core defect 12/06/13

Functions for traversing directories use the ANSI versions of FindFirstFile() and FindNextFile(), so any characters in filenames beyond basic Latin end up broken.

The proposed patch fixes this issue by converting WCHAR names to UTF-8.


#564 map regex matching affects rewrite directive nginx-core defect 05/28/14

Using a regex in the map directive changes the capture groups in a rewrite directive. This happens only if the regex in the map is matched. A minimal example config:

http {
        map $http_accept_language $lang {
                default en;
                 ~(de) de;
        }
        server {
                server_name test.local;
                listen 80;
                rewrite ^/(.*)$ http://example.com/$lang/$1 permanent;
        }
}

Expected:

$ curl -sI http://test.local/foo | grep Location
Location: http://example.com/en/foo
$ curl -H "Accept-Language: de" -sI http://test.local/foo | grep Location
Location: http://example.com/de/foo

Actual:

$ curl -sI http://test.local/foo | grep Location
Location: http://example.com/en/foo
$ curl -H "Accept-Language: de" -sI http://test.local/foo | grep Location
Location: http://example.com/de/de

If I leave out the parentheses in ~(de) de; (so it becomes ~de de;), $1 is simply empty:

$ curl -H "Accept-Language: de" -sI http://test.local/foo | grep Location
Location: http://example.com/de/
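A common workaround, assuming the problem is the map regex clobbering the numbered captures: use a named capture in the rewrite so it no longer depends on $1.

```nginx
# a named capture is not overwritten when the map regex matches
rewrite ^/(?<path>.*)$ http://example.com/$lang/$path permanent;
```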

#752 try_files + subrequest + proxy-handler problem nginx-core defect 04/23/15

When using subrequests with try_files the following behaviour is observed.

   server {
       listen       8081;
       default_type text/html;

       location /uno {   return 200 "uno  ";   }
       location /duo {   return 200 "duo  ";   }
       location /tres {  return 200 "tres  ";  }
   }


   server {
       listen       8080;

       location / {
           root /tmp;
           try_files /tres =404;
           proxy_pass http://127.0.0.1:8081;
           add_after_body /duo;
       }
   }

Assuming /tmp/tres exists, a request to

http://127.0.0.1:8080/uno

returns "uno tres ", not "uno duo " or "tres tres ".

I.e., main request assumes that the request URI is unmodified and passes original request URI, "/uno". But in a subrequest the URI is modified and nginx uses modified URI, "/tres".

This is believed to be a bug, and one of the following should be done:

  • try_files should reset the r->valid_unparsed_uri flag if it modifies the URI;
  • or try_files should not modify the URI at all.

See this thread (in Russian) for additional details.


#753 Nginx leaves UNIX domain sockets after SIGQUIT nginx-core defect 04/24/15

According to the Nginx documentation, SIGQUIT will cause a "graceful shutdown" while SIGTERM will cause a "fast shutdown". If you send SIGQUIT to Nginx, it will leave behind stale UNIX domain socket files that were created using the listen directive. If there are any stale UNIX domain socket files when Nginx starts up, it will fail to listen on the socket because it already exists. However if you use SIGTERM, the UNIX domain socket files will be properly removed. I've encountered this with Nginx 1.6.2, 1.6.3, and 1.8.0 on Ubuntu 14.04.

Example /etc/nginx/nginx.conf:

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/sites-enabled/*;
}

Example /etc/nginx/sites-enabled/serve-files:

server {
    listen unix:/run/serve-files.socket;
    root /var/www/files;
    location / {
        try_files $uri =404;
    }
}

Then start Nginx:

sudo nginx
# OR
sudo service nginx start

On first start, /run/serve-files.socket will be created because of the listen unix:/run/serve-files.socket; directive.

Then stop Nginx with SIGQUIT:

sudo kill -SIGQUIT $(cat /run/nginx.pid)
# OR
sudo service nginx stop # Sends SIGQUIT

The socket at /run/serve-files.socket will remain because it was not properly removed. If you try to restart Nginx, it will fail to start with the following logged to /var/log/nginx/error.log:

2015/04/24 10:16:27 [emerg] 5782#0: bind() to unix:/run/serve-files.socket failed (98: Address already in use)
2015/04/24 10:16:27 [emerg] 5782#0: bind() to unix:/run/serve-files.socket failed (98: Address already in use)
2015/04/24 10:16:27 [emerg] 5782#0: bind() to unix:/run/serve-files.socket failed (98: Address already in use)
2015/04/24 10:16:27 [emerg] 5782#0: bind() to unix:/run/serve-files.socket failed (98: Address already in use)
2015/04/24 10:16:27 [emerg] 5782#0: bind() to unix:/run/serve-files.socket failed (98: Address already in use)
2015/04/24 10:16:27 [emerg] 5782#0: still could not bind()

#756 Client disconnect in ngx_http_image_filter_module nginx-module defect 04/29/15

I have encountered a bug in ngx_http_image_filter_module when used in conjunction with ngx_http_proxy_module; the configuration is as follows:

    location /img/ {
        proxy_pass http://mybucket.s3.amazonaws.com;
        image_filter resize 150 100;
    }

The steps to reproduce are rather complicated as they depend on how TCP fragments the response coming from the proxy:

  • If http://mybucket.s3.amazonaws.com returns, in the first TCP packet, a small amount of data (HTTP header, or HTTP header + a few bytes), the content is marked as not an image and NGX_HTTP_UNSUPPORTED_MEDIA_TYPE is returned (disconnecting the client), irrespective of whether or not subsequent data would complete the response to a valid image.

Nginx appears to give up right away on waiting for data if the contents of the first TCP packet received from the proxy do not contain a valid image header, i.e. ngx_http_image_test() will return NGX_HTTP_IMAGE_SIZE, etc.

In my experience this was triggered by a subtle change in AWS S3 that introduced further fragmentation of the TCP responses.

Versions affected: 1.6.2, 1.6.3, 1.7.2, 1.8.0, etc. (all?)

Attaching a 1.8.0 patch that resolves it; the other versions can be fixed similarly.

I think a better fix would be to "return NGX_OK" if we do not have enough data in "case NGX_HTTP_IMAGE_START", and "return NGX_HTTP_UNSUPPORTED_MEDIA_TYPE" (as per the original code) if enough data has been read but it really is not an image; however, this exceeds the scope of the fix and my use case.

nginx-devel thread: http://mailman.nginx.org/pipermail/nginx-devel/2015-April/006876.html


#774 modern_browser // gecko version overwrites msie version nginx-module defect 07/21/15

I am not sure if this behavior is still present in the current version, but it occurs in 1.4 on Ubuntu 14.04.

giving the following config:

##########################################

modern_browser gecko 27.0;
modern_browser opera 19.0;
modern_browser safari 8.0;
modern_browser msie 9.0;
modern_browser unlisted;

ancient_browser Links Lynx netscape4;

##########################################

On IE11 (Win 8), $ancient_browser == 1. I am not sure if it's only me, but this seems wrong given my understanding of how the module should work. This applies to a 'real' IE11, but not to a spoofed UA (in chromium 46.0.2462.0) of IE10, IE9, IE8, or IE7; in those cases everything works as expected. Interestingly, though, the next config:

##########################################

modern_browser gecko 9.0;
modern_browser opera 19.0;
modern_browser safari 8.0;
modern_browser msie 9.0;
modern_browser unlisted;

ancient_browser Links Lynx netscape4;

##########################################

works as expected (in terms of the IE behavior), meaning $ancient_browser != 1. But then I would be supporting older Firefox versions, and that is not intended. The following config also results in $ancient_browser != 1:

##########################################

modern_browser gecko 9.0;
modern_browser opera 19.0;
modern_browser safari 8.0;
modern_browser msie 12.0;
modern_browser unlisted;

ancient_browser Links Lynx netscape4;

##########################################

_Conclusion_: it looks like the gecko version is overwriting the defined msie version. This does not mean that this is exactly what is happening internally.


#861 Possibility of Inconsistent HPACK Dynamic Table Size in HTTP/2 Implementation nginx-module defect 12/15/15

The HPACK dynamic table is only initialized upon addition of the first entry (see ngx_http_v2_add_header) in http/v2/ngx_http_v2_table.c.

If a dynamic table size update is sent before the first header to be added, the size will be set appropriately. However, once the first header is added, the table size is updated with NGX_HTTP_V2_TABLE_SIZE, resulting in a different size than the client.

After a brief reading of the HTTP/2 and HPACK specification, it appears that updating the dynamic table size before adding any headers is allowed.


#882 Unencoded Location: header when redirecting nginx-core defect 01/25/16

As posted on the mailing list (http://mailman.nginx.org/pipermail/nginx/2016-January/049650.html):

We’re seeing the following behavior in nginx 1.4.6:

  • nginx returns “301 Moved Permanently” with the Location: URL unencoded and a trailing slash added:
Location: http://example.org/When Harry Met Sally/
  • Some software (e.g. PHP) will automatically follow the redirect, but because it expects an encoded Location: header, it sends exactly what was returned from the server. (Note that curl, wget, and others will fix up unencoded Location: headers, but that's not what the HTTP spec requires.)

In other words, this is the transaction chain:

C: GET http://example.org/When%20Harry%20Met%20Sally HTTP/1.1

S: HTTP/1.1 301 Moved Permanently
S: Location: http://example.org/When Harry Met Sally/

C: GET http://example.org/When Harry Met Sally/ HTTP/1.1

S: 400 Bad Request

I believe the 301 originates from within the nginx code itself (ngx_http_static_module.c:147-193? in trunk) and not from our rewrite rules. As I read the HTTP spec, Location: must be encoded.


#964 Expires header incorrectly prioritised over Cache-Control: max-age nginx-core defect 04/28/16

When using nginx as a caching reverse proxy, items may be cached for the wrong amount of time if the Expires header is inconsistent with max-age. Caching will be disabled if the Expires header value is in the past or malformed.

Per RFC 2616 section 14.9.3, max-age takes precedence over Expires. However, nginx prefers whichever header/directive occurs first in the response, which causes unexpected results when migrating to nginx from an RFC-compliant caching reverse proxy.

A minimally-reproducible config is attached. Observe that no file is cached when accessing http://127.0.0.2:8080/fail, but a file is cached when accessing http://127.0.0.2:8080/success.
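Until the precedence is fixed, a hedged workaround is to make nginx ignore the inconsistent header with proxy_ignore_headers, so that only Cache-Control drives the cache lifetime:

```nginx
# ignore the upstream Expires header; cache lifetime then follows max-age
proxy_ignore_headers Expires;
```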


#994 perl_require directive has effect only in the first config file other defect 06/08/16

My configs are included as:

include /etc/nginx/sites-enabled/*.conf;

If I want to use the 'perl_require' directive, I have to place it ONLY in the first conf file (in alphabetical order). If I put the directive into any other conf file, it has no effect; nginx does not even complain when I try to load a nonexistent module.
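A sketch of working within the apparent constraint, assuming perl_require is only honored in the context parsed first: keep it in the main nginx.conf http block rather than in an included vhost file (My/Module.pm is a placeholder name):

```nginx
http {
    perl_require My/Module.pm;                  # placeholder module name
    include /etc/nginx/sites-enabled/*.conf;    # vhosts no longer need it
}
```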


#1058 Undocumented redirect? documentation defect 08/24/16

When a URL is requested without a trailing slash, a 301 redirect to the same URL with a trailing slash always occurs.

Example config:

    location /dir {
        alias /www/dir;
    }

The same thing happens in this variant:

    location /dir/ {
        alias /www/dir/;
    }

However, the documentation seems to describe this behavior only for locations with *_pass directives. Maybe I looked in the wrong place, but all I found was this:

If a location is defined by a prefix string that ends with the slash character, and requests are processed by one of proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, or memcached_pass, then special processing is performed. In response to a request with URI equal to this string, but without the trailing slash, a permanent redirect with the code 301 will be returned to the requested URI with the slash appended.

A complete example configuration:

    location /ig {
        alias /www/ig_build;
    }

    $ curl -I http://localhost:90/ig/infografika
    HTTP/1.1 301 Moved Permanently
    Server: nginx/1.11.3
    Date: Wed, 24 Aug 2016 09:52:10 GMT
    Content-Type: text/html
    Content-Length: 185
    Location: http://localhost:90/ig/infografika/
    Connection: keep-alive

I also tested on version 1.4.2; the behavior is exactly the same.

If the directory does not exist, a 404 is returned immediately; but if it exists and the request had no trailing slash, the redirect occurs.


#1168 Nginx handles the max_size option of the proxy_cache_path directive incorrectly nginx-core defect 12/29/16

For example, take a configuration with a directive such as:

proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=images:64m inactive=7d max_size=12g;

where /var/lib/nginx/cache is a directory mounted over NFS. The directory is mounted with the flags

rsize=1048576 wsize=1048576

It was noticed that nginx keeps the number of files in the cache at around 12.5k, even though 12 thousand files (image thumbnails) is far too few for 12g.

Looking into the problem in more detail, it became clear that the effective cache size is in fact 12g/bsize; bsize is obtained for /var/lib/nginx/cache/... via statfs and equals the values from rsize/wsize (https://github.com/nginx/nginx/blob/master/src/http/ngx_http_file_cache.c#L154). That is, 12884901888/1048576=12288.

When the number of files in the cache reaches 12288, forced invalidation begins (https://github.com/nginx/nginx/blob/master/src/http/ngx_http_file_cache.c#L1950).

On NFS, bsize is derived from the rsize and wsize parameters; it cannot be relied upon when computing the maximum total size of files in the cache, since rsize and wsize are meaningful only for the network stack and in no way reflect the parameters of the physical storage.

Possible solutions:

  1. Use a constant bsize of 512/4096/8192;
  2. Allow bsize to be specified explicitly via an additional parameter.

It would also be good to mention this "peculiarity" in the documentation.


#1226 nginx behaves weirdly when using eventport as event engine on Solaris nginx-core defect 03/22/17

nginx behaves weirdly when using eventport as the event engine. I tried to use eventport on Solaris when I first started using nginx, but the experience was discouraging: nginx behaved weirdly, mostly entering indefinite timeouts while handling requests. I switched to /dev/poll and since then it has worked flawlessly. Some time ago I saw a discussion stating that work had been done on better eventport support, so I compiled the new 1.11.11 version and decided to give eventport in nginx another chance. Sadly, nothing has changed: nginx still enters indefinite loops when handling requests with eventport. It looks as if the web application is really waiting for something, and the browser just keeps spinning the loader icon. After changing back to /dev/poll and restarting nginx, everything works again. No errors are logged.


#1238 Core dump when $limit_rate is set both in a map and in a location nginx-core defect 04/06/17

This is a minimal server configuration used to reproduce the problem (only the map & server section, the rest is the default configuration from nginx.org centos 7 nginx-1.10.3 package).

map $arg_test $limit_rate {
        default 128k;
        test 4k;
}

server {
        listen 8080;
        location / {
                root /var/www;
                set $limit_rate 4k;
        }
}

If a request to an affected location is made, nginx crashes with the following stack.

Program terminated with signal 7, Bus error.
#0  ngx_http_variable_request_set_size (r=0x7fb5c2761650, v=<optimized out>, data=140418628385320) at src/http/ngx_http_variables.c:730
730	    *sp = s;

(gdb) thread apply all bt

Thread 1 (Thread 0x7fb5c1237840 (LWP 2648)):
#0  ngx_http_variable_request_set_size (r=0x7fb5c2761650, v=<optimized out>, data=140418628385320) at src/http/ngx_http_variables.c:730
#1  0x00007fb5c12e992d in ngx_http_rewrite_handler (r=0x7fb5c2761650) at src/http/modules/ngx_http_rewrite_module.c:180
#2  0x00007fb5c12a669c in ngx_http_core_rewrite_phase (r=0x7fb5c2761650, ph=<optimized out>) at src/http/ngx_http_core_module.c:901
#3  0x00007fb5c12a1b3d in ngx_http_core_run_phases (r=r@entry=0x7fb5c2761650) at src/http/ngx_http_core_module.c:847
#4  0x00007fb5c12a1c3a in ngx_http_handler (r=r@entry=0x7fb5c2761650) at src/http/ngx_http_core_module.c:830
#5  0x00007fb5c12ad0de in ngx_http_process_request (r=0x7fb5c2761650) at src/http/ngx_http_request.c:1910
#6  0x00007fb5c12ad952 in ngx_http_process_request_line (rev=0x7fb5c27bae10) at src/http/ngx_http_request.c:1022
#7  0x00007fb5c128de60 in ngx_event_process_posted (cycle=cycle@entry=0x7fb5c2745930, posted=0x7fb5c1575290 <ngx_posted_events>) at src/event/ngx_event_posted.c:33
#8  0x00007fb5c128d9d7 in ngx_process_events_and_timers (cycle=cycle@entry=0x7fb5c2745930) at src/event/ngx_event.c:259
#9  0x00007fb5c12944f0 in ngx_worker_process_cycle (cycle=cycle@entry=0x7fb5c2745930, data=data@entry=0x1) at src/os/unix/ngx_process_cycle.c:753
#10 0x00007fb5c1292e66 in ngx_spawn_process (cycle=cycle@entry=0x7fb5c2745930, proc=proc@entry=0x7fb5c1294460 <ngx_worker_process_cycle>, data=data@entry=0x1, 
    name=name@entry=0x7fb5c131c197 "worker process", respawn=respawn@entry=-3) at src/os/unix/ngx_process.c:198
#11 0x00007fb5c12946f0 in ngx_start_worker_processes (cycle=cycle@entry=0x7fb5c2745930, n=2, type=type@entry=-3) at src/os/unix/ngx_process_cycle.c:358
#12 0x00007fb5c1295283 in ngx_master_process_cycle (cycle=cycle@entry=0x7fb5c2745930) at src/os/unix/ngx_process_cycle.c:130
#13 0x00007fb5c127039d in main (argc=<optimized out>, argv=<optimized out>) at src/core/nginx.c:367
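A workaround sketch, under the assumption that the crash comes from mixing a map-defined $limit_rate with a set on the same variable: derive the value in a differently named map variable and let only set touch $limit_rate.

```nginx
# use a distinct map variable; only `set` writes $limit_rate
map $arg_test $rate_from_map {
        default 128k;
        test 4k;
}

server {
        listen 8080;
        location / {
                root /var/www;
                set $limit_rate $rate_from_map;
        }
}
```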

#1255 map regexp fail to match documentation defect 04/21/17

I have this code:

map $http_incap_client_ip:$http_incap_tls_version:${http_x_forwarded_proto}:$ssl_protocol $x_forwarded_proto {
    default "http";
    ~[0-9.]*:-      "http";   # incapsula http-https connection
    ~[0-9.]*:TLSv1  "https";  # incapsula https-https connection
    ~-:.*:https     "https";  # internal tests x-forwarded-proto
    ~-:.*:TLSv1     "https";  # internal https connection
}

When I try a local test connection, the log shows that $http_incap_client_ip:$http_incap_tls_version:${http_x_forwarded_proto}:$ssl_protocol is:

-:-:https:TLSv1.2

Yet, the $x_forwarded_proto result is http.
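One plausible explanation (my reading, not confirmed in the ticket): map regexes are unanchored, so ~[0-9.]*:- also matches the substring ":-" in the middle of -:-:https:TLSv1.2 (the character class may match empty) and wins before the later patterns are tried. Anchoring the expressions avoids such accidental matches; an untested sketch:

```nginx
# anchored variants of the original patterns
~^[0-9.]+:-      "http";   # incapsula http-https connection
~^[0-9.]+:TLSv1  "https";  # incapsula https-https connection
~^-:.*:https     "https";  # internal tests x-forwarded-proto
~^-:.*:TLSv1     "https";  # internal https connection
```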


#1269 $upstream_response_time is improperly evaluated in header filter handlers documentation defect 05/11/17

$upstream_response_time is incorrectly defined as the direct assignment of ngx_current_msec when used in a header phase handler. Consider the following config:

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" '
                      '$upstream_response_time';

    access_log  logs/access.log  main;

    upstream foo {
        server 127.0.0.1:9000;
    }

    server {
        listen 9000;
        server_name localhost;
        root html;

        access_log off;
        error_log off;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            proxy_pass http://foo;

            add_header Upstream-Response-Time $upstream_response_time;
        }
    }
}

The log format generated in such a context correctly shows $upstream_response_time:

127.0.0.1 - - [10/May/2017:19:09:48 -0700] "GET / HTTP/1.1" 200 612 "-" "curl/7.50.1" "-" 0.000

The assigned header, however, contains the value from the initial assignment:

$ curl -vv localhost
* Rebuilt URL to: localhost/
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.50.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< Server: nginx/1.12.0
< Date: Thu, 11 May 2017 02:09:48 GMT
< Content-Type: text/html
< Content-Length: 612
< Connection: keep-alive
< Last-Modified: Thu, 11 May 2017 01:38:00 GMT
< ETag: "5913c078-264"
< Accept-Ranges: bytes
< Upstream-Response-Time: 1494468588.302

In cases where ngx_http_upstream_connect is only called once (e.g. on a first successful upstream connection), ngx_http_upstream_finalize_request is not called before header phase modules execute, so we never reach the path where u->state->response_time is reassigned to the diff between its initial value and the current ngx_current_msec (https://github.com/nginx/nginx/blob/master/src/http/ngx_http_upstream.c#L4249). In cases where ngx_http_upstream_connect is called again (e.g. on a failed upstream connection), we do see a proper evaluation of the variable as a result of pushing the current state onto r->upstream_states (https://github.com/nginx/nginx/blob/master/src/http/ngx_http_upstream.c#L1468-L1481), but obviously only for the previous connection.

I do not know if this behavior should be treated as a bug per se, or if the documentation should simply be updated to reflect the fact that this variable is meaningless for successful connections in header filter phases.


#1383 Error if using proxy_pass with variable and limit_except nginx-core defect 09/18/17

Hi nginx guys,

I use nginx in front of a Varnish server and purge Varnish via the PURGE method.

Nginx uses the following VHost config:

server {
    listen       *:80 default_server;

    location / {
        limit_except GET POST {
            allow 127.0.0.1/32;
            deny all;
        }

        set $upstream http://127.0.0.1:8080;

        if ($http_user_agent = 'mobile') {
            set $upstream http://127.0.0.1:8080;
        }

        proxy_pass              $upstream;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $remote_addr;
    }
}

Expected: anyone who is not localhost can only issue GET/HEAD/POST; localhost can do everything.

From remote it works as expected:

root@test:~# curl -X PURGE -I EXTIP
HTTP/1.1 403 Forbidden
Server: nginx
Date: Mon, 18 Sep 2017 10:39:23 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Vary: Accept-Encoding

But from localhost:

root@test:~# curl -X PURGE -I http://127.0.0.1
HTTP/1.1 500 Internal Server Error
Server: nginx
Date: Mon, 18 Sep 2017 10:39:06 GMT
Content-Type: text/html
Content-Length: 186
Connection: close

Nginx error log tells me:

==> /var/log/nginx/error.log <==
2017/09/18 12:39:06 [error] 2483#2483: *2 invalid URL prefix in "", client: 127.0.0.1, server: , request: "PURGE / HTTP/1.1", host: "127.0.0.1"

Without using Variables in VHost:

server {
    listen       *:80 default_server;

    location / {
        limit_except GET POST {
            allow 127.0.0.1/32;
            deny all;
        }

        proxy_pass              http://127.0.0.1:8080;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $remote_addr;
    }
}

Works as expected:

root@test:~# curl -X PURGE -I http://127.0.0.1
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 18 Sep 2017 10:45:35 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding

Other tests with a variable proxy_pass, e.g. using the GET method instead of PURGE, also fail with the same error.

Please take a look at why nginx fails when combining limit_except with proxy_pass and variables. Thanks!
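An untested workaround sketch, assuming the variable is simply never evaluated for requests handled inside the limit_except block: move the set directives up to server level so they run before the location is entered (the conditional mobile branch is dropped here for brevity):

```nginx
server {
    listen *:80 default_server;

    # hypothetical workaround: evaluate the variable at server level
    set $upstream http://127.0.0.1:8080;

    location / {
        limit_except GET POST {
            allow 127.0.0.1/32;
            deny all;
        }

        proxy_pass              $upstream;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $remote_addr;
    }
}
```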


#1423 response vary headers not used in the cache key other defect 11/11/17

Scenario: use nginx to cache server responses based on

  • scheme of the request
  • host of the proxy
  • request uri
  • values of the request headers as selected by the response vary headers

Request 1:

curl -H "Authorization: one" -I http://localhost:8081/some_uri
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 11 Nov 2017 01:56:24 GMT
Content-Type: application/json
Content-Length: 1389
Connection: keep-alive
Access-Control-Allow-Headers: Accept, Authorization, Content-Type, x-api-key, Accept-Language
Access-Control-Allow-Methods: GET, POST, PUT, PATCH, OPTIONS, DELETE
Access-Control-Allow-Origin: *
Vary: Authorization
Cache-Control: no-transform, max-age=10, s-maxage=10
request-id: da208a4310e0a67f
loc: local
processing-time: 307
Vary: Accept-Encoding
X-Cached: MISS

Comment: the request was a cache miss. All good.

Request 2

 curl -H "Authorization: two" -I http://localhost:8081/api/some_uri
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 11 Nov 2017 01:56:28 GMT
Content-Type: application/json
Content-Length: 1389
Connection: keep-alive
Access-Control-Allow-Headers: Accept, Authorization, Content-Type, x-api-key, Accept-Language
Access-Control-Allow-Methods: GET, POST, PUT, PATCH, OPTIONS, DELETE
Access-Control-Allow-Origin: *
Vary: Authorization
Cache-Control: no-transform, max-age=10, s-maxage=10
request-id: da208a4310e0a67f
loc: local
processing-time: 307
Vary: Accept-Encoding
X-Cached: HIT

Comment: a different value for the Authorization header was specified on the second request. As the response includes Vary: Authorization, I'd expect the second request to be a cache MISS. However, it was a cache HIT.

Configuration file:

worker_processes 4;


events {
        worker_connections 768;
        multi_accept on;
}

http {

        upstream target {
                server       localhost:8082;
        }

        proxy_cache_path /Users/gluiz/dev/nginx/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;
        add_header X-Cached $upstream_cache_status;


        server {
                listen       8081;
                server_name  anonymous;
                proxy_intercept_errors off;
                location / {
                        proxy_cache my_cache;
                        proxy_set_header Host $http_host;
                        proxy_pass http://target$request_uri;
                }

                ssl_verify_client off;
        }

        server_tokens off;

}
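A common workaround (a sketch that sidesteps Vary rather than honoring it) is to fold the varied request header into the cache key explicitly via proxy_cache_key:

```nginx
location / {
        proxy_cache my_cache;
        # include the Authorization header in the cache key, so each
        # credential gets its own cache entry
        proxy_cache_key $scheme$proxy_host$request_uri$http_authorization;
        proxy_set_header Host $http_host;
        proxy_pass http://target$request_uri;
}
```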

#1433 WebDAV module didn't convert UTF8 encode url into GBK on Windows nginx-module defect 11/22/17

I'm using nginx as a static resource server on Windows 10, and I use php_curl to transfer files from Apache to nginx via the HTTP PUT method. When the URL includes Chinese in UTF-8 encoding, like "/0/%E6%B5%B7%E6%8A%A5%E8%83%8C%E6%99%AF.jpg", the WebDAV module just creates a file with the UTF-8 string. But Windows uses GBK to store Chinese file names, so the UTF-8 string is used as a GBK string without conversion, which leads to an incorrect name on Windows: 海报背景.jpg (correct) => 娴锋姤鑳屾櫙.jpg (incorrect). If I use the GBK-encoded URL "/0/%BA%A3%B1%A8%B1%B3%BE%B0.jpg" (the same Chinese name as the UTF-8 one), everything works fine and the file name on Windows is correct. If I try to GET the GBK-encoded URL, nginx returns 500 with the error log: 1113: No mapping for the Unicode character exists in the target multi-byte code page; with the UTF-8-encoded URL, GET works fine. So nginx handles the GET method with a codec conversion, but not the PUT method, and probably not other WebDAV methods either.


#1598 Windows Path Length Limitation issue nginx-core defect 07/23/18

By default, Windows limits path length to 255 characters. When accessing a file whose path exceeds that limit, nginx throws an error saying "The system cannot find the file specified".

CreateFile() "C:\nginx-1.13.12/client-data/patch-resources/linux/redhat/offline-meta/7/7Client/x86_64/extras/os/repodata/245f964e315fa121c203b924ce7328cd704e600b6150c4b7cd951c8707a70394f/245f964e315fa121c203b924ce7328cd704e600b6150c4b7cd951c8707a70394f-primary.sqlite.bz2" failed (3: The system cannot find the path specified)

Refer : https://docs.microsoft.com/en-us/windows/desktop/fileio/naming-a-file


#1607 mirror + limit_req = writing connections nginx-core defect 08/11/18

Hello, nginx seems to have a bug with mirror + limit_req. Configuration:

All servers can be the same host (127.0.0.1) for testing purposes. Frontend server:

limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;
location = /url1
{
  mirror /url2;
  proxy_pass http://127.0.0.1/test;
}
location = /url2
{
  internal;
  limit_req zone=one burst=10;
  proxy_pass http://127.0.0.1/test2;
}
location = /status { stub_status on; }

Backend server

location = /test { return 200; }

Mirror server

location = /test2 { return 200; }

Now run:

# for i in {1..1000}; do curl http://127.0.0.1/url1 >/dev/null & sleep 0.05; done

Wait for completion of all requests and see writing connections:

# curl http://127.0.0.1/status
Active connections: 271 
server accepts handled requests
 2001 2001 2001 
Reading: 0 Writing: 271 Waiting: 0
# sleep 120
# netstat -atn | grep 127.0.0.1:80 | grep -v CLOSE_WAIT | wc -l
270
# service nginx reload
# pgrep -f shutting
# netstat -atn | grep 127.0.0.1:80 | grep -v CLOSE_WAIT | wc -l
0
# curl http://127.0.0.1/status
Active connections: 271 
server accepts handled requests
 2002 2002 2002 
Reading: 0 Writing: 271 Waiting: 0 

When /url1 doesn't have limit_req but /url2 does, the number of writing connections from stub_status begins to grow. Watching netstat, I can also see CLOSE_WAIT connections growing. I didn't find any impact on request processing, at least when the number of connections is quite low. Actually, after reloading nginx there seem to be no real (writing) connections left, but this breaks nginx monitoring; only a restart of nginx resets the writing connections number. If both /url1 and /url2 have limit_req, or only /url1 has limit_req, all is OK.

We use amd64 debian stretch, with the nginx-extras package from debian buster (rebuilt on stretch).


#1792 grpc module handles RST_STREAM(NO_ERROR) improperly on closed streams nginx-module defect 06/11/19

Per the code, we have:

            if (ctx->stream_id && ctx->done) {
                ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                              "upstream sent frame for closed stream %ui",
                              ctx->stream_id);
                return NGX_ERROR;
            }

saying that if we ever receive any frame on a closed stream, the entire request is errored. However, per the HTTP/2 spec, section 8.1:

A server can send a complete response prior to the client sending an entire request if the response does not depend on any portion of the request that has not been sent and received. When this is true, a server MAY request that the client abort transmission of a request without error by sending a RST_STREAM with an error code of NO_ERROR after sending a complete response (i.e., a frame with the END_STREAM flag). Clients MUST NOT discard responses as a result of receiving such a RST_STREAM, though clients can always discard responses at their discretion for other reasons.

So it is in fact legal for the upstream to send an RST_STREAM in such a scenario (which is also racy, since it depends on whether the upstream happens to have received the final END_STREAM frame from nginx). And indeed, gRPC server implementations do exactly this - see e.g. https://github.com/grpc/grpc/pull/1661. We are using the Go google.golang.org/grpc module and it too does this, and we sometimes see spurious failures coupled with the log message:

2019/06/11 08:10:46 [error] 175#175: *6289 upstream sent frame for closed stream 1 while reading response header from upstream, client: [..], server: [..], request: "POST /[..] HTTP/2.0", upstream: "grpcs://127.0.0.1:6414", host: [..]

So it would be good to follow the spec and pass on the successful response in this scenario.


#1850 Content of the variable $sent_http_connection is incorrect other defect 09/15/19

There is a suspicion that the content of the variable $sent_http_connection is incorrect.

Example:

Expected: keep-alive
Actual: close

Request headers:

Host: anyhost
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0
Accept: image/webp,*/*
Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
Referer: http://anyhost/catalog/page/
Cookie: PHPSESSID=vkgt1iiofoav3u24o54et46oc7
Pragma: no-cache

Response headers:

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 15 Sep 2019 22:28:53 GMT
Content-Type: image/jpeg
Content-Length: 21576
Last-Modified: Wed, 06 Dec 2017 15:38:23 GMT
Connection: keep-alive
ETag: "5a280eef-5448"
X-Content-Type-Options: nosniff
Accept-Ranges: bytes

log_format test '$remote_addr - $remote_user [$time_local] '
    '$status $bytes_sent $request_time $pipe $connection $connection_requests $http_connection $sent_http_connection '
    '"$request" '
    '"$http_referer" "$http_user_agent" '
    '"$gzip_ratio"';

123.123.123.123 - - [16/Sep/2019:01:28:53 +0300] 200 21844 0.000 . 13117169 3 keep-alive close "GET /images/anypicture.jpg HTTP/1.0" "http://anyhost/catalog/page/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0" "-"


#1904 sendfile with io-threads - nginx mistakenly considers premature client connection close if client sends FIN at response end nginx-core defect 12/17/19

Hi, The scenario is as follows:

  1. Nginx is configured to work with sendfile and io-threads.
  2. Client sends a request, and after receiving the entire content it sends a FIN-ACK, closing the connection.
  3. Nginx occasionally considers the transaction as prematurely closed by the client even though the FIN-ACK packet acks the entire content.

The effect I've seen is that "$body_bytes_sent" holds partial data (up to the last "successful" sendfile call) and "$request_completion" is empty. I guess there are other effects too; these are the variables I'm using, so they're the ones that popped up.

From what I've managed to understand from the code, the scenario looks like this: the read_event_handler "ngx_http_test_reading" is called before the completed task from the io-thread is handled by the main thread, effectively making nginx think the client connection close happened earlier.

I've managed to reproduce it on the latest nginx with a rather simple config, but it's timing-sensitive, so it doesn't happen on every transaction. I found that using a bigger file with a rate limit increases the chances.

Config:

worker_processes  1;

events {
    worker_connections 1024;
}

http {
    keepalive_timeout 120s;
    keepalive_requests 1000;

    log_format main "$status\t$sent_http_content_length\t$body_bytes_sent\t$request_completion";
    access_log  logs/access.log  main;
    error_log  logs/error.log  info;

    aio threads;
    sendfile on;
    limit_rate 10m;

    server {
        listen 0.0.0.0:1234 reuseport;

        location = /test-sendfile-close {
            alias files/10mb;
        }
    }
}
  • files/10mb is a file of size 10MB, created with "dd" (dd if=/dev/zero of=files/10mb bs=10M count=1)

I then tail -F the access log and the error log file, and send these requests from the same machine:

while true; do wget -q "http://10.1.1.1:1234/test-sendfile-close"; done

The output I get in the error log and the access log (in this order) for a good transaction is:

2019/12/17 14:52:34 [info] 137444#137444: *1 client 10.1.1.1 closed keepalive connection
200	10485760	10485760	OK

But every few transactions I get this output instead:

2019/12/17 14:52:38 [info] 137444#137444: *7 client prematurely closed connection while sending response to client, client: 10.1.1.1, server: , request: "GET /test-sendfile-close HTTP/1.1", host: "10.1.1.1:1234"
200	10485760	3810520	

As you can see, the reported sent bytes is lower than the actual value, and the request_completion is empty.

I understand that the closer the client is to nginx the higher the chance this could happen, but it's not just a lab issue - we've seen this in a field trial with clients at a distance of ~30 ms RTT, with higher load of course.

If there is a need for any other information, or anything else, I'll be glad to provide it. I appreciate the help, and in general - this great product you've built!

Thank you, Shmulik Biran


#289 Add support for HTTP Strict Transport Security (HSTS / RFC 6797) nginx-core enhancement 01/29/13

It would be great if support for HSTS (RFC 6797) would be added to the nginx-core.

Currently HSTS is "enabled" like this (according to https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security):

add_header Strict-Transport-Security max-age=31536000;

However this has at least two downsides:

  1. The header is only added when the HTTP status code is 200, 204, 301, 302 or 304.
    • It would be great if the header would always be added
  2. The header is added on HTTPS and HTTP responses, but according to RFC 6797 (7.2.) it should not:
    • An HSTS Host MUST NOT include the STS header field in HTTP responses conveyed over non-secure transport.

RFC 6797: https://tools.ietf.org/html/rfc6797
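Since version 1.7.5, nginx's add_header directive accepts an "always" parameter, which addresses the first downside. A hedged config sketch (not part of the original ticket) combining it with an HTTPS-only server block to address the second:

```nginx
server {
    listen 443 ssl;

    # "always" emits the header on every response code (nginx >= 1.7.5);
    # placing the directive only in the HTTPS server block keeps it off
    # plain-HTTP responses, per RFC 6797 section 7.2.
    add_header Strict-Transport-Security "max-age=31536000" always;
}
```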


#376 log file reopen should pass opened fd from master process nginx-core enhancement 06/14/13

When starting nginx, all the log files (error_log, access_log) are created and opened by the master process, and the file handles are passed to the workers while forking.

On SIGUSR1 the master reopens the files, chowns them, and then each worker reopens the files itself. This has several drawbacks:

  • It is inconsistent and rather surprising behaviour (a sudden change of ownership upon a signal). If you really want to do it this way, you should chown the files from the very beginning.
  • It grants the unprivileged nginx user read and write access to the current log files, which is bad from a security perspective, since the unprivileged user then also needs to be able to change into/read the log directory.

A better solution may be to reopen the log files in the master process as currently done, and then use the already available ngx_{read,write}_channel functions to pass the new file handles down to the workers.


#853 Behaviour of cache_use_stale updating when new responses cannot be cached nginx-core enhancement 12/08/15

The configuration is as follows:

fastcgi_cache_path /var/tmp/nginx/fastcgi_cache levels=1:2 keys_zone=fcgi_cache:16m max_size=1024m inactive=35m;
fastcgi_cache_revalidate on;

fastcgi_cache fcgi_cache;
fastcgi_cache_valid 200 301 302 304 10m;
fastcgi_cache_valid 404 2m;
fastcgi_cache_use_stale updating error timeout invalid_header http_500 http_503;
fastcgi_cache_key "$request_method|$host|$uri|$args";
fastcgi_no_cache $cookie_nocache $arg_nocache $cookie_NRGNSID $cookie_NRGNTourSID $cookie_failed_login;
fastcgi_cache_bypass $cookie_nocache $arg_nocache $cookie_NRGNSID $cookie_NRGNTourSID $cookie_failed_login;

The backend currently responds 200 with the headers "Cache-Control: no-store, no-cache, must-revalidate" and "Pragma: no-cache". But two weeks ago it briefly returned 302 without any caching restrictions, and that response got cached under fastcgi_cache_valid 10m. Since then, solitary requests get upstream_cache_status EXPIRED and the backend's response, but when several requests arrive simultaneously, UPDATING kicks in and the two-week-old redirect is served from the cache. Requests come in regularly, so eviction via inactive=35m never happens.

The behaviour is fully explained by the mechanics of the cache, but not from the point of view of human expectations. It would be nice to have a mechanism for invalidating such stale data from the cache other than deleting items on the filesystem with an external script - for example, one more parameter for cache_path that would set a maximum lifetime in the cache for expired items, even if they are still being requested.


#1459 Can't vary on request headers set by proxy_set_header (rev. proxy mode) nginx-core enhancement 01/15/18

Hi

We're using NGINX in reverse proxy mode for an internal traffic management service, and I noticed that NGINX doesn't vary the cached object on request headers which we calculate and add in NGINX itself via proxy_set_header. This causes a major problem for our service as it's multi-tenant. I think it'd be logical and expected if NGINX did vary on request headers set by proxy_set_header. I have also tested setting the headers via more_set_input_headers and setting the variable directly (including in Lua), but these don't work either, sadly.

I have included a reduced test case which hopefully illustrates the situation (a few comments help explain). Output from testing (against local/Docker) is:

# curl -k https://127.0.0.1:8443/a\?vv1\=1 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:33:54 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 1
Edge-Cache-Status: EXPIRED
Origin-Response-Status: 200
Origin-IP: 127.0.0.1:9000

2018-01-15T14:33:54+00:00%                                                                                                                                                               

# curl -k https://127.0.0.1:8443/a\?vv1\=1 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:33:55 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 1
Edge-Cache-Status: HIT

2018-01-15T14:33:54+00:00%                                                                                                                                                               

# curl -k https://127.0.0.1:8443/a\?vv1\=2 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:33:58 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 1
Edge-Cache-Status: HIT

2018-01-15T14:33:54+00:00%

I'd expect a cache miss on the final response because the query string argument "vv1" has changed, and this means that proxy_set_header would set a different value for the "vvrh1" request header. To illustrate that this mechanism works, once the cached object has expired, we see:

# curl -k https://127.0.0.1:8443/a\?vv1\=2 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:39:12 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 2
Edge-Cache-Status: val EXPIRED
Origin-Response-Status: 200
Origin-IP: 127.0.0.1:9000

2018-01-15T14:39:12+00:00%                                                                                                                                                               

# curl -k https://127.0.0.1:8443/a\?vv1\=2 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:39:15 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 2
Edge-Cache-Status: val HIT

2018-01-15T14:39:12+00:00%

Might this be something which could be fixed? If not, is there a workaround you can think of? Or have I made a mistake?

Cheers


#1472 Downloads stop after 1GB depending of network nginx-module enhancement 01/26/18

Hi, we tried nginx versions 1.6.2 through 1.12.2 and have a problem when nginx is used as a proxy in front of Artifactory: downloads get interrupted at 1 GB.

This behavior depends on the internal VLAN: on one VLAN it always happens, on another it never happens. It is size-limited, not time-limited - from one network it stops after 30 seconds, and from another, slower network it stops after 13 minutes.

We made a minimal proxy setup with Apache and it works on all VLANs. This is why we suspect it has something to do with nginx, or the combination of nginx and the TCP/IP stack of Linux.

In Wireshark we see "TCP Dup ACK" on the client side, sent to the nginx server.

wget fails with the connection closed at byte 1083793011 but continues the download with partial content. Docker can't handle this, so our customers can't download Docker images with layers larger than 1 GB.

The following shows two anonymized minimal configs: the nginx config that is problematic and the Apache config that works.

NGINX config:
server {
    listen *:80;
    server_name NAME;
    client_max_body_size 3G;
    access_log /var/log/nginx/NAME.access.log;
    error_log /var/log/nginx/NAME.error.log;

    if ($request_uri !~ /artifactory/) {
        rewrite ^ $scheme://NAME/artifactory/ permanent;
    }

    location /artifactory {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://ARTIFACTORY:PORT;
        proxy_pass_header Server;
        proxy_read_timeout 90s;
    }
}

APACHE config:
<VirtualHost *:80>
    ServerName NAME
    ServerAdmin NAME

    ErrorLog ${APACHE_LOG_DIR}/error.log

    LogLevel warn

    ProxyRequests Off
    <Proxy *>
      Order allow,deny
      Allow from All
    </Proxy>

    ProxyPass / http://ARTIFACTORY:PORT/
    ProxyPassReverse / http://ARTIFACTORY:PORT/
</VirtualHost>
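One avenue worth checking (my assumption, not something established in the report): the 1 GB boundary coincides with the default proxy_max_temp_file_size of 1024m, so response buffering to a temp file on the slower networks is a plausible suspect. An experiment to rule it out:

```nginx
location /artifactory {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://ARTIFACTORY:PORT;
    proxy_pass_header Server;
    proxy_read_timeout 90s;

    # Experiment: 0 disables buffering the response to a temp file,
    # removing the 1024m default limit from the picture entirely.
    proxy_max_temp_file_size 0;
}
```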


#1500 ngx_hash_t can have only lower case key other enhancement 03/07/18

ngx_hash_init converts all the keys to lower case, so ngx_hash_find returns null when called with a mixed-case key. Below is the relevant line in ngx_hash.c.

key = ngx_hash(key, ngx_tolower(data[i]));

I think you could make this generic so that it also supports case-sensitive keys.


#1659 nginx intermittent delay before sending response nginx-core enhancement 10/22/18

Hi,

We have a Flask Python application served by uWSGI through nginx using 'uwsgi-pass' via a load-balancer, hosted in AWS.

We are noticing that nginx intermittently (but frequently) seems to add a substantial delay (as much as 200 seconds, but more typically 5-10 seconds) to some requests.

I've spent a couple of weeks looking into this, with activities ranging from changes to logging and analysis of logs, configuration changes, packet traces, nginx debug logs and strace, and more, but have been unable to solve the problem.

This is a production service, so I'm limited to what I can change (particularly around uWSGI and the app), and I've been unable to replicate the issue on a test setup so far, presumably because the incoming requests and load aren't "realistic enough".

Our production servers have:

$ nginx -V
nginx version: nginx/1.4.6 (Ubuntu)
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module
$ uname -a
Linux webserver 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
$ uwsgi --version
2.0.15

I did try upgrading nginx on one instance and passed some traffic through it (no improvement), it had:

$ nginx -V
nginx version: nginx/1.12.2
built with OpenSSL 1.0.1f 6 Jan 2014
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -fPIC -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with-mail_ssl_module --add-dynamic-module=/build/nginx-WaTJd7/nginx-1.12.2/debian/modules/nginx-auth-pam --add-dynamic-module=/build/nginx-WaTJd7/nginx-1.12.2/debian/modules/nginx-dav-ext-module --add-dynamic-module=/build/nginx-WaTJd7/nginx-1.12.2/debian/modules/nginx-echo --add-dynamic-module=/build/nginx-WaTJd7/nginx-1.12.2/debian/modules/nginx-upstream-fair --add-dynamic-module=/build/nginx-WaTJd7/nginx-1.12.2/debian/modules/ngx_http_substitutions_filter_module

I also tried a standalone nginx instance and pointed it at the same uWSGI across the network (still no improvement):

$ nginx -V
nginx version: nginx/1.14.0 (Ubuntu)
built with OpenSSL 1.1.0g  2 Nov 2017
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-mcUg8N/nginx-1.14.0=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-mail=dynamic --with-mail_ssl_module
$ uname -a
Linux webserver 4.15.0-1019-aws #19-Ubuntu SMP Fri Aug 10 09:34:35 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Config is pretty standard and I've tried enabling/disabling various bits with no improvement; key statements are:

events {
    use epoll;
}
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
}

Here's log evidence of the problem (bits {LIKE_THIS} are redacted):

uWSGI:
[pid: 3923|app: -|req: -/-] {LB_IP} (-) {46 vars in 922 bytes} [Fri Oct 19 13:26:23 2018] POST {URL} => generated 82 bytes in 2474 msecs (HTTP/1.1 200) 4 headers in 253 bytes (1 switches on core 4) 1539955583.879-4615-438-18-{CLIENT_IP}

nginx:
{LB_IP} - - [19/Oct/2018:13:26:43 +0000] "POST {URL} HTTP/1.1" 200 82 "-" "-" 19.534 1539955583.879-4615-438-18-{CLIENT_IP}

The "1539955583.879-4615-438-18-{CLIENT_IP}" is an HTTP_X_TRACEID header I added with nginx and is passed to uWSGI and logged by both, so I can always tie the nginx log entry to the relevant uWSGI entry. It also contains the nginx PID and connection ID which is handy for finding related debug entries; see below.

Note the timings; uWSGI logs as 13:26:23 (which is when the request was received, not completed) and took 2474ms to generate the response, whereas nginx logs as 13:26:43 (when the response was delivered).

Before anyone assumes this is client-side delay (between browser and nginx), note that I have also included the $upstream_response_time in the nginx log, which for this request was 19534ms, 17s longer than uWSGI claims the response took.

We normally use a UNIX socket for the nginx <-> uWSGI communication, but I changed one of our servers to use TCP (port 8001); firstly to see if it made a difference - it did not - but also to then use tcpdump to capture what was happening between the two processes.

I saw this:

13:26:23.929522 IP 127.0.0.1.48880 > 127.0.0.1.8001: Flags [S], seq 2215864258, win 43690, options [mss 65495,sackOK,TS val 413969 ecr 0,nop,wscale 7], length 0
13:26:23.929528 IP 127.0.0.1.8001 > 127.0.0.1.48880: Flags [S.], seq 3951451851, ack 2215864259, win 43690, options [mss 65495,sackOK,TS val 413969 ecr 413969,nop,wscale 7], length 0
13:26:23.929537 IP 127.0.0.1.48880 > 127.0.0.1.8001: Flags [.], ack 1, win 342, options [nop,nop,TS val 413969 ecr 413969], length 0
13:26:23.931409 IP 127.0.0.1.48880 > 127.0.0.1.8001: Flags [P.], seq 1:1991, ack 1, win 342, options [nop,nop,TS val 413970 ecr 413969], length 1990
13:26:23.931424 IP 127.0.0.1.8001 > 127.0.0.1.48880: Flags [.], ack 1991, win 1365, options [nop,nop,TS val 413970 ecr 413970], length 0
13:26:26.406311 IP 127.0.0.1.8001 > 127.0.0.1.48880: Flags [P.], seq 1:336, ack 1991, win 1365, options [nop,nop,TS val 414589 ecr 413970], length 335
13:26:26.406335 IP 127.0.0.1.48880 > 127.0.0.1.8001: Flags [.], ack 336, win 350, options [nop,nop,TS val 414589 ecr 414589], length 0
13:26:43.456382 IP 127.0.0.1.8001 > 127.0.0.1.48880: Flags [F.], seq 336, ack 1991, win 1365, options [nop,nop,TS val 418851 ecr 414589], length 0
13:26:43.459119 IP 127.0.0.1.48880 > 127.0.0.1.8001: Flags [F.], seq 1991, ack 337, win 350, options [nop,nop,TS val 418852 ecr 418851], length 0
13:26:43.459135 IP 127.0.0.1.8001 > 127.0.0.1.48880: Flags [.], ack 1992, win 1365, options [nop,nop,TS val 418852 ecr 418852], length 0

So, the 335 byte response (253 bytes of headers + 82 bytes of data = 335 bytes payload) was sent (and ACK'd) at 13:26:26 (as per uWSGI's claim of 2474ms).

However, nginx doesn't appear to send the response until 13:26:43, when uWSGI closes the connection (FIN/ACK packet); this matches with nginx's $upstream_response_time of 19534ms.

By the way, the response did contain a "Content-Length: 82" header.

So, on to debug logs (sorry, bit long, and again with {REDACTED_BITS}):

2018/10/19 13:26:26 [debug] 4615#4615: *438 http upstream request: "{URL}"
2018/10/19 13:26:26 [debug] 4615#4615: *438 http upstream process header
2018/10/19 13:26:26 [debug] 4615#4615: *438 malloc: 00007F2FD2F10000:4096
2018/10/19 13:26:26 [debug] 4615#4615: *438 recv: eof:0, avail:1
2018/10/19 13:26:26 [debug] 4615#4615: *438 recv: fd:21 335 of 4096
2018/10/19 13:26:26 [debug] 4615#4615: *438 http uwsgi status 200 "200 OK"
2018/10/19 13:26:26 [debug] 4615#4615: *438 http uwsgi header: "Content-Type: application/json"
2018/10/19 13:26:26 [debug] 4615#4615: *438 http uwsgi header: "Content-Length: 82"
2018/10/19 13:26:26 [debug] 4615#4615: *438 http uwsgi header: "Cache-Control: no-cache"
2018/10/19 13:26:26 [debug] 4615#4615: *438 http uwsgi header: "Set-Cookie: session={SESSION_ID}; HttpOnly; Path=/"
2018/10/19 13:26:26 [debug] 4615#4615: *438 http uwsgi header done
2018/10/19 13:26:26 [debug] 4615#4615: *438 HTTP/1.1 200 OK
Server: nginx/1.12.2
Date: Fri, 19 Oct 2018 13:26:26 GMT
Content-Type: application/json
Content-Length: 82
Connection: keep-alive
Keep-Alive: timeout=120
Cache-Control: no-cache
Set-Cookie: session={SESSION_ID}; HttpOnly; Path=/
Accept-Ranges: bytes

2018/10/19 13:26:26 [debug] 4615#4615: *438 write new buf t:1 f:0 00007F2FD2F2F800, pos 00007F2FD2F2F800, size: 383 file: 0, size: 0
2018/10/19 13:26:26 [debug] 4615#4615: *438 http write filter: l:0 f:0 s:383
2018/10/19 13:26:26 [debug] 4615#4615: *438 http cacheable: 0
2018/10/19 13:26:26 [debug] 4615#4615: *438 http upstream process upstream
2018/10/19 13:26:26 [debug] 4615#4615: *438 pipe read upstream: 0
2018/10/19 13:26:26 [debug] 4615#4615: *438 pipe preread: 82
2018/10/19 13:26:26 [debug] 4615#4615: *438 pipe buf free s:0 t:1 f:0 00007F2FD2F10000, pos 00007F2FD2F100FD, size: 82 file: 0, size: 0
2018/10/19 13:26:26 [debug] 4615#4615: *438 pipe length: -1
2018/10/19 13:26:26 [debug] 4615#4615: *438 pipe write downstream: 1
2018/10/19 13:26:26 [debug] 4615#4615: *438 pipe write busy: 0
2018/10/19 13:26:26 [debug] 4615#4615: *438 pipe write: out:0000000000000000, f:0
2018/10/19 13:26:26 [debug] 4615#4615: *438 pipe read upstream: 0
2018/10/19 13:26:26 [debug] 4615#4615: *438 pipe buf free s:0 t:1 f:0 00007F2FD2F10000, pos 00007F2FD2F100FD, size: 82 file: 0, size: 0
2018/10/19 13:26:26 [debug] 4615#4615: *438 pipe length: -1
2018/10/19 13:26:26 [debug] 4615#4615: *438 event timer del: 21: 1539959183930
2018/10/19 13:26:26 [debug] 4615#4615: *438 event timer add: 21: 3600000:1539959186406
2018/10/19 13:26:26 [debug] 4615#4615: *438 http upstream request: "{URL}"
2018/10/19 13:26:26 [debug] 4615#4615: *438 http upstream dummy handler
[...other stuff happens here on other connections...]
2018/10/19 13:26:43 [debug] 4615#4615: *438 http upstream request: "{URL}"
2018/10/19 13:26:43 [debug] 4615#4615: *438 http upstream process upstream
2018/10/19 13:26:43 [debug] 4615#4615: *438 pipe read upstream: 1
2018/10/19 13:26:43 [debug] 4615#4615: *438 readv: eof:1, avail:1
2018/10/19 13:26:43 [debug] 4615#4615: *438 readv: 1, last:3761
2018/10/19 13:26:43 [debug] 4615#4615: *438 pipe recv chain: 0
2018/10/19 13:26:43 [debug] 4615#4615: *438 pipe buf free s:0 t:1 f:0 00007F2FD2F10000, pos 00007F2FD2F100FD, size: 82 file: 0, size: 0
2018/10/19 13:26:43 [debug] 4615#4615: *438 pipe length: -1
2018/10/19 13:26:43 [debug] 4615#4615: *438 input buf #0
2018/10/19 13:26:43 [debug] 4615#4615: *438 pipe write downstream: 1
2018/10/19 13:26:43 [debug] 4615#4615: *438 pipe write downstream flush in
2018/10/19 13:26:43 [debug] 4615#4615: *438 http output filter "{URL}"
2018/10/19 13:26:43 [debug] 4615#4615: *438 http copy filter: "{URL}"
2018/10/19 13:26:43 [debug] 4615#4615: *438 http postpone filter "{URL}" 00007F2FD2F2F4D0
2018/10/19 13:26:43 [debug] 4615#4615: *438 write old buf t:1 f:0 00007F2FD2F2F800, pos 00007F2FD2F2F800, size: 383 file: 0, size: 0
2018/10/19 13:26:43 [debug] 4615#4615: *438 write new buf t:1 f:0 00007F2FD2F10000, pos 00007F2FD2F100FD, size: 82 file: 0, size: 0
2018/10/19 13:26:43 [debug] 4615#4615: *438 http write filter: l:0 f:0 s:465
2018/10/19 13:26:43 [debug] 4615#4615: *438 http copy filter: 0 "{URL}"
2018/10/19 13:26:43 [debug] 4615#4615: *438 pipe write downstream done
2018/10/19 13:26:43 [debug] 4615#4615: *438 event timer del: 21: 1539959186406
2018/10/19 13:26:43 [debug] 4615#4615: *438 event timer add: 21: 3600000:1539959203456
2018/10/19 13:26:43 [debug] 4615#4615: *438 http upstream exit: 0000000000000000
2018/10/19 13:26:43 [debug] 4615#4615: *438 finalize http upstream request: 0
2018/10/19 13:26:43 [debug] 4615#4615: *438 finalize http uwsgi request
2018/10/19 13:26:43 [debug] 4615#4615: *438 free rr peer 1 0
2018/10/19 13:26:43 [debug] 4615#4615: *438 close http upstream connection: 21
etc.

The relevant bits of strace...

4615  13:26:26.151042 epoll_wait(72,  <unfinished ...>
4615  13:26:26.406395 <... epoll_wait resumed> {{EPOLLIN|EPOLLOUT, {u32=3521545648, u64=139843361736112}}}, 512, 97723) = 1
4615  13:26:26.406855 recvfrom(21, "HTTP/1.1 200 OK\r\nContent-Type: a"..., 4096, 0, NULL, NULL) = 335
4615  13:26:26.409079 epoll_wait(72, {{EPOLLIN|EPOLLOUT, {u32=3521545889, u64=139843361736353}}}, 512, 97467) = 1
[...lots of other unrelated activity including a lot of epoll_wait-ing...]
4615  13:26:41.180539 epoll_wait(72, {{EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=3521545648, u64=139843361736112}}}, 512, 102058) = 1
4615  13:26:43.457042 readv(21, [{"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 3761}], 1) = 0
4615  13:26:43.459098 close(21)         = 0

At this point, I'm out of ideas; for some reason nginx appears to read all the data it needs promptly, but won't respond to the end user until uWSGI closes the connection.

Any thoughts, suggestions, and explanations welcome!
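(The `pipe length: -1` entries in the debug log above indicate nginx does not know the upstream response length, so with response buffering enabled it may not flush to the client until the upstream signals EOF. As a diagnostic experiment only, not a confirmed fix, buffering can be switched off for the affected location; `uwsgi_buffering` is a stock ngx_http_uwsgi_module directive, and the location and socket path below are hypothetical placeholders:)

    location /app {
        include          uwsgi_params;
        uwsgi_pass       unix:/run/uwsgi/app.sock;  # hypothetical socket path
        uwsgi_buffering  off;  # pass upstream data to the client as it arrives
    }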


#712 limit_conn and internal redirects documentation defect 02/03/15

It seems that limit_conn is only checked at the beginning of request processing and is ignored in later processing stages. This sometimes results in unanticipated behaviour when dealing with internal redirects.

Consider an example:

    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        listen       80;
        server_name  site.com;

        index index.html;

        limit_conn addr 20;  # first rule

        location / {
            limit_conn addr 10;  # second rule
            root /var/www;
        }
    }

Since any request ends up in the only defined location, one would expect the second rule to always be used. However, only the first rule is applied if we request http://site.com/ (that is, a URI with no path component beyond "/", so the index directive triggers an internal redirect). If we move the index directive inside the location, though, the second rule is used without exception.

This may not be exactly a bug, but if this behaviour is "by design", some additional explanation would be worth adding to the documentation.
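(For reference, the variant the reporter describes, with the index directive moved inside the location so that the second rule applies, would look like this; this is a sketch of the reporter's own suggestion, not an officially documented workaround:)

    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        listen       80;
        server_name  site.com;

        limit_conn addr 20;  # first rule

        location / {
            index index.html;    # moved inside the location
            limit_conn addr 10;  # second rule now applies consistently
            root /var/www;
        }
    }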

