{5} Accepted, Active Tickets by Owner (Full Description) (51 matches)

List accepted tickets, grouped by ticket owner. This report demonstrates the use of full-row display: each entry shows Ticket, Summary, Component, Type, and Created (the Milestone field is empty throughout), followed by the ticket's full description.

(empty) (1 match)
| #384 | trailing dot in server_name | nginx-core | defect | 13 years ago |

nginx should treat server_name values with and without a trailing dot as identical to each other. Thus it should warn and continue during the configuration syntax check for the snippet below, due to the conflicting server_name:

server {
    server_name localhost;
}
server {
    server_name localhost.;
}
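As an illustration of the requested comparison rule, here is a minimal Python sketch (`normalize_server_name` is a hypothetical helper, not nginx code):

```python
def normalize_server_name(name: str) -> str:
    # A single trailing dot marks a fully-qualified name, so "localhost."
    # and "localhost" should compare equal: strip at most one trailing dot.
    if name.endswith("."):
        name = name[:-1]
    return name.lower()

# Both server_name values from the snippet above collapse to the same key,
# so a duplicate check would flag them as conflicting:
print(normalize_server_name("localhost") == normalize_server_name("localhost."))  # True
```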
---

somebody (9 matches)
| #86 | the "if" directive has problems in location context | nginx-core | defect | 14 years ago |

To start, I'm doing tricky stuff, so please don't point out the weird things; stay focused on the issue at hand. I'm mixing a userdir configuration with symfony2 (http://wiki.nginx.org/Symfony) for a development environment; PHP runs via php-fpm over a unix socket. The userdir configuration is classic (all your files in ...). Here is the configuration:

# match 1:username, 2:project name, 3:the rest
location ~ ^/~(.+?)/symfony/(.+?)/(.+)$ {
alias /home/$1/public_html/symfony/$2/web/$3;
if (-f $request_filename) {
break;
}
# if no app.php or app_dev.php, redirect to app.php (prod)
rewrite ^/~(.+?)/symfony(/.+?)/(.+)$ /~$1/symfony/$2/app.php/$3 last;
}
# match 1:username, 2:project name, 3:env (prod/dev), 4:trailing ('/' or
# end)
location ~ ^/~(.+?)/symfony(/.+)/(app|app_dev)\.php(/|$) {
root /home/$1/public_html/symfony$2/web;
# fake $request_filename
set $req_filename /home/$1/public_html/symfony$2/web/$3.php;
include fastcgi_params;
fastcgi_split_path_info ^((?U).+\.php)(/?.+)$;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
fastcgi_param SCRIPT_FILENAME $req_filename;
fastcgi_pass unix:/tmp/php-fpm.sock;
}
The second block (PHP backend) works on its own. The first block (direct file access) works on its own. You can see that I already had a problem with PHP but worked around it by creating my own variable. To help understand, here is a sample of a symfony project layout (I removed some folders to aid comprehension):

project/
src/
[... my php code ...]
web/
app_dev.php
app.php
favicon.ico
If I try to access ..., I get:

2012/01/17 16:36:25 [error] 27736#0: *1 open() "/home/user/public_html/symfony/project/web/favicon.icoavicon.ico" failed (2: No such file or directory), client: 10.11.60.36, server: server, request: "HEAD /~user/symfony/project/favicon.ico HTTP/1.1", host: "server"

If I remove the block that tests ...

The server is a CentOS 5.7 and nginx comes from the EPEL repository. Unfortunately my C skills are through the floor, so I can't really provide a better understanding of the problem. I tried to poke around the code, but without much luck.
---

| #97 | try_files and alias problems | nginx-core | defect | 14 years ago |
# bug: request to "/test/x" will try "/tmp/x" (good) and
# "/tmp//test/y" (bad?)
location /test/ {
alias /tmp/;
try_files $uri /test/y =404;
}
# bug: request to "/test/x" will fallback to "fallback" instead of "/test/fallback"
location /test/ {
alias /tmp/;
try_files $uri /test/fallback?$args;
}
# bug: request to "/test/x" will try "/tmp/x/test/x" instead of "/tmp/x"
location ~ /test/(.*) {
alias /tmp/$1;
try_files $uri =403;
}
Or document the special case for regexp locations with alias? See 3711bb1336c3.

# bug: request "/foo/test.gif" will try "/tmp//foo/test.gif"
location /foo/ {
alias /tmp/;
location ~ gif {
try_files $uri =405;
}
}
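A rough Python model of the first bug may help: `alias` replaces the matched location prefix, but, per the report, the try_files fallback is concatenated onto the alias root without going through that mapping. This is an illustration of the reported behaviour, not nginx's actual code:

```python
def alias_map(location: str, alias: str, uri: str) -> str:
    # "alias" replaces the matched location prefix with the alias path.
    assert uri.startswith(location)
    return alias + uri[len(location):]

# Mapping the request URI itself works as expected:
print(alias_map("/test/", "/tmp/", "/test/x"))   # /tmp/x

# The reported bug: the fallback "/test/y" is appended to the alias root
# as-is, producing a doubled prefix instead of "/tmp/y":
print("/tmp/" + "/test/y")                       # /tmp//test/y
```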
---

| #191 | literal newlines logged in error log | nginx-module | defect | 14 years ago |

I noticed that when a %0a exists in the URL, nginx includes a literal newline in the error_log when logging a file-not-found:

2012/07/26 17:24:14 [error] 5478#0: *8 "/var/www/localhost/htdocs/ html/index.html" is not found (2: No such file or directory), client: 1.2.3.4, server: , request: "GET /%0a%0a%0ahtml/ HTTP/1.1", host: "test.example.com"

This wreaks havoc with my log monitoring utility 8-/. It seems desirable to escape the newline in the log message. I tested with the latest 1.2.2. Is there any way, with the existing configuration options, to make this not happen, or any interest in updating the logging module to handle this situation differently?
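What the reporter is asking for amounts to escaping control bytes before they reach the log. A minimal sketch of such an escape (a hypothetical helper, not nginx's logging code):

```python
def escape_log_field(s: str) -> str:
    # Replace non-printable characters with \xHH escapes so each log
    # record stays on a single line.
    return "".join(c if c.isprintable() else "\\x%02X" % ord(c) for c in s)

print(escape_log_field("GET /\n\n\nhtml/ HTTP/1.1"))  # GET /\x0A\x0A\x0Ahtml/ HTTP/1.1
```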
---

| #217 | Wrong "Content-Type" HTTP response header in certain configuration scenarios | nginx-core | defect | 14 years ago |

In certain configuration scenarios the "Content-Type" HTTP response header is not of the expected type but falls back to the default setting. I was able to shrink the configuration down to a bare-minimum test case, which gives some indication that this might happen in conjunction with regex captures in "location", "try_files" and "alias" definitions. Verified with nginx 1.3.6 (with patch.spdy-52.txt applied), but it was also reproducible with earlier versions; see:

http://mailman.nginx.org/pipermail/nginx/2012-August/034900.html
http://mailman.nginx.org/pipermail/nginx/2012-August/035170.html

(no response was given to those posts)

# nginx -V
nginx version: nginx/1.3.6
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin/nginx --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --user=nginx --group=nginx --with-openssl=openssl-1.0.1c --with-debug --with-http_stub_status_module --with-http_ssl_module --with-ipv6

Minimal test configuration for that specific scenario:

server {
listen 80;
server_name t1.example.com;
root /data/web/t1.example.com/htdoc;
location ~ ^/quux(/.*)?$ {
alias /data/web/t1.example.com/htdoc$1;
try_files '' =404;
}
}
First test request, where Content-Type is correctly set to "image/gif" as expected:

$ curl -s -o /dev/null -D - -H 'Host: t1.example.com' http://127.0.0.1/foo/bar.gif
HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:09 GMT
Content-Type: image/gif
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: "501a0a78-44"
Accept-Ranges: bytes

Second test request, where Content-Type is wrong: "application/octet-stream" instead of "image/gif" (it actually matches whatever "default_type" is set to):

$ curl -s -o /dev/null -D - -H 'Host: t1.example.com' http://127.0.0.1/quux/foo/bar.gif
HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:14 GMT
Content-Type: application/octet-stream
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: "501a0a78-44"
Accept-Ranges: bytes

Debug log during the first test request:

2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710 2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BDA0C8:672 2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BE3210:1024 2012/09/12 16:20:09 [debug] 15171#0: *1 posix_memalign: 09C0AE10:4096 @16 2012/09/12 16:20:09 [debug] 15171#0: *1 http process request line 2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 178 of 1024 2012/09/12 16:20:09 [debug] 15171#0: *1 http request line: "GET /foo/bar.gif HTTP/1.1" 2012/09/12 16:20:09 [debug] 15171#0: *1 http uri: "/foo/bar.gif" 2012/09/12 16:20:09 [debug] 15171#0: *1 http args: "" 2012/09/12 16:20:09 [debug] 15171#0: *1 http exten: "gif" 2012/09/12 16:20:09 [debug] 15171#0: *1 http process request header line 2012/09/12 16:20:09 [debug] 15171#0: *1 http header: "User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2" 2012/09/12 16:20:09 [debug] 15171#0: *1 http header: "Accept: */*" 2012/09/12 16:20:09 [debug] 15171#0: *1 http header: "Host: t1.example.com" 2012/09/12 16:20:09 
[debug] 15171#0: *1 http header done 2012/09/12 16:20:09 [debug] 15171#0: *1 event timer del: 11: 3134905866 2012/09/12 16:20:09 [debug] 15171#0: *1 rewrite phase: 0 2012/09/12 16:20:09 [debug] 15171#0: *1 test location: ~ "^/quux(/.*)?$" 2012/09/12 16:20:09 [debug] 15171#0: *1 using configuration "" 2012/09/12 16:20:09 [debug] 15171#0: *1 http cl:-1 max:1048576 2012/09/12 16:20:09 [debug] 15171#0: *1 rewrite phase: 2 2012/09/12 16:20:09 [debug] 15171#0: *1 post rewrite phase: 3 2012/09/12 16:20:09 [debug] 15171#0: *1 generic phase: 4 2012/09/12 16:20:09 [debug] 15171#0: *1 generic phase: 5 2012/09/12 16:20:09 [debug] 15171#0: *1 access phase: 6 2012/09/12 16:20:09 [debug] 15171#0: *1 access phase: 7 2012/09/12 16:20:09 [debug] 15171#0: *1 post access phase: 8 2012/09/12 16:20:09 [debug] 15171#0: *1 try files phase: 9 2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 10 2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 11 2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 12 2012/09/12 16:20:09 [debug] 15171#0: *1 http filename: "/data/web/t1.example.com/htdoc/foo/bar.gif" 2012/09/12 16:20:09 [debug] 15171#0: *1 add cleanup: 09C0B3D8 2012/09/12 16:20:09 [debug] 15171#0: *1 http static fd: 14 2012/09/12 16:20:09 [debug] 15171#0: *1 http set discard body 2012/09/12 16:20:09 [debug] 15171#0: *1 HTTP/1.1 200 OK Server: nginx/1.3.6 Date: Wed, 12 Sep 2012 14:20:09 GMT Content-Type: image/gif Content-Length: 68 Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT Connection: keep-alive ETag: "501a0a78-44" Accept-Ranges: bytes 2012/09/12 16:20:09 [debug] 15171#0: *1 write new buf t:1 f:0 09C0B500, pos 09C0B500, size: 235 file: 0, size: 0 2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter: l:0 f:0 s:235 2012/09/12 16:20:09 [debug] 15171#0: *1 http output filter "/foo/bar.gif?" 2012/09/12 16:20:09 [debug] 15171#0: *1 http copy filter: "/foo/bar.gif?" 
2012/09/12 16:20:09 [debug] 15171#0: *1 read: 14, 09C0B67C, 68, 0 2012/09/12 16:20:09 [debug] 15171#0: *1 http postpone filter "/foo/bar.gif?" 09C0B6C0 2012/09/12 16:20:09 [debug] 15171#0: *1 write old buf t:1 f:0 09C0B500, pos 09C0B500, size: 235 file: 0, size: 0 2012/09/12 16:20:09 [debug] 15171#0: *1 write new buf t:1 f:0 09C0B67C, pos 09C0B67C, size: 68 file: 0, size: 0 2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter: l:1 f:0 s:303 2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter limit 0 2012/09/12 16:20:09 [debug] 15171#0: *1 writev: 303 2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter 00000000 2012/09/12 16:20:09 [debug] 15171#0: *1 http copy filter: 0 "/foo/bar.gif?" 2012/09/12 16:20:09 [debug] 15171#0: *1 http finalize request: 0, "/foo/bar.gif?" a:1, c:1 2012/09/12 16:20:09 [debug] 15171#0: *1 set http keepalive handler 2012/09/12 16:20:09 [debug] 15171#0: *1 http close request 2012/09/12 16:20:09 [debug] 15171#0: *1 http log handler 2012/09/12 16:20:09 [debug] 15171#0: *1 run cleanup: 09C0B3D8 2012/09/12 16:20:09 [debug] 15171#0: *1 file cleanup: fd:14 2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09C0AE10, unused: 1645 2012/09/12 16:20:09 [debug] 15171#0: *1 event timer add: 11: 75000:3134920866 2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BDA0C8 2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BE3210 2012/09/12 16:20:09 [debug] 15171#0: *1 hc free: 00000000 0 2012/09/12 16:20:09 [debug] 15171#0: *1 hc busy: 00000000 0 2012/09/12 16:20:09 [debug] 15171#0: *1 tcp_nodelay 2012/09/12 16:20:09 [debug] 15171#0: *1 reusable connection: 1 2012/09/12 16:20:09 [debug] 15171#0: *1 post event 09C2A710 2012/09/12 16:20:09 [debug] 15171#0: posted event 09C2A710 2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710 2012/09/12 16:20:09 [debug] 15171#0: *1 http keepalive handler 2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BE3210:1024 2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 -1 of 1024 2012/09/12 16:20:09 
[debug] 15171#0: *1 recv() not ready (11: Resource temporarily unavailable) 2012/09/12 16:20:09 [debug] 15171#0: posted event 00000000 2012/09/12 16:20:09 [debug] 15171#0: worker cycle 2012/09/12 16:20:09 [debug] 15171#0: accept mutex locked 2012/09/12 16:20:09 [debug] 15171#0: epoll timer: 75000 2012/09/12 16:20:09 [debug] 15171#0: epoll: fd:11 ev:0001 d:09C117C8 2012/09/12 16:20:09 [debug] 15171#0: *1 post event 09C2A710 2012/09/12 16:20:09 [debug] 15171#0: timer delta: 2 2012/09/12 16:20:09 [debug] 15171#0: posted events 09C2A710 2012/09/12 16:20:09 [debug] 15171#0: posted event 09C2A710 2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710 2012/09/12 16:20:09 [debug] 15171#0: *1 http keepalive handler 2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 0 of 1024 2012/09/12 16:20:09 [info] 15171#0: *1 client 127.0.0.1 closed keepalive connection 2012/09/12 16:20:09 [debug] 15171#0: *1 close http connection: 11 2012/09/12 16:20:09 [debug] 15171#0: *1 event timer del: 11: 3134920866 2012/09/12 16:20:09 [debug] 15171#0: *1 reusable connection: 0 2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BE3210 2012/09/12 16:20:09 [debug] 15171#0: *1 free: 00000000 2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BD9FC0, unused: 56 Debug log during the second test request: 2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710 2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BDA0C8:672 2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BE3210:1024 2012/09/12 16:20:14 [debug] 15171#0: *2 posix_memalign: 09C0AE10:4096 @16 2012/09/12 16:20:14 [debug] 15171#0: *2 http process request line 2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 183 of 1024 2012/09/12 16:20:14 [debug] 15171#0: *2 http request line: "GET /quux/foo/bar.gif HTTP/1.1" 2012/09/12 16:20:14 [debug] 15171#0: *2 http uri: "/quux/foo/bar.gif" 2012/09/12 16:20:14 [debug] 15171#0: *2 http args: "" 2012/09/12 16:20:14 [debug] 15171#0: *2 http exten: "gif" 2012/09/12 16:20:14 [debug] 
15171#0: *2 http process request header line 2012/09/12 16:20:14 [debug] 15171#0: *2 http header: "User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2" 2012/09/12 16:20:14 [debug] 15171#0: *2 http header: "Accept: */*" 2012/09/12 16:20:14 [debug] 15171#0: *2 http header: "Host: t1.example.com" 2012/09/12 16:20:14 [debug] 15171#0: *2 http header done 2012/09/12 16:20:14 [debug] 15171#0: *2 event timer del: 11: 3134910906 2012/09/12 16:20:14 [debug] 15171#0: *2 rewrite phase: 0 2012/09/12 16:20:14 [debug] 15171#0: *2 test location: ~ "^/quux(/.*)?$" 2012/09/12 16:20:14 [debug] 15171#0: *2 using configuration "^/quux(/.*)?$" 2012/09/12 16:20:14 [debug] 15171#0: *2 http cl:-1 max:1048576 2012/09/12 16:20:14 [debug] 15171#0: *2 rewrite phase: 2 2012/09/12 16:20:14 [debug] 15171#0: *2 post rewrite phase: 3 2012/09/12 16:20:14 [debug] 15171#0: *2 generic phase: 4 2012/09/12 16:20:14 [debug] 15171#0: *2 generic phase: 5 2012/09/12 16:20:14 [debug] 15171#0: *2 access phase: 6 2012/09/12 16:20:14 [debug] 15171#0: *2 access phase: 7 2012/09/12 16:20:14 [debug] 15171#0: *2 post access phase: 8 2012/09/12 16:20:14 [debug] 15171#0: *2 try files phase: 9 2012/09/12 16:20:14 [debug] 15171#0: *2 http script copy: "/data/web/t1.example.com/htdoc" 2012/09/12 16:20:14 [debug] 15171#0: *2 http script capture: "/foo/bar.gif" 2012/09/12 16:20:14 [debug] 15171#0: *2 trying to use file: "" "/data/web/t1.example.com/htdoc/foo/bar.gif" 2012/09/12 16:20:14 [debug] 15171#0: *2 try file uri: "" 2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 10 2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 11 2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 12 2012/09/12 16:20:14 [debug] 15171#0: *2 http script copy: "/data/web/t1.example.com/htdoc" 2012/09/12 16:20:14 [debug] 15171#0: *2 http script capture: "/foo/bar.gif" 2012/09/12 16:20:14 [debug] 15171#0: *2 http filename: "/data/web/t1.example.com/htdoc/foo/bar.gif" 
2012/09/12 16:20:14 [debug] 15171#0: *2 add cleanup: 09C0B414 2012/09/12 16:20:14 [debug] 15171#0: *2 http static fd: 14 2012/09/12 16:20:14 [debug] 15171#0: *2 http set discard body 2012/09/12 16:20:14 [debug] 15171#0: *2 HTTP/1.1 200 OK Server: nginx/1.3.6 Date: Wed, 12 Sep 2012 14:20:14 GMT Content-Type: application/octet-stream Content-Length: 68 Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT Connection: keep-alive ETag: "501a0a78-44" Accept-Ranges: bytes 2012/09/12 16:20:14 [debug] 15171#0: *2 write new buf t:1 f:0 09C0B53C, pos 09C0B53C, size: 250 file: 0, size: 0 2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter: l:0 f:0 s:250 2012/09/12 16:20:14 [debug] 15171#0: *2 http output filter "?" 2012/09/12 16:20:14 [debug] 15171#0: *2 http copy filter: "?" 2012/09/12 16:20:14 [debug] 15171#0: *2 read: 14, 09C0B6C4, 68, 0 2012/09/12 16:20:14 [debug] 15171#0: *2 http postpone filter "?" 09C0B708 2012/09/12 16:20:14 [debug] 15171#0: *2 write old buf t:1 f:0 09C0B53C, pos 09C0B53C, size: 250 file: 0, size: 0 2012/09/12 16:20:14 [debug] 15171#0: *2 write new buf t:1 f:0 09C0B6C4, pos 09C0B6C4, size: 68 file: 0, size: 0 2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter: l:1 f:0 s:318 2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter limit 0 2012/09/12 16:20:14 [debug] 15171#0: *2 writev: 318 2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter 00000000 2012/09/12 16:20:14 [debug] 15171#0: *2 http copy filter: 0 "?" 2012/09/12 16:20:14 [debug] 15171#0: *2 http finalize request: 0, "?" 
a:1, c:1 2012/09/12 16:20:14 [debug] 15171#0: *2 set http keepalive handler 2012/09/12 16:20:14 [debug] 15171#0: *2 http close request 2012/09/12 16:20:14 [debug] 15171#0: *2 http log handler 2012/09/12 16:20:14 [debug] 15171#0: *2 run cleanup: 09C0B414 2012/09/12 16:20:14 [debug] 15171#0: *2 file cleanup: fd:14 2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09C0AE10, unused: 1568 2012/09/12 16:20:14 [debug] 15171#0: *2 event timer add: 11: 75000:3134925906 2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BDA0C8 2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BE3210 2012/09/12 16:20:14 [debug] 15171#0: *2 hc free: 00000000 0 2012/09/12 16:20:14 [debug] 15171#0: *2 hc busy: 00000000 0 2012/09/12 16:20:14 [debug] 15171#0: *2 tcp_nodelay 2012/09/12 16:20:14 [debug] 15171#0: *2 reusable connection: 1 2012/09/12 16:20:14 [debug] 15171#0: *2 post event 09C2A710 2012/09/12 16:20:14 [debug] 15171#0: posted event 09C2A710 2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710 2012/09/12 16:20:14 [debug] 15171#0: *2 http keepalive handler 2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BE3210:1024 2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 -1 of 1024 2012/09/12 16:20:14 [debug] 15171#0: *2 recv() not ready (11: Resource temporarily unavailable) 2012/09/12 16:20:14 [debug] 15171#0: posted event 00000000 2012/09/12 16:20:14 [debug] 15171#0: worker cycle 2012/09/12 16:20:14 [debug] 15171#0: accept mutex locked 2012/09/12 16:20:14 [debug] 15171#0: epoll timer: 75000 2012/09/12 16:20:14 [debug] 15171#0: epoll: fd:11 ev:0001 d:09C117C9 2012/09/12 16:20:14 [debug] 15171#0: *2 post event 09C2A710 2012/09/12 16:20:14 [debug] 15171#0: timer delta: 2 2012/09/12 16:20:14 [debug] 15171#0: posted events 09C2A710 2012/09/12 16:20:14 [debug] 15171#0: posted event 09C2A710 2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710 2012/09/12 16:20:14 [debug] 15171#0: *2 http keepalive handler 2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 0 of 1024 
2012/09/12 16:20:14 [info] 15171#0: *2 client 127.0.0.1 closed keepalive connection 2012/09/12 16:20:14 [debug] 15171#0: *2 close http connection: 11 2012/09/12 16:20:14 [debug] 15171#0: *2 event timer del: 11: 3134925906 2012/09/12 16:20:14 [debug] 15171#0: *2 reusable connection: 0 2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BE3210 2012/09/12 16:20:14 [debug] 15171#0: *2 free: 00000000 2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BD9FC0, unused: 56 |
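The debug logs hint at the cause: the first request records `http exten: "gif"`, while in the second the output filter sees an empty URI (`"?"`), so there is no extension left to map and nginx falls back to default_type. A sketch of that extension-based lookup using Python's mimetypes module (illustrative only, not nginx's types hash):

```python
import mimetypes

DEFAULT_TYPE = "application/octet-stream"  # nginx's built-in default_type

def content_type_for(uri: str) -> str:
    # Content-Type is chosen from the URI's extension; with nothing to
    # inspect, the configured default_type wins.
    guessed, _ = mimetypes.guess_type(uri)
    return guessed or DEFAULT_TYPE

print(content_type_for("/foo/bar.gif"))  # image/gif
print(content_type_for(""))              # application/octet-stream
```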
---

| #242 | DAV module does not respect if-unmodified-since | nginx-module | defect | 13 years ago |

I.e., if you PUT or DELETE a resource with an If-Unmodified-Since header, the overwrite or delete will go through happily even if the header should have prevented it. (This is a common use case: you've previously fetched a version of a resource and you know its modified date; then, when updating or deleting it, you want to check for race conditions with other clients, and can use If-Unmodified-Since to get an error back if someone else messed with the resource in the meantime.)

Find a patch for this attached (also at https://gist.github.com/4013062). It's my first nginx contribution -- feel free to point out style mistakes or general wrong-headedness. I did not find a clean way to make the existing code in ngx_http_not_modified_filter_module.c handle this: it looks directly at the Last-Modified header and, as a header filter, will only run *after* the actions for the request have already been taken.

I also did not add code for If-Match, which is analogous; code for it could probably be added to the ngx_http_test_if_unmodified function I added (which would be renamed in that case). But I don't really understand nginx's handling of etags yet, so I didn't touch that.
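The precondition the patch implements can be sketched like this (RFC 7232 semantics; the helper below is an illustration, not the attached C patch):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def if_unmodified_since_ok(header_value: str, resource_mtime: datetime) -> bool:
    # Proceed only if the resource has not been modified after the date
    # the client supplied; otherwise the server should answer 412.
    return resource_mtime <= parsedate_to_datetime(header_value)

mtime = datetime(2012, 11, 1, tzinfo=timezone.utc)
print(if_unmodified_since_ok("Fri, 02 Nov 2012 00:00:00 GMT", mtime))  # True  -> apply PUT/DELETE
print(if_unmodified_since_ok("Wed, 31 Oct 2012 00:00:00 GMT", mtime))  # False -> 412
```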
---

| #52 | urlencode/urldecode needed in rewrite and other places | nginx-module | enhancement | 14 years ago |

If $http_accept contains spaces, they are passed on without any encoding:

rewrite /cgi-bin/index.pl?_requri=$uri&_accept=$http_accept break;
...
proxy_pass http://127.0.0.1:82; # mini-httpd listening
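The requested behaviour, illustrated with Python's `urllib.parse.quote` (the Accept value below is a made-up example):

```python
from urllib.parse import quote

accept = "text/html, application/xhtml+xml;q=0.9"
# Before embedding a header value into a rewritten URL, spaces and other
# reserved bytes have to be percent-encoded:
encoded = quote(accept, safe="")
print(encoded)  # text%2Fhtml%2C%20application%2Fxhtml%2Bxml%3Bq%3D0.9
```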
---

| #165 | Nginx worker processes don't seem to have the right group permissions | nginx-core | enhancement | 14 years ago |

Package: nginx
Version: 1.2.0-1~squeeze (from the nginx repository, Debian version)

When a UNIX domain socket's permissions are set to allow the primary group of the nginx worker processes to read/write on it, the nginx worker processes fail to access it, and a 'permission denied' error is logged. Way to reproduce it: bind nginx to a PHP-FPM UNIX domain socket.

PHP-FPM socket configured as follows:

Nginx configured as follows:

Details on the configuration can be found here: http://forum.nginx.org/read.php?2,226182

It would also be nice to check that any group of the nginx worker processes can be used for setting access permissions on sockets, not only the primary one.
---

| #239 | Support for large (> 64k) FastCGI requests | nginx-module | enhancement | 13 years ago |

Currently, a hardcoded limit produces an '[alert] fastcgi request record is too big: ...' message in the error log when an attempt is made to send a request larger than 64k through nginx. The improvement would be to handle larger requests, based on configuration, if possible. Something similar to the work already done on output buffers would be nice. The only current workaround is not to use FastCGI, i.e. to revert to something like Apache, which is a huge step backwards...
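The 64k limit comes from the FastCGI record header: contentLength is a 16-bit field, so a single record can carry at most 65535 bytes. Handling bigger requests means splitting the payload across several records; a sketch of that chunking (illustrative, not nginx code):

```python
def split_into_records(payload: bytes, max_content: int = 0xFFFF):
    # One FastCGI record carries at most 65535 content bytes; larger
    # payloads have to be spread over multiple records.
    return [payload[i:i + max_content] for i in range(0, len(payload), max_content)]

chunks = split_into_records(b"x" * 150000)
print([len(c) for c in chunks])  # [65535, 65535, 18930]
```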
---

| #55 | Opera version detected incorrectly | nginx-module | defect | 14 years ago |

In recent versions of the Opera browser the User-Agent looks like this:

Opera/9.80 (Windows NT 6.1; U; MRA 5.8 (build 4661); ru) Presto/2.8.131 Version/11.11

That is, the version is given by "Version/11.11", not "Opera/9.80". In the ngx_http_browser_module it is detected like this:

Replacing that with

correctly detects the new versions, but old versions would then be a problem.
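The fix the reporter is circling around is to prefer the "Version/" token and fall back to "Opera/" for older releases. A sketch in Python (nginx's actual detection lives in C, in ngx_http_browser_module):

```python
import re

def opera_version(ua):
    # Opera >= 10 reports its real version as "Version/x.y"; older
    # releases only have "Opera/x.y". Prefer the former, fall back to the latter.
    m = re.search(r"Version/(\d+(?:\.\d+)*)", ua) or re.search(r"Opera[/ ](\d+(?:\.\d+)*)", ua)
    return m.group(1) if m else None

print(opera_version("Opera/9.80 (Windows NT 6.1; U; ru) Presto/2.8.131 Version/11.11"))  # 11.11
print(opera_version("Opera/9.64 (X11; Linux i686; U; en)"))                              # 9.64
```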
---

Yaroslav Zhuravlev (1 match)
| #1467 | Problem of location matching with a given request | documentation | defect | 8 years ago |

Hi, guys. I've got a problem with location matching and regexps: nginx is not finding the match as described here: https://nginx.ru/en/docs/http/ngx_http_core_module.html#location

My request is: http://localhost:8080/catalog/css/asdftail

My conf is:

server {
listen 8080;
location ~ ^/catalog/(js|css|i)/(.*)$
{
return 405;
}
location / {
location ~ ^.+tail$ {
return 403;
}
return 402;
}
}
My problem is: with this request, my conf should return a 405 error, but it returns 403, because nginx starts checking regexp locations from "the location with the longest matching prefix is selected and remembered", not from the top of the config ("Then regular expressions are checked, in the order of their appearance in the configuration file"). If my conf looks like this:

server {
listen 8080;
location ~ ^/catalog/(js|css|i)/(.*)$
{
return 405;
}
location ~ ^.+tail$ {
return 403;
}
location / {
return 402;
}
}
or like this:

server {
listen 8080;
location catalog/ {
location ~ ^/catalog/(js|css|i)/(.*)$
{
return 405;
}
}
location / {
location ~ ^.+tail$ {
return 403;
}
return 402;
}
}
Then everything works as described in the manual.
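A tiny, deliberately simplified model of the behaviour the reporter observed: the longest matching prefix location is selected first, its nested regex locations are consulted, and only then do other regexes get a chance. This is a hypothetical illustration of the described matching order, not nginx's implementation:

```python
import re

def select_location(uri, prefix_locations, toplevel_regexes):
    # 1. Select and remember the location with the longest matching prefix.
    best = max((loc for loc in prefix_locations if uri.startswith(loc["prefix"])),
               key=lambda loc: len(loc["prefix"]))
    # 2. Regex locations nested inside the remembered prefix are tried first...
    for pattern, code in best.get("nested", []):
        if re.search(pattern, uri):
            return code
    # 3. ...before top-level regex locations declared earlier in the file.
    for pattern, code in toplevel_regexes:
        if re.search(pattern, uri):
            return code
    return best["code"]

conf = [{"prefix": "/", "code": 402, "nested": [(r"^.+tail$", 403)]}]
print(select_location("/catalog/css/asdftail", conf,
                      [(r"^/catalog/(js|css|i)/(.*)$", 405)]))  # 403, as observed
```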
---

(empty) (40 matches)
| #1463 | Build in --builddir throws error on nginx.h | nginx-core | defect | 8 years ago |

When building with --builddir, an error is thrown during compilation:

> [...]
> Running Mkbootstrap for nginx ()
> chmod 644 "nginx.bs"
> "/foo/bar/perl5/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- nginx.bs blib/arch/auto/nginx/nginx.bs 644
> gmake[2]: *** No rule to make target `../../../../../src/core/nginx.h', needed by `nginx.c'. Stop.
> gmake[2]: Leaving directory `/home/user/build/src/http/modules/perl'
> gmake[1]: *** [/home/user/build//src/http/modules/perl/blib/arch/auto/nginx/nginx.so] Error 2
> gmake[1]: Leaving directory `/home/user/nginx-1.13.8'
> gmake: *** [build] Error 2

Toolchain:

gmake --version: GNU Make 3.81, Copyright (C) 2006 Free Software Foundation, Inc.
gcc --version: gcc (GCC) 5.3.0
cpp --version: cpp (GCC) 5.3.0
---

| #348 | Excessive urlencode in if-set | nginx-core | defect | 13 years ago |

Hello, I had set up Apache with mod_dav_svn behind nginx acting as a front-end proxy, and while committing a copied file with brackets ([]) in its filename into that subversion repository I found a bug in nginx. How to reproduce it (the configuration file is as simple as possible while still triggering the bug):

$ cat nginx.conf
error_log stderr debug;
pid nginx.pid;
events {
worker_connections 1024;
}
http {
access_log access.log;
server {
listen 8000;
server_name localhost;
location / {
set $fixed_destination $http_destination;
if ( $http_destination ~* ^(.*)$ )
{
set $fixed_destination $1;
}
proxy_set_header Destination $fixed_destination;
proxy_pass http://127.0.0.1:8010;
}
}
}
$ nginx -p $PWD -c nginx.conf -g 'daemon off;'
...
In a second terminal window:

$ nc -l 8010

In a third terminal window:

$ curl --verbose --header 'Destination: http://localhost:4000/foo%5Bbar%5D.txt' '0:8000/%41.txt'
* About to connect() to 0 port 8000 (#0)
*   Trying 0.0.0.0...
* Adding handle: conn: 0x7fa91b00b600
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fa91b00b600) send_pipe: 1, recv_pipe: 0
* Connected to 0 (0.0.0.0) port 8000 (#0)
> GET /%41.txt HTTP/1.1
> User-Agent: curl/7.30.0
> Host: 0:8000
> Accept: */*
> Destination: http://localhost:4000/foo%5Bbar%5D.txt
>

Back in the second terminal window ($ nc -l 8010):

GET /%41.txt HTTP/1.0
Destination: http://localhost:4000/foo%255Bbar%255D.txt
Host: 127.0.0.1:8010
Connection: close
User-Agent: curl/7.30.0
Accept: */*
The problem is that the Destination header was changed from ...

In other cases (the URL does not contain an urlencoded character, or that ...) ...

Note: why do I need that ...? This bug also happens on nginx/0.7.67 in Debian Squeeze.
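The corruption is classic double encoding: the already percent-encoded Destination value goes through URL escaping a second time, so the '%' itself becomes '%25'. A quick demonstration with Python's `urllib.parse.quote`:

```python
from urllib.parse import quote

destination = "http://localhost:4000/foo%5Bbar%5D.txt"
# Escaping an already percent-encoded value encodes the '%' again,
# turning %5B into %255B -- exactly the change seen at the backend:
print(quote(destination, safe=":/"))  # http://localhost:4000/foo%255Bbar%255D.txt
```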
---

| #564 | map regex matching affects rewrite directive | nginx-core | defect | 12 years ago |

Using a regex in the map directive affects a subsequent rewrite:

http {
map $http_accept_language $lang {
default en;
~(de) de;
}
server {
server_name test.local
listen 80;
rewrite ^/(.*)$ http://example.com/$lang/$1 permanent;
}
}
Expected:

$ curl -sI http://test.local/foo | grep Location
Location: http://example.com/en/foo
$ curl -H "Accept-Language: de" -sI http://test.local/foo | grep Location
Location: http://example.com/de/foo

Actual:

$ curl -sI http://test.local/foo | grep Location
Location: http://example.com/en/foo
$ curl -H "Accept-Language: de" -sI http://test.local/foo | grep Location
Location: http://example.com/de/de

If I leave out the parentheses in the map pattern:

$ curl -H "Accept-Language: de" -sI http://test.local/foo | grep Location
Location: http://example.com/de/
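A plausible mechanism, sketched in Python: the numbered captures ($1, $2, ...) live in one shared per-request slot, and evaluating $lang runs the map's regex, overwriting the captures from the rewrite pattern before $1 is substituted. This is an illustrative model only, not nginx internals:

```python
import re

last_match = None  # stand-in for the shared per-request capture slot

def run_regex(pattern, subject):
    global last_match
    m = re.search(pattern, subject)
    if m:
        last_match = m
    return m

run_regex(r"^/(.*)$", "/foo")        # rewrite pattern: captures $1 = "foo"
run_regex(r"(de)", "de,en;q=0.7")    # evaluating the map for $lang clobbers it
print(last_match.group(1))           # de  -- so the rewrite emits /de/de, not /de/foo
```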
---

| #752 | try_files + subrequest + proxy-handler problem | nginx-core | defect | 11 years ago |

When using subrequests with try_files, the following behaviour is observed:

server {
listen 8081;
default_type text/html;
location /uno { return 200 "uno "; }
location /duo { return 200 "duo "; }
location /tres { return 200 "tres "; }
}
server {
listen 8080;
location / {
root /tmp;
try_files /tres =404;
proxy_pass http://127.0.0.1:8081;
add_after_body /duo;
}
}
Assuming /tmp/tres exists, a request to /uno on port 8080 returns "uno tres ", not "uno duo " or "tres tres ". That is, the main request assumes that the request URI is unmodified and passes the original request URI, "/uno", but in the subrequest the URI is modified, and nginx uses the modified URI, "/tres". This is believed to be a bug, and one of the following should be done:

See this thread (in Russian) for additional details.
| #756 | Client disconnect in ngx_http_image_filter_module | nginx-module | defect | 11 years ago |
I have encountered a bug in ngx_http_image_filter_module when used in conjunction with ngx_http_proxy_module; the configuration is as follows:

location /img/ {
}

The steps to reproduce are rather complicated, as they depend on how TCP fragments the response coming from the proxy:
Nginx appears to give up right away on waiting for data if the contents of the first TCP packet received from the proxy do not contain a valid image header, i.e. ngx_http_image_test() will return NGX_HTTP_IMAGE_SIZE, etc. In my experience this was triggered by a subtle change in AWS S3 that introduced further fragmentation of the TCP responses. Versions affected: 1.6.2, 1.6.3, 1.7.2, 1.8.0, etc. (all?). Attaching a 1.8.0 patch that resolves it; the other versions can be fixed similarly. I think a better fix would be to "return NGX_OK" if we do not have enough data in "case NGX_HTTP_IMAGE_START", and "return NGX_HTTP_UNSUPPORTED_MEDIA_TYPE" (as per the original code) if enough data has been read but it's really not an image; however, this exceeds the scope of the fix and my use case. nginx-devel thread: http://mailman.nginx.org/pipermail/nginx-devel/2015-April/006876.html
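The suggested "wait for more data" fix can be illustrated with a small Python model (my own sketch, not nginx's actual buffer handling; the 4-byte threshold and the need_more/image/unsupported names are illustrative):

```python
# Minimal image signatures, similar to those ngx_http_image_test() checks:
# JPEG, GIF, PNG.
SIGNATURES = (b"\xff\xd8", b"GIF8", b"\x89PNG")
MIN_HEADER = 4  # illustrative: bytes needed before we can decide

NEED_MORE, IMAGE, UNSUPPORTED = "need_more", "image", "unsupported"

def classify(buf: bytes) -> str:
    """Decide what to do with the data buffered so far from the upstream."""
    if any(buf.startswith(sig) for sig in SIGNATURES):
        return IMAGE
    if len(buf) < MIN_HEADER:
        # Proposed behaviour: too early to tell -- wait for the next
        # TCP segment instead of returning an error right away.
        return NEED_MORE
    return UNSUPPORTED

print(classify(b"\x89P"))         # need_more: first packet cut the PNG magic short
print(classify(b"\x89PNG\r\n"))   # image
print(classify(b"<html><body>"))  # unsupported
```

The key point is the middle branch: a short buffer is ambiguous, so failing only after MIN_HEADER bytes have arrived makes the decision independent of TCP fragmentation.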
| #774 | modern_browser // gecko version overwrites msie version | nginx-module | defect | 11 years ago |
I am not sure if this behavior is still the case in the current version, but it occurs in 1.4 on Ubuntu 14.04. Given the following config:

##########################################
##########################################

on an IE11 (Win 8), $ancient_browser == 1. I am not sure if it's only me, but this seems wrong in my understanding of how the module should work. This applies to a 'real' IE11, but not to a spoofed UA (in Chromium 46.0.2462.0) of IE10, IE9, IE8 or IE7; in those cases everything works as expected. Interestingly, though, the next config:

##########################################
##########################################

works as expected (in terms of the IE behavior), meaning $ancient_browser != 1. But now it would also accept older Firefox versions, and that is not intended. The following config also gets $ancient_browser to be != 1:

##########################################
##########################################

Conclusion: it looks like the gecko version is overwriting the defined msie version. That is not necessarily exactly what is happening internally.
| #861 | Possibility of Inconsistent HPACK Dynamic Table Size in HTTP/2 Implementation | nginx-module | defect | 10 years ago |
The HPACK dynamic table is only initialized upon addition of the first entry (see ngx_http_v2_add_header in http/v2/ngx_http_v2_table.c). If a dynamic table size update is sent before the first header is added, the size will not be set appropriately: once the first header is added, the table size is set to NGX_HTTP_V2_TABLE_SIZE, resulting in a different size than the client's. After a brief reading of the HTTP/2 and HPACK specifications, it appears that updating the dynamic table size before adding any headers is allowed.
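The described inconsistency can be sketched with a toy Python model of the lazy-initialization logic (my own illustration, not nginx code; NGX_HTTP_V2_TABLE_SIZE in the nginx sources equals the HPACK default of 4096 octets):

```python
NGX_HTTP_V2_TABLE_SIZE = 4096  # nginx's compile-time default (HPACK default)

class LazyHpackTable:
    """Models a dynamic table that is only allocated on first insertion."""

    def __init__(self):
        self.size = None   # None = table not yet allocated
        self.entries = []

    def size_update(self, new_size):
        # A "dynamic table size update" arriving before any entry exists.
        if self.size is None:
            return  # bug analogue: the update is effectively lost
        self.size = new_size

    def add(self, name, value):
        if self.size is None:
            # Lazy init uses the default, not the size the peer set above.
            self.size = NGX_HTTP_V2_TABLE_SIZE
        self.entries.append((name, value))

table = LazyHpackTable()
table.size_update(0)                     # peer shrinks the table before the first header
table.add(":authority", "example.com")
print(table.size)                        # 4096 -- encoder and decoder now disagree
```

Once the two endpoints disagree on the table size, index references can resolve to different entries, which is why HPACK requires the update to take effect regardless of whether the table already holds entries.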
| #994 | perl_require directive has effect only at first config | other | defect | 10 years ago |
My configs are included as:

If I want to use the perl_require directive, I have to place it ONLY in the first conf file (in alphabetical order). If I put the directive into any other conf file, it has no effect: nginx does not even complain if I try to load a nonexistent module.
| #1058 | undocumented redirect? | documentation | defect | 10 years ago |
When a URL is requested without a trailing slash, a 301 redirect to the same URL with a trailing slash always happens. Example config:

location /dir {
}

The same thing happens in this variant:

location /dir/ {
}

However, the documentation seems to describe this behavior only for locations with *_pass directives, or maybe I was looking in the wrong place; all I found was this:

"Special processing is performed if a location is defined by a prefix string that ends with a slash and requests are processed by proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, or memcached_pass. In response to a request with a URI equal to this string, but without the trailing slash, a permanent redirect with code 301 will be returned to the URI with the slash appended."

Example with a ready-made configuration:

$ curl -I http://localhost:90/ig/infografika
HTTP/1.1 301 Moved Permanently
Server: nginx/1.11.3
Date: Wed, 24 Aug 2016 09:52:10 GMT
Content-Type: text/html
Content-Length: 185
Location: http://localhost:90/ig/infografika/
Connection: keep-alive

I also checked on version 1.4.2; it is all the same. If the directory does not exist, a 404 is returned immediately, but if it exists and the request had no trailing slash, the redirect occurs.
| #1238 | Core dump when $limit_rate is set both in a map and in a location | nginx-core | defect | 9 years ago |
This is a minimal server configuration used to reproduce the problem (only the map and server sections; the rest is the default configuration from the nginx.org CentOS 7 nginx-1.10.3 package):

map $arg_test $limit_rate {
default 128k;
test 4k;
}
server {
listen 8080;
location / {
root /var/www;
set $limit_rate 4k;
}
}
If a request to an affected location is made, nginx crashes with the following stack trace.

Program terminated with signal 7, Bus error.
#0 ngx_http_variable_request_set_size (r=0x7fb5c2761650, v=<optimized out>, data=140418628385320) at src/http/ngx_http_variables.c:730
730 *sp = s;
(gdb) thread apply all bt
Thread 1 (Thread 0x7fb5c1237840 (LWP 2648)):
#0 ngx_http_variable_request_set_size (r=0x7fb5c2761650, v=<optimized out>, data=140418628385320) at src/http/ngx_http_variables.c:730
#1 0x00007fb5c12e992d in ngx_http_rewrite_handler (r=0x7fb5c2761650) at src/http/modules/ngx_http_rewrite_module.c:180
#2 0x00007fb5c12a669c in ngx_http_core_rewrite_phase (r=0x7fb5c2761650, ph=<optimized out>) at src/http/ngx_http_core_module.c:901
#3 0x00007fb5c12a1b3d in ngx_http_core_run_phases (r=r@entry=0x7fb5c2761650) at src/http/ngx_http_core_module.c:847
#4 0x00007fb5c12a1c3a in ngx_http_handler (r=r@entry=0x7fb5c2761650) at src/http/ngx_http_core_module.c:830
#5 0x00007fb5c12ad0de in ngx_http_process_request (r=0x7fb5c2761650) at src/http/ngx_http_request.c:1910
#6 0x00007fb5c12ad952 in ngx_http_process_request_line (rev=0x7fb5c27bae10) at src/http/ngx_http_request.c:1022
#7 0x00007fb5c128de60 in ngx_event_process_posted (cycle=cycle@entry=0x7fb5c2745930, posted=0x7fb5c1575290 <ngx_posted_events>) at src/event/ngx_event_posted.c:33
#8 0x00007fb5c128d9d7 in ngx_process_events_and_timers (cycle=cycle@entry=0x7fb5c2745930) at src/event/ngx_event.c:259
#9 0x00007fb5c12944f0 in ngx_worker_process_cycle (cycle=cycle@entry=0x7fb5c2745930, data=data@entry=0x1) at src/os/unix/ngx_process_cycle.c:753
#10 0x00007fb5c1292e66 in ngx_spawn_process (cycle=cycle@entry=0x7fb5c2745930, proc=proc@entry=0x7fb5c1294460 <ngx_worker_process_cycle>, data=data@entry=0x1,
name=name@entry=0x7fb5c131c197 "worker process", respawn=respawn@entry=-3) at src/os/unix/ngx_process.c:198
#11 0x00007fb5c12946f0 in ngx_start_worker_processes (cycle=cycle@entry=0x7fb5c2745930, n=2, type=type@entry=-3) at src/os/unix/ngx_process_cycle.c:358
#12 0x00007fb5c1295283 in ngx_master_process_cycle (cycle=cycle@entry=0x7fb5c2745930) at src/os/unix/ngx_process_cycle.c:130
#13 0x00007fb5c127039d in main (argc=<optimized out>, argv=<optimized out>) at src/core/nginx.c:367
| #1383 | Error if using proxy_pass with variable and limit_except | nginx-core | defect | 9 years ago |
Hi nginx guys, I use nginx in front of a Varnish server, and I purge the Varnish cache via the PURGE method. nginx uses the following vhost config:

server {
listen *:80 default_server;
location / {
limit_except GET POST {
allow 127.0.0.1/32;
deny all;
}
set $upstream http://127.0.0.1:8080;
if ($http_user_agent = 'mobile') {
set $upstream http://127.0.0.1:8080;
}
proxy_pass $upstream;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
Intended behaviour: from anywhere but localhost, only GET/HEAD/POST can be requested; localhost can do everything. From remote it works as expected:

root@test:~# curl -X PURGE -I EXTIP
HTTP/1.1 403 Forbidden
Server: nginx
Date: Mon, 18 Sep 2017 10:39:23 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Vary: Accept-Encoding

But from localhost:

root@test:~# curl -X PURGE -I http://127.0.0.1
HTTP/1.1 500 Internal Server Error
Server: nginx
Date: Mon, 18 Sep 2017 10:39:06 GMT
Content-Type: text/html
Content-Length: 186
Connection: close

The nginx error log says:

==> /var/log/nginx/error.log <==
2017/09/18 12:39:06 [error] 2483#2483: *2 invalid URL prefix in "", client: 127.0.0.1, server: , request: "PURGE / HTTP/1.1", host: "127.0.0.1"

Without using variables in the vhost:

server {
listen *:80 default_server;
location / {
limit_except GET POST {
allow 127.0.0.1/32;
deny all;
}
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
it works as expected:

root@test:~# curl -X PURGE -I http://127.0.0.1
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 18 Sep 2017 10:45:35 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding

Other tests with a variable proxy_pass (e.g. using the GET method instead of PURGE) also fail with the same error. Please take a look at why nginx fails when combining limit_except with proxy_pass and variables. Thanks
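A possible workaround for this class of problem (an untested sketch of my own, not from the ticket): compute $upstream with a map at http level instead of set/if inside the location. A map value is evaluated lazily when proxy_pass reads the variable, whereas set runs in the rewrite phase of the matched location and appears not to run for methods handled via the limit_except block, leaving the variable empty:

```nginx
# http-level: evaluated on demand when $upstream is first read,
# independent of which (nested) location handles the request.
# Both values are identical here, mirroring the original config.
map $http_user_agent $upstream {
    default    http://127.0.0.1:8080;
    "mobile"   http://127.0.0.1:8080;
}
```

The location then keeps only `proxy_pass $upstream;` and the limit_except block, with the set/if lines removed.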
| #1598 | Windows Path Length Limitation issue | nginx-core | defect | 8 years ago |
Windows by default has a path length limit of 255 characters. On accessing a file with a path longer than 255 characters, nginx throws an error saying "The system cannot find the file specified":

CreateFile() "C:\nginx-1.13.12/client-data/patch-resources/linux/redhat/offline-meta/7/7Client/x86_64/extras/os/repodata/245f964e315fa121c203b924ce7328cd704e600b6150c4b7cd951c8707a70394f/245f964e315fa121c203b924ce7328cd704e600b6150c4b7cd951c8707a70394f-primary.sqlite.bz2" failed (3: The system cannot find the path specified)

Refer: https://docs.microsoft.com/en-us/windows/desktop/fileio/naming-a-file
| #1607 | mirror + limit_req = writing connections | nginx-core | defect | 8 years ago |
Hello, nginx seems to have a bug with mirror + limit_req.

Configuration (all servers could be the same machine for testing purposes, 127.0.0.1):

Frontend server:

limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;
location = /url1
{
mirror /url2;
proxy_pass http://127.0.0.1/test;
}
location = /url2
{
internal;
limit_req zone=one burst=10;
proxy_pass http://127.0.0.1/test2;
}
location = /status { stub_status on; }
Backend server:

location = /test { return 200; }
Mirror server:

location = /test2 { return 200; }
Now run:

# for i in {1..1000}; do curl http://127.0.0.1/url1 >/dev/null & sleep 0.05; done
Wait for completion of all requests and look at the writing connections:

# curl http://127.0.0.1/status
Active connections: 271
server accepts handled requests
 2001 2001 2001
Reading: 0 Writing: 271 Waiting: 0
# sleep 120
# netstat -atn | grep 127.0.0.1:80 | grep -v CLOSE_WAIT | wc -l
270
# service nginx reload
# pgrep -f shutting
# netstat -atn | grep 127.0.0.1:80 | grep -v CLOSE_WAIT | wc -l
0
# curl http://127.0.0.1/status
Active connections: 271
server accepts handled requests
 2002 2002 2002
Reading: 0 Writing: 271 Waiting: 0

When /url1 doesn't have limit_req but /url2 does, the number of writing connections reported by stub_status begins to grow. Watching netstat, I can also see CLOSE_WAIT connections growing. I didn't find any impact on request processing, at least while the number of connections is fairly low. Actually, after reloading nginx there seem to be no real (writing) connections left, but this breaks nginx monitoring, and only a restart of nginx resets the writing connections number. If both /url1 and /url2 have limit_req, or only /url1 has limit_req, all is OK. We use amd64 Debian stretch, with the nginx-extras package from Debian buster (rebuilt on stretch).
| #1850 | Content of the variable $sent_http_connection is incorrect | other | defect | 7 years ago |
There is a suspicion that the content of the variable $sent_http_connection is incorrect. Example: keep-alive is expected, but it actually contains close.

Request headers:

Host: anyhost
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0
Accept: image/webp,*/*
Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
Referer: http://anyhost/catalog/page/
Cookie: PHPSESSID=vkgt1iiofoav3u24o54et46oc7
Pragma: no-cache

Response headers:

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 15 Sep 2019 22:28:53 GMT
Content-Type: image/jpeg
Content-Length: 21576
Last-Modified: Wed, 06 Dec 2017 15:38:23 GMT
Connection: keep-alive
ETag: "5a280eef-5448"
X-Content-Type-Options: nosniff
Accept-Ranges: bytes
123.123.123.123 - - [16/Sep/2019:01:28:53 +0300] 200 21844 0.000 . 13117169 3 keep-alive close "GET /images/anypicture.jpg HTTP/1.0" "http://anyhost/catalog/page/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0" "-" |
| #1904 | sendfile with io-threads - nginx mistakenly considers premature client connection close if client sends FIN at response end | nginx-core | defect | 6 years ago |
Hi, the scenario is as follows:
The effect I've seen is that $body_bytes_sent holds partial data (up to the last "successful" sendfile call) and $request_completion is empty. I guess there are other effects; these are the ones I'm using, so they popped up. From what I've managed to understand from the code, it looks like the read_event_handler ngx_http_test_reading is called before the completed task from the io-thread is handled by the main thread, effectively making nginx think the client connection close happened earlier. I've managed to reproduce it on the latest nginx with a rather simple config, but it's timing-sensitive, so it doesn't happen on every transaction. I saw that using a bigger file with a rate limit increases the chances. Config:

worker_processes 1;
events {
worker_connections 1024;
}
http {
keepalive_timeout 120s;
keepalive_requests 1000;
log_format main "$status\t$sent_http_content_length\t$body_bytes_sent\t$request_completion";
access_log logs/access.log main;
error_log logs/error.log info;
aio threads;
sendfile on;
limit_rate 10m;
server {
listen 0.0.0.0:1234 reuseport;
location = /test-sendfile-close {
alias files/10mb;
}
}
}
I then tail -F the access log and the error log file, and send these requests from the same machine:

while true; do wget -q "http://10.1.1.1:1234/test-sendfile-close"; done

The output I get in the error log and access log (in this order) for a good transaction is:

2019/12/17 14:52:34 [info] 137444#137444: *1 client 10.1.1.1 closed keepalive connection
200 10485760 10485760 OK

But every few transactions I get this output instead:

2019/12/17 14:52:38 [info] 137444#137444: *7 client prematurely closed connection while sending response to client, client: 10.1.1.1, server: , request: "GET /test-sendfile-close HTTP/1.1", host: "10.1.1.1:1234"
200 10485760 3810520

As you can see, the reported sent bytes are lower than the actual value, and $request_completion is empty. I understand that the closer the client is to nginx, the higher the chance this could happen, but it's not just a lab issue: we've seen it in a field trial with clients at a distance of ~30ms RTT, with higher load of course. If there is need for any other information, I'll be glad to provide it. I appreciate the help and, in general, this great product you've built! Thank you, Shmulik Biran
| #1958 | `modern_browser` definition for Safari version is wrong/unexpected | nginx-module | defect | 6 years ago |
http://nginx.org/en/docs/http/ngx_http_browser_module.html
One of the great use cases for the browser module is redirecting users of unsupported (ancient) browsers to a notice page.

With the current implementation of modern_browser, the version one has to specify for Safari is the WebKit build number, not the Safari release version.

The Safari WebKit build number can be the same across different releases (see the example user agent strings below). I am currently working around this using a map:
"~ Version/((?:[1-9]|1[0-1])(?:\.\d+)+) (?:Mobile/\w+ )?Safari/(?:\d+(?:\.\d+)*)$" 1;
default 0;
}
and then combining it with:

# Redirect requests from IE to the unsupported browser page.
ancient_browser "MSIE ";
ancient_browser "Trident/";
modern_browser unlisted;
if ($ancient_browser) {
rewrite ^/.* /unsupported-browser/ last;
}
if ($is_safari_lt_12) {
rewrite ^/.* /unsupported-browser/ last;
}
It would be much nicer if one could just do:

modern_browser safari_version 12;

instead of needing the map and the additional if block.

More details:
https://trac.nginx.org/nginx/browser/nginx/src/http/modules/ngx_http_browser_module.c#L181
The current implementation matches Safari against the number after Safari/, i.e. the WebKit build number. For the mapping between WebKit builds and Safari releases, see:
https://en.wikipedia.org/wiki/Safari_version_history
The number after Safari/ in these example user agent strings is the WebKit build number:

Mozilla/5.0 (iPad; CPU iPhone OS 12_1_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1.2 Safari/605.1.15
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.5 Safari/605.1.15
The commonly referred-to version number is the number after Version/. I would like to propose:
1) changing the documentation to make it clearer that the version number one passes to modern_browser for Safari is the WebKit build number, not the Safari release version;
2) adding a new named option for matching on the Safari release version, i.e. the number after Version/.
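The map regex above can be sanity-checked outside nginx. A small Python sketch of my own (assuming PCRE and Python's re agree on this pattern; the Safari 11 user agent string is an example I added, not from the ticket):

```python
import re

# The map pattern from the workaround above: flags Safari whose
# marketing version ("Version/...") is 1-11, i.e. below 12.
SAFARI_LT_12 = re.compile(
    r" Version/((?:[1-9]|1[0-1])(?:\.\d+)+) (?:Mobile/\w+ )?Safari/(?:\d+(?:\.\d+)*)$"
)

def is_safari_lt_12(ua: str) -> bool:
    return SAFARI_LT_12.search(ua) is not None

safari_11 = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) "
             "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/11.1.2 Safari/605.1.15")
safari_12 = ("Mozilla/5.0 (iPad; CPU iPhone OS 12_1_3 like Mac OS X) "
             "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1")
safari_13 = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) "
             "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.5 Safari/605.1.15")

print(is_safari_lt_12(safari_11))  # True  -- old Safari is caught
print(is_safari_lt_12(safari_12))  # False -- modern Safari passes
print(is_safari_lt_12(safari_13))  # False
```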
| #1965 | $request_time less than $upstream_response_time | nginx-core | defect | 6 years ago |
nginx log_format:

log_format main escape=json '{ "http_x_forwarded_for": "[$http_x_forwarded_for]", ' '"remote_addr": "$remote_addr", ' '"remote_user": "$remote_user", ' '"time_local": "[$time_local]", ' '"request_method": "$request_method", ' '"request_host": "$scheme://$host", ' '"request_host_1": "$host", ' '"service_line": "itservice.api", ' '"request_uri": "$uri", ' '"query_string": "$query_string", ' '"server_protocol": "$server_protocol", ' '"status": "$status", ' '"body_bytes_sent": "$body_bytes_sent", ' '"http_referer": "$http_referer", ' '"http_user_agent": "$http_user_agent",' '"request_time": "$request_time", ' '"upstream_addr": "[$upstream_addr]", ' '"req_id": "$request_id", ' '"upstream_response_time": "$upstream_response_time" ' ' }';

nginx log:

{ "http_x_forwarded_for": "[]", "remote_addr": "192.168.11.130", "remote_user": "", "time_local": "[29/Apr/2020:01:11:33 +0800]", "request_method": "GET", "request_host": "https://xxx.abc.com", "request_host_1": "xxx.abc.com", "service_line": "itservice.api", "request_uri": "/api/v1/sensitive-info/batch/getUserInfo", "query_string": "batchNumber=xxx&userId=xxx&dataType=1", "server_protocol": "HTTP/1.1", "status": "200", "body_bytes_sent": "113", "http_referer": "", "http_user_agent": "Apache-HttpClient/4.5.10 (Java/1.8.0_211)","request_time": "0.011", "upstream_addr": "[192.168.10.182:80]", "req_id": "6bdcc5ce837247323599d37aaceba33c", "upstream_response_time": "0.012" }

Issue:

upstream_response_time: 0.012
request_time: 0.011

In this log, the request_time is less than the upstream_response_time. Why does this happen?
| #2012 | Wrong Connection header when keepalive is disabled | nginx-core | defect | 6 years ago |
I disabled keepalive with the directives keepalive_timeout 0 and keepalive_requests 0, but nginx continues to return the header Connection: keep-alive.

Steps to reproduce:

curl -v http://mydomain.org

Expected response:

Server: nginx
Date: Sat, 04 Jul 2020 21:52:23 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 225
Connection: close
my-trace: myhost-abcd

Actual response:

Server: nginx
Date: Sat, 04 Jul 2020 21:52:23 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 225
Connection: keep-alive
my-trace: myhost-abcd

My nginx -T output is in the attached file.
| #2060 | nginx doesn't treat http_502 as an unsuccessful attempt in ngx_http_grpc_module | nginx-module | defect | 6 years ago |
According to the nginx documentation (http://nginx.org/en/docs/http/ngx_http_grpc_module.html), the syntax "grpc_next_upstream error timeout http_502;" is valid, and http_502 should be counted as an unsuccessful attempt. However, nginx does not in fact treat http_502 as an unsuccessful attempt. Below is an example: a gRPC client sent a request to the nginx server every second, and nginx kept sending requests, round-robin, both to the upstream server that returned 502 and to the other one; nginx did not count the 502 responses as unsuccessful attempts. nginx config file:

upstream testserver {
server 10.46.46.161:9999 max_fails=1 fail_timeout=60; # another nginx server which can return responses with error code 502.
server 10.46.46.160:9999; # a server which can return normal responses with status code 200.
}
server {
listen 8888 http2;
location /com.company.test {
grpc_pass grpc://testserver;
grpc_next_upstream error timeout http_504 http_502 non_idempotent;
}
}
access log file:

[11:24:40 +0000]|| "POST /com.company.test/order HTTP/2.0"|| 502|| 150|| "-"|| grpc-java-netty/1.17.2|| -|| 0.000|| 0.001|| 10.46.46.161:9999|| 502
[11:24:41 +0000]|| "POST /com.company.test/order HTTP/2.0"|| 200|| 28|| "-"|| grpc-java-netty/1.17.2|| -|| 0.003|| 0.003|| 10.46.46.160:9999|| 200
[11:24:42 +0000]|| "POST /com.company.test/order HTTP/2.0"|| 502|| 150|| "-"|| grpc-java-netty/1.17.2|| -|| 0.000|| 0.001|| 10.46.46.161:9999|| 502
[11:24:43 +0000]|| "POST /com.company.test/order HTTP/2.0"|| 200|| 28|| "-"|| grpc-java-netty/1.17.2|| -|| 0.005|| 0.005|| 10.46.46.160:9999|| 200
[11:24:44 +0000]|| "POST /com.company.test/order HTTP/2.0"|| 502|| 150|| "-"|| grpc-java-netty/1.17.2|| -|| 0.001|| 0.000|| 10.46.46.161:9999|| 502
[11:24:45 +0000]|| "POST /com.company.test/order HTTP/2.0"|| 200|| 28|| "-"|| grpc-java-netty/1.17.2|| -|| 0.005|| 0.004|| 10.46.46.160:9999|| 200
[11:24:46 +0000]|| "POST /com.company.test/order HTTP/2.0"|| 502|| 150|| "-"|| grpc-java-netty/1.17.2|| -|| 0.000|| 0.001|| 10.46.46.161:9999|| 502
[11:24:47 +0000]|| "POST /com.company.test/order HTTP/2.0"|| 200|| 28|| "-"|| grpc-java-netty/1.17.2|| -|| 0.003|| 0.003|| 10.46.46.160:9999|| 200
| #2109 | Content-Type header is dropped when HTTP/2 is used (HTTP status 204 only) | nginx-core | defect | 5 years ago |
When the backend server returns HTTP status 204 with a Content-Type header, the Content-Type header is set and sent correctly when using HTTP/1.1 (over plain-text HTTP or over HTTPS). However, when making a request using HTTP/2 (over TLS), that header is not sent. I can't see a reason for this and would guess that this is a bug in nginx. Or am I missing something?
| #2127 | ngx_http_realip_module changes $remote_addr which leads to wrong ips in X-Forwarded-For received by upstream service | nginx-module | defect | 5 years ago |
I have a webapp behind nginx and another frontal load balancer, something like below (x.x.x.x = IP address):

Client (a.a.a.a) -> LB (b.b.b.b) -> NGX (c.c.c.c) -> WEBAPP (d.d.d.d)

Here is a snippet of my nginx configuration:

location / {
}

The load balancer adds an X-Forwarded-For field with the client IP:

X-Forwarded-For = a.a.a.a

nginx searches for the client's real IP in the X-Forwarded-For header, omitting the LB IP (b.b.b.b), and changes $remote_addr from b.b.b.b to a.a.a.a, so proxy_set_header X-Real-IP $remote_addr becomes correct (OK, that's what I want!). BUT nginx also appends the a.a.a.a IP to the X-Forwarded-For header instead of b.b.b.b, so WEBAPP receives the following headers:

X-Forwarded-For = a.a.a.a, a.a.a.a
X-Real-IP = a.a.a.a

-> X-Forwarded-For should be a.a.a.a, b.b.b.b. So here I am losing the info about my load balancer. Right now, to get proper IPs in my webapp, I need a workaround of setting X-Forwarded-For as:

proxy_set_header X-Forwarded-For "$http_x_forwarded_for, $realip_remote_addr";

What I need is the ability to first set proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for and only then search for the real IP and replace the $remote_addr value. Or maybe another variable, similar to $proxy_add_x_forwarded_for, which retains the load balancer IP.
| #2242 | DNS UDP proxy with UNIX socket is not working | nginx-core | defect | 5 years ago |
Hi, it so happens that I need to pass DNS traffic from an LXC container to the host system without a real network between them. I decided to try nginx as a proxy server to pass DNS requests/responses via a shared unix socket, which is passed from the host system as a mountpoint. I've removed the LXC container from my scheme to concentrate on the problem itself, as it reproduces on a normal system without containers involved. I've got two separate unix sockets: one for tcp-originated requests and one for udp, as nginx configures unix sockets to be stream or dgram based on the server's configuration (tcp vs udp). nginx.conf:

user nginx;
worker_processes 1;
worker_rlimit_nofile 100000;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log warn;
events {
use epoll;
worker_connections 1024;
multi_accept on;
}
stream {
# TCP
server {
listen 5353;
proxy_pass unix:/var/lib/nginx/dns-tcp.sock;
}
server {
listen unix://var/lib/nginx/dns-tcp.sock;
proxy_pass 10.70.112.1:53;
}
# UDP
server {
listen 5353 udp;
proxy_pass unix:/var/lib/nginx/dns-udp.sock;
}
server {
listen unix://var/lib/nginx/dns-udp.sock udp;
proxy_pass 10.70.112.1:53;
}
}
For tcp, DNS traffic works perfectly:

[root@dev ~]# dig @127.0.0.1 -p 5353 ya.ru +tcp

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> @127.0.0.1 -p 5353 ya.ru +tcp
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59275
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;ya.ru.                         IN      A

;; ANSWER SECTION:
ya.ru.                  384     IN      A       87.250.250.242

;; Query time: 2 msec
;; SERVER: 127.0.0.1#5353(127.0.0.1)
;; WHEN: Fri Sep 03 15:20:55 MSK 2021
;; MSG SIZE  rcvd: 50

strace output:

[root@dev ~]# strace -s 1024 -fp 3876008
strace: Process 3876008 attached
epoll_wait(10, [{EPOLLIN, {u32=1176072208, u64=139720757178384}}], 512, 588295) = 1
accept4(5, {sa_family=AF_INET, sin_port=htons(40085), sin_addr=inet_addr("127.0.0.1")}, [16], SOCK_NONBLOCK) = 13
setsockopt(13, SOL_TCP, TCP_NODELAY, [1], 4) = 0
socket(AF_LOCAL, SOCK_STREAM, 0) = 14
ioctl(14, FIONBIO, [1]) = 0
epoll_ctl(10, EPOLL_CTL_ADD, 14, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=1176073648, u64=139720757179824}}) = 0
connect(14, {sa_family=AF_LOCAL, sun_path="/var/lib/nginx/dns-tcp.sock"}, 110) = 0
epoll_ctl(10, EPOLL_CTL_ADD, 13, {EPOLLIN|EPOLLRDHUP|EPOLLET, {u32=1176073408, u64=139720757179584}}) = 0
accept4(5, 0x7fff487cc150, 0x7fff487cc14c, SOCK_NONBLOCK) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(10, [{EPOLLOUT, {u32=1176073648, u64=139720757179824}}, {EPOLLIN, {u32=1176072448, u64=139720757178624}}, {EPOLLIN, {u32=1176073408, u64=139720757179584}}], 512, 583915) = 3
accept4(6, {sa_family=AF_LOCAL, NULL}, [2], SOCK_NONBLOCK) = 15
socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 16
ioctl(16, FIONBIO, [1]) = 0
epoll_ctl(10, EPOLL_CTL_ADD, 16, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=1176074608, u64=139720757180784}}) = 0
connect(16, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.70.112.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
accept4(6, 0x7fff487cc150, 0x7fff487cc14c, SOCK_NONBLOCK) = -1 EAGAIN (Resource temporarily unavailable)
recvfrom(13, "\0\"\347\213\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0", 16384, 0, NULL, NULL) = 36
writev(14, [{"\0\"\347\213\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0", 36}], 1) = 36
epoll_wait(10, [{EPOLLOUT, {u32=1176074608, u64=139720757180784}}], 512, 60000) = 1
getsockopt(16, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
setsockopt(16, SOL_TCP, TCP_NODELAY, [1], 4) = 0
epoll_ctl(10, EPOLL_CTL_ADD, 15, {EPOLLIN|EPOLLRDHUP|EPOLLET, {u32=1176074368, u64=139720757180544}}) = 0
epoll_wait(10, [{EPOLLIN, {u32=1176074368, u64=139720757180544}}], 512, 583913) = 1
recvfrom(15, "\0\"\347\213\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0", 16384, 0, NULL, NULL) = 36
writev(16, [{"\0\"\347\213\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0", 36}], 1) = 36
epoll_wait(10, [{EPOLLOUT, {u32=1176073648, u64=139720757179824}}], 512, 583913) = 1
epoll_wait(10, [{EPOLLIN|EPOLLOUT, {u32=1176074608, u64=139720757180784}}], 512, 583913) = 1
recvfrom(16, "\0002\347\213\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\1\200\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0", 16384, 0, NULL, NULL) = 52
writev(15, [{"\0002\347\213\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\1\200\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0", 52}], 1) = 52
epoll_wait(10, [{EPOLLIN|EPOLLOUT, {u32=1176073648, u64=139720757179824}}], 512, 583912) = 1
recvfrom(14, "\0002\347\213\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\1\200\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0", 16384, 0, NULL, NULL) = 52
writev(13, [{"\0002\347\213\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\1\200\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0", 52}], 1) = 52
epoll_wait(10, [{EPOLLIN|EPOLLRDHUP, {u32=1176073408, u64=139720757179584}}], 512, 583912) = 1
recvfrom(13, "", 16384, 0, NULL, NULL) = 0
close(14) = 0
close(13) = 0
epoll_wait(10, [{EPOLLIN|EPOLLHUP|EPOLLRDHUP, {u32=1176074368, u64=139720757180544}}], 512, 583912) = 1
recvfrom(15, "", 16384, 0, NULL, NULL) = 0
close(16) = 0
close(15) = 0
epoll_wait(10, ^Cstrace: Process 3876008 detached
<detached ...>
But in the UDP case, the nginx process does:
epoll_wait(10, [{EPOLLIN, {u32=1176072688, u64=139720757178864}}], 512, 440326) = 1
recvmsg(7, {msg_name(16)={sa_family=AF_INET, sin_port=htons(55102), sin_addr=inet_addr("127.0.0.1")}, msg_iov(1)=[{"\6\261\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0", 65535}], msg_controllen=32, [{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, {ipi_ifindex=if_nametoindex("lo"), ipi_spec_dst=inet_addr("127.0.0.1"), ipi_addr=inet_addr("127.0.0.1")}}], msg_flags=0}, 0) = 34
socket(AF_LOCAL, SOCK_DGRAM, 0) = 13
ioctl(13, FIONBIO, [1]) = 0
epoll_ctl(10, EPOLL_CTL_ADD, 13, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=1176074609, u64=139720757180785}}) = 0
connect(13, {sa_family=AF_LOCAL, sun_path="/var/lib/nginx/dns-udp.sock"}, 110) = 0
sendmsg(13, {msg_name(0)=NULL, msg_iov(1)=[{"\6\261\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0", 34}], msg_controllen=0, msg_flags=0}, 0) = 34
recvmsg(7, 0x7fff487cc010, 0) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(10, [{EPOLLOUT, {u32=1176074609, u64=139720757180785}}, {EPOLLIN, {u32=1176072928, u64=139720757179104}}], 512, 438024) = 2
recvmsg(8, {msg_name(0)=0x7fff487cc0a0, msg_iov(1)=[{"\6\261\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0", 65535}], msg_controllen=0, msg_flags=0}, 0) = 34
socket(AF_INET, SOCK_DGRAM, IPPROTO_IP) = 14
ioctl(14, FIONBIO, [1]) = 0
epoll_ctl(10, EPOLL_CTL_ADD, 14, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=1176073649, u64=139720757179825}}) = 0
connect(14, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.70.112.1")}, 16) = 0
sendmsg(14, {msg_name(0)=NULL, msg_iov(1)=[{"\6\261\1 \0\1\0\0\0\0\0\1\2ya\2ru\0\0\1\0\1\0\0)\20\0\0\0\0\0\0\0", 34}], msg_controllen=0, msg_flags=0}, 0) = 34
recvmsg(8, 0x7fff487cc010, 0) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(10, [{EPOLLOUT, {u32=1176074609, u64=139720757180785}}, {EPOLLOUT, {u32=1176073649, u64=139720757179825}}], 512, 438024) = 2
epoll_wait(10, [{EPOLLIN|EPOLLOUT, {u32=1176073649, u64=139720757179825}}], 512, 438023) = 1
recvfrom(14, "\6\261\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\0\356\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0", 16384, 0, NULL, NULL) = 50
sendmsg(8, {msg_name(16)={sa_family=AF_LOCAL, sun_path=@""}, msg_iov(1)=[{"\6\261\201\200\0\1\0\1\0\0\0\1\2ya\2ru\0\0\1\0\1\300\f\0\1\0\1\0\0\0\356\0\4W\372\372\362\0\0)\20\0\0\0\0\0\0\0", 50}], msg_controllen=0, msg_flags=0}, 0) = -1 ECONNREFUSED (Connection refused)
close(14) = 0
epoll_wait(10,
In tcpdump I see the request to the upstream server and the response:

[root@dev ~]# tcpdump -ni eth0 port 53 and host 10.70.112.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:24:00.825725 IP 10.70.112.35.55180 > 10.70.112.1.domain: 23283+ [1au] A? ya.ru. (34)
15:24:00.826905 IP 10.70.112.1.domain > 10.70.112.35.55180: 23283 1/0/1 A 87.250.250.242 (50)

Please help me understand what could go wrong and how to fix this. Feel free to ask for any additional information. Thanks.
| #2268 | http2 client set both host and :authority header, server throws 400 bad request error | nginx-module | defect | 4 years ago |
When using an HTTP/2 client, we set both the host and :authority headers, and the nginx server throws 400 Bad Request. The error log is:

*1 client sent duplicate host header: "host: xxx", previous value: "host: 127.0.0.1:27710" while reading client request headers, client: 127.0.0.1, server: _, host: "127.0.0.1:27710"

This is very confusing. We need some help.
| #2291 | Regex plus variable in Nginx `proxy_redirect` | documentation | defect | 4 years ago |
It is not currently documented, nor apparent, whether it is possible to use a regex that also includes Nginx variables in `proxy_redirect`. For example, none of these work:

proxy_redirect ~*https?://\\$proxy_host/(.*)$ /app1/$1
proxy_redirect ~*https?://\$proxy_host/(.*)$ /app1/$1
proxy_redirect ~*https?://$proxy_host/(.*)$ /app1/$1

This is described in further detail here: https://stackoverflow.com/q/70205048/7954504

The use case for this is the scenario where one only wants to change the Location header when the redirect location is for the internal app, not for an external redirect.
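For reference, a plain regex with no variables on the matching side does work with `proxy_redirect`; the following is a sketch of one possible approximation of the intent, matching any host generically rather than `$proxy_host` specifically (the upstream name is illustrative):

```nginx
location /app1/ {
    proxy_pass http://backend;  # illustrative upstream
    # the regex form works when the pattern itself contains no variables;
    # [^/]+ stands in for the host that $proxy_host would have matched
    proxy_redirect ~*^https?://[^/]+/(.*)$ /app1/$1;
}
```

This is broader than the variable-based version the ticket asks for, since it rewrites redirects from any host, not only the proxied one.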
| #2310 | Document behaviour for all config statements in nested location blocks | documentation | defect | 4 years ago |
From my understanding, each request is only ever handled in a single top-level location block, and some, but not all, statements are inherited into nested location blocks. Each request may also only ever match exactly one location block. Ideally this could be changed to actually allow for modularity and reduced duplication of statements, but since this system is unlikely to change for backwards-compatibility reasons, it would at least be useful to know which statements need to be duplicated in every nested location block and which are inherited.

I ran into this issue when some location blocks that were only meant to disable password protection for specific domains also disabled the reverse proxy for those locations. Presumably proxy_pass is what this post calls a "command type" directive: https://stackoverflow.com/questions/32104731/directive-inheritance-in-nested-location-blocks
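A minimal sketch of the pitfall described above (paths and addresses are illustrative): configuration settings such as `auth_basic` are inherited into the nested location and can be overridden there, but a content handler such as `proxy_pass` is not inherited and must be repeated:

```nginx
location / {
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass           http://127.0.0.1:8080;

    location /public/ {
        auth_basic off;                    # inherited setting, overridden here
        proxy_pass http://127.0.0.1:8080;  # NOT inherited; omitting this line
                                           # silently disables proxying here
    }
}
```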
| #2441 | pkg-oss - build error | nginx-package | defect | 3 years ago |
Hi guys, I'm trying to build a module for nginx, but a build error arises:

===> Building nginx-module-rtmp package
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.dK9eXn
+ umask 022
+ cd /root/rpmbuild/BUILD
+ cd /root/rpmbuild/BUILD
+ rm -rf nginx-plus-module-rtmp-1.17.6
+ /usr/bin/mkdir -p nginx-plus-module-rtmp-1.17.6
+ cd nginx-plus-module-rtmp-1.17.6
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ tar --strip-components=1 -zxf /root/rpmbuild/SOURCES/nginx-1.17.6.tar.gz
tar (child): /root/rpmbuild/SOURCES/nginx-1.17.6.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
error: Bad exit status from /var/tmp/rpm-tmp.dK9eXn (%prep)
Bad exit status from /var/tmp/rpm-tmp.dK9eXn (%prep)
How to reproduce:

docker run --rm rockylinux:8 bash -c 'yum install -y wget && wget https://hg.nginx.org/pkg-oss/raw-file/default/build_module.sh && bash build_module.sh -y -r 20 https://github.com/arut/nginx-rtmp-module.git'

Same error on different platforms, aarch64 and amd64. Full output is attached (aarch64).
| #2530 | ACK of packet containing PATH_RESPONSE frame can't update rtt state | nginx-core | defect | 3 years ago |
A packet sent by calling ngx_quic_frame_sendto is not inserted into qc->sent. This causes the RTT state to miss some updates, because ngx_quic_handle_ack_frame_range cannot determine the send_time of max_pn.
| #2667 | Ubuntu repository documentation: keyring may need permissions set | documentation | defect | 22 months ago |
The documentation describes installing the GPG public keyring and source file for the Ubuntu repository: https://nginx.org/en/linux_packages.html#Ubuntu

However, apt requires that the keyring file be readable by non-privileged users. This is unintuitive: even when run as root, apt uses a non-privileged user to read the keyring file (see https://askubuntu.com/a/1401911/4512). Depending on the system's umask defaults, the keyring may be created unreadable by non-privileged users, and in that case apt does not tell the user there is a permissions issue; instead it gives the following ambiguous error:

The following signatures couldn't be verified because the public key is not available: NO_PUBKEY ABF5BD827BD9BF62

Thus, to avoid confusion, it may be helpful to add a line of code similar to the following in the documentation after the curl command:

sudo chmod 644 /usr/share/keyrings/nginx-archive-keyring.gpg
| #289 | Add support for HTTP Strict Transport Security (HSTS / RFC 6797) | nginx-core | enhancement | 13 years ago |
It would be great if support for HSTS (RFC 6797) were added to nginx-core. Currently HSTS is "enabled" like this (according to https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security):

add_header Strict-Transport-Security max-age=31536000;

However, this has at least two downsides:
RFC 6797: https://tools.ietf.org/html/rfc6797
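For context, the commonly used workaround can be sketched as follows. This is a sketch only, not native support: the `always` parameter of `add_header` requires nginx 1.7.5 or later (not yet available when this ticket was filed), and `includeSubDomains` is defined by RFC 6797; the server name is illustrative.

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # illustrative

    # "always" emits the header on all response codes, including errors,
    # which a plain add_header does not
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```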
| #320 | nginx should reliably check client connection close with pending data | nginx-core | enhancement | 13 years ago |
To detect if a connection was closed by a client, nginx uses:
Most notably, this doesn't cover Linux and SSL connections, which are usually closed with pending data (a shutdown alert). To improve things, the following should be implemented (in no particular order):
References:
http://mailman.nginx.org/pipermail/nginx/2011-June/027669.html
http://mailman.nginx.org/pipermail/nginx/2011-November/030614.html
http://mailman.nginx.org/pipermail/nginx/2013-March/038119.html
| #376 | log file reopen should pass opened fd from master process | nginx-core | enhancement | 13 years ago |
When nginx starts, all the log files (error_log, access_log) are created and opened by the master process, and the file handles are passed to the workers when forking. On SIGUSR1, the master reopens the files and chown's them, and then each worker reopens the files itself. This has several drawbacks:

A better solution may be to reopen the log files in the master process as is currently done, and then use the already available ngx_{read,write}_channel functions to pass the new file handles down to the workers.
| #853 | Behaviour of cache_use_stale updating when new responses cannot be cached | nginx-core | enhancement | 10 years ago |
The configuration is as follows:

fastcgi_cache_path /var/tmp/nginx/fastcgi_cache levels=1:2 keys_zone=fcgi_cache:16m max_size=1024m inactive=35m;
fastcgi_cache_revalidate on;
fastcgi_cache fcgi_cache;
fastcgi_cache_valid 200 301 302 304 10m;
fastcgi_cache_valid 404 2m;
fastcgi_cache_use_stale updating error timeout invalid_header http_500 http_503;
fastcgi_cache_key "$request_method|$host|$uri|$args";
fastcgi_no_cache $cookie_nocache $arg_nocache $cookie_NRGNSID $cookie_NRGNTourSID $cookie_failed_login;
fastcgi_cache_bypass $cookie_nocache $arg_nocache $cookie_NRGNSID $cookie_NRGNTourSID $cookie_failed_login;

Right now the backend responds with 200 and the headers "Cache-Control: no-store, no-cache, must-revalidate" and "Pragma: no-cache". But two weeks ago, for some time, it returned a 302 with no caching restrictions, and that response got into the cache under fastcgi_cache_valid 10m. Since then, solitary requests get upstream_cache_status EXPIRED and the backend's response, but when several requests arrive at the same time, UPDATING kicks in and the two-week-old redirect is served from the cache. Requests arrive regularly, so eviction via inactive=35m never happens. This behaviour is fully explained by the mechanics of the cache, but not from the point of view of human expectations. It would be good to have a mechanism for invalidating such stale data from the cache other than deleting items on the filesystem with an external script. For example, one more parameter for cache_path that would set a maximum lifetime for expired items in the cache, even when they are still being accessed.
| #1459 | Can't vary on request headers set by proxy_set_header (rev. proxy mode) | nginx-core | enhancement | 8 years ago |
Hi,

We're using NGINX in reverse proxy mode for an internal traffic management service, and I noticed that NGINX doesn't vary the cached object on request headers which we calculate and add in NGINX itself via proxy_set_header. This causes a major problem for our service as it's multi-tenant. I think it'd be logical and expected if NGINX did vary on request headers set by proxy_set_header. I have also tested setting the headers via more_set_input_headers and by setting the variable directly (and in Lua), but these don't work either, sadly. I have included a reduced test case which hopefully illustrates the situation (a few comments help explain). Output from testing (against local/Docker) is:

# curl -k https://127.0.0.1:8443/a\?vv1\=1 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:33:54 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 1
Edge-Cache-Status: EXPIRED
Origin-Response-Status: 200
Origin-IP: 127.0.0.1:9000

2018-01-15T14:33:54+00:00%

# curl -k https://127.0.0.1:8443/a\?vv1\=1 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:33:55 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 1
Edge-Cache-Status: HIT

2018-01-15T14:33:54+00:00%

# curl -k https://127.0.0.1:8443/a\?vv1\=2 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:33:58 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 1
Edge-Cache-Status: HIT

2018-01-15T14:33:54+00:00%

I'd expect a cache miss on the final response, because the query string argument "vv1" has changed, which would mean that proxy_set_header sets a different value for the "vvrh1" request header.
To illustrate that this mechanism works, once the cached object has expired, we see:

# curl -k https://127.0.0.1:8443/a\?vv1\=2 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:39:12 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 2
Edge-Cache-Status: val EXPIRED
Origin-Response-Status: 200
Origin-IP: 127.0.0.1:9000

2018-01-15T14:39:12+00:00%

# curl -k https://127.0.0.1:8443/a\?vv1\=2 -i
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Mon, 15 Jan 2018 14:39:15 GMT
Content-Type: text/plain
Content-Length: 25
Connection: keep-alive
Cache-Control: public,max-age=30
Vary: vvrh1
vvrh1-val-rec: val is 2
Edge-Cache-Status: val HIT

2018-01-15T14:39:12+00:00%

Might this be something which could be fixed? If not, is there a workaround you can think of? Or have I made a mistake?

Cheers
| #1472 | Downloads stop after 1GB depending on network | nginx-module | enhancement | 8 years ago |
Hi, we tried nginx versions 1.6.2 through 1.12.2 and have a problem when nginx is used as a proxy in front of Artifactory: downloads get interrupted at 1 GB. The behaviour depends on the internal VLAN: on one VLAN it always happens, on another VLAN it never happens. It is size-limited, not time-limited: from some networks it stops after 30 seconds, and from one other, slow network it stops after 13 minutes. We made a minimal proxy setup with Apache and this works on all VLANs, which is why we suspect it has something to do with nginx, or the combination of nginx and the TCP/IP stack of Linux. In Wireshark we see "TCP Dup ACK" on the client side sent to the nginx server. wget fails with "connection closed" at byte 1083793011 but continues the download with partial content; docker can't handle this, and our customers can't download docker images with layers greater than 1 GB. The following shows two anonymized minimal configs: the nginx config that is problematic and the Apache config that works.

NGINX config:
server {
listen *:80;
server_name NAME;
client_max_body_size 3G;
access_log /var/log/nginx/NAME.access.log;
error_log /var/log/nginx/NAME.error.log;
if ($request_uri !~ /artifactory/) {
rewrite ^ $scheme://NAME/artifactory/ permanent;
}
location /artifactory {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://ARTIFACTORY:PORT;
proxy_pass_header Server;
proxy_read_timeout 90s;
}
}
APACHE config:
<VirtualHost *:80>
ServerName NAME
ServerAdmin NAME
ErrorLog ${APACHE_LOG_DIR}/error.log
LogLevel warn
ProxyRequests Off
<Proxy *>
Order allow,deny
Allow from All
</Proxy>
ProxyPass / http://ARTIFACTORY:PORT/
ProxyPassReverse / http://ARTIFACTORY:PORT/
</VirtualHost>
| #1500 | ngx_hash_t can have only lower case key | other | enhancement | 8 years ago |
ngx_hash_init converts all the keys to lower case, so when you use ngx_hash_find with a mixed-case key it returns NULL. Below is the code line in ngx_hash.c.

I think you could make it generic so that it supports case-sensitive keys.
| #2148 | proxy_ssl_verify does not support iPAddress subjectAlternativeName | nginx-module | enhancement | 5 years ago |
The ngx_http_proxy_module directive proxy_ssl_trusted_certificate ignores the x509 iPAddress subjectAlternativeName extension.

location config:

proxy_pass https://10.10.10.10:8443;
proxy_ssl_certificate /nginx/certs/chain.pem;
proxy_ssl_certificate_key /nginx/certs/client.key;
proxy_ssl_trusted_certificate /nginx/certs/proxied_server.pem;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;

When proxy_pass https://10.10.10.10:8443; is specified, there is an error in error.log and "502 Bad Gateway" in curl:

2021/03/09 23:22:34 [error] 18566#0: *1 upstream SSL certificate does not match "10.10.10.10" while SSL handshaking to upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "https://10.10.10.10:8443/", host: "localhost"

but when proxy_pass https://somehost:8443; is specified, it works.

certificate:

$> openssl x509 -text -in /nginx/certs/proxied_server.pem
...
X509v3 Subject Alternative Name:
...
| #2410 | Add a doctype to autoindex HTML output | nginx-module | enhancement | 3 years ago |
Currently the output of a directory listing by the autoindex module looks like this:

<html>
<head><title>Index of /jquery/3.6.0/</title></head>
<body>
<h1>Index of /jquery/3.6.0/</h1><hr><pre><a href="../">../</a>
<a href="dist/">dist/</a> 30-May-2022 11:52 -
<a href="external/">external/</a> 30-May-2022 11:52 -
<a href="src/">src/</a> 30-May-2022 11:52 -
<a href="AUTHORS.txt">AUTHORS.txt</a> 30-May-2022 11:52 12448
<a href="LICENSE.txt">LICENSE.txt</a> 30-May-2022 11:52 1097
<a href="README.md">README.md</a> 30-May-2022 11:52 1996
<a href="package.json">package.json</a> 30-May-2022 11:52 3027
</pre><hr></body>
</html>

Would it be possible to update the output to be proper HTML5 (including a DOCTYPE)? The reason for this is that we're using an autoindex directory as an additional download source for Python packages, but using pip 22.0.2 gives the following warning:
| #2421 | proxy_next_upstream_tries might be ignored with upstream keepalive | nginx-core | enhancement | 3 years ago |
There is a bug with proxy_next_upstream_tries: it is ignored if nginx is under load and the upstream server closes the connection prematurely. It seems to occur only when the connection is closed. If too many requests produce this error, it is possible to bring all the upstreams down for a certain time. Here is a reproducer on Docker, but the issue was noticed on Debian 9/11 as well, with nginx 1.10 <= 1.23.0: https://github.com/tibz7/nginx_next_upstream_retries_bug
| #2438 | Improve fastcgi_cache_key documentation | documentation | task | 3 years ago |
I apologise right away that this is not in English; I can report faster in my native language. The documentation for the fastcgi_cache_key and proxy_cache_key directives does not mention one peculiarity: it is better to also include the $request_method variable in the key, because the defaults are:

fastcgi_cache_methods GET HEAD;
proxy_cache_methods GET HEAD;

This means that when using something like (copied from the docs):

fastcgi_cache_key localhost:9000$request_uri;
proxy_cache_key $scheme$proxy_host$uri$is_args$args;

a HEAD request will cache an empty response (without content), which nginx will then also serve for GET requests. (Provided, of course, that the backend supports HEAD requests and handles them correctly.) I ran into this non-obvious behaviour on my own servers, and even dug up someone's note about it: https://www.claudiokuenzler.com/blog/705/empty-blank-page-nginx-fastcgi-cache-head-get

I think this peculiarity should be mentioned in the documentation for fastcgi_cache_key/proxy_cache_key.

PS: Can I send a merge request for the documentation? What concerns me is that a merge request needs to include translations into several languages at once.
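The suggestion above can be sketched like this, based on the example keys from the docs (the exact key layout is illustrative):

```nginx
# include the request method in the key so a cached HEAD response
# (empty body) is never served to a GET request
fastcgi_cache_key "$request_method|localhost:9000$request_uri";
proxy_cache_key   "$request_method$scheme$proxy_host$uri$is_args$args";
```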
| #712 | limit_conn and internal redirects | documentation | defect | 11 years ago |
It seems that limit_conn is only checked at the beginning of request processing and is ignored at later processing stages. This sometimes results in somewhat unanticipated behaviour when dealing with internal redirects. Consider an example:

limit_conn_zone $binary_remote_addr zone=addr:10m;
server {
listen 80;
server_name site.com;
index index.html;
limit_conn addr 20; # first rule
location / {
limit_conn addr 10; # second rule
root /var/www;
}
}
Since any request ends up in the only defined location, one would expect that the second rule would always be used. However, only the first rule is applied if we request http://site.com (that is, without a relative reference part). If we move the index directive inside the location, though, the second rule is used without exception. This may not be exactly a bug, but if this behaviour is "by design", some additional explanation might be worth mentioning in the documentation.
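The workaround mentioned above can be sketched like this (illustrative only): with `index` inside the location, the internal redirect for `/` is processed by the same location, so the stricter limit is the one that applies:

```nginx
location / {
    limit_conn addr 10;  # now applied to requests for http://site.com/ too
    root  /var/www;
    index index.html;    # moved inside the location
}
```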