{5} Accepted, Active Tickets by Owner (Full Description) (44 matches)

Lists accepted tickets, grouped by ticket owner. This report demonstrates the use of full-row display.

Ticket Summary Component Milestone Type Created
Description
#621 Could not allocate new session in SSL session shared cache nginx-core defect 09/03/14

Hi,

I'm using nginx as a reverse proxy in front of haproxy. I'm using this ssl_session_cache config:

ssl_session_cache   shared:SSL:10m;
ssl_session_timeout 512m;

Now I get from time to time such errors in the log:

2014-09-03T09:51:34+00:00 hostname nginx: 2014/09/03 09:51:34 [alert] 27#0: *144835271 could not allocate new session in SSL session shared cache "SSL" while SSL handshaking, client: a.b.c.d, server: 0.0.0.0:443

Unfortunately that error doesn't say much. Looking at the code shows that it probably failed to allocate via ngx_slab_alloc_locked():

https://github.com/nginx/nginx/blob/master/src/event/ngx_event_openssl.c#L2088

I'll try raising the session cache size and see if it helps, but since it's just a cache I would only expect performance differences.

FWIW: nginx is running in a docker container, to be specific: https://registry.hub.docker.com/u/fish/haproxy/ (although I've raised the cache setting there already).


#1263 Segmentation Fault when SSI is used in sub-request nginx-module defect 05/03/17

Hi,

nginx worker process crashes with segfault when SSI is used in a sub-request.

Config example:

    location /loc1.html {
        add_after_body /loc2.html;
    }

    location /loc2.html {
        ssi on;
    }

The segfault happens only when I access the /loc1.html location. When I access /loc2.html directly, it works fine.

Error log:

==> ../log/error.log <==
2017/05/03 18:47:10 [alert] 14548#23345880: worker process 14566 exited on signal 11
2017/05/03 18:47:10 [alert] 14548#23345880: worker process 14573 exited on signal 11

Just FYI, content of loc1.html:

<p>Hi from location 1 !</p>

content of loc2.html:

<p>Hi from location 2 on <!--#echo var="host" --> !</p>

I tried to debug and fix it, but due to time constraints I stopped here, in ngx_http_ssi_filter_module.c:

static ngx_str_t *
ngx_http_ssi_get_variable(ngx_http_request_t *r, ngx_str_t *name,
    ngx_uint_t key)
{
    ngx_uint_t           i;
    ngx_list_part_t     *part;
    ngx_http_ssi_var_t  *var;
    ngx_http_ssi_ctx_t  *ctx;

    ctx = ngx_http_get_module_ctx(r->main, ngx_http_ssi_filter_module);

    ...

ctx is NULL; the SSI context is missing when SSI is invoked in a subrequest.

The subsequent code then causes the segfault, because ctx is NULL:

    if (ctx->variables == NULL) {
        return NULL;
    }

I added some additional debug logs to the code around the ctx = ngx_http_get_module_ctx(....) line. And this is the output:

2017/05/03 18:47:10 [debug] 16787#8822579: *3 ssi ngx_http_ssi_get_variable r->main: 00007FE3FC006E50
2017/05/03 18:47:10 [debug] 16787#8822579: *3 ssi ngx_http_ssi_get_variable r->main->ctx: 00007FE3FC007770, module.ctx_index: 46
2017/05/03 18:47:10 [debug] 16787#8822579: *3 ssi ngx_http_ssi_get_variable ctx: 0000000000000000
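
A minimal defensive sketch of the kind of guard that would avoid this crash, based only on the snippet quoted above (an illustration, not a verified or official patch):

    ctx = ngx_http_get_module_ctx(r->main, ngx_http_ssi_filter_module);

    /* assumed guard: treat a missing SSI context as "no variables"
       instead of dereferencing a NULL pointer below */
    if (ctx == NULL) {
        return NULL;
    }

    if (ctx->variables == NULL) {
        return NULL;
    }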

Cheers Peter Magdina


#1330 OCSP stapling non-functional on IPv6-only host nginx-core defect 07/24/17

I have an IPv6-only host running CentOS 7. I have a Let's Encrypt certificate on the host and I've enabled OCSP stapling per Mozilla's recommended SSL configuration. My provider has NAT64 set up, so I've configured their NAT64 resolvers in the resolver entry in nginx.conf.

        # OCSP Stapling ---
        # fetch OCSP records from URL in ssl_certificate and cache them
        ssl_stapling on;
        ssl_stapling_verify on;

        # verify chain of trust of OCSP response using Root CA and Intermediate certs
        ssl_trusted_certificate /etc/dehydrated/certs/flathub.org/chain.pem;

        resolver [2a00:1098:0:80:1000:3b:0:1] [2a00:1098:0:82:1000:3b:0:1];

I see this error:

2017/07/24 14:02:23 [error] 16637#0: connect() to 88.221.134.147:80 failed (101: Network is unreachable) while requesting certificate status, responder: ocsp.int-x3.letsencrypt.org

I believe it's because this hostname resolves to two A and two AAAA records:

[root@front nginx]# host ocsp.int-x3.letsencrypt.org
ocsp.int-x3.letsencrypt.org is an alias for ocsp.int-x3.letsencrypt.org.edgesuite.net.
ocsp.int-x3.letsencrypt.org.edgesuite.net is an alias for a771.dscq.akamai.net.
a771.dscq.akamai.net has address 88.221.134.114
a771.dscq.akamai.net has address 88.221.134.147
a771.dscq.akamai.net has IPv6 address 2a02:26f0:e8::6856:6fb0
a771.dscq.akamai.net has IPv6 address 2a02:26f0:e8::6856:6f88

However, the OCSP stapling code only attempts to connect to the first one: https://github.com/nginx/nginx/blob/9197a3c8741a8832e6f6ed24a72dc5b078d840fd/src/event/ngx_event_openssl_stapling.c#L1028

I've tried to work around this with /etc/hosts, but that seems to be ignored, and OCSP stapling seems to disable itself if I have no resolver entry. I can't seem to place an IPv6 address in ssl_stapling_responder either.


#274 error_page 400 =444 /; sockets are leaked nginx-core defect 01/06/13

This is a configuration for ignoring unknown domains. On a malformed request the connection is not closed and stays in CLOSE_WAIT. The request must come from a different host.

echo -e '\x04\x01\x00P>\xECl\xC80\x00'|nc server 8000

conf

worker_processes  1;
worker_rlimit_core  500M;
working_directory   /tmp/core/;
debug_points abort;
pid        /tmp/nginx.pid;
events { worker_connections  1024; }
error_log /tmp/err debug;
http {
 types { text/html html; }
 access_log off;
 server {
  server_name _;
  return 444;
  error_page 400 =444 /;
 }
}

error.log:

2013/01/06 13:59:37 [debug] 16876#0: *1 malloc: 083BFC68:656
2013/01/06 13:59:37 [debug] 16876#0: *1 malloc: 083C5E00:1024
2013/01/06 13:59:37 [debug] 16876#0: *1 posix_memalign: 083C6220:4096 @16
2013/01/06 13:59:37 [debug] 16876#0: *1 http process request line
2013/01/06 13:59:37 [debug] 16876#0: *1 recv: fd:5 11 of 1024
2013/01/06 13:59:37 [info] 16876#0: *1 client sent invalid method while reading client request line, client: 1.0.0.8, server: _, request: "^D^A^@P><EC>l<C8>0^@"
2013/01/06 13:59:37 [debug] 16876#0: *1 http finalize request: 400, "?" a:1, c:1
2013/01/06 13:59:37 [debug] 16876#0: *1 event timer del: 5: 256771487
2013/01/06 13:59:37 [debug] 16876#0: *1 http special response: 400, "?"
2013/01/06 13:59:37 [debug] 16876#0: *1 internal redirect: "/?"
2013/01/06 13:59:37 [debug] 16876#0: *1 rewrite phase: 0
2013/01/06 13:59:37 [debug] 16876#0: *1 http finalize request: 444, "/?" a:1, c:2
2013/01/06 13:59:37 [debug] 16876#0: *1 http terminate request count:2
2013/01/06 13:59:37 [debug] 16876#0: *1 http terminate cleanup count:2 blk:0
2013/01/06 13:59:37 [debug] 16876#0: *1 http finalize request: -4, "/?" a:1, c:2
2013/01/06 13:59:37 [debug] 16876#0: *1 http request count:2 blk:0
...
2013/01/06 14:00:23 [alert] 16876#0: open socket #5 left in connection 2
2013/01/06 14:00:23 [alert] 16876#0: aborting
2013/01/06 14:00:23 [notice] 16875#0: signal 17 (SIGCHLD) received
2013/01/06 14:00:23 [alert] 16875#0: worker process 16876 exited on signal 6 (core dumped)

gdb

(gdb) set $c = &ngx_cycle->connections[2]
(gdb) p $c->log->connection
$1 = 1
(gdb) p *$c
$2 = {data = 0x83bfc68, read = 0x83e5748, write = 0x83f2750, fd = 5, recv = 0x806e4d0 <ngx_unix_recv>, 
  send = 0x806ea14 <ngx_unix_send>, recv_chain = 0x806e650 <ngx_readv_chain>, send_chain = 0x8074a54 <ngx_linux_sendfile_chain>, 
  listening = 0x83c0924, sent = 0, log = 0x83bfb98, pool = 0x83bfb60, sockaddr = 0x83bfb88, socklen = 16, addr_text = {len = 7, 
    data = 0x83bfbb0 "1.0.0.8"}, local_sockaddr = 0x83cb5d4, buffer = 0x83bfbec, queue = {prev = 0x0, next = 0x0}, number = 1, 
  requests = 1, buffered = 0, log_error = 2, single_connection = 1, unexpected_eof = 0, timedout = 0, error = 0, destroyed = 0, 
  idle = 0, reusable = 0, close = 0, sendfile = 0, sndlowat = 0, tcp_nodelay = 0, tcp_nopush = 2}
(gdb) set $r = (ngx_http_request_t *) $c->data
(gdb) p *$r
$3 = {signature = 1347703880, connection = 0x83cd798, ctx = 0x83c6428, main_conf = 0x83c1080, srv_conf = 0x83c9a40, 
  loc_conf = 0x83c9a90, read_event_handler = 0x8085893 <ngx_http_block_reading>, 
  write_event_handler = 0x80851bb <ngx_http_terminate_handler>, cache = 0x0, upstream = 0x0, upstream_states = 0x0, 
  pool = 0x83c6220, header_in = 0x83bfbec, headers_in = {headers = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, 
      size = 0, nalloc = 0, pool = 0x0}, host = 0x0, connection = 0x0, if_modified_since = 0x0, if_unmodified_since = 0x0, 
    if_match = 0x0, if_none_match = 0x0, user_agent = 0x0, referer = 0x0, content_length = 0x0, content_type = 0x0, range = 0x0, 
    if_range = 0x0, transfer_encoding = 0x0, expect = 0x0, authorization = 0x0, keep_alive = 0x0, user = {len = 0, data = 0x0}, 
    passwd = {len = 0, data = 0x0}, cookies = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, server = {len = 0, 
      data = 0x0}, content_length_n = -1, keep_alive_n = -1, connection_type = 0, chunked = 0, msie = 0, msie6 = 0, opera = 0, 
    gecko = 0, chrome = 0, safari = 0, konqueror = 0}, headers_out = {headers = {last = 0x83bfd38, part = {elts = 0x83c6248, 
        nelts = 0, next = 0x0}, size = 24, nalloc = 20, pool = 0x83c6220}, status = 444, status_line = {len = 0, data = 0x0}, 
    server = 0x0, date = 0x0, content_length = 0x0, content_encoding = 0x0, location = 0x0, refresh = 0x0, last_modified = 0x0, 
    content_range = 0x0, accept_ranges = 0x0, www_authenticate = 0x0, expires = 0x0, etag = 0x0, override_charset = 0x0, 
    content_type_len = 0, content_type = {len = 0, data = 0x0}, charset = {len = 0, data = 0x0}, content_type_lowcase = 0x0, 
    content_type_hash = 0, cache_control = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, content_length_n = -1, 
    date_time = 0, last_modified_time = -1}, request_body = 0x0, lingering_time = 0, start_sec = 1357466377, start_msec = 23, 
  method = 2, http_version = 0, request_line = {len = 10, data = 0x83c5e00 "\004\001"}, uri = {len = 1, data = 0x83c9e47 "/"}, 
  args = {len = 0, data = 0x0}, exten = {len = 0, data = 0x0}, unparsed_uri = {len = 0, data = 0x0}, method_name = {len = 3, 
    data = 0x80b8cf8 "GET "}, http_protocol = {len = 0, data = 0x0}, out = 0x0, main = 0x83bfc68, parent = 0x0, postponed = 0x0, 
  post_subrequest = 0x0, posted_requests = 0x83bfec0, virtual_names = 0x0, phase_handler = 0, content_handler = 0, 
  access_code = 0, variables = 0x83c6478, ncaptures = 0, captures = 0x0, captures_data = 0x0, limit_rate = 0, header_size = 0, 
  request_length = 0, err_status = 444, http_connection = 0x83bfbcc, log_handler = 0x8086fef <ngx_http_log_error_handler>, 
  cleanup = 0x0, subrequests = 201, count = 1, blocked = 0, aio = 0, http_state = 1, complex_uri = 0, quoted_uri = 0, 
  plus_in_uri = 0, space_in_uri = 0, invalid_header = 0, add_uri_to_alias = 0, valid_location = 1, valid_unparsed_uri = 0, 
  uri_changed = 0, uri_changes = 10, request_body_in_single_buf = 0, request_body_in_file_only = 0, 
  request_body_in_persistent_file = 0, request_body_in_clean_file = 0, request_body_file_group_access = 0, 
  request_body_file_log_level = 5, subrequest_in_memory = 0, waited = 0, cached = 0, proxy = 0, bypass_cache = 0, no_cache = 0, 
  limit_conn_set = 0, limit_req_set = 0, pipeline = 0, plain_http = 0, chunked = 0, header_only = 0, keepalive = 0, 
  lingering_close = 0, discard_body = 0, internal = 1, error_page = 1, ignore_content_encoding = 0, filter_finalize = 0, 
  post_action = 0, request_complete = 0, request_output = 0, header_sent = 0, expect_tested = 1, root_tested = 0, done = 0, 
  logged = 0, buffered = 0, main_filter_need_in_memory = 0, filter_need_in_memory = 0, filter_need_temporary = 0, 
  allow_ranges = 0, state = 0, header_hash = 0, lowcase_index = 0, lowcase_header = '\000' <repeats 31 times>, 
  header_name_start = 0x0, header_name_end = 0x0, header_start = 0x0, header_end = 0x0, 
  uri_start = 0x83bfc68 "HTTP\230\327<\b(d<\b\200\020<\b@\232<\b\220\232<\b\223X\b\b\273Q\b\b", uri_end = 0x0, uri_ext = 0x0, 
  args_start = 0x0, request_start = 0x83c5e00 "\004\001", request_end = 0x0, method_end = 0x0, schema_start = 0x0, 
  schema_end = 0x0, host_start = 0x0, host_end = 0x0, port_start = 0x0, port_end = 0x0, http_minor = 0, http_major = 0}

#348 Excessive urlencode in if-set nginx-core defect 05/02/13

Hello,

I had set up Apache with mod_dav_svn behind nginx acting as a front-end proxy, and while committing a copied file with brackets ([]) in its filename into that Subversion repository I found a bug in nginx.

How to reproduce it (configuration file is as simple as possible while still causing the bug):

$ cat nginx.conf 
error_log  stderr debug;
pid nginx.pid;
events {
    worker_connections  1024;
}
http {
    access_log access.log;
    server {
        listen 8000;
        server_name localhost;
        location / {
            set $fixed_destination $http_destination;
            if ( $http_destination ~* ^(.*)$ )
            {
                set $fixed_destination $1;
            }
            proxy_set_header        Destination $fixed_destination;            
            proxy_pass http://127.0.0.1:8010;
        }
    }
}

$ nginx -p $PWD -c nginx.conf -g 'daemon off;'
...

In second terminal window:

$ nc -l 8010

In third terminal window:

$ curl --verbose --header 'Destination: http://localhost:4000/foo%5Bbar%5D.txt' '0:8000/%41.txt'
* About to connect() to 0 port 8000 (#0)
*   Trying 0.0.0.0...
* Adding handle: conn: 0x7fa91b00b600
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fa91b00b600) send_pipe: 1, recv_pipe: 0
* Connected to 0 (0.0.0.0) port 8000 (#0)
> GET /%41.txt HTTP/1.1
> User-Agent: curl/7.30.0
> Host: 0:8000
> Accept: */*
> Destination: http://localhost:4000/foo%5Bbar%5D.txt
> 

Back in the second terminal window:

($ nc -l 8010)
GET /%41.txt HTTP/1.0
Destination: http://localhost:4000/foo%255Bbar%255D.txt
Host: 127.0.0.1:8010
Connection: close
User-Agent: curl/7.30.0
Accept: */*

The problem is that the Destination header was changed from ...foo%5Bbar%5D.txt to ...foo%255Bbar%255D.txt. This happens only when

  • that if ( $http_destination ~* ^(.*)$ ) is processed
  • and the request URL (the HTTP GET URL, not the Destination URL) also contains urlencoded character(s), such as %41.

In other cases (the URL does not contain urlencoded characters, or the if is not matched) the Destination header is passed through untouched, which is the expected behavior.


Note: Why do I need that if ( $http_destination ~* ^(.*)$ )? In this example it is simplified, but for the Subversion setup I mentioned I need to rewrite the Destination from https to http when nginx proxies from https to Apache over http.

This bug also happens on nginx/0.7.67 in Debian Squeeze.


#458 Win32: autoindex module doesn't support Unicode names nginx-core defect 12/06/13

The functions for traversing directories use the ANSI versions of FindFirstFile() and FindNextFile(), so any characters in filenames beyond basic Latin get mangled.

The proposed patch fixes this issue by converting WCHAR names to UTF-8.


#564 map regex matching affects rewrite directive nginx-core defect 05/28/14

Using a regex in the map directive changes the capture groups in a rewrite directive. This happens only if the regex in map is matched. A minimal example config:

http {
        map $http_accept_language $lang {
                default en;
                 ~(de) de;
        }
        server {
                server_name test.local;
                listen 80;
                rewrite ^/(.*)$ http://example.com/$lang/$1 permanent;
        }
}

Expected:

$ curl -sI http://test.local/foo | grep Location
Location: http://example.com/en/foo
$ curl -H "Accept-Language: de" -sI http://test.local/foo | grep Location
Location: http://example.com/de/foo

Actual:

$ curl -sI http://test.local/foo | grep Location
Location: http://example.com/en/foo
$ curl -H "Accept-Language: de" -sI http://test.local/foo | grep Location
Location: http://example.com/de/de

If I leave out the parentheses in ~(de) de; (so it becomes ~de de;), $1 is simply empty:

$ curl -H "Accept-Language: de" -sI http://test.local/foo | grep Location
Location: http://example.com/de/

#686 Under some conditions, ngx_palloc() will allocate an illegal memory address nginx-core defect 12/19/14

In ngx_palloc.c, the function ngx_palloc():

void * ngx_palloc(ngx_pool_t *pool, size_t size)
{
    u_char      *m;
    ngx_pool_t  *p;

    if (size <= pool->max) {

        p = pool->current;

        do {
            m = ngx_align_ptr(p->d.last, NGX_ALIGNMENT);

            if ((size_t) (p->d.end - m) >= size) {
                p->d.last = m + size;

                return m;
            }

            p = p->d.next;

        } while (p);

        return ngx_palloc_block(pool, size);
    }

    return ngx_palloc_large(pool, size);
}

At this line:

m = ngx_align_ptr(p->d.last, NGX_ALIGNMENT);

sometimes the value of (p->d.end - p->d.last) is less than the alignment, so ngx_align_ptr makes m larger than p->d.end. After that, the "if" compares (p->d.end - m) against size with a cast to size_t; when m > p->d.end, (p->d.end - m) is a negative number (e.g. -1, -2, -3). The cast underflows, and p->d.last then ends up pointing past p->d.end.

I have been debugging a segmentation fault for a few days, and in the end I found:
p->d.last==0x83e96c 
p->d.end==0x83e96f
after ngx_align_ptr
m ==0x83e970
so (size_t) (p->d.end - m) == 18446744073709551615, far larger than size, and p->d.last gets an illegal address.
After this, ngx_palloc keeps returning illegal addresses.

A check of m against p->d.end must be added.
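
As a minimal sketch of that check, the allocation loop quoted above could skip a block whenever alignment pushes m past its end (an illustration against the quoted code, not an official patch):

        do {
            m = ngx_align_ptr(p->d.last, NGX_ALIGNMENT);

            /* assumed guard: only compare sizes when alignment did not
               push m past the end of this pool block */
            if (m <= p->d.end && (size_t) (p->d.end - m) >= size) {
                p->d.last = m + size;

                return m;
            }

            p = p->d.next;

        } while (p);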

#752 try_files + subrequest + proxy-handler problem nginx-core defect 04/23/15

When using subrequests with try_files the following behaviour is observed.

   server {
       listen       8081;
       default_type text/html;

       location /uno {   return 200 "uno  ";   }
       location /duo {   return 200 "duo  ";   }
       location /tres {  return 200 "tres  ";  }
   }


   server {
       listen       8080;

       location / {
           root /tmp;
           try_files /tres =404;
           proxy_pass http://127.0.0.1:8081;
           add_after_body /duo;
       }
   }

Assuming /tmp/tres exists, a request to

http://127.0.0.1:8080/uno

returns "uno tres ", not "uno duo " or "tres tres ".

I.e., the main request assumes that the request URI is unmodified and passes the original request URI, "/uno". But in the subrequest the URI is modified, and nginx uses the modified URI, "/tres".

This is believed to be a bug, and one of the following should be done:

  • try_files should reset the r->valid_unparsed_uri flag if it modifies the URI;
  • or try_files should not modify the URI at all.

See this thread (in Russian) for additional details.


#753 Nginx leaves UNIX domain sockets after SIGQUIT nginx-core defect 04/24/15

According to the Nginx documentation, SIGQUIT will cause a "graceful shutdown" while SIGTERM will cause a "fast shutdown". If you send SIGQUIT to Nginx, it will leave behind stale UNIX domain socket files that were created using the listen directive. If there are any stale UNIX domain socket files when Nginx starts up, it will fail to listen on the socket because it already exists. However if you use SIGTERM, the UNIX domain socket files will be properly removed. I've encountered this with Nginx 1.6.2, 1.6.3, and 1.8.0 on Ubuntu 14.04.

Example /etc/nginx/nginx.conf:

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/sites-enabled/*;
}

Example /etc/nginx/sites-enabled/serve-files:

server {
    listen unix:/run/serve-files.socket;
    root /var/www/files;
    location / {
        try_files $uri =404;
    }
}

Then start Nginx:

sudo nginx
# OR
sudo service nginx start

On first start, /run/serve-files.socket will be created because of the listen unix:/run/serve-files.socket; directive.

Then stop Nginx with SIGQUIT:

sudo kill -SIGQUIT $(cat /run/nginx.pid)
# OR
sudo service nginx stop # Sends SIGQUIT

The socket at /run/serve-files.socket will remain because it was not properly removed. If you try to restart Nginx, it will fail to start with the following logged to /var/log/nginx/error.log:

2015/04/24 10:16:27 [emerg] 5782#0: bind() to unix:/run/serve-files.socket failed (98: Address already in use)
2015/04/24 10:16:27 [emerg] 5782#0: bind() to unix:/run/serve-files.socket failed (98: Address already in use)
2015/04/24 10:16:27 [emerg] 5782#0: bind() to unix:/run/serve-files.socket failed (98: Address already in use)
2015/04/24 10:16:27 [emerg] 5782#0: bind() to unix:/run/serve-files.socket failed (98: Address already in use)
2015/04/24 10:16:27 [emerg] 5782#0: bind() to unix:/run/serve-files.socket failed (98: Address already in use)
2015/04/24 10:16:27 [emerg] 5782#0: still could not bind()
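
For illustration only, the cleanup one would expect on graceful shutdown is removal of the socket file once the listening socket is closed. A small hypothetical helper (this is not the actual nginx shutdown code, and the function name is made up):

    #include <stddef.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Hypothetical helper: after the listening fd is closed, remove the
     * UNIX domain socket file so a later bind() does not fail with
     * EADDRINUSE. "sa" is the sockaddr the socket was bound to. */
    static void
    remove_unix_listener_file(struct sockaddr *sa)
    {
        if (sa != NULL && sa->sa_family == AF_UNIX) {
            unlink(((struct sockaddr_un *) sa)->sun_path);
        }
    }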

#756 Client disconnect in ngx_http_image_filter_module nginx-module defect 04/29/15

I have encountered a bug in ngx_http_image_filter_module when used in conjunction with ngx_http_proxy_module; the configuration is as follows:

location /img/ {
    proxy_pass http://mybucket.s3.amazonaws.com;
    image_filter resize 150 100;
}

The steps to reproduce are rather complicated as they depend on how TCP fragments the response coming from the proxy:

  • If http://mybucket.s3.amazonaws.com returns, in the first TCP packet, a small amount of data (HTTP header, or HTTP header + a few bytes), the content is marked as not an image and NGX_HTTP_UNSUPPORTED_MEDIA_TYPE is returned (disconnecting the client), irrespective of whether or not subsequent data would complete the response to a valid image.

Nginx appears to give up on waiting for data right away if the contents of the first TCP packet received from the proxy do not contain a valid image header, i.e. ngx_http_image_test() will return NGX_HTTP_IMAGE_SIZE, etc.

In my experience this was triggered by a subtle change in AWS S3 that introduced further fragmentation of the TCP responses.

Versions affected: 1.6.2, 1.6.3, 1.7.2, 1.8.0, etc. (all?)

Attaching a 1.8.0 patch that resolves it; the other versions can be fixed similarly.

I think a better fix would be to "return NGX_OK" if we do not have enough data in "case NGX_HTTP_IMAGE_START", and "return NGX_HTTP_UNSUPPORTED_MEDIA_TYPE" (as per the original code) if enough data has been read but it's really not an image; that, however, exceeds the scope of this fix and my use case.

nginx-devel thread: http://mailman.nginx.org/pipermail/nginx-devel/2015-April/006876.html


#774 modern_browser // gecko version overwrites msie version nginx-module defect 07/21/15

I am not sure if this behavior is still present in the current version, but it occurs in 1.4 on Ubuntu 14.04.

Given the following config:

##########################################

modern_browser gecko 27.0;
modern_browser opera 19.0;
modern_browser safari 8.0;
modern_browser msie 9.0;
modern_browser unlisted;

ancient_browser Links Lynx netscape4;

##########################################

On IE11 (Win 8), $ancient_browser == 1. I am not sure if it's only me, but this seems wrong given my understanding of how the module should work. This applies to a 'real' IE11, but not to a spoofed UA (in Chromium 46.0.2462.0) of IE10, IE9, IE8, or IE7; in those cases everything works as expected. Interestingly, though, the next config:

##########################################

modern_browser gecko 9.0;
modern_browser opera 19.0;
modern_browser safari 8.0;
modern_browser msie 9.0;
modern_browser unlisted;

ancient_browser Links Lynx netscape4;

##########################################

works as expected (in terms of the IE behavior), meaning $ancient_browser != 1. But then I would be supporting older Firefox versions, and that is not intended. The following config also results in $ancient_browser != 1:

##########################################

modern_browser gecko 9.0;
modern_browser opera 19.0;
modern_browser safari 8.0;
modern_browser msie 12.0;
modern_browser unlisted;

ancient_browser Links Lynx netscape4;

##########################################

_Conclusion_: it looks like the gecko version is overwriting the defined msie version. This does not mean that this is exactly what is happening internally.


#861 Possibility of Inconsistent HPACK Dynamic Table Size in HTTP/2 Implementation nginx-module defect 12/15/15

The HPACK dynamic table is only initialized upon addition of the first entry (see ngx_http_v2_add_header in http/v2/ngx_http_v2_table.c).

If a dynamic table size update is sent before the first header is added, the size will be set appropriately. However, once the first header is added, the table size is reset to NGX_HTTP_V2_TABLE_SIZE, resulting in a different size than the client's.

After a brief reading of the HTTP/2 and HPACK specification, it appears that updating the dynamic table size before adding any headers is allowed.


#882 Unencoded Location: header when redirecting nginx-core defect 01/25/16

As posted on the mailing list (http://mailman.nginx.org/pipermail/nginx/2016-January/049650.html):

We’re seeing the following behavior in nginx 1.4.6:

  • nginx returns “301 Moved Permanently” with the Location: URL unencoded and a trailing slash added:
Location: http://example.org/When Harry Met Sally/
  • Some software (e.g. PHP) will automatically follow the redirect, but because it expects an encoded Location: header, it sends exactly what was returned from the server. (Note that curl, wget, and others will fix up unencoded Location: headers, but that’s not what the HTTP spec requires.)

In other words, this is the transaction chain:

C: GET http://example.org/When%20Harry%20Met%20Sally HTTP/1.1

S: HTTP/1.1 301 Moved Permanently
S: Location: http://example.org/When Harry Met Sally/

C: GET http://example.org/When Harry Met Sally/ HTTP/1.1

S: 400 Bad Request

I believe the 301 originates from within the nginx code itself (ngx_http_static_module.c:147-193? in trunk) and not from our rewrite rules. As I read the HTTP spec, Location: must be encoded.
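
For reference, nginx does have an escaping helper; here is a hedged sketch of how a redirect target could be escaped with it before being emitted (r, its pool, and the unescaped "location" string are assumptions for the example, and this is not the actual static-module code path):

    uintptr_t  escape;
    ngx_str_t  escaped;

    /* count characters needing escaping; each grows by 2 bytes */
    escape = 2 * ngx_escape_uri(NULL, location.data, location.len,
                                NGX_ESCAPE_URI);

    if (escape == 0) {
        escaped = location;                      /* nothing to escape */

    } else {
        escaped.len = location.len + escape;
        escaped.data = ngx_pnalloc(r->pool, escaped.len);

        if (escaped.data != NULL) {
            ngx_escape_uri(escaped.data, location.data, location.len,
                           NGX_ESCAPE_URI);
        }
    }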


#964 Expires header incorrectly prioritised over Cache-Control: max-age nginx-core defect 04/28/16

When using nginx as a caching reverse proxy, items may be cached for the wrong amount of time if the Expires header is inconsistent with max-age. Caching will be disabled if the Expires header value is in the past or malformed.

Per RFC 2616 section 14.9.3, max-age takes precedence over Expires. However, nginx prefers whichever header/directive occurs first in the response, which causes unexpected results when migrating to nginx from an RFC-compliant caching reverse proxy.

A minimal reproduction config is attached. Observe that no file is cached when accessing http://127.0.0.2:8080/fail, but a file is cached when accessing http://127.0.0.2:8080/success.


#994 perl_require directive has effect only at first config other defect 06/08/16

my configs are included as:

include /etc/nginx/sites-enabled/*.conf;

If I want to use the 'perl_require' directive, I have to place it ONLY in the first conf file (in alphabetical order). If I put the directive into any other conf file, nginx does not even complain when I try to load a nonexistent module.


#1058 Undocumented redirect? documentation defect 08/24/16

When a URL is requested without a trailing slash, a 301 redirect to the same URL with a trailing slash always occurs.

Example config:

location /dir {
    alias /www/dir;
}

The same thing happens with this variant:

location /dir/ {
    alias /www/dir/;
}

However, the documentation seems to describe this behavior only for locations with *_pass; maybe I looked in the wrong place, but all I found was this:

If a location is defined by a prefix string that ends with a slash, and requests are processed by proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, or memcached_pass, special processing is performed: in response to a request with a URI equal to this string but without the trailing slash, a permanent redirect with code 301 will be returned to the URI with the slash appended.

Example of an actual configuration:

location /ig {
    alias /www/ig_build;
}

$ curl -I http://localhost:90/ig/infografika
HTTP/1.1 301 Moved Permanently
Server: nginx/1.11.3
Date: Wed, 24 Aug 2016 09:52:10 GMT
Content-Type: text/html
Content-Length: 185
Location: http://localhost:90/ig/infografika/
Connection: keep-alive

I also checked on version 1.4.2; the behavior is exactly the same.

If the directory does not exist, a 404 is returned immediately, but if it exists and the request had no trailing slash, the redirect occurs.


#1168 nginx handles the max_size option of the proxy_cache_path directive incorrectly nginx-core defect 12/29/16

For example, there is a configuration with a directive like:

proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=images:64m inactive=7d max_size=12g;

where /var/lib/nginx/cache is a directory mounted over NFS. The directory is mounted with the flags

rsize=1048576 wsize=1048576

It was noticed that nginx keeps the number of files in the cache at around 12.5k, even though 12 thousand files (image thumbnails) is far too few for 12g.

After studying the problem in more detail, it became clear that the effective cache size is 12g/bsize, where bsize is obtained for /var/lib/nginx/cache/... via statfs and equals the rsize/wsize values (https://github.com/nginx/nginx/blob/master/src/http/ngx_http_file_cache.c#L154). That is, 12884901888/1048576 = 12288.
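
For illustration, a small standalone C sketch of that arithmetic (the path and the 12g value are taken from this report; none of this is nginx code):

    #include <stdio.h>
    #include <sys/vfs.h>   /* Linux statfs() */

    int main(void)
    {
        struct statfs  fs;

        /* on an NFS mount whose f_bsize follows rsize/wsize (1048576
         * here), max_size=12g turns into only 12288 accounting blocks */
        if (statfs("/var/lib/nginx/cache", &fs) == 0) {
            long long max_size = 12LL * 1024 * 1024 * 1024;
            printf("f_bsize=%ld, max_size in blocks=%lld\n",
                   (long) fs.f_bsize, max_size / fs.f_bsize);
        }

        return 0;
    }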

When the number of files in the cache reaches 12288, forced eviction begins (https://github.com/nginx/nginx/blob/master/src/http/ngx_http_file_cache.c#L1950).

On NFS, bsize is derived from the rsize and wsize parameters, so it cannot be relied on when computing the maximum size of the cache: rsize and wsize only matter to the network stack and do not reflect the parameters of the physical storage in any way.

Possible solutions:

  1. Use a constant bsize of 512/4096/8192;
  2. Make it possible to specify bsize explicitly via an additional parameter.

It would also be good to mention this "peculiarity" in the documentation.


#1226 nginx behaves weirdly when using eventport as event engine on Solaris nginx-core defect 03/22/17

nginx behaves weirdly when using eventport as the event engine. I tried to use eventport on Solaris when I first started using nginx, but the experience was discouraging: nginx behaved weirdly, mostly entering indefinite timeouts while handling requests. I switched to /dev/poll and since then it has worked flawlessly. Some time ago I saw a discussion stating that work had been done on better eventport support, so I compiled the new 1.11.11 version and decided to give eventport in nginx another chance. Sadly, nothing has changed: nginx still hangs indefinitely when handling requests with eventport. It looks as if the web application is really waiting for something, and the browser just keeps spinning the loader icon. After changing back to /dev/poll and restarting nginx, everything starts working again. No errors are logged.


#1238 Core dump when $limit_rate is set both in a map and in a location nginx-core defect 04/06/17

This is a minimal server configuration used to reproduce the problem (only the map & server sections; the rest is the default configuration from the nginx.org CentOS 7 nginx-1.10.3 package).

map $arg_test $limit_rate {
        default 128k;
        test 4k;
}

server {
        listen 8080;
        location / {
                root /var/www;
                set $limit_rate 4k;
        }
}

If a request to an affected location is made, nginx crashes with the following stack trace:

Program terminated with signal 7, Bus error.
#0  ngx_http_variable_request_set_size (r=0x7fb5c2761650, v=<optimized out>, data=140418628385320) at src/http/ngx_http_variables.c:730
730	    *sp = s;

(gdb) thread apply all bt

Thread 1 (Thread 0x7fb5c1237840 (LWP 2648)):
#0  ngx_http_variable_request_set_size (r=0x7fb5c2761650, v=<optimized out>, data=140418628385320) at src/http/ngx_http_variables.c:730
#1  0x00007fb5c12e992d in ngx_http_rewrite_handler (r=0x7fb5c2761650) at src/http/modules/ngx_http_rewrite_module.c:180
#2  0x00007fb5c12a669c in ngx_http_core_rewrite_phase (r=0x7fb5c2761650, ph=<optimized out>) at src/http/ngx_http_core_module.c:901
#3  0x00007fb5c12a1b3d in ngx_http_core_run_phases (r=r@entry=0x7fb5c2761650) at src/http/ngx_http_core_module.c:847
#4  0x00007fb5c12a1c3a in ngx_http_handler (r=r@entry=0x7fb5c2761650) at src/http/ngx_http_core_module.c:830
#5  0x00007fb5c12ad0de in ngx_http_process_request (r=0x7fb5c2761650) at src/http/ngx_http_request.c:1910
#6  0x00007fb5c12ad952 in ngx_http_process_request_line (rev=0x7fb5c27bae10) at src/http/ngx_http_request.c:1022
#7  0x00007fb5c128de60 in ngx_event_process_posted (cycle=cycle@entry=0x7fb5c2745930, posted=0x7fb5c1575290 <ngx_posted_events>) at src/event/ngx_event_posted.c:33
#8  0x00007fb5c128d9d7 in ngx_process_events_and_timers (cycle=cycle@entry=0x7fb5c2745930) at src/event/ngx_event.c:259
#9  0x00007fb5c12944f0 in ngx_worker_process_cycle (cycle=cycle@entry=0x7fb5c2745930, data=data@entry=0x1) at src/os/unix/ngx_process_cycle.c:753
#10 0x00007fb5c1292e66 in ngx_spawn_process (cycle=cycle@entry=0x7fb5c2745930, proc=proc@entry=0x7fb5c1294460 <ngx_worker_process_cycle>, data=data@entry=0x1, 
    name=name@entry=0x7fb5c131c197 "worker process", respawn=respawn@entry=-3) at src/os/unix/ngx_process.c:198
#11 0x00007fb5c12946f0 in ngx_start_worker_processes (cycle=cycle@entry=0x7fb5c2745930, n=2, type=type@entry=-3) at src/os/unix/ngx_process_cycle.c:358
#12 0x00007fb5c1295283 in ngx_master_process_cycle (cycle=cycle@entry=0x7fb5c2745930) at src/os/unix/ngx_process_cycle.c:130
#13 0x00007fb5c127039d in main (argc=<optimized out>, argv=<optimized out>) at src/core/nginx.c:367

#1255 map regexp fail to match documentation defect 04/21/17

I have this code:

map $http_incap_client_ip:$http_incap_tls_version:${http_x_forwarded_proto}:$ssl_protocol $x_forwarded_proto {
    default "http";
    ~[0-9.]*:- "http";        # incapsula http-https connection
    ~[0-9.]*:TLSv1 "https";   # incapsula https-https connection
    ~-:.*:https "https";      # internal tests x-forwarded-proto
    ~-:.*:TLSv1 "https";      # internal https connection
}

When I try a local test connection, the log shows that $http_incap_client_ip:$http_incap_tls_version:${http_x_forwarded_proto}:$ssl_protocol is:

-:-:https:TLSv1.2

Yet, the $x_forwarded_proto result is http.


#1269 $upstream_response_time is improperly evaluated in header filter handlers documentation defect 05/11/17

$upstream_response_time incorrectly evaluates to the raw ngx_current_msec value it was initially assigned when used in a header phase handler. Consider the following config:

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" '
                      '$upstream_response_time';

    access_log  logs/access.log  main;

    upstream foo {
        server 127.0.0.1:9000;
    }

    server {
        listen 9000;
        server_name localhost;
        root html;

        access_log off;
        error_log off;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            proxy_pass http://foo;

            add_header Upstream-Response-Time $upstream_response_time;
        }
    }
}

The log format generated in such a context correctly shows $upstream_response_time:

127.0.0.1 - - [10/May/2017:19:09:48 -0700] "GET / HTTP/1.1" 200 612 "-" "curl/7.50.1" "-" 0.000

The assigned header, however, contains the value from the initial assignment:

$ curl -vv localhost
* Rebuilt URL to: localhost/
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.50.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< Server: nginx/1.12.0
< Date: Thu, 11 May 2017 02:09:48 GMT
< Content-Type: text/html
< Content-Length: 612
< Connection: keep-alive
< Last-Modified: Thu, 11 May 2017 01:38:00 GMT
< ETag: "5913c078-264"
< Accept-Ranges: bytes
< Upstream-Response-Time: 1494468588.302

In cases where ngx_http_upstream_connect is only called once (e.g. on a first successful upstream connection), ngx_http_upstream_finalize_request is not called before header phase modules execute, so the path where u->state->response_time is reassigned to the difference between its initial value and the current ngx_current_msec is never reached (https://github.com/nginx/nginx/blob/master/src/http/ngx_http_upstream.c#L4249). In cases where ngx_http_upstream_connect is called again (e.g. on a failed upstream connection), we do see a proper evaluation of the variable as a result of pushing the current state onto r->upstream_states (https://github.com/nginx/nginx/blob/master/src/http/ngx_http_upstream.c#L1468-L1481), but obviously only for the previous connection.

I do not know whether this behavior should be treated as a bug per se, or whether the documentation should simply be updated to reflect the fact that this variable is meaningless for successful connections in header filter phases.


#1382 proxy_cache doesn't respect no-cache from error_page nginx-module defect 09/15/17

I am trying to cache some 404 upstream responses, but not all of them. What's more, I want to use custom 404 error pages. So I have two kinds of requests:

  • normal requests: client -> nginx -> proxy_cache -> upstream (normal processing) - standard caching proxy with proxy_intercept_errors
  • conditional requests: client -> nginx -> proxy_cache -> upstream (always 404) - get data from cache only when already cached; return 404 when data is not in cache (in that case response body doesn't matter)

It's a bit simplified - I removed all the custom logic and upstream balancer code that I add in OpenResty, and put in add_header to simulate what I do for 404 responses that should not be cached.

My config:

http {
    proxy_cache_path cache_temp keys_zone=cache:10m;

    server {
        listen      80;

        location = /test {
            proxy_cache cache;
            proxy_cache_key $uri;
            proxy_pass http://localhost:8080;
            proxy_cache_valid 404 5s;

            add_header X-Cache-Status $upstream_cache_status always;
            add_header X-Upstream-Status $upstream_status always;

            proxy_intercept_errors on;
            error_page 404 /404.htm; # comment out this line and caching will be properly skipped
        }

        location = /404.htm {
            add_header X-Cache-Status $upstream_cache_status always;
            add_header X-Upstream-Status $upstream_status always;

            add_header X-Accel-Expires "0" always;
            add_header Cache-Control "no-cache" always;
            add_header Expires "0" always;
            add_header Via "*" always;

            return 404;
        }
    }

    server {
        listen 8080;

        add_header X-Accel-Expires "0" always;
        add_header Cache-Control "no-cache" always;
        add_header Expires "0" always;
        add_header Via "*" always;
        return 404;
    }
}

Now, according to my knowledge, when I issue GET http://localhost/test the request should go like this:

  1. Request is sent to :8080, which returns 404 error
  2. proxy_intercept_errors + error_page replace response with error page defined in /404.htm, which is fetched via internal subrequest
  3. Cache-Control etc. should be respected and response should not be cached.

I already moved my special 404 processing to another internal location, so I have worked around this issue, but I'm reporting it anyway because it seems like a bug to me.


#1383 Error if using proxy_pass with variable and limit_except nginx-core defect 09/18/17

Hi nginx guys,

i use a nginx in front of a varnish server. I purge my varnish via purge method.

Nginx uses the following VHost config:

server {
    listen       *:80 default_server;

    location / {
        limit_except GET POST {
            allow 127.0.0.1/32;
            deny all;
        }

        set $upstream http://127.0.0.1:8080;

        if ($http_user_agent = 'mobile') {
            set $upstream http://127.0.0.1:8080;
        }

        proxy_pass              $upstream;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $remote_addr;
    }
}

Intended behavior: from anywhere other than localhost only GET/HEAD/POST can be requested; localhost can do everything.

From remote it works as expected:

root@test:~# curl -X PURGE -I EXTIP
HTTP/1.1 403 Forbidden
Server: nginx
Date: Mon, 18 Sep 2017 10:39:23 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Vary: Accept-Encoding

But from localhost:

root@test:~# curl -X PURGE -I http://127.0.0.1
HTTP/1.1 500 Internal Server Error
Server: nginx
Date: Mon, 18 Sep 2017 10:39:06 GMT
Content-Type: text/html
Content-Length: 186
Connection: close

Nginx error log tells me:

==> /var/log/nginx/error.log <==
2017/09/18 12:39:06 [error] 2483#2483: *2 invalid URL prefix in "", client: 127.0.0.1, server: , request: "PURGE / HTTP/1.1", host: "127.0.0.1"

Without using variables in the vhost:

server {
    listen       *:80 default_server;

    location / {
        limit_except GET POST {
            allow 127.0.0.1/32;
            deny all;
        }

        proxy_pass              http://127.0.0.1:8080;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $remote_addr;
    }
}

Works as expected:

root@test:~# curl -X PURGE -I http://127.0.0.1
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 18 Sep 2017 10:45:35 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding

Other tests with a variable proxy_pass, e.g. using the GET method instead of PURGE, also fail with the same error.

Please take a look at why nginx fails when combining limit_except with proxy_pass and variables. Thanks.


#1384 request body may be corrupted when content-length is not set in headers using http2 nginx-module defect 09/18/17

Hi. Recently I have been using nginx as a proxy server and ran into a problem. If a client sends a POST request without Content-Length set in the request header using http2 (proxy_request_buffering is on), the request body may be corrupted when nginx sends the request body to the upstream in ngx_http_upstream_send_request_body. If Content-Length is set in the header, the request body is fine. In addition, it is fine if proxy_request_buffering is off. I tried to find the answer in the code. In ngx_http_v2_read_request_body, if Content-Length is not set in the headers, r->headers_in.content_length_n == -1, and the code enters the branch below:

    } else {

        if (stream->preread) {
            /* enforce writing preread buffer to file */
            r->request_body_in_file_only = 1;
        }

        rb->buf = ngx_calloc_buf(r->pool);

        if (rb->buf != NULL) {
            rb->buf->sync = 1;
        }
    }

As rb->buf->sync is set to 1, ngx_http_v2_process_request_body will not copy the request body from the http2 module's recv_buffer to buf->last:

    if (size) {

        if (buf->sync) {
            buf->pos = buf->start = pos;
            buf->last = buf->end = pos + size;

        } else {
            if (size > (size_t) (buf->end - buf->last)) {
                ngx_log_error(NGX_LOG_INFO, fc->log, 0,
                              "client intended to send body data "
                              "larger than declared");

                return NGX_HTTP_BAD_REQUEST;
            }

            buf->last = ngx_cpymem(buf->last, pos, size);
        }
    }

But recv_buffer is shared by all http2 requests, so the request body may be corrupted when ngx_http_upstream_send_request_body is called. I also tried to find out whether this is required by RFC 7540. My understanding is that Content-Length in the header is optional; if Content-Length is not set, nginx should read the request body from the DATA frames. I saw a description like this in RFC 7540:

An HTTP POST request that includes request header fields and payload data is transmitted as one HEADERS frame, followed by zero or more CONTINUATION frames containing the request header fields, followed by one or more DATA frames, with the last CONTINUATION (or HEADERS) frame having the END_HEADERS flag set and the final DATA frame having the END_STREAM flag set.

So, in my opinion, nginx should handle the request body correctly even if Content-Length is not set. Am I right?


#289 Add support for HTTP Strict Transport Security (HSTS / RFC 6797) nginx-core enhancement 01/29/13

It would be great if support for HSTS (RFC 6797) were added to the nginx core.

Currently HSTS is "enabled" like this (according to https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security):

add_header Strict-Transport-Security max-age=31536000;

However this has at least two downsides:

  1. The header is only added when the HTTP status code is 200, 204, 301, 302 or 304.
    • It would be great if the header would always be added
  2. The header is added on HTTPS and HTTP responses, but according to RFC 6797 (7.2.) it should not:
    • An HSTS Host MUST NOT include the STS header field in HTTP responses conveyed over non-secure transport.

RFC 6797: https://tools.ietf.org/html/rfc6797


#376 log file reopen should pass opened fd from master process nginx-core enhancement 06/14/13

When starting nginx, all log files (error_log, access_log) are created and opened by the master process, and the file handles are passed to the workers while forking.

On SIGUSR1 the master reopens the files, chowns them, and then each worker reopens the files itself. This has several drawbacks:

  • It is inconsistent behaviour and rather surprising (sudden change of ownership upon signal). If you really want to do it this way you should chown the files from the very beginning.
  • It permits the unprivileged nginx user read and write access to the current log files, which is bad from a security perspective, since the unprivileged user also needs to be able to change into/read the log directory.

A better solution may be to reopen the log files in the master process as currently done and then use the already available ngx_{read,write}_channel functions to pass the new filehandles down to the worker.
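
A hedged sketch of that proposal, from the master side, assuming access to the freshly opened log fd, the worker index i, and the cycle (the NGX_CMD_REOPEN_LOG command value is invented here; the real channel command set would need extending, and this is not existing nginx code):

    ngx_channel_t  ch;

    ngx_memzero(&ch, sizeof(ngx_channel_t));
    ch.command = NGX_CMD_REOPEN_LOG;   /* hypothetical new command */
    ch.fd = file->fd;                  /* log fd reopened by the master */

    /* push the descriptor to worker i over the existing channel socket */
    ngx_write_channel(ngx_processes[i].channel[0], &ch,
                      sizeof(ngx_channel_t), cycle->log);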


#853 Behavior of cache_use_stale updating when new responses cannot be cached nginx-core enhancement 12/08/15

The configuration is as follows:

fastcgi_cache_path /var/tmp/nginx/fastcgi_cache levels=1:2 keys_zone=fcgi_cache:16m max_size=1024m inactive=35m;
fastcgi_cache_revalidate on;

fastcgi_cache fcgi_cache;
fastcgi_cache_valid 200 301 302 304 10m;
fastcgi_cache_valid 404 2m;
fastcgi_cache_use_stale updating error timeout invalid_header http_500 http_503;
fastcgi_cache_key "$request_method|$host|$uri|$args";
fastcgi_no_cache $cookie_nocache $arg_nocache $cookie_NRGNSID $cookie_NRGNTourSID $cookie_failed_login;
fastcgi_cache_bypass $cookie_nocache $arg_nocache $cookie_NRGNSID $cookie_NRGNTourSID $cookie_failed_login;

Right now the backend responds with 200 and the headers "Cache-Control: no-store, no-cache, must-revalidate" and "Pragma: no-cache". But two weeks ago, for some time, it returned a 302 without any caching prohibition, and that response got into the cache under fastcgi_cache_valid 10m. Since then, solitary requests get upstream_cache_status EXPIRED and the backend's response, but if several arrive at the same time, UPDATING kicks in and the two-week-old redirect is served from the cache. Requests come in regularly, so removal via inactive=35m never happens.

The behavior is fully explained by the mechanics of the cache, but not from the point of view of human expectations. I would like to have a mechanism for invalidating such stale data in the cache other than deleting items on the filesystem with an external script. For example, one more parameter for cache_path that would set the maximum lifetime of expired items in the cache, even if they are still being accessed.


#712 limit_conn and internal redirects documentation defect 02/03/15

It seems that limit_conn is only checked at the beginning of the request processing and is ignored in other processing stages. This sometimes results in somewhat unanticipated behaviour when dealing with internal redirects.

Consider an example:

limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    listen       80;
    server_name  site.com;

    index index.html;

    limit_conn addr 20; # first rule

    location / {
        limit_conn addr 10; # second rule
        root /var/www;
    }
}

Since any request ends up in the only defined location, one would expect the second rule to always be used. However, only the first rule is applied if we request http://site.com (that is, without the relative reference part). If we move the index directive inside the location, though, the second rule is used without exception.

This may not be exactly a bug, but if this behaviour is "by design" some additional explanation might be worth mentioning in the documentation.


#384 trailing dot in server_name nginx-core defect 07/09/13

nginx should treat server_name values with and without a trailing dot as identical to each other. Thus, it should warn about the conflicting server_name and continue during the configuration syntax check for the snippet below.

    server {
        server_name  localhost;
    }

    server {
        server_name  localhost.;
    }

somebody (14 matches)

Ticket Summary Component Milestone Type Created
Description
#86 the "if" directive have problems in location context nginx-core defect 01/17/12

To start, I'm doing tricky stuff, so please don't point out the weird things and stay focused on the issue at hand. I'm mixing a configuration with userdir and symfony2 (http://wiki.nginx.org/Symfony) for a development environment; PHP is using php-fpm and a unix socket. The userdir configuration is classic: all your files in ~user/public_html/ will be accessible through http://server/~user/. On top of this, if you create a folder ~user/public_html/symfony/ and put a symfony project in it (~user/public_html/symfony/project/), it will have the usual symfony configuration applied (rewrites and fastcgi path split).

Here is the configuration:

    # match 1:username, 2:project name, 3:the rest
    location ~ ^/~(.+?)/symfony/(.+?)/(.+)$ {
        alias /home/$1/public_html/symfony/$2/web/$3;
        if (-f $request_filename) {
            break;
        }
        # if no app.php or app_dev.php, redirect to app.php (prod)
        rewrite ^/~(.+?)/symfony(/.+?)/(.+)$ /~$1/symfony/$2/app.php/$3 last;
    }

    # match 1:username, 2:project name, 3:env (prod/dev), 4:trailing ('/' or
    # end)
    location ~ ^/~(.+?)/symfony(/.+)/(app|app_dev)\.php(/|$) {
        root /home/$1/public_html/symfony$2/web;
        # fake $request_filename
        set $req_filename /home/$1/public_html/symfony$2/web/$3.php;
        include fastcgi_params;
        fastcgi_split_path_info ^((?U).+\.php)(/?.+)$;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $req_filename;
        fastcgi_pass unix:/tmp/php-fpm.sock;
    }

The second block (PHP backend) works on its own. The first block (files direct access) works on its own.

You can see that I already had a problem with PHP but worked around it by creating my own variable.

To help understand, here is a sample of a symfony project layout (I removed some folders to help the comprehension):

project/
    src/
        [... my php code ...]
    web/
        app_dev.php
        app.php
        favicon.ico

If I try to access http://server/~user/symfony/project/favicon.ico I see this in the logs:

2012/01/17 16:36:25 [error] 27736#0: *1 open() "/home/user/public_html/symfony/project/web/favicon.icoavicon.ico" failed (2: No such file or directory), client: 10.11.60.36, server: server, request: "HEAD /~user/symfony/project/favicon.ico HTTP/1.1", host: "server"

If I remove the block that tests $request_filename, it works but I have to remove the rewrite as well.

The server is a CentOS 5.7 and the nginx is coming from the EPEL repository.

Unfortunately my C skills are through the floor, so I can't really provide a better understanding of the problem. I tried to poke around the code, but without much luck.


#97 try_files and alias problems nginx-core defect 02/03/12
# bug: request to "/test/x" will try "/tmp/x" (good) and
# "/tmp//test/y" (bad?)
location /test/ {
    alias /tmp/;
    try_files $uri /test/y =404;
}
# bug: request to "/test/x" will fallback to "fallback" instead of "/test/fallback"
location /test/ {
    alias /tmp/;
    try_files $uri /test/fallback?$args;
}
# bug: request to "/test/x" will try "/tmp/x/test/x" instead of "/tmp/x"
location ~ /test/(.*) {
    alias /tmp/$1;
    try_files $uri =403;
}

Or document special case for regexp locations with alias? See 3711bb1336c3.

# bug: request "/foo/test.gif" will try "/tmp//foo/test.gif"
location /foo/ {
    alias /tmp/;
    location ~ gif {
        try_files $uri =405;
    }
}

#157 cache max_size limit applied incorrectly with xfs nginx-core defect 04/29/12

No matter what I write in the inactive= parameter of the proxy_cache_path directive, it always resolves to 10 minutes.

I tried different formats: inactive=14d, inactive=2w, inactive=336h,

but the result is always the same: 10 minutes.

Checked both by counting files in the cache and by manually doing ls -ltr in the cache dir.

This bug exists in 1.0.15 too.

This bug does NOT exist in 0.8.55 (the version we had to roll back to).

Relevant lines:

proxy_cache_path /ssd/two levels=1:2:2 keys_zone=static:2000m inactive=14d max_size=120000m;
proxy_temp_path /ssd/temp;

In one of the server blocks:

location /images {
    expires 5d;
    proxy_pass http://static-local.domain:80;
    proxy_cache_valid 2w;
    proxy_cache static;
    proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
}


#178 listen with ssl but missing ssl_certificate is not detected by nginx -t nginx-core defect 06/15/12

I just added the line:

listen 443 ssl;

to one of my extra (non-production) vhosts, but forgot to add the ssl_certificate and ssl_certificate_key.

That's my mistake. But nginx -t did not catch the mistake, and nginx -s reload did apply the changed configuration ... and our production site (which also has listen 443 ssl;) now failed all HTTPS requests with:

2012/06/15 17:21:35 [error] 18931#0: *2322994 no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: xxx.xxx.xx.xxx, server: 0.0.0.0:443

Usually nginx does catch my config mistakes at parse time and thus prevents me from breaking things. I think it could and should have done so in this instance too.


#189 timeouts break when time changes nginx-core defect 07/27/12

nginx uses gettimeofday() to obtain the current time for use with timers.

That's wrong, because gettimeofday() doesn't supply monotonic time. It should be using clock_gettime(CLOCK_MONOTONIC) instead.

For example, on one device nginx is a front end for PHP FastCGI. Timeouts are configured (fastcgi_send_timeout 300;). One PHP page allows the user to change the system time. If the time is changed backwards everything is OK; if it's changed forwards, nginx returns '504 Gateway timeout'.

The problem was detected on 1.0.5, but from a quick look at the source it is still present.

I'll try to attach a rough patch (working on Linux, but not suitable for inclusion as-is).
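
For reference, a minimal standalone sketch of reading monotonic time on Linux (my illustration, not the attached patch):

    #include <stdio.h>
    #include <time.h>

    /* CLOCK_MONOTONIC keeps increasing even if the wall clock is
     * stepped, so timer deltas computed from it stay correct. */
    int main(void)
    {
        struct timespec  ts;

        if (clock_gettime(CLOCK_MONOTONIC, &ts) == 0) {
            printf("monotonic: %ld.%09ld\n", (long) ts.tv_sec, ts.tv_nsec);
        }

        return 0;
    }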


#191 literal newlines logged in error log nginx-module defect 08/01/12

I noticed that when a %0a exists in the URL, nginx includes a literal newline in the error_log when logging a file not found:


2012/07/26 17:24:14 [error] 5478#0: *8 "/var/www/localhost/htdocs/

html/index.html" is not found (2: No such file or directory), client: 1.2.3.4, server: , request: "GET /%0a%0a%0ahtml/ HTTP/1.1", host: "test.example.com"


This wreaks havoc with my log monitoring utility 8-/.

It seems desirable to escape the newline in the log message? I tested with the latest 1.2.2. Is there any way with the existing configuration options to make this not happen, or any interest in updating the logging module to handle this situation differently?
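
Purely as an illustration of the escaping being asked about (not nginx's logging code), a helper like this would keep each log entry on one line:

    #include <stdio.h>

    /* print a string with newlines rendered as "\n" so a log entry
     * cannot be split across multiple physical lines */
    static void
    log_escaped(FILE *out, const char *s)
    {
        for (; *s != '\0'; s++) {
            if (*s == '\n') {
                fputs("\\n", out);
            } else {
                fputc(*s, out);
            }
        }

        fputc('\n', out);
    }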


#196 Inconsistent behavior on uri's with unencoded spaces followed by H nginx-core defect 08/12/12

When requesting files with unencoded spaces, nginx will typically respond with the file requested. But if the filename has a space followed by a capital H, nginx responds with a 400 error.

[foo@bar Downloads]$ nc -vv 127.0.0.1 8000
Ncat: Version 6.01 ( http://nmap.org/ncat )
Ncat: Connected to 127.0.0.1:8000.
GET /t h HTTP/1.1
Host: 127.0.0.1:8000

HTTP/1.1 200 OK
Server: nginx/1.3.4
Date: Sun, 12 Aug 2012 20:22:30 GMT
Content-Type: application/octet-stream
Content-Length: 4
Last-Modified: Sun, 12 Aug 2012 18:30:35 GMT
Connection: keep-alive
ETag: "5027f64b-4"
Accept-Ranges: bytes

bar

[foo@bar Downloads]$ nc -vv 127.0.0.1 8000
Ncat: Version 6.01 ( http://nmap.org/ncat )
Ncat: Connected to 127.0.0.1:8000.
GET /a H HTTP/1.1
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.3.4</center>
</body>
</html>
Ncat: 18 bytes sent, 172 bytes received in 7.29 seconds.
[foo@bar Downloads]$ nc -vv 127.0.0.1 8000
Ncat: Version 6.01 ( http://nmap.org/ncat )
Ncat: Connected to 127.0.0.1:8000.
GET /a%20H HTTP/1.1
Host: 127.0.0.1:8000

HTTP/1.1 200 OK
Server: nginx/1.3.4
Date: Sun, 12 Aug 2012 20:23:32 GMT
Content-Type: application/octet-stream
Content-Length: 4
Last-Modified: Sun, 12 Aug 2012 18:34:44 GMT
Connection: keep-alive
ETag: "5027f744-4"
Accept-Ranges: bytes

bar
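
A simplified illustration of the behaviour above (a toy check written from the observed responses, not nginx's actual parser): unencoded spaces are swallowed into the URI until the parser sees a space followed by 'H', at which point it commits to reading the "HTTP/x.y" version token and returns 400 if that is not what follows.

#include <stdio.h>
#include <string.h>

/* Toy request-line check modelling the observed behaviour; returns 0 for
 * "accepted" and -1 for "400 Bad Request". */
static int
parse_request_line(const char *line)
{
    const char *p = strchr(line, ' ');          /* skip the method */

    if (p == NULL) {
        return -1;
    }

    for (p++; *p; p++) {
        if (p[0] == ' ' && p[1] == 'H') {
            /* committed: the rest must be the "HTTP/x.y" version token */
            return strncmp(p + 1, "HTTP/", 5) == 0 ? 0 : -1;
        }
    }
    return 0;                                   /* no version: HTTP/0.9 style */
}

int
main(void)
{
    printf("%d\n", parse_request_line("GET /t h HTTP/1.1"));   /*  0: " h" is swallowed */
    printf("%d\n", parse_request_line("GET /a H HTTP/1.1"));   /* -1: commits at " H" */
    printf("%d\n", parse_request_line("GET /a%20H HTTP/1.1")); /*  0: space is encoded */
    return 0;
}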


#217 Wrong "Content-Type" HTTP response header in certain configuration scenarios nginx-core defect 09/12/12

In certain configuration scenarios the "Content-Type" HTTP response header is not of the expected type but rather falls back to the default setting.

I was able to shrink the configuration down to a bare-minimum test case, which gives some indication that this might happen in conjunction with regex captures in "location", "try_files" and "alias" definitions.

Verified with nginx 1.3.6 (with patch.spdy-52.txt applied), but it was also reproducible with earlier versions; see http://mailman.nginx.org/pipermail/nginx/2012-August/034900.html and http://mailman.nginx.org/pipermail/nginx/2012-August/035170.html (no response was given on those posts).

# nginx -V
nginx version: nginx/1.3.6
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin/nginx --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --user=nginx --group=nginx --with-openssl=openssl-1.0.1c --with-debug --with-http_stub_status_module --with-http_ssl_module --with-ipv6

Minimal test configuration for that specific scenario:

server {
    listen                          80;
    server_name                     t1.example.com;

    root                            /data/web/t1.example.com/htdoc;

    location                        ~ ^/quux(/.*)?$ {
        alias                       /data/web/t1.example.com/htdoc$1;
        try_files                   '' =404;
    }
}

First test request where Content-Type is being correctly set to "image/gif" as expected:

$ curl -s -o /dev/null -D - -H 'Host: t1.example.com' http://127.0.0.1/foo/bar.gif
HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:09 GMT
Content-Type: image/gif
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: "501a0a78-44"
Accept-Ranges: bytes

Second test request where Content-Type is wrong, "application/octet-stream" instead of "image/gif" (actually matches the value of whatever "default_type" is set to):

$ curl -s -o /dev/null -D - -H 'Host: t1.example.com' http://127.0.0.1/quux/foo/bar.gif
HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:14 GMT
Content-Type: application/octet-stream
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: "501a0a78-44"
Accept-Ranges: bytes

Debug log during the first test request:

2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BDA0C8:672
2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BE3210:1024
2012/09/12 16:20:09 [debug] 15171#0: *1 posix_memalign: 09C0AE10:4096 @16
2012/09/12 16:20:09 [debug] 15171#0: *1 http process request line
2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 178 of 1024
2012/09/12 16:20:09 [debug] 15171#0: *1 http request line: "GET /foo/bar.gif HTTP/1.1"
2012/09/12 16:20:09 [debug] 15171#0: *1 http uri: "/foo/bar.gif"
2012/09/12 16:20:09 [debug] 15171#0: *1 http args: ""
2012/09/12 16:20:09 [debug] 15171#0: *1 http exten: "gif"
2012/09/12 16:20:09 [debug] 15171#0: *1 http process request header line
2012/09/12 16:20:09 [debug] 15171#0: *1 http header: "User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2"
2012/09/12 16:20:09 [debug] 15171#0: *1 http header: "Accept: */*"
2012/09/12 16:20:09 [debug] 15171#0: *1 http header: "Host: t1.example.com"
2012/09/12 16:20:09 [debug] 15171#0: *1 http header done
2012/09/12 16:20:09 [debug] 15171#0: *1 event timer del: 11: 3134905866
2012/09/12 16:20:09 [debug] 15171#0: *1 rewrite phase: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 test location: ~ "^/quux(/.*)?$"
2012/09/12 16:20:09 [debug] 15171#0: *1 using configuration ""
2012/09/12 16:20:09 [debug] 15171#0: *1 http cl:-1 max:1048576
2012/09/12 16:20:09 [debug] 15171#0: *1 rewrite phase: 2
2012/09/12 16:20:09 [debug] 15171#0: *1 post rewrite phase: 3
2012/09/12 16:20:09 [debug] 15171#0: *1 generic phase: 4
2012/09/12 16:20:09 [debug] 15171#0: *1 generic phase: 5
2012/09/12 16:20:09 [debug] 15171#0: *1 access phase: 6
2012/09/12 16:20:09 [debug] 15171#0: *1 access phase: 7
2012/09/12 16:20:09 [debug] 15171#0: *1 post access phase: 8
2012/09/12 16:20:09 [debug] 15171#0: *1 try files phase: 9
2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 10
2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 11
2012/09/12 16:20:09 [debug] 15171#0: *1 content phase: 12
2012/09/12 16:20:09 [debug] 15171#0: *1 http filename: "/data/web/t1.example.com/htdoc/foo/bar.gif"
2012/09/12 16:20:09 [debug] 15171#0: *1 add cleanup: 09C0B3D8
2012/09/12 16:20:09 [debug] 15171#0: *1 http static fd: 14
2012/09/12 16:20:09 [debug] 15171#0: *1 http set discard body
2012/09/12 16:20:09 [debug] 15171#0: *1 HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:09 GMT
Content-Type: image/gif
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: "501a0a78-44"
Accept-Ranges: bytes

2012/09/12 16:20:09 [debug] 15171#0: *1 write new buf t:1 f:0 09C0B500, pos 09C0B500, size: 235 file: 0, size: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter: l:0 f:0 s:235
2012/09/12 16:20:09 [debug] 15171#0: *1 http output filter "/foo/bar.gif?"
2012/09/12 16:20:09 [debug] 15171#0: *1 http copy filter: "/foo/bar.gif?"
2012/09/12 16:20:09 [debug] 15171#0: *1 read: 14, 09C0B67C, 68, 0
2012/09/12 16:20:09 [debug] 15171#0: *1 http postpone filter "/foo/bar.gif?" 09C0B6C0
2012/09/12 16:20:09 [debug] 15171#0: *1 write old buf t:1 f:0 09C0B500, pos 09C0B500, size: 235 file: 0, size: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 write new buf t:1 f:0 09C0B67C, pos 09C0B67C, size: 68 file: 0, size: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter: l:1 f:0 s:303
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter limit 0
2012/09/12 16:20:09 [debug] 15171#0: *1 writev: 303
2012/09/12 16:20:09 [debug] 15171#0: *1 http write filter 00000000
2012/09/12 16:20:09 [debug] 15171#0: *1 http copy filter: 0 "/foo/bar.gif?"
2012/09/12 16:20:09 [debug] 15171#0: *1 http finalize request: 0, "/foo/bar.gif?" a:1, c:1
2012/09/12 16:20:09 [debug] 15171#0: *1 set http keepalive handler
2012/09/12 16:20:09 [debug] 15171#0: *1 http close request
2012/09/12 16:20:09 [debug] 15171#0: *1 http log handler
2012/09/12 16:20:09 [debug] 15171#0: *1 run cleanup: 09C0B3D8
2012/09/12 16:20:09 [debug] 15171#0: *1 file cleanup: fd:14
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09C0AE10, unused: 1645
2012/09/12 16:20:09 [debug] 15171#0: *1 event timer add: 11: 75000:3134920866
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BDA0C8
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BE3210
2012/09/12 16:20:09 [debug] 15171#0: *1 hc free: 00000000 0
2012/09/12 16:20:09 [debug] 15171#0: *1 hc busy: 00000000 0
2012/09/12 16:20:09 [debug] 15171#0: *1 tcp_nodelay
2012/09/12 16:20:09 [debug] 15171#0: *1 reusable connection: 1
2012/09/12 16:20:09 [debug] 15171#0: *1 post event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 http keepalive handler
2012/09/12 16:20:09 [debug] 15171#0: *1 malloc: 09BE3210:1024
2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 -1 of 1024
2012/09/12 16:20:09 [debug] 15171#0: *1 recv() not ready (11: Resource temporarily unavailable)
2012/09/12 16:20:09 [debug] 15171#0: posted event 00000000
2012/09/12 16:20:09 [debug] 15171#0: worker cycle
2012/09/12 16:20:09 [debug] 15171#0: accept mutex locked
2012/09/12 16:20:09 [debug] 15171#0: epoll timer: 75000
2012/09/12 16:20:09 [debug] 15171#0: epoll: fd:11 ev:0001 d:09C117C8
2012/09/12 16:20:09 [debug] 15171#0: *1 post event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: timer delta: 2
2012/09/12 16:20:09 [debug] 15171#0: posted events 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 delete posted event 09C2A710
2012/09/12 16:20:09 [debug] 15171#0: *1 http keepalive handler
2012/09/12 16:20:09 [debug] 15171#0: *1 recv: fd:11 0 of 1024
2012/09/12 16:20:09 [info] 15171#0: *1 client 127.0.0.1 closed keepalive connection
2012/09/12 16:20:09 [debug] 15171#0: *1 close http connection: 11
2012/09/12 16:20:09 [debug] 15171#0: *1 event timer del: 11: 3134920866
2012/09/12 16:20:09 [debug] 15171#0: *1 reusable connection: 0
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BE3210
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 00000000
2012/09/12 16:20:09 [debug] 15171#0: *1 free: 09BD9FC0, unused: 56

Debug log during the second test request:

2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BDA0C8:672
2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BE3210:1024
2012/09/12 16:20:14 [debug] 15171#0: *2 posix_memalign: 09C0AE10:4096 @16
2012/09/12 16:20:14 [debug] 15171#0: *2 http process request line
2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 183 of 1024
2012/09/12 16:20:14 [debug] 15171#0: *2 http request line: "GET /quux/foo/bar.gif HTTP/1.1"
2012/09/12 16:20:14 [debug] 15171#0: *2 http uri: "/quux/foo/bar.gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 http args: ""
2012/09/12 16:20:14 [debug] 15171#0: *2 http exten: "gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 http process request header line
2012/09/12 16:20:14 [debug] 15171#0: *2 http header: "User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2"
2012/09/12 16:20:14 [debug] 15171#0: *2 http header: "Accept: */*"
2012/09/12 16:20:14 [debug] 15171#0: *2 http header: "Host: t1.example.com"
2012/09/12 16:20:14 [debug] 15171#0: *2 http header done
2012/09/12 16:20:14 [debug] 15171#0: *2 event timer del: 11: 3134910906
2012/09/12 16:20:14 [debug] 15171#0: *2 rewrite phase: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 test location: ~ "^/quux(/.*)?$"
2012/09/12 16:20:14 [debug] 15171#0: *2 using configuration "^/quux(/.*)?$"
2012/09/12 16:20:14 [debug] 15171#0: *2 http cl:-1 max:1048576
2012/09/12 16:20:14 [debug] 15171#0: *2 rewrite phase: 2
2012/09/12 16:20:14 [debug] 15171#0: *2 post rewrite phase: 3
2012/09/12 16:20:14 [debug] 15171#0: *2 generic phase: 4
2012/09/12 16:20:14 [debug] 15171#0: *2 generic phase: 5
2012/09/12 16:20:14 [debug] 15171#0: *2 access phase: 6
2012/09/12 16:20:14 [debug] 15171#0: *2 access phase: 7
2012/09/12 16:20:14 [debug] 15171#0: *2 post access phase: 8
2012/09/12 16:20:14 [debug] 15171#0: *2 try files phase: 9
2012/09/12 16:20:14 [debug] 15171#0: *2 http script copy: "/data/web/t1.example.com/htdoc"
2012/09/12 16:20:14 [debug] 15171#0: *2 http script capture: "/foo/bar.gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 trying to use file: "" "/data/web/t1.example.com/htdoc/foo/bar.gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 try file uri: ""
2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 10
2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 11
2012/09/12 16:20:14 [debug] 15171#0: *2 content phase: 12
2012/09/12 16:20:14 [debug] 15171#0: *2 http script copy: "/data/web/t1.example.com/htdoc"
2012/09/12 16:20:14 [debug] 15171#0: *2 http script capture: "/foo/bar.gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 http filename: "/data/web/t1.example.com/htdoc/foo/bar.gif"
2012/09/12 16:20:14 [debug] 15171#0: *2 add cleanup: 09C0B414
2012/09/12 16:20:14 [debug] 15171#0: *2 http static fd: 14
2012/09/12 16:20:14 [debug] 15171#0: *2 http set discard body
2012/09/12 16:20:14 [debug] 15171#0: *2 HTTP/1.1 200 OK
Server: nginx/1.3.6
Date: Wed, 12 Sep 2012 14:20:14 GMT
Content-Type: application/octet-stream
Content-Length: 68
Last-Modified: Thu, 02 Aug 2012 05:04:56 GMT
Connection: keep-alive
ETag: "501a0a78-44"
Accept-Ranges: bytes

2012/09/12 16:20:14 [debug] 15171#0: *2 write new buf t:1 f:0 09C0B53C, pos 09C0B53C, size: 250 file: 0, size: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter: l:0 f:0 s:250
2012/09/12 16:20:14 [debug] 15171#0: *2 http output filter "?"
2012/09/12 16:20:14 [debug] 15171#0: *2 http copy filter: "?"
2012/09/12 16:20:14 [debug] 15171#0: *2 read: 14, 09C0B6C4, 68, 0
2012/09/12 16:20:14 [debug] 15171#0: *2 http postpone filter "?" 09C0B708
2012/09/12 16:20:14 [debug] 15171#0: *2 write old buf t:1 f:0 09C0B53C, pos 09C0B53C, size: 250 file: 0, size: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 write new buf t:1 f:0 09C0B6C4, pos 09C0B6C4, size: 68 file: 0, size: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter: l:1 f:0 s:318
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter limit 0
2012/09/12 16:20:14 [debug] 15171#0: *2 writev: 318
2012/09/12 16:20:14 [debug] 15171#0: *2 http write filter 00000000
2012/09/12 16:20:14 [debug] 15171#0: *2 http copy filter: 0 "?"
2012/09/12 16:20:14 [debug] 15171#0: *2 http finalize request: 0, "?" a:1, c:1
2012/09/12 16:20:14 [debug] 15171#0: *2 set http keepalive handler
2012/09/12 16:20:14 [debug] 15171#0: *2 http close request
2012/09/12 16:20:14 [debug] 15171#0: *2 http log handler
2012/09/12 16:20:14 [debug] 15171#0: *2 run cleanup: 09C0B414
2012/09/12 16:20:14 [debug] 15171#0: *2 file cleanup: fd:14
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09C0AE10, unused: 1568
2012/09/12 16:20:14 [debug] 15171#0: *2 event timer add: 11: 75000:3134925906
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BDA0C8
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BE3210
2012/09/12 16:20:14 [debug] 15171#0: *2 hc free: 00000000 0
2012/09/12 16:20:14 [debug] 15171#0: *2 hc busy: 00000000 0
2012/09/12 16:20:14 [debug] 15171#0: *2 tcp_nodelay
2012/09/12 16:20:14 [debug] 15171#0: *2 reusable connection: 1
2012/09/12 16:20:14 [debug] 15171#0: *2 post event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 http keepalive handler
2012/09/12 16:20:14 [debug] 15171#0: *2 malloc: 09BE3210:1024
2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 -1 of 1024
2012/09/12 16:20:14 [debug] 15171#0: *2 recv() not ready (11: Resource temporarily unavailable)
2012/09/12 16:20:14 [debug] 15171#0: posted event 00000000
2012/09/12 16:20:14 [debug] 15171#0: worker cycle
2012/09/12 16:20:14 [debug] 15171#0: accept mutex locked
2012/09/12 16:20:14 [debug] 15171#0: epoll timer: 75000
2012/09/12 16:20:14 [debug] 15171#0: epoll: fd:11 ev:0001 d:09C117C9
2012/09/12 16:20:14 [debug] 15171#0: *2 post event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: timer delta: 2
2012/09/12 16:20:14 [debug] 15171#0: posted events 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 delete posted event 09C2A710
2012/09/12 16:20:14 [debug] 15171#0: *2 http keepalive handler
2012/09/12 16:20:14 [debug] 15171#0: *2 recv: fd:11 0 of 1024
2012/09/12 16:20:14 [info] 15171#0: *2 client 127.0.0.1 closed keepalive connection
2012/09/12 16:20:14 [debug] 15171#0: *2 close http connection: 11
2012/09/12 16:20:14 [debug] 15171#0: *2 event timer del: 11: 3134925906
2012/09/12 16:20:14 [debug] 15171#0: *2 reusable connection: 0
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BE3210
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 00000000
2012/09/12 16:20:14 [debug] 15171#0: *2 free: 09BD9FC0, unused: 56

#242 DAV module does not respect if-unmodified-since nginx-module defect 11/04/12

That is, if you PUT or DELETE a resource with an If-Unmodified-Since header, the overwrite or delete goes through happily even if the header should have prevented it.

(This is a common use case: you've previously retrieved a version of a resource and know its modification date; then, when updating or deleting it, you want to guard against race conditions with other clients, and can use If-Unmodified-Since to get an error back if someone else modified the resource in the meantime.)

Find a patch for this attached (also at https://gist.github.com/4013062). It's my first Nginx contribution -- feel free to point out style mistakes or general wrong-headedness.

I did not find a clean way to make the existing code in ngx_http_not_modified_filter_module.c handle this. It looks directly at the last-modified header, and, as a header filter, will only run *after* the actions for the request have already been taken.

I also did not add code for If-Match, which is analogous; it could probably go into the ngx_http_test_if_unmodified function I added (which would need renaming in that case). But I don't really understand nginx's handling of ETags yet, so I didn't touch that.
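
For reference, a standalone sketch of the precondition itself (plain C using strptime/timegm, not the attached patch): parse the If-Unmodified-Since date and refuse the write if the resource was modified after it.

#define _GNU_SOURCE          /* strptime(), timegm() on glibc */
#include <stdio.h>
#include <time.h>

/* Return 1 if a PUT/DELETE may proceed, 0 if it should fail with
 * 412 Precondition Failed.  "header" is the If-Unmodified-Since value
 * and "mtime" the resource's current modification time. */
static int
precondition_ok(const char *header, time_t mtime)
{
    struct tm tm = { 0 };

    /* RFC 1123 date, e.g. "Thu, 02 Aug 2012 05:04:56 GMT" */
    if (strptime(header, "%a, %d %b %Y %H:%M:%S GMT", &tm) == NULL) {
        return 1;                    /* unparsable header: ignore it */
    }
    return mtime <= timegm(&tm);     /* untouched since the given date? */
}

int
main(void)
{
    time_t mtime = 1343883896;       /* 2012-08-02 05:04:56 GMT */

    printf("%d\n", precondition_ok("Thu, 02 Aug 2012 05:04:56 GMT", mtime)); /* 1: proceed */
    printf("%d\n", precondition_ok("Wed, 01 Aug 2012 00:00:00 GMT", mtime)); /* 0: 412 */
    return 0;
}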


#52 urlencode/urldecode needed in rewrite and other places nginx-module enhancement 11/13/11

If $http_accept contains spaces, they are passed on without being encoded.

rewrite /cgi-bin/index.pl?_requri=$uri&_accept=$http_accept break;
...
proxy_pass http://127.0.0.1:82;  # mini-httpd listening
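
As a sketch of what the requested urlencode would do (plain C, not a proposed nginx variable or patch): percent-encode everything outside the RFC 3986 unreserved set, so a value like an Accept header with spaces can be embedded safely in a rewritten URI.

#include <stdio.h>
#include <string.h>

/* Percent-encode everything outside the RFC 3986 unreserved set.
 * dst must be at least 3 * strlen(src) + 1 bytes. */
static void
urlencode(char *dst, const char *src)
{
    static const char unreserved[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
        "0123456789-._~";

    while (*src) {
        unsigned char c = (unsigned char) *src++;

        if (strchr(unreserved, c)) {
            *dst++ = (char) c;
        } else {
            dst += sprintf(dst, "%%%02X", c);
        }
    }
    *dst = '\0';
}

int
main(void)
{
    char out[256];

    /* A typical Accept value containing spaces, as described in the ticket. */
    urlencode(out, "text/html, application/xhtml+xml");
    printf("%s\n", out);   /* text%2Fhtml%2C%20application%2Fxhtml%2Bxml */
    return 0;
}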


#165 Nginx worker processes don't seem to have the right group permissions nginx-core enhancement 05/11/12

Package: nginx, version 1.2.0-1~squeeze (from the nginx repository, Debian package)

When a UNIX domain socket's permissions are set to allow the primary group of the nginx worker processes to read and write to it, the worker processes still fail to access it, with a 'permission denied' error logged.

Way to reproduce: bind nginx to a PHP-FPM UNIX domain socket.

PHP-FPM socket configured as follows:

  • User: www-data
  • Group: www-data
  • Mode: 0660

Nginx configured as follows:

  • Worker processes spawned with the user 'nginx'
  • User 'nginx' has 'www-data' as primary group

Details on the configuration can be found here: http://forum.nginx.org/read.php?2,226182

It would also be nice to check that any group of the nginx worker processes can be used for setting access permissions on sockets, not only the primary one.
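
For context, the usual pattern by which a daemon that drops root picks up a user's group memberships is initgroups() between setgid() and setuid(); whether or not that is what is going wrong here, a generic sketch (not nginx's code; the user name is hypothetical) looks like this:

#define _DEFAULT_SOURCE      /* initgroups() on glibc */
#include <grp.h>
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Drop root privileges to "user" while keeping the user's supplementary
 * groups, so group-based permissions (e.g. on a www-data-owned socket)
 * still apply after the switch. */
static void
drop_privileges(const char *user)
{
    struct passwd *pw = getpwnam(user);

    if (pw == NULL) {
        fprintf(stderr, "unknown user %s\n", user);
        exit(1);
    }

    if (setgid(pw->pw_gid) == -1              /* primary group */
        || initgroups(user, pw->pw_gid) == -1 /* supplementary groups */
        || setuid(pw->pw_uid) == -1)          /* the uid, last */
    {
        perror("drop_privileges");
        exit(1);
    }
}

int
main(void)
{
    drop_privileges("nginx");    /* hypothetical unprivileged user */
    printf("running as uid %d\n", (int) getuid());
    return 0;
}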


#195 Close connection if SSL not enabled for vhost nginx-module enhancement 08/11/12

Instead of using the default SSL certificate, nginx should (by default or when configured) close the SSL connection as soon as it realizes that the requested domain has not been configured to be served over HTTPS.

For example,

server {
    listen 80 default_server;
    listen 443 ssl;

    server_name     aaa.example.net;

    ssl_certificate     /etc/ssl/certs/aaa.example.net.pem;
    ssl_certificate_key /etc/ssl/private/aaa.example.net.key;
}

server {
    listen 80;

    server_name     bbb.example.net;
}

If a client starts an HTTPS request for bbb.example.net, it will be greeted with an error/warning: "This certificate is untrusted, wrong domain". This is expected, because nginx is serving the aaa.example.net certificate.

What nginx should do is close the connection as soon as it discovers which domain is being requested (after reading the SNI data, I suppose). This will communicate to the client and the user that there is no HTTPS connectivity on bbb.example.net. Also, this solution will not disclose the fact that aaa.example.net is served by the same nginx server.
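
A rough sketch of the mechanism being requested, using OpenSSL's SNI callback directly (host_has_ssl_config() is a hypothetical lookup; this is not nginx code): returning a fatal alert from the servername callback aborts the handshake before any certificate is sent. Later nginx versions expose essentially this behaviour through the ssl_reject_handshake directive (1.19.4+).

#include <openssl/ssl.h>
#include <string.h>

/* Hypothetical lookup: does this server name have an ssl_certificate? */
static int
host_has_ssl_config(const char *host)
{
    return host != NULL && strcmp(host, "aaa.example.net") == 0;
}

/* SNI callback: abort the handshake for names we have no TLS config for
 * instead of falling back to the default certificate. */
static int
servername_cb(SSL *ssl, int *alert, void *arg)
{
    const char *host = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);

    (void) arg;

    if (!host_has_ssl_config(host)) {
        *alert = SSL_AD_UNRECOGNIZED_NAME;
        return SSL_TLSEXT_ERR_ALERT_FATAL;   /* close instead of serving aaa's cert */
    }
    return SSL_TLSEXT_ERR_OK;
}

/* Somewhere during SSL_CTX setup: */
void
register_sni_callback(SSL_CTX *ctx)
{
    SSL_CTX_set_tlsext_servername_callback(ctx, servername_cb);
}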


#239 Support for large (> 64k) FastCGI requests nginx-module enhancement 10/30/12

Currently, a hardcoded limit causes an '[alert] fastcgi request record is too big:...' message on the error output when requests larger than 64k are attempted to be sent by nginx.

The improvement would be to handle larger requests, based on configuration if possible. Something similar to the work already done on output buffers would be nice.

The only current workaround is not to use FastCGI, i.e. to revert to something like Apache, which is a huge step backwards...
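
For background on where the 64k ceiling comes from: the FastCGI record header defined by the specification carries a 16-bit contentLength, so a single record tops out at 65535 bytes and anything larger has to be split across several records, which is the work being asked for here. A sketch of that header (plain C from the spec, not nginx code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* FastCGI record header as laid out in the FastCGI 1.0 specification. */
typedef struct {
    uint8_t  version;
    uint8_t  type;               /* e.g. FCGI_PARAMS = 4, FCGI_STDIN = 5 */
    uint8_t  request_id_hi;
    uint8_t  request_id_lo;
    uint8_t  content_length_hi;  /* 16-bit length: at most 65535 bytes */
    uint8_t  content_length_lo;
    uint8_t  padding_length;
    uint8_t  reserved;
} fcgi_header_t;

/* Fill one record header; payloads above 65535 bytes have to be split
 * by the caller into several records of the same type. */
static void
fcgi_fill_header(fcgi_header_t *h, uint8_t type, uint16_t request_id,
                 uint16_t content_length)
{
    memset(h, 0, sizeof(*h));
    h->version           = 1;
    h->type              = type;
    h->request_id_hi     = (uint8_t) (request_id >> 8);
    h->request_id_lo     = (uint8_t) (request_id & 0xff);
    h->content_length_hi = (uint8_t) (content_length >> 8);
    h->content_length_lo = (uint8_t) (content_length & 0xff);
}

int
main(void)
{
    fcgi_header_t h;

    fcgi_fill_header(&h, 4 /* FCGI_PARAMS */, 1, 0xffff);
    printf("max payload per record: %u bytes\n",
           (unsigned) ((h.content_length_hi << 8) | h.content_length_lo));
    return 0;
}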


#55 Opera version is detected incorrectly nginx-module defect 11/19/11

In recent versions of the Opera browser the User-Agent looks like this: Opera/9.80 (Windows NT 6.1; U; MRA 5.8 (build 4661); ru) Presto/2.8.131 Version/11.11. That is, the version is carried by Version/11.11, not by Opera/9.80. In the ngx_http_browser_module it is detected like this:

{ "opera",

0, sizeof("Opera ") - 1, "Opera"},

Replacing it with

{ "opera",

sizeof("Opera ") - 1, sizeof("Version/") - 1, "Version/"},

detects the new versions correctly, but there will be a problem with old versions.
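
A standalone sketch of the detection being proposed (plain C, not the module code): prefer the number after "Version/" when present and fall back to the number after "Opera/" for older user-agents.

#include <stdio.h>
#include <string.h>

/* Extract the Opera version from a User-Agent string: modern Operas carry
 * it after "Version/", older ones after "Opera/". */
static int
opera_version(const char *ua, int *major, int *minor)
{
    const char *p = strstr(ua, "Version/");
    size_t skip = sizeof("Version/") - 1;

    if (p == NULL) {
        p = strstr(ua, "Opera/");
        skip = sizeof("Opera/") - 1;
    }
    if (p == NULL) {
        return -1;                                /* not Opera at all */
    }
    return sscanf(p + skip, "%d.%d", major, minor) == 2 ? 0 : -1;
}

int
main(void)
{
    const char *ua = "Opera/9.80 (Windows NT 6.1; U; MRA 5.8 (build 4661); ru) "
                     "Presto/2.8.131 Version/11.11";
    int major, minor;

    if (opera_version(ua, &major, &minor) == 0) {
        printf("Opera %d.%d\n", major, minor);    /* Opera 11.11 */
    }
    return 0;
}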

