Custom Query (2296 matches)

Results (25 - 27 of 2296)

Ticket Resolution Summary Owner Reporter
#2023 invalid 401 when not authenticated yet jeanmarc77@…
Description

Hi. First of all, thank you very much for nginx. I noticed some strange behavior while using HTTP authentication:

I log 401 attempts with these settings:

..
error_page 400 404 401 /error.html;
..
location = /error.html {
    internal;
    access_log /var/log/nginx/4xxerror.log;
}
..

Everything works fine, but I still get a 401 error on the very first request for a page, i.e. while the authentication prompt is being shown (I haven't clicked OK or Cancel yet). Of course I am not authenticated at that stage, but I don't think this is the right behavior: a 401 should only be returned when a wrong user/password was submitted or the Cancel button was pressed. Could this 401 be avoided when no login has been entered yet? Cheers
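
For reference, a minimal workaround sketch of my own (not part of the original ticket), assuming nginx 1.7.0 or later where access_log accepts an if= parameter: only log the 4xx error page when the client actually sent an Authorization header, so the initial challenge response is not recorded. The $log_auth_failure variable name is made up, the map block has to sit at the http level, and note that pressing Cancel also produces a request without an Authorization header, so that case would be skipped as well.

# Hypothetical sketch, not from the ticket: skip logging the first 401 challenge.
map $http_authorization $log_auth_failure {
    ""      0;   # no Authorization header yet: initial challenge, do not log
    default 1;   # credentials were sent and rejected: log it
}

location = /error.html {
    internal;
    access_log /var/log/nginx/4xxerror.log combined if=$log_auth_failure;
}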

#1468 duplicate 408 generated during active transfer to upstream Krzysztof Malinowski
Description

I have nginx set up as a reverse proxy to a backend server. On large (~60GB) file uploads it happens that the connection is terminated due to the client request body timeout (HTTP 408) even though the transmission is still running (as observed with strace and by the backend still receiving data).

Relevant part of configuration:

worker_processes  4;

events {
  worker_connections  1024;
}

http {
  sendfile        on;
  #tcp_nopush     on;

  keepalive_timeout  65;

  #gzip  on;

  include /etc/nginx/conf.d/*.conf;
}

server {
  listen 443 ssl http2;

  location / {
    proxy_pass http://upstream:8081;
    include conf.d/proxy-settings.inc;
    add_header Access-Control-Allow-Origin $http_origin always;
    add_header Access-Control-Expose-Headers Server always;
  }
}

# configuration file /etc/nginx/conf.d/proxy-settings.inc:
if ($http_x_forwarded_proto = '') {
  set $http_x_forwarded_proto  $scheme;
}

chunked_transfer_encoding on;
client_max_body_size 0;

proxy_read_timeout  900;
proxy_pass_header   Server;
proxy_cookie_path   ~*^/.* /;
proxy_set_header    X-Forwarded-Port  $server_port;
proxy_set_header    X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header    Host              $http_host;
proxy_set_header    X-Forwarded-For   $proxy_add_x_forwarded_for;

include conf.d/response-buffering.inc;

# configuration file /etc/nginx/conf.d/response-buffering.inc:
proxy_buffering on;
proxy_buffers 10240 16k;
proxy_busy_buffers_size 10240k;
proxy_buffer_size 8k;
proxy_max_temp_file_size 0;

# configuration file /etc/nginx/conf.d/request_buffering.conf:
proxy_request_buffering off;

The upload is randomly terminated with a 408 at various points in the file; it can be at ~2GB, ~10GB, ~20GB or any other value, and sometimes the upload finishes correctly.

I have enabled debug logging while stracing the worker process at the same time. I have noticed that the behavior is triggered by an EAGAIN on the client->nginx connection:

strace (in local time - GMT+1):

17:51:18.225939 read(11, ""..., 33093)  = 33093 <0.000042>
17:51:18.226527 writev(12, [...], 1)    = 8192 <0.000061>
17:51:18.227442 writev(12, [...], 1)    = 8192 <0.000058>
17:51:18.228364 writev(12, [...], 1)    = 8192 <0.000058>
17:51:18.229238 writev(12, [...], 1)    = 8192 <0.000058>
17:51:18.229624 read(11, ""..., 16146)  = 8236 <0.000022>
17:51:18.229680 read(11, 0x7f6bed03513a, 7910) = -1 EAGAIN (Resource temporarily unavailable) <0.000018>
17:51:18.237114 epoll_wait(17, {{EPOLLIN|EPOLLOUT, {u32=3975967392, u64=140101514195616}}}, 512, 60000) = 1 <0.000017>
17:51:18.237652 read(11, ""..., 7910)   = 7910 <0.000103>
17:51:18.238506 writev(12, [...], 1)    = 8192 <0.000120>
17:51:18.239582 writev(12, [...], 1)    = 8192 <0.000061>
17:51:18.239995 read(11, ""..., 33093)  = 33093 <0.000081>
17:51:18.240682 writev(12, [...], 1)    = 8192 <0.000083>
17:51:18.241743 writev(12, [...], 1)    = 8192 <0.000063>
17:51:18.242684 writev(12, [...], 1)    = 8192 <0.000087>
17:51:18.243681 writev(12, [...], 1)    = 8192 <0.000080>

<--- read/writev continues (fd 11 == client->nginx, fd 12 == nginx->backend) --->

19:03:41.032687 read(11, ""..., 33093)  = 32826 <0.000102>
19:03:41.034186 writev(12, [...], 1)    = 8192 <0.000081>
19:03:41.036440 writev(12, [...], 1)    = 8192 <0.000089>
19:03:41.038823 writev(12, [...], 1)    = 8192 <0.000095>
19:03:41.040972 writev(12, [...], 1)    = 8192 <0.000086>
19:03:41.041956 read(11, 0x7f6bed033003, 33093) = -1 EAGAIN (Resource temporarily unavailable) <0.000045>
19:03:41.043862 epoll_wait(17, {{EPOLLOUT, {u32=3975967392, u64=140101514195616}}}, 512, 60000) = 1 <0.000049>
19:03:41.046868 close(12)               = 0 <0.000092>
19:03:41.049310 write(5, ""..., 341)    = 341 <0.007480>
19:03:41.058692 close(11)               = 0 <0.000099>
19:03:41.059661 epoll_wait(17,  <detached ...>

error.log (in GMT):

2018/01/23 16:51:18 [debug] 65378#65378: *1147733 http read client request body
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 SSL_read: 8192
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 http client request body recv 8192
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 http body new buf t:1 f:0 00007F6BECFB0230, pos 00007F6BECFB0230, size: 8192 file: 0, size: 0
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 chain writer buf fl:1 s:8192
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 chain writer in: 00007F6BECF543A0
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 writev: 8192 of 8192
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 chain writer out: 0000000000000000
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 http read client request body
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 SSL_read: 8192
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 http client request body recv 8192
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 http body new buf t:1 f:0 00007F6BECFB0230, pos 00007F6BECFB0230, size: 8192 file: 0, size: 0
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 chain writer buf fl:1 s:8192
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 chain writer in: 00007F6BECF543A0
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 writev: 8192 of 8192
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 chain writer out: 0000000000000000
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 http read client request body
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 SSL_read: -1
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 SSL_get_error: 2
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 http client request body recv -2
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 http client request body rest 36277250217
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 event timer add: 11: 60000:1516726338237
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 http run request: "/path/to/backend?"
2018/01/23 16:51:18 [debug] 65378#65378: *1147733 http upstream check client, write event:1, "/path/to/backend"
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http run request: "/path/to/backend?"
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http upstream check client, write event:1, "/path/to/backend"
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 event timer del: 11: 1516726338237
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http run request: "/path/to/backend?"
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http upstream read request handler
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 finalize http upstream request: 408
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 finalize http proxy request
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 free rr peer 1 0
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 close http upstream connection: 12
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 free: 00007F6BECFC6140, unused: 48
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 reusable connection: 0
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http finalize request: 408, "/path/to/backend?" a:1, c:1
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http terminate request count:1
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http terminate cleanup count:1 blk:0
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http posted request: "/path/to/backend?"
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http terminate handler count:1
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http request count:1 blk:0
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http close request
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 http log handler
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 free: 00007F6BECFB0230
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 free: 00007F6BECFBD240, unused: 15
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 free: 00007F6BECF53650, unused: 138
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 close http connection: 11
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 SSL_shutdown: 1
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 reusable connection: 0
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 free: 00007F6BECFA8580
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 free: 00007F6BECF3A970, unused: 0
2018/01/23 18:03:41 [debug] 65378#65378: *1147733 free: 00007F6BECF6C6D0, unused: 464

Note that error.log has no entries between 16:51:18 and 18:03:41, which corresponds to the EAGAINs seen in strace. The timer is set (client_body_timeout, I presume) before going into "http upstream check client", and after that it is not clear where the code goes. strace shows that data is still being proxied between the client and the backend, but error.log only comes back to life when another EAGAIN shows up, and the connection is then immediately terminated because of the expired timer (after 1h12m, although the timer itself is set to the default of 60s).

It is possible that this behavior is only visible with HTTP/1.1. I haven't been able to reproduce it with HTTP/2 yet; with HTTP/1.1 it happens quite often, though irregularly.

Am I missing something in the configuration, or is this a bug in the code? How can I debug what nginx is doing between 16:51:18 and 18:03:41? Any help appreciated.
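
For completeness, a hedged mitigation sketch of my own (not something from the report): if the 408 really is driven by the request body read timer, raising client_body_timeout for the upload location might reduce the chance of the timer being found expired when the next EAGAIN arrives, at the cost of keeping genuinely stalled clients connected longer. The value below is illustrative only.

# Hypothetical mitigation, assuming the 408 comes from client_body_timeout (default 60s).
location / {
    proxy_pass http://upstream:8081;
    proxy_request_buffering off;   # stream the body to the backend, as in the report
    client_max_body_size 0;        # unlimited body size, as in the report
    client_body_timeout 900s;      # tolerate longer gaps between successive body reads
}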

#2408 duplicate 413 error_page directive in server or location context returns default 413 error page when using HTTP/2 Seb
Description

I think this is a bug, but I may be misunderstanding something.

If the error_page directive is set to a relative URI inside a server or location block, the custom error page is not returned to HTTP/2 requests (the default nginx one is). Requests using HTTP/1.1, however, work fine and receive the custom error page.

Example config:

server {
    listen       80;
    listen       443 ssl http2;
    listen  [::]:80;
    server_name  localhost;

    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    error_page 413 /413/;

    location / {
        client_max_body_size 20m;
        proxy_pass http://172.28.0.1:8000;
    }
}

With the above configuration, http://172.28.0.1:8000/413/ is never requested when POSTing a >20M payload to nginx over HTTP/2; the default nginx 413 error page is returned instead.

However, making the same request using HTTP/1.1 results in a call to the upstream http://172.28.0.1:8000/413/, and its response is returned to the original request.
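
As a side note, a diagnostic variant I would try (my own sketch, not from the ticket): point error_page at a locally served static file instead of the proxied URI, to narrow down whether HTTP/2 requests miss the custom page because of the internal redirect itself or because of the proxied subrequest. The /413.html URI and the root path below are assumptions.

# Hypothetical diagnostic variant, not from the ticket: serve the 413 page locally.
error_page 413 /413.html;

location = /413.html {
    root /usr/share/nginx/html;   # assumed location of the static error page
    internal;
}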
