#2494 closed defect (invalid)

large_client_header_buffers not working: when the cookie size is more than 8k, the default limit still applies

Reported by: akhileshdwivedi@…
Owned by:
Priority: minor
Milestone:
Component: documentation
Version: 1.19.x
Keywords:
Cc:
uname -a: Linux ip-10-67-226-191.vpc.internal 4.14.311-233.529.amzn2.x86_64 #1 SMP Thu Mar 23 09:54:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.24.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic-fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'

Description

server {

listen 443 ssl default_server;
server_name _;
ssl_certificate /etc/ssl/certs/localhost.crt;
ssl_certificate_key /etc/ssl/private/localhost.key;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:AES128-GCM-SHA256:AES128-SHA256;

# ssl_session_cache none;
# keepalive_timeout 60s;
# ssl_session_timeout 5m;

keepalive_requests 250;
keepalive_timeout 75 75;

client_header_buffer_size 64k;
client_body_buffer_size 64K;
large_client_header_buffers 4 64k;

add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security max-age=15768000 always;
access_log /var/log/nginx/access.log mainattr;
error_log /var/log/nginx/error.log debug;

location / {

proxy_buffering off;
proxy_request_buffering off;
proxy_pass http://sasbackend/;
proxy_http_version 1.1;
proxy_set_header Proxy "";
proxy_set_header Connection "";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}

location /health {

proxy_pass http://sasbackend/v1/HealthCheck;
proxy_http_version 1.1;
proxy_set_header Proxy "";
proxy_set_header Connection "";
proxy_set_header Host $http_host;

}

}

Change History (16)

comment:1 by akhileshdwivedi@…, 20 months ago

I forgot to change the priority; this is actually critical.

comment:2 by akhileshdwivedi@…, 20 months ago

Forgot to change the version; it has been happening in nginx versions 1.12, 1.13, and 1.24.

comment:3 by Maxim Dounin, 20 months ago

Please define "not working". How do you test it? What do you observe? What do you expect to happen instead?

Note well that configuring large_client_header_buffers might not be trivial across multiple virtual servers, see docs. If unsure, a bulletproof approach would be to configure it at the "http" level instead.
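
For illustration, a minimal sketch of the http-level approach (the directive values here are examples only, not a recommendation from this ticket):

    http {
        # Set once at the "http" level so every server block inherits it,
        # instead of relying on a single server{} having the directive.
        large_client_header_buffers 4 64k;

        server {
            listen 443 ssl default_server;
            # ... rest of the virtual server configuration ...
        }
    }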

comment:4 by akhileshdwivedi@…, 20 months ago

Thank you , Maxim

I am sending a REST call to nginx, and when the cookie header size is more than 8k I get a 400 Bad Request error; if it is less than 8k, I get a 200 OK response.

large_client_header_buffers was set to 4 64k, as shown in the file.

Yes, this is not ideal, and the call comes via http, but we need to set it up in the server context, as all the requests are coming to that server context.

comment:5 by Maxim Dounin, 20 months ago (in reply to comment:4)

Replying to akhileshdwivedi@…:

I am sending a REST call to nginx, and when the cookie header size is more than 8k I get a 400 Bad Request error; if it is less than 8k, I get a 200 OK response.

How are you sending it? Can you reproduce it, for example, using curl? What do you see in the nginx logs? All 400 errors are expected to be logged, along with the reason for the error, at the info level.

large_client_header_buffers was set to 4 64k, as shown in the file.

Note that it is set only in the particular server block, which is configured as the default server for port 443 and with server_name _;. As long as a request is processed in a different server block, the value configured in it will be used instead (in most cases).

Yes, this is not ideal, and the call comes via http, but we need to set it up in the server context, as all the requests are coming to that server context.

Things configured at the "http" level are inherited into all server blocks, and this prevents various silly mistakes like not configuring things in a virtual server which is actually being used. Hence the suggestion: configure large_client_header_buffers at the "http" level to see if it works.

From the information provided so far, it looks like you expect that large_client_header_buffers as configured in the default server is going to be used for all requests to the listening socket in question, even for requests actually using other server blocks. This is not the case, see the link above: you have to configure all the server blocks involved (and configuring things at the "http" level is the most trivial way to do it).
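
A hypothetical sketch of the pitfall described above (server names and sizes are made up for illustration; certificate directives omitted for brevity):

    server {
        listen 443 ssl default_server;
        server_name _;
        # Used only for requests actually handled by this default server.
        large_client_header_buffers 4 64k;
    }

    server {
        listen 443 ssl;
        server_name example.com;
        # No large_client_header_buffers here, so requests routed to this
        # virtual server fall back to the built-in default of 4 8k.
    }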

comment:6 by akhileshdwivedi@…, 20 months ago

I am sending it via a curl request. I have removed large_client_header_buffers from everywhere else and put everything under "http", but still no luck.



user nginx nginx;
worker_processes auto;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;
worker_rlimit_nofile 200000;
events {

worker_connections 4096;

}

http {

default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/mime.types;
log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "request_time=$request_time" "request_length=$request_length" "http_x_forwarded_for=$http_x_forwarded_for" "proxy_protocol_addr=$proxy_protocol_addr" "jk_authid=$http_jk_authid" "jk_tid=$http_jk_tid"';

log_format json '{"remote_addr": "$remote_addr", "remote_user": "$remote_user", "time_local": "$time_local", "request": "$request", "status": "$status", "body_bytes_sent": "$body_bytes_sent", "http_referer": "$http_referer", "http_user_agent": "$http_user_agent", "request_time": "$request_time", "request_length": "$request_length", "http_x_forwarded_for": "$http_x_forwarded_for", "proxy_protocol_addr": "$proxy_protocol_addr", "jk_authid": "$http_jk_authid", "jk_tid": "$http_intuit_tid"}';

access_log /var/log/nginx/access.log main;
index index.html index.php;


client_header_buffer_size 64k;
client_body_buffer_size 64K;
large_client_header_buffers 4 64k;

keepalive_requests 75;
keepalive_timeout 5 5;
tcp_nopush on;
tcp_nodelay on;
sendfile on;
ignore_invalid_headers off;
underscores_in_headers on;
client_body_timeout 10s;
client_header_timeout 10s;
send_timeout 10s;
server_tokens off;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security max-age=15768000 always;
proxy_set_header Proxy "";
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 180m;
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:AES128-GCM-SHA256:AES128-SHA256;
ssl_prefer_server_ciphers on;

ssl_protocols TLSv1.2;
resolver 127.0.0.1 valid=30s;
resolver_timeout 2s;

server {

listen 127.0.0.1:991;

location /server-status {

stub_status on;
access_log off;
allow 127.0.0.1;
deny all;

}

}

}

comment:7 by akhileshdwivedi@…, 20 months ago

This is the file that is included from nginx.conf; it is in the conf.d folder:

log_format mainattr '"flow=ESA"';
log_format mainattrsap '"flow=SAP"';

server {

listen 443 ssl default_server;
server_name _;
ssl_certificate /etc/ssl/certs/localhost.crt;
ssl_certificate_key /etc/ssl/private/localhost.key;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384;

keepalive_requests 250;
keepalive_timeout 75 75;


access_log /var/log/nginx/access.log mainattr;
error_log /var/log/nginx/error.log debug;

location / {

proxy_buffering off;
proxy_request_buffering off;
proxy_pass http://sasbackend/;
proxy_http_version 1.1;
proxy_set_header Proxy "";
proxy_set_header Connection "";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}

location /health {

proxy_pass http://sasbackend/v1/HealthCheck;
proxy_http_version 1.1;
proxy_set_header Proxy "";
proxy_set_header Connection "";
proxy_set_header Host $http_host;

}

}

}

comment:8 by akhileshdwivedi@…, 20 months ago

I can reproduce it using curl. I am only interested in the SSL server block.

comment:9 by akhileshdwivedi@…, 20 months ago

Do you think I have the proper module installed to handle large client buffers? It is part of ngx_http_core_module.

I have not installed the all-modules RPM. Should I install the all-modules RPM?

comment:10 by Maxim Dounin, 20 months ago (in reply to comment:8)

Replying to akhileshdwivedi@…:

I can reproduce it using curl. I am only interested in the SSL server block.

So, how do you run curl, and what does curl show with --verbose? And what is in the error log?

comment:11 by akhileshdwivedi@…, 20 months ago

Below is the curl verbose output.

Note: Unnecessary use of -X or --request, POST is already inferred.

  • Trying 44.226.243.91...
  • TCP_NODELAY set
  • Connected to prf-mdmesa-pub.mdm-preprod.a.jk.com (44.226.243.91) port 443 (#0)
  • Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
  • error setting certificate verify locations, continuing anyway:
  • CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none
  • TLSv1.2 (OUT), TLS handshake, Client hello (1):
  • TLSv1.2 (IN), TLS handshake, Server hello (2):
  • NPN, negotiated HTTP1.1
  • TLSv1.2 (IN), TLS handshake, Certificate (11):
  • TLSv1.2 (IN), TLS handshake, Server key exchange (12):
  • TLSv1.2 (IN), TLS handshake, Server finished (14):
  • TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
  • TLSv1.2 (OUT), TLS change cipher, Client hello (1):
  • TLSv1.2 (OUT), TLS handshake, Unknown (67):
  • TLSv1.2 (OUT), TLS handshake, Finished (20):
  • TLSv1.2 (IN), TLS change cipher, Client hello (1):
  • TLSv1.2 (IN), TLS handshake, Finished (20):
  • SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
  • Server certificate:
  • subject: C=US; ST=California; L=San Diego; O=INC.; CN=*.mdm-preprod.a.jk.com
  • start date: Sep 2 00:00:00 2022 GMT
  • expire date: Oct 3 23:59:59 2023 GMT
  • issuer: C=US; O=DigiCert Inc; CN=DigiCert TLS RSA SHA256 2020 CA1
  • SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.

    POST /v1/StandardizedAddress HTTP/1.1
    Host: prf-mdmesa-pub.mdm-preprod.a.jk.com
    User-Agent: curl/7.50.3
    Authorization:
    Content-Type: application/json
    tid: awsesateste2ecom
    Accept: application/json
Cookie: [removed to keep this short; it is more than 8k]
    Content-Length: 258

  • upload completely sent off: 258 out of 258 bytes

< HTTP/1.1 400 Bad Request
< Date: Fri, 19 May 2023 15:16:42 GMT
< Transfer-Encoding: chunked
< Connection: keep-alive
< Server: nginx
< Strict-Transport-Security: max-age=15768000

Last edited 20 months ago by akhileshdwivedi@…

comment:12 by Maxim Dounin, 20 months ago

And what is in the error log?

And, more specifically, what makes you think that the error was generated by the nginx in question, and not by the backend server?

Note that the response contains Connection: keep-alive, while nginx closes connections when generating 400 errors. Further, there is the Transfer-Encoding: chunked response header (and no response body, but this might be due to incomplete curl output shown), which suggests that the error response isn't the default, as it should be per the configuration shown.

comment:13 by akhileshdwivedi@…, 20 months ago

Below is the request_length: when the request length is more than 8k I get 400, otherwise 200.

Below is the nginx log output for a 400 response:

"remote_address=10.67.228.52"
"remote_user=-" [19/May/2023:08:39:00 -0700]
"request=POST /v1/Address HTTP/1.1"
"request_status=400"
"body_bytes=5"
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36"
"request_time=0.001"
"request_length=8953"
"http_x_forwarded_for=207.207.180.8, 35.92.25.110, 34.215.206.174"
"proxy_protocol_addr=-"
"request_body={\x22addressLine1\x22:\x22235 TIIEIIE St.\x22,\x22addressLine2\x22:\x22502\x22,\x22city\x22:\x22San Francisco\x22,\x22state\x22:\x22CA\x22,\x22zipCode\x22:\x2294103\x22,\x22isocountryCode\x22:\x22US\x22}"
"upstream_connect_time=0.000"
"upstream_header_time=0.001"
"upstream_response_time=0.001"

Last edited 20 months ago by akhileshdwivedi@…

comment:14 by Maxim Dounin, 20 months ago

The entry in question is from the access log, not the error log. Would it be correct to say that there are no entries in the error log corresponding to the problematic requests?

Note well that upstream_response_time=0.001 in the access log also suggests that the 400 response was received from the upstream server, and not generated by nginx itself. Consider logging $upstream_status for some additional information.
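
For example, a possible log format for this check (the format name and the selection of fields are illustrative, not from this ticket):

    # Illustrative only: a log format including $upstream_status, which is
    # empty (logged as "-") when nginx generates the response itself and no
    # upstream server was contacted.
    log_format upstream_dbg '$remote_addr "$request" status=$status '
                            'upstream_status=$upstream_status '
                            'request_length=$request_length';

    # e.g. in the server block under investigation:
    access_log /var/log/nginx/access.log upstream_dbg;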

To further check nginx works as expected, please consider using something like

location / {
    return 200 "";
}

instead of proxying.
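
(If that test location returns 200 for the same oversized request, it would further indicate that the 400 is produced by the upstream server rather than by nginx itself.)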

comment:15 by akhileshdwivedi@…, 19 months ago

Sure, thank you for the pointer, Maxim.

comment:16 by Maxim Dounin, 19 months ago

Resolution: invalid
Status: new → closed

Feedback timeout. As outlined in the comments, the details provided suggest that everything works as expected, and the error is returned by the upstream server.
