Opened 23 months ago
Closed 22 months ago
#2450 closed defect (invalid)
Uploading a large file fails: SSL_write() failed (32: Broken pipe) error
Reported by: Koichi
Priority: critical
Component: documentation
Version: 1.22.x
Keywords: file size
uname -a:
Linux ip-172-31-8-223.ap-northeast-1.compute.internal 5.10.157-139.675.amzn2.aarch64 #1 SMP Thu Dec 8 01:29:03 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
nginx -V:
nginx version: nginx/1.22.0
built by gcc 7.3.1 20180712 (Red Hat 7.3.1-15) (GCC)
built with OpenSSL 1.1.1g FIPS 21 Apr 2020
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-compat --with-debug --with-file-aio --with-google_perftools_module --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_degradation_module --with-http_flv_module --with-http_geoip_module=dynamic --with-stream_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_mp4_module --with-http_perl_module=dynamic --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_xslt_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-threads --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -moutline-atomics -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E'
Description (last modified)
Because the ticket system rejected my post with the error "Maximum number of external links per post exceeded", I have replaced "https" with "websiete" throughout this report.

I have a problem with nginx: when I upload a large file (more than 800 MB), I get this error:
2023/02/07 20:53:54 [error] 27686#27686: *15265 SSL_write() failed (32: Broken pipe) while sending request to upstream, client: xxx.xxx.xxx.xxx, server: mysite.xyz, request: "POST /management-upload HTTP/1.1", upstream: "websiete://127.0.0.1:9993/management-upload", host: "mysite.xyz", referrer: "websiete://mysite.xyz/console/index.html"
2023/02/07 20:53:54 [debug] 27686#27686: *15265 chain writer out: FFFFFFFFFFFFFFFF
2023/02/07 20:53:54 [debug] 27686#27686: *15265 http next upstream, 2
2023/02/07 20:53:54 [debug] 27686#27686: *15265 free rr peer 1 4
2023/02/07 20:53:54 [debug] 27686#27686: *15265 finalize http upstream request: 502
2023/02/07 20:53:54 [debug] 27686#27686: *15265 finalize http proxy request
2023/02/07 20:53:54 [debug] 27686#27686: *15265 SSL_shutdown: 1
2023/02/07 20:53:54 [debug] 27686#27686: *15265 close http upstream connection: 15
2023/02/07 20:53:54 [debug] 27686#27686: *15265 free: 0000AAAAF61E0130
2023/02/07 20:53:54 [debug] 27686#27686: *15265 free: 0000AAAAF56026A0
2023/02/07 20:53:54 [debug] 27686#27686: *15265 free: 0000AAAAF55E8AF0
2023/02/07 20:53:54 [debug] 27686#27686: *15265 free: 0000AAAAF5612160, unused: 0
2023/02/07 20:53:54 [debug] 27686#27686: *15265 event timer del: 15: 812062421
2023/02/07 20:53:54 [debug] 27686#27686: *15265 reusable connection: 0
2023/02/07 20:53:54 [debug] 27686#27686: *15265 http finalize request: 502, "/management-upload?" a:1, c:1
2023/02/07 20:53:54 [debug] 27686#27686: *15265 http special response: 502, "/management-upload?"
2023/02/07 20:53:54 [debug] 27686#27686: *15265 HTTP/1.1 502 Bad Gateway
Server: nginx/1.22.0
Date: Tue, 07 Feb 2023 11:53:54 GMT
Content-Type: text/html
Content-Length: 559
Connection: keep-alive
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cat /etc/os-release
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="websiete://amazonlinux.com/"
cat nginx.conf

# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log debug;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 120;
    types_hash_max_size 4096;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    upstream backend {
        server localhost:8080;
    }

    #upstream mgtcon {
    #    server localhost:9990;
    #}

    upstream mgtcon {
        server localhost:9993;
    }

    server {
        listen 80;
        server_name mysite.xyz;
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";

        location ^~ /.well-known {
            root /usr/share/nginx/html;
        }

        location / {
            return 301 websiete://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name mysite.xyz;
        client_max_body_size 1000m;

        ssl_session_timeout 1d;
        ssl_session_cache shared:SSL:10m;
        ssl_session_tickets off;
        ssl_buffer_size 3000k;
        ssl_dhparam /etc/nginx/ssl/dhparam.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
        ssl_prefer_server_ciphers on;
        ssl_certificate /etc/letsencrypt/live/mysite.xyz/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/mysite.xyz/privkey.pem; # managed by Certbot

        # root /usr/share/nginx/html;

        location / {
            include conf.d/proxy_headers.conf;
            proxy_pass http://backend/;
        }

        location /management {
            include conf.d/proxy_headers.conf;
            proxy_pass websiete://mgtcon/management;
        }

        location /console {
            include conf.d/proxy_headers.conf;
            proxy_pass websiete://mgtcon/console;
        }

        location /logout {
            include conf.d/proxy_headers.conf;
            proxy_pass websiete://mgtcon/logout;
        }

        location /error {
            include conf.d/proxy_headers.conf;
            proxy_pass websiete://mgtcon;
        }
    }
}
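For large uploads through a reverse proxy, a common tuning is to stream the request body to the upstream instead of buffering it first, and to raise the proxy timeouts. This is only a sketch, not the reporter's configuration: the dedicated `/management-upload` location and the timeout values are illustrative assumptions; the directives themselves (`proxy_request_buffering`, `proxy_send_timeout`, `proxy_read_timeout`) are standard nginx.

```nginx
# Sketch only: possible tuning for the HTTPS server block when proxying
# very large uploads. Values are illustrative, not tested on this setup.
location /management-upload {
    include conf.d/proxy_headers.conf;

    # Stream the request body to the upstream as it arrives, instead of
    # spooling the whole 800 MB body to a temp file first.
    proxy_request_buffering off;

    # Allow the backend more time to consume a large request body.
    proxy_send_timeout 600s;
    proxy_read_timeout 600s;

    proxy_pass websiete://mgtcon/management-upload;
}
```

Note that with `proxy_request_buffering off`, any backend-side size limit is hit while nginx is still writing the body, which is exactly when a "Broken pipe" on `SSL_write()` would appear.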
cat conf.d/proxy_headers.conf
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering on;
proxy_busy_buffers_size 4000k;
proxy_buffer_size 4000k;
proxy_buffers 8 1000k;
#proxy_set_header X-Forwarded-Proto https;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header Front-End-Https on;
add_header Cache-Control no-cache;
client_body_buffer_size 20971520k;
#proxy_request_buffering on;
#add_header Front-End-websiete on;
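One directive in this file stands out: `client_body_buffer_size 20971520k` asks nginx for roughly 20 GB of in-memory buffer per request body. Bodies larger than this buffer are written to a temporary file anyway, so a modest value is normal. A more conservative sketch of the same file (the `1m` value is an illustrative assumption, not a recommendation from the ticket):

```nginx
# Sketch: conservative variant of conf.d/proxy_headers.conf.
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering on;
proxy_buffers 8 1000k;
proxy_buffer_size 4000k;
proxy_busy_buffers_size 4000k;
add_header Front-End-Https on;
add_header Cache-Control no-cache;

# Memory buffer for the request body; larger bodies spill to a temp
# file, so there is no need to size this for the whole upload.
client_body_buffer_size 1m;
```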
WildFly version: WildFly Full 22.0.1.Final (WildFly Core 14.0.1.Final)
Reverse proxy (nginx) => App server (WildFly)
That is my configuration on the nginx server.
Change History (3)
comment:1, 23 months ago

Description: modified (diff)
comment:2, 23 months ago

The error suggests that your backend server closed the connection. Any reason to think this is an issue in nginx rather than a bug and/or deliberate behaviour of the backend server?

comment:3, 22 months ago

Resolution: → invalid
Status: new → closed

Feedback timeout. As previously suggested, this looks like a backend issue.

Note: See TracTickets for help on using tickets.
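If the backend is indeed closing the connection, one likely culprit on the WildFly side is Undertow's per-listener upload limit, `max-post-size` (10 MB by default on application listeners). A sketch of checking and raising it with the WildFly CLI (`jboss-cli.sh`); the resource addresses assume the default standalone configuration (`default-server`, listener named `default`) and may differ on this install. The failing request here targets the management interface on port 9993, which may enforce its own, separate limit.

```
# Sketch: inspect/raise Undertow's upload limit via the WildFly CLI.
/subsystem=undertow/server=default-server/http-listener=default:read-attribute(name=max-post-size)

# Illustrative value: raise the limit to 1 GiB, then reload.
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=max-post-size,value=1073741824)
reload
```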