Opened 16 months ago
Last modified 10 months ago
#2528 new defect
nginx reload with quic reuseport: quic packet rejected rc:-1
| Reported by: |  | Owned by: |  |
|---|---|---|---|
| Priority: | minor | Milestone: |  |
| Component: | http/3 | Version: | 1.25.x |
| Keywords: | listen quic reuseport quic packet rejected rc:-1 | Cc: | murilo.b.andrade@… |
| uname -a: | Linux edg-teb-doc002d 5.4.17-2136.318.7.1.el7uek.x86_64 #2 SMP Thu Apr 13 17:51:11 PDT 2023 x86_64 x86_64 x86_64 GNU/Linux |  |  |

nginx -V:

```
nginx version: nginx/1.25.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)
built with OpenSSL 1.1.1u 30 May 2023
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_v3_module --with-http_realip_module --with-http_image_filter_module --with-http_sub_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_secure_link_module --with-http_stub_status_module --with-http_perl_module --with-http_slice_module --with-http_auth_request_module --with-pcre --without-pcre2 --without-mail_pop3_module --without-mail_smtp_module --without-mail_imap_module --with-google_perftools_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -Wno-sign-compare -DAZN_NGX_HTTP_UPSTREAM_RESOLVER -I/usr/local/include -O0 -g' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E -L/usr/local/lib -Wl,-rpath,/usr/local/lib -Wl,-rpath,/usr/lib64/'
```
Description
I observed client timeouts on HTTP/3 requests while nginx is recycling its workers (`nginx -s reload`).
Requirements to reproduce the issue
- A worker in "shutting down" status;
- A large dummy file on the server;
- A slow `curl` request to keep one worker busy;
- A SIGHUP signal (`nginx -s reload`);
- An HTTP/3 client, see below (this is the HTTP/3 client that I used).
Debug log evidence
```
2023/08/09 01:57:58 [debug] 22714#0: quic recvmsg on 0.0.0.0:443, ready: 0
2023/08/09 01:57:58 [debug] 22714#0: posix_memalign: 000055C8A015DFC0:512 @16
2023/08/09 01:57:58 [debug] 22714#0: malloc: 000055C8A015E1D0:1252
2023/08/09 01:57:58 [debug] 22714#0: *52 quic recvmsg: 34.95.175.91:52394 fd:10 n:1252
2023/08/09 01:57:58 [debug] 22714#0: *52 http3 init session
2023/08/09 01:57:58 [debug] 22714#0: *52 posix_memalign: 000055C8A015E6C0:512 @16
2023/08/09 01:57:58 [debug] 22714#0: *52 add cleanup: 000055C8A015E848
2023/08/09 01:57:58 [debug] 22714#0: *52 event timer add: 10: 60000:5817468822
2023/08/09 01:57:58 [debug] 22714#0: *52 quic run
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet rx long flags:c3 version:1
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet rx init len:1219
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet rx dcid len:19 206bd91956f15bbff8467934f240527fce6522
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet rx scid len:4 65133304
2023/08/09 01:57:58 [debug] 22714#0: *52 quic address validation token len:0
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet done rc:-1 level:init decr:0 pn:0 perr:0
2023/08/09 01:57:58 [debug] 22714#0: *52 quic packet rejected rc:-1, cleanup connection
2023/08/09 01:57:58 [debug] 22714#0: *52 reusable connection: 0
2023/08/09 01:57:58 [debug] 22714#0: *52 run cleanup: 000055C8A015E848
2023/08/09 01:57:58 [debug] 22714#0: *52 event timer del: -1: 5817468822
2023/08/09 01:57:58 [debug] 22714#0: *52 free: 000055C8A015E1D0
2023/08/09 01:57:58 [debug] 22714#0: *52 free: 000055C8A015DFC0, unused: 16
2023/08/09 01:57:58 [debug] 22714#0: *52 free: 000055C8A015E6C0, unused: 96
```
Attachments (2)
Change History (6)
by , 16 months ago
Attachment: debug_error.log.zip added
comment:1 by , 16 months ago
This is my nginx.conf
```nginx
worker_processes 2;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    # HTTPS server
    server {
        error_log /usr/share/nginx/logs/debug_error.log debug;

        listen 443 ssl;
        listen 443 quic reuseport;
        http2 on;
        server_name edgtebdoc002d_domain_com;

        ssl_certificate cert.pem;
        ssl_certificate_key cert.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        add_header "X-W" $pid;

        location / {
            root html;
            index index.html index.htm;
        }
    }
}
```
(This is the http3 client that I used):
```go
// go-quic.go
package main

import (
	"crypto/tls"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"time"

	//"github.com/quic-go/quic-go"
	"github.com/quic-go/quic-go/http3"
)

func main() {
	// Create a TLS configuration that skips certificate verification
	tlsConfig := &tls.Config{
		InsecureSkipVerify: true,
	}

	//quicConfig := &quic.Config{
	//	HandshakeIdleTimeout: 10 * time.Second,
	//	KeepAlivePeriod:      15 * time.Second,
	//}

	// Create a QUIC round tripper with the TLS configuration
	quicRoundTripper := &http3.RoundTripper{
		TLSClientConfig: tlsConfig,
		//QuicConfig: quicConfig,
	}

	// Create an HTTP client with the QUIC round tripper
	client := &http.Client{
		Transport: quicRoundTripper,
		Timeout:   20 * time.Second,
	}

	start := time.Now().UTC()
	resp, err := client.Head("https://edgtebdoc002d_domain_com/?testa=murilo20230802")
	duration := time.Now().UTC().Sub(start).Milliseconds()
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Read and print the response body
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	for header, value := range resp.Header {
		if header == "X-Request" {
			fmt.Println(value)
		}
		if header == "X-W" {
			fmt.Println("worker_pid: ", value)
		}
	}
	fmt.Println("duration: ", duration)
	fmt.Println(string(body))
}
```
My simulation log
On the nginx server, let's create the dummy file in the html server folder, like this:

```
# to create a 10GB file
sudo dd if=/dev/random of=/usr/share/nginx/html/random_bit.bin bs=10000 count=1000000 status=progress
```
Let's start nginx on the server like this:

```
sudo nginx -c /etc/nginx/nginx.conf.default
sudo ps -f -C nginx
UID        PID  PPID  C STIME TTY          TIME CMD
root     22712     1  0 01:56 ?        00:00:00 nginx: master process nginx -c /etc/nginx/nginx.conf.default
nginx    22713 22712  0 01:56 ?        00:00:00 nginx: worker process
nginx    22714 22712  0 01:56 ?        00:00:00 nginx: worker process
```
Let's make a slow HTTP request from inside the server, just to keep one worker process busy for a long time while we send the SIGHUP signal:
curl -v --limit-rate 100k localhost/random_bit.bin -o /dev/null
On our client machine, let's make some HTTP/3 requests:
while true; do date && ./go-quic; done
On the nginx server, we can send the SIGHUP signal to the master process:

```
sudo nginx -c /etc/nginx/nginx.conf.default -s reload
sudo ps -f -C nginx
UID        PID  PPID  C STIME TTY          TIME CMD
root     22712     1  0 01:56 ?        00:00:00 nginx: master process nginx -c /etc/nginx/nginx.conf.default
nginx    22713 22712  0 01:56 ?        00:00:00 nginx: worker process is shutting down
nginx    22721 22712  0 01:57 ?        00:00:00 nginx: worker process
nginx    22722 22712  0 01:57 ?        00:00:00 nginx: worker process
```
And the timeouts start to happen in the HTTP/3 client:
```
Tue Aug  8 22:57:54 -03 2023
worker_pid:  [22714]
duration:  371
Tue Aug  8 22:57:54 -03 2023
worker_pid:  [22714]
duration:  300
Tue Aug  8 22:57:55 -03 2023
worker_pid:  [22713]
duration:  303
Tue Aug  8 22:57:55 -03 2023
worker_pid:  [22713]
duration:  299
Tue Aug  8 22:57:55 -03 2023
worker_pid:  [22714]
duration:  315
Tue Aug  8 22:57:56 -03 2023
worker_pid:  [22713]
duration:  301
Tue Aug  8 22:57:56 -03 2023
worker_pid:  [22714]
duration:  301
Tue Aug  8 22:57:56 -03 2023
worker_pid:  [22713]
duration:  300
Tue Aug  8 22:57:57 -03 2023
worker_pid:  [22714]
duration:  303
Tue Aug  8 22:57:57 -03 2023
worker_pid:  [22713]
duration:  300
Tue Aug  8 22:57:57 -03 2023
worker_pid:  [22713]
duration:  320
Tue Aug  8 22:57:58 -03 2023
worker_pid:  [22713]
duration:  302
Tue Aug  8 22:57:58 -03 2023
2023/08/08 22:57:58 Head "https://edgtebdoc002d_domain_com/?testa=murilo20230802": Application error 0x100 (remote): connection shutdown
Tue Aug  8 22:57:58 -03 2023
2023/08/08 22:58:03 Head "https://edgtebdoc002d_domain_com/?testa=murilo20230802": timeout: no recent network activity
Tue Aug  8 22:58:03 -03 2023
2023/08/08 22:58:08 Head "https://edgtebdoc002d_domain_com/?testa=murilo20230802": timeout: no recent network activity
Tue Aug  8 22:58:08 -03 2023
2023/08/08 22:58:13 Head "https://edgtebdoc002d_domain_com/?testa=murilo20230802": timeout: no recent network activity
Tue Aug  8 22:58:13 -03 2023
2023/08/08 22:58:18 Head "https://edgtebdoc002d_domain_com/?testa=murilo20230802": timeout: no recent network activity
Tue Aug  8 22:58:18 -03 2023
worker_pid:  [22722]
duration:  3605
Tue Aug  8 22:58:22 -03 2023
2023/08/08 22:58:42 Head "https://edgtebdoc002d_domain_com/?testa=murilo20230802": stream 0 canceled by local with error code 268 (Client.Timeout exceeded while awaiting headers)
Tue Aug  8 22:58:42 -03 2023
worker_pid:  [22722]
duration:  303
Tue Aug  8 22:58:42 -03 2023
worker_pid:  [22721]
duration:  1245
Tue Aug  8 22:58:44 -03 2023
^C
```

:(
comment:2 by , 16 months ago
Currently QUIC does not handle reloads well. Existing connections may fail while two sets of workers are running. This happens because, while reloading, a UDP socket is shared between two workers, old and new. Incoming packets may arrive at the wrong worker. As a result, connections may close and their packets may be rejected.
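To illustrate the failure mode, here is a toy model (not the actual kernel code; all names are illustrative) of how `SO_REUSEPORT` socket selection behaves: the kernel roughly hashes the flow's 4-tuple and takes it modulo the current number of sockets, so when the socket set changes during a reload, an established flow can be remapped to a worker that has no state for that connection, whose packets are then rejected ("quic packet rejected rc:-1"):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pick models SO_REUSEPORT socket selection: hash the flow's 4-tuple
// and take it modulo the number of listening sockets. This is a
// deliberate simplification for illustration only.
func pick(fourTuple string, nSockets int) int {
	h := fnv.New32a()
	h.Write([]byte(fourTuple))
	return int(h.Sum32()) % nSockets
}

func main() {
	flow := "34.95.175.91:52394->0.0.0.0:443"

	before := pick(flow, 2) // two old workers listening
	after := pick(flow, 4)  // during reload: old and new workers share the port

	fmt.Println("socket before reload:", before)
	fmt.Println("socket during reload:", after)
	// Whenever before != after, packets of an established QUIC connection
	// now arrive at a worker holding no state for it.
}
```

The eBPF approach referenced below replaces this stateless modulo selection with an explicit map from QUIC connection IDs to worker sockets, so established connections keep landing on the right worker across a reload.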
I sent out a patchset a while ago which should fix the issue for Linux:
https://mailman.nginx.org/pipermail/nginx-devel/2023-January/thread.html#16073
The patch which fixes the reload issue is 'QUIC: eBPF worker sockets'. You need to enable quic_bpf for this. We would appreciate feedback.
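A minimal sketch of the relevant configuration, assuming the patchset above is applied (the `quic_bpf` directive goes in the main context and is Linux-only; the rest mirrors the reporter's setup):

```nginx
# main context — requires a kernel with eBPF/SOCKHASH support and
# sufficient privileges to load the BPF program at startup
quic_bpf on;

http {
    server {
        listen 443 ssl;
        listen 443 quic reuseport;
        ...
    }
}
```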
comment:3 by , 16 months ago
I am still testing the patches that you mentioned. I tested with `quic_bpf on;` and `off;`.
BPF ON:
The BPF program has apparently loaded:

```
sudo bpftool prog show
7: type 21  tag 2a89f65000aaa25e
	loaded_at 2023-08-18T18:16:04+0000  uid 0
	xlated 1128B  jited 634B  memlock 4096B  map_ids 14,15,13
```
But I started to receive timeouts on my client requests (randomly a success response, but the majority of requests hit a timeout). I don't even need to reload the workers to hit a timeout. I am trying to understand what's happening and why.
BPF OFF:
nginx worked very well, apparently solving the client connection timeouts in all cases (with and without a reload). With this approach I only needed to change nginx.conf from `user nginx;` to `user root;`, which solved the following emerg error:

```
ps -fC nginx
UID        PID  PPID  C STIME TTY          TIME CMD
root     13177     1  0 18:20 ?        00:00:00 nginx: master process /usr/sbin/nginx
nginx    13178 13177  0 18:20 ?        00:00:00 nginx: worker process
nginx    13180 13177  0 18:20 ?        00:00:00 nginx: worker process
nginx    13181 13177  0 18:20 ?        00:00:00 nginx: cache manager process
```

```
2023/08/18 18:21:54 [emerg] 13178#0: bind() for client socket failed (13: Permission denied)
```
Thank you for your attention @Roman
by , 10 months ago
Attachment: capset.diff added
comment:4 by , 10 months ago
I gave this a shot on 1.25.4 and hit the same issue (`bind() for client socket failed (13: Permission denied)`). However, we're not really willing to run workers as root, so I added a patch (capset.diff) to inherit CAP_NET_BIND_SERVICE from the parent.
I also added `AmbientCapabilities=CAP_NET_BIND_SERVICE` to the systemd unit file, but that probably wasn't necessary.
This could probably use some further checks to make sure quic_bpf is enabled before inheriting the permission, but it works for now.
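For reference, the systemd side of this would look like the following excerpt (the drop-in path and unit layout are assumptions; only the `AmbientCapabilities` line comes from the comment above):

```ini
# /etc/systemd/system/nginx.service.d/override.conf (hypothetical drop-in)
[Service]
# Keep CAP_NET_BIND_SERVICE in the ambient capability set so that
# non-root worker processes can bind() the per-client QUIC sockets
AmbientCapabilities=CAP_NET_BIND_SERVICE
```

Ambient capabilities are preserved across execve() for unprivileged processes, which is why they can reach the non-root workers without running the master differently.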