#2445 closed defect (worksforme)

Nginx event "blocking" in H2

Reported by: zw-byte@…
Owned by:
Priority: minor
Milestone:
Component: documentation
Version: 1.18.x
Keywords: HTTP2
Cc:
uname -a:
nginx -V: nginx version: nginx/1.18.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)
built with OpenSSL 1.1.1r-dev xx XXX xxxx
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx --with-http_ssl_module --with-http_v2_module --with-openssl=/home/ly257097/openssl

Description (last modified by zw-byte@…)

I found that the nginx event cycle can be "blocked" when using HTTP/2 because of concurrent streams.

For example, when nginx's maximum number of concurrent streams is 128 and the client sends 128 requests at the same time, nginx will receive all 128 requests and process them in a single event callback (the function ngx_http_v2_read_handler). Processing the 128 requests takes about 20 milliseconds, which means that events on other connections will be delayed by 20 milliseconds.

With more concurrent HTTP/2 streams it gets worse. How can this problem be solved? Thank you.

Change History (2)

comment:1 by zw-byte@…, 15 months ago

Description: modified (diff)

comment:2 by Maxim Dounin, 15 months ago

Resolution: worksforme
Status: new → closed

The way nginx works implies that processing events related to particular connections and/or requests can delay processing of other events, as in the case you are observing.

Depending on the configuration, delays can differ. If in your particular configuration the delays are larger than the workload can tolerate, there is a wide range of measures to take, in particular:

  • Improve the software configuration. In particular, make sure to avoid blocking actions, such as in embedded languages. For example, the perl module explicitly recommends avoiding long-running operations. Avoiding disk operations when possible might also be a good idea, especially ones hitting rotating disks, since a single disk seek might consume up to 10ms. Mechanisms such as aio can be used to minimize blocking on disk operations in nginx itself (see the sketch after this list).
  • Improve the hardware used. Switching to a faster server and/or to SSDs might significantly improve operation latency, reducing delays.
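
As an illustration of the aio mechanism mentioned above, a minimal configuration sketch, assuming a thread pool named "disk_pool" and a hypothetical /static/ location:

    # Define a thread pool in the main context (name and size are arbitrary
    # here) so blocking file reads run outside the worker's event loop.
    thread_pool disk_pool threads=16;

    http {
        server {
            listen 443 ssl http2;

            location /static/ {
                # Offload file I/O to the thread pool instead of blocking
                # the event loop on disk reads.
                aio threads=disk_pool;
            }
        }
    }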

If these are not possible or not sufficient, there are limiting mechanisms implemented in nginx that can help to mitigate delays due to load spikes and/or mitigate DoS if that is a concern. In particular, the limit_req directive can be used to delay excessive requests from a single client arriving at the same time.
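
For instance, a minimal sketch, with the zone name, rate, burst, and delay values chosen arbitrarily for illustration:

    http {
        # Track the request rate per client address: 10 requests per second.
        limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

        server {
            location / {
                # Queue up to 20 excessive requests and delay them rather
                # than processing a whole burst at once.
                limit_req zone=perip burst=20 delay=8;
            }
        }
    }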

Note well that if delays are larger than your workload tolerates, it might be a good idea to avoid configuring the number of allowed concurrent streams larger than the default, and instead consider making it smaller.
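
The limit is controlled with the http2_max_concurrent_streams directive (the default is 128); a minimal sketch, with the value 32 picked only as an example:

    server {
        listen 443 ssl http2;

        # Lower the number of concurrent streams per connection so that a
        # single client cannot cause many requests to be processed in one
        # event handler invocation.
        http2_max_concurrent_streams 32;
    }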
