Opened 22 months ago
Closed 22 months ago
#2445 closed defect (worksforme)
Nginx event "blocking" in H2
| Reported by: | | Owned by: | |
|---|---|---|---|
| Priority: | minor | Milestone: | |
| Component: | documentation | Version: | 1.18.x |
| Keywords: | HTTP2 | Cc: | |
| uname -a: | | | |

nginx -V:

    nginx version: nginx/1.18.0
    built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)
    built with OpenSSL 1.1.1r-dev xx XXX xxxx
    TLS SNI support enabled
    configure arguments: --prefix=/usr/local/nginx --with-http_ssl_module --with-http_v2_module --with-openssl=/home/ly257097/openssl
Description
I found that the nginx event cycle can be "blocked" when using H2 because of concurrent streams.
For example, when nginx's maximum number of concurrent streams is 128 and the client sends 128 requests at the same time, nginx receives all 128 requests and processes them in a single event callback (the function ngx_http_v2_read_handler). Processing the 128 requests takes about 20 milliseconds, which means that events on other connections are delayed by 20 milliseconds.
With more concurrent H2 streams it gets worse. How can this problem be solved? Thank you.
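For reference, the per-connection stream limit described above corresponds to the `http2_max_concurrent_streams` directive of ngx_http_v2_module, whose default is 128. A minimal sketch of such a setup, with placeholder server name and certificate paths (not taken from the ticket):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    # Default value: a client may open up to 128 concurrent streams,
    # all of which can arrive and be processed within one read event.
    http2_max_concurrent_streams 128;
}
```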
Change History (2)
comment:1 by , 22 months ago
| Description: | modified (diff) |
|---|---|
comment:2 by , 22 months ago
| Resolution: | → worksforme |
|---|---|
| Status: | new → closed |
The way nginx works means that processing events related to a particular connection and/or request can delay the processing of other events, as in the case you are observing.
Depending on the configuration, the delays can vary. If in your particular configuration the delays are larger than the workload can tolerate, there is a wide range of measures to take.
If these are not possible or not enough, there are limitation mechanisms implemented in nginx which can help to mitigate delays due to load spikes and/or mitigate DoS if that's a concern. In particular, the limit_req directive can be used to delay excessive requests from a single client arriving at the same time.
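A sketch of the `limit_req` mitigation mentioned above; the zone name, rate, and burst values below are illustrative, not recommendations:

```nginx
http {
    # One 10 MB shared zone keyed by client address, 10 requests/second.
    limit_req_zone $binary_remote_addr zone=perclient:10m rate=10r/s;

    server {
        location / {
            # Requests above the rate are delayed (up to the burst size),
            # smoothing out spikes such as many simultaneous streams
            # arriving from one client at the same time.
            limit_req zone=perclient burst=20;
        }
    }
}
```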
Note well that if the delays are larger than your workload tolerates, it might be a good idea not to configure the number of allowed concurrent streams to a value larger than the default; instead, consider making it smaller.
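For example, the limit can be lowered with the `http2_max_concurrent_streams` directive (default 128); the value below is illustrative and should be tuned to the workload:

```nginx
# Fewer streams per connection means less work per read event,
# at the cost of less HTTP/2 concurrency for each client.
http2_max_concurrent_streams 32;
```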