Opened 8 years ago

Closed 8 years ago

Last modified 8 years ago

#1106 closed defect (fixed)

Stale workers not exiting after reload (with HTTP/2 long poll requests)

Reported by: hwyuan@…
Owned by: Valentin Bartenev <vbart@…>
Priority: minor
Milestone:
Component: nginx-core
Version: 1.11.x
Keywords: HTTP2, longpoll, keepalive
Cc:
uname -a: 3.16.0-60-generic 80~14.04.1-Ubuntu
nginx -V: built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
    built with OpenSSL 1.0.2j 26 Sep 2016
    TLS SNI support enabled
    configure arguments: --with-http_ssl_module --with-http_v2_module --with-openssl=/PATH/TO/OPENSSL/openssl-1.0.2j

Description

Nginx stale workers do not exit after a reload when HTTP/2 long-poll requests keep their connections busy.

Based on the commit info in http://hg.nginx.org/nginx/rev/ea284434db0f, connections with active streams are supposed to be closed when nginx is exiting (e.g. after a reload). But for long-poll requests, where the connection can always have active streams, ngx_http_v2_finalize_connection() (and perhaps also ngx_http_v2_handle_connection_handler()) may never be reached, because h2c->processing never drops to zero.

The issue can be reproduced by sending long-poll HTTP/2 requests to Nginx:

  1. Enable HTTP/2 on an Nginx server (a minimal configuration sketch follows this list).
  2. Configure the upstream server to return its response X seconds after receiving the request.
  3. Generate HTTP/2 traffic with a tool such as h2load.
  4. Reload Nginx after some number of requests have been served.
  5. Observe that requests are still being sent to the old worker (add $pid to the access log format to confirm) and that the stale worker never exits.
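
For reference, a minimal configuration along these lines covers step 1; a delayed upstream is assumed to be listening on 127.0.0.1:8080, and the certificate paths, port, and log location are placeholders rather than values from the original report:

# Minimal sketch: terminate TLS + HTTP/2 and proxy to a deliberately slow upstream.
events {}

http {
    # Include the worker pid in the access log so step 5 can be verified.
    log_format h2test '$remote_addr [$time_local] "$request" $status pid=$pid';

    server {
        listen 443 ssl http2;

        ssl_certificate     /path/to/cert.pem;
        ssl_certificate_key /path/to/key.pem;

        access_log logs/h2test.access.log h2test;

        location / {
            proxy_pass         http://127.0.0.1:8080;
            proxy_read_timeout 120s;    # longer than the upstream's artificial delay
        }
    }
}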

Change History (6)

comment:1 by Alexey Ivanov, 8 years ago

You can reproduce that easily by running h2load with 2+ concurrent clients against the following upstream:

package main

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Simulate a long-poll backend: hold every request for 60 seconds
        // before responding.
        time.Sleep(60 * time.Second)
        fmt.Fprintf(w, "Hello, longpoll")
    })

    log.Fatal(http.ListenAndServe(":8080", nil))
}

If you reload nginx during that test, the old worker will never exit.
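
For reference, an invocation along these lines keeps enough long-poll streams in flight; the host name and flag values are placeholders:

# A few client connections with several concurrent streams each.
h2load -c 4 -m 10 -n 400 https://nginx.example.test/

# While h2load is running, reload nginx from another terminal:
nginx -s reload

# The pre-reload worker keeps accepting new streams and never exits:
ps -o pid,args -C nginx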

comment:2 by Alexey Ivanov, 8 years ago

For now we are working around that problem by sending a GOAWAY frame (via ngx_http_v2_send_goaway) to active connections during shutdown. Not sure whether that is the proper solution to the problem, though.

comment:3 by Valentin V. Bartenev, 8 years ago

Status: new → accepted

comment:4 by Valentin V. Bartenev, 8 years ago

Please try the following patch:

diff -r 7cdf69d012f0 src/http/v2/ngx_http_v2.c
--- a/src/http/v2/ngx_http_v2.c Tue Oct 11 18:03:01 2016 +0300
+++ b/src/http/v2/ngx_http_v2.c Mon Oct 17 20:08:45 2016 +0300
@@ -136,6 +136,8 @@ static ngx_int_t ngx_http_v2_send_window
     ngx_uint_t sid, size_t window);
 static ngx_int_t ngx_http_v2_send_rst_stream(ngx_http_v2_connection_t *h2c,
     ngx_uint_t sid, ngx_uint_t status);
+static ngx_int_t ngx_http_v2_send_goaway(ngx_http_v2_connection_t *h2c,
+    ngx_uint_t status);
 
 static ngx_http_v2_out_frame_t *ngx_http_v2_get_frame(
     ngx_http_v2_connection_t *h2c, size_t length, ngx_uint_t type,
@@ -293,6 +295,8 @@ ngx_http_v2_init(ngx_event_t *rev)
     rev->handler = ngx_http_v2_read_handler;
     c->write->handler = ngx_http_v2_write_handler;
 
+    c->idle = 1;
+
     ngx_http_v2_read_handler(rev);
 }
 
@@ -320,6 +324,25 @@ ngx_http_v2_read_handler(ngx_event_t *re
 
     h2c->blocked = 1;
 
+    if (c->close) {
+        c->close = 0;
+
+        if (ngx_http_v2_send_goaway(h2c, NGX_HTTP_V2_NO_ERROR) == NGX_ERROR) {
+            ngx_http_v2_finalize_connection(h2c, 0);
+            return;
+        }
+
+        if (ngx_http_v2_send_output_queue(h2c) == NGX_ERROR) {
+            ngx_http_v2_finalize_connection(h2c, 0);
+            return;
+        }
+
+        h2c->goaway = 1;
+        h2c->blocked = 0;
+
+        return;
+    }
+
     h2mcf = ngx_http_get_module_main_conf(h2c->http_connection->conf_ctx,
                                           ngx_http_v2_module);
 
@@ -633,6 +656,11 @@ ngx_http_v2_handle_connection(ngx_http_v
         /* rc == NGX_OK */
     }
 
+    if (h2c->goaway) {
+        ngx_http_close_connection(c);
+        return;
+    }
+
     h2scf = ngx_http_get_module_srv_conf(h2c->http_connection->conf_ctx,
                                          ngx_http_v2_module);
     if (h2c->state.incomplete) {
@@ -640,11 +668,6 @@ ngx_http_v2_handle_connection(ngx_http_v
         return;
     }
 
-    if (ngx_terminate || ngx_exiting) {
-        ngx_http_v2_finalize_connection(h2c, NGX_HTTP_V2_NO_ERROR);
-        return;
-    }
-
     ngx_destroy_pool(h2c->pool);
 
     h2c->pool = NULL;
@@ -658,7 +681,6 @@ ngx_http_v2_handle_connection(ngx_http_v
 #endif
 
     c->destroyed = 1;
-    c->idle = 1;
     ngx_reusable_connection(c, 1);
 
     c->write->handler = ngx_http_empty_handler;
@@ -1027,6 +1049,12 @@ ngx_http_v2_state_headers(ngx_http_v2_co
         return ngx_http_v2_connection_error(h2c, NGX_HTTP_V2_SIZE_ERROR);
     }
 
+    if (h2c->goaway) {
+        ngx_log_debug0(NGX_LOG_DEBUG_HTTP, h2c->connection->log, 0,
+                       "skipping HEADERS frame");
+        return ngx_http_v2_state_skip(h2c, pos, end);
+    }
+
     if ((size_t) (end - pos) < size) {
         return ngx_http_v2_state_save(h2c, pos, end,
                                       ngx_http_v2_state_headers);
@@ -4162,7 +4190,6 @@ ngx_http_v2_idle_handler(ngx_event_t *re
 #endif
 
     c->destroyed = 0;
-    c->idle = 0;
     ngx_reusable_connection(c, 0);
 
     h2scf = ngx_http_get_module_srv_conf(h2c->http_connection->conf_ctx,
@@ -4197,8 +4224,10 @@ ngx_http_v2_finalize_connection(ngx_http
 
     h2c->blocked = 1;
 
-    if (!c->error && ngx_http_v2_send_goaway(h2c, status) != NGX_ERROR) {
-        (void) ngx_http_v2_send_output_queue(h2c);
+    if (!c->error && !h2c->goaway) {
+        if (ngx_http_v2_send_goaway(h2c, status) != NGX_ERROR) {
+            (void) ngx_http_v2_send_output_queue(h2c);
+        }
     }
 
     c->error = 1;
diff -r 7cdf69d012f0 src/http/v2/ngx_http_v2.h
--- a/src/http/v2/ngx_http_v2.h Tue Oct 11 18:03:01 2016 +0300
+++ b/src/http/v2/ngx_http_v2.h Mon Oct 17 20:08:45 2016 +0300
@@ -146,6 +146,7 @@ struct ngx_http_v2_connection_s {
     unsigned                         closed_nodes:8;
     unsigned                         settings_ack:1;
     unsigned                         blocked:1;
+    unsigned                         goaway:1;
 };
 
 
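To try it, assuming the diff is saved as h2-goaway.patch (an arbitrary file name) in the nginx source root, it can be applied and built with the reporter's configure arguments roughly like this:

patch -p1 < h2-goaway.patch
./configure --with-http_ssl_module --with-http_v2_module \
            --with-openssl=/PATH/TO/OPENSSL/openssl-1.0.2j
make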

comment:5 by Valentin Bartenev <vbart@…>, 8 years ago

Owner: set to Valentin Bartenev <vbart@…>
Resolution: fixed
Status: accepted → closed

In 6778:5e95b9fb33b7/nginx:

HTTP/2: graceful shutdown of active connections (closes #1106).

Previously, while shutting down gracefully, the HTTP/2 connections were
closed in transition to idle state after all active streams have been
processed. That might never happen if the client continued opening new
streams.

Now, nginx sends GOAWAY to all HTTP/2 connections and ignores further
attempts to open new streams. A worker process will quit as soon as
processing of already opened streams is finished.
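
With a build containing this changeset, the old worker from the reproduction above should re-title itself and exit once its in-flight 60-second responses have been sent; a rough way to watch for that (generic commands, not taken from the ticket):

# Gracefully shutting-down workers appear as "worker process is shutting down"
# and disappear once their remaining streams finish.
nginx -s reload
watch 'ps -o pid,args -C nginx'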

comment:6 by Valentin Bartenev <vbart@…>, 8 years ago

In 6889:09cf90250844/nginx:

HTTP/2: graceful shutdown of active connections (closes #1106).

Previously, while shutting down gracefully, the HTTP/2 connections were
closed in transition to idle state after all active streams have been
processed. That might never happen if the client continued opening new
streams.

Now, nginx sends GOAWAY to all HTTP/2 connections and ignores further
attempts to open new streams. A worker process will quit as soon as
processing of already opened streams is finished.
