Opened 4 weeks ago

Closed 18 hours ago

#2403 closed defect (fixed)

HTTP request gets no response when configured with 'master_process off' and 'listen ... reuseport'

Reported by: bullerdu@…
Owned by:
Priority: minor
Milestone:
Component: documentation
Version: 1.23.x
Keywords: reuseport, master_process, worker_processes
Cc:
uname -a: Linux cdn-dev011164234021.na61 4.19.91-008.ali4000.alios7.x86_64 #1 SMP Fri Sep 4 17:33:26 CST 2020 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.23.2
built by gcc 6.5.1 20220324 (GCC)
configure arguments: --prefix=/home/yefei.dyf/nginx

Description

With the following configuration, an HTTP request receives no response.

user  root;

master_process              off;
worker_processes            4;
worker_cpu_affinity         auto;

error_log  logs/error.log  debug;

pid        logs/nginx.pid;

#load_module modules/ngx_http_image_filter_module.so;

events {
    use                 epoll;
    accept_mutex        off;
    worker_connections  102400;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen 80 reuseport;

        location / {
            return 200 "ok";
        }
    }
}

The result: the request hangs with no response.

$ curl -sv 'localhost:80/'
* About to connect() to localhost port 80 (#0)
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost
> Accept: */*
>
^C

The fault occurs because, with 'listen 80 reuseport;', nginx clones the listening socket for each of the configured worker_processes even in master_process off mode, where only a single process runs and only one of the sockets is ever accepted on.
My question is: since master_process off conflicts with worker_processes, why does nginx not disable worker_processes (i.e. force a single process) in master off mode?

A patch that solves this configuration problem is listed below.

diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c
index fe729a78..ab210dd8 100644
--- a/src/core/ngx_connection.c
+++ b/src/core/ngx_connection.c
@@ -112,6 +112,10 @@ ngx_clone_listening(ngx_cycle_t *cycle, ngx_listening_t *ls)

     ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module);

+    if (ccf->master == 0) {
+        return NGX_OK;
+    }
+
     for (n = 1; n < ccf->worker_processes; n++) {

         /* create a socket for each worker process */

Change History (3)

comment:1 by Maxim Dounin, 4 weeks ago

My question is, since master_process off conflicts with worker_processes, why does nginx not disable worker_processes to configure multiple processes in master off mode

Forcing worker_processes to 1 in the case of master_process off; might solve some minor issues like this one, though in general it is not enough to make nginx work without a master process.

Note well that the master_process off; mode is intended for nginx development and implies an understanding of various limitations; see the docs. It is expected that some features simply won't work or will break things, up to segmentation faults in some cases (see ticket #945 for an example).
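Following that note, a development configuration that avoids the problem keeps a single worker; a minimal sketch (assuming the same server block as in the report):

```nginx
# Development sketch: with master_process off, keep worker_processes at 1
# so the listening socket is not cloned (additional workers would not be
# spawned without a master process anyway).
master_process     off;
worker_processes   1;

events {
    worker_connections  1024;
}

http {
    server {
        listen 80 reuseport;    # harmless with a single listening socket

        location / {
            return 200 "ok";
        }
    }
}
```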

A patch that solves this configuration problem is listed below.

Thanks for the patch. I've submitted a slightly different one for review.

comment:2 by Maxim Dounin <mdounin@…>, 29 hours ago

In 8105:09463dd9c504/nginx:

Disabled cloning of sockets without master process (ticket #2403).

Cloning of listening sockets for each worker process does not make sense
when working without master process, and causes some of the connections
not to be accepted if worker_processes is set to more than one and there
are listening sockets configured with the reuseport flag. Fix is to
disable cloning when master process is disabled.

comment:3 by Maxim Dounin, 18 hours ago

Resolution: fixed
Status: new → closed

Fix committed, thanks for reporting this.

Note: See TracTickets for help on using tickets.