Opened 3 years ago

Closed 3 years ago

#2234 closed defect (invalid)

NGINX 1.19.2 TCP RST/ACK TLSv1.0 Client Hello of Tor Relay ORPort Self-Test in TCP Stream Mode

Reported by: garycnew@… Owned by:
Priority: minor Milestone:
Component: nginx-module Version: 1.19.x
Keywords: NGINX TCP RST ACK TLSv1.0 Tor Relay ORPort Self-Test Stream Cc:
uname -a: Linux gnutech-wap01 2.6.36.4brcmarm #1 SMP PREEMPT Fri Aug 14 15:20:58 EDT 2020 armv7l ASUSWRT-Merlin
nginx -V: nginx version: nginx/1.19.2 (x86_64-pc-linux-gnu)
built with OpenSSL 1.1.1g 21 Apr 2020
TLS SNI support enabled
configure arguments: --target=arm-openwrt-linux --host=arm-openwrt-linux --build=x86_64-pc-linux-gnu --program-prefix= --program-suffix= --prefix=/opt --exec-prefix=/opt --bindir=/opt/bin --sbindir=/opt/sbin --libexecdir=/opt/lib --sysconfdir=/opt/etc --datadir=/opt/share --localstatedir=/opt/var --mandir=/opt/man --infodir=/opt/info --disable-nls --crossbuild=Linux::arm --prefix=/opt --conf-path=/opt/etc/nginx/nginx.conf --with-http_ssl_module --add-module=/media/ware4/Entware.2020.09/build_dir/target-arm_cortex-a9_glibc-2.23_eabi/nginx-ssl/nginx-1.19.2/nginx-naxsi/naxsi_src --add-module=/media/ware4/Entware.2020.09/build_dir/target-arm_cortex-a9_glibc-2.23_eabi/nginx-ssl/nginx-1.19.2/lua-nginx --with-ipv6 --with-http_stub_status_module --with-http_flv_module --with-http_dav_module --add-module=/media/ware4/Entware.2020.09/build_dir/target-arm_cortex-a9_glibc-2.23_eabi/nginx-ssl/nginx-1.19.2/nginx-dav-ext-module --with-http_auth_request_module --with-http_v2_module --with-http_realip_module --with-http_secure_link_module --with-http_sub_module --with-stream --with-stream_ssl_module --with-stream_ssl_preread_module --add-module=/media/ware4/Entware.2020.09/build_dir/target-arm_cortex-a9_glibc-2.23_eabi/nginx-ssl/nginx-1.19.2/nginx-headers-more --add-module=/media/ware4/Entware.2020.09/build_dir/target-arm_cortex-a9_glibc-2.23_eabi/nginx-ssl/nginx-1.19.2/nginx-brotli --add-module=/media/ware4/Entware.2020.09/build_dir/target-arm_cortex-a9_glibc-2.23_eabi/nginx-ssl/nginx-1.19.2/nginx-rtmp --add-module=/media/ware4/Entware.2020.09/build_dir/target-arm_cortex-a9_glibc-2.23_eabi/nginx-ssl/nginx-1.19.2/nginx-ts --error-log-path=/opt/var/log/nginx/error.log --pid-path=/opt/var/run/nginx.pid --lock-path=/opt/var/lock/nginx.lock --http-log-path=/opt/var/log/nginx/access.log --http-client-body-temp-path=/opt/var/lib/nginx/body --http-proxy-temp-path=/opt/var/lib/nginx/proxy --http-fastcgi-temp-path=/opt/var/lib/nginx/fastcgi --with-cc=arm-openwrt-linux-gnueabi-gcc --with-cc-opt='-I/media/ware4/Entware.2020.09/staging_dir/target-arm_cortex-a9_glibc-2.23_eabi/opt/include -I/media/ware4/Entware.2020.09/staging_dir/toolchain-arm_cortex-a9_gcc-8.4.0_glibc-2.23_eabi/include -O2 -pipe -mtune=cortex-a9 -fno-caller-saves -fhonour-copts -Wno-error=unused-but-set-variable -Wno-error=unused-result -mfloat-abi=soft -fvisibility=hidden -ffunction-sections -fdata-sections -DNGX_LUA_NO_BY_LUA_BLOCK' --with-ld-opt='-L/media/ware4/Entware.2020.09/staging_dir/target-arm_cortex-a9_glibc-2.23_eabi/opt/lib -Wl,-rpath,/opt/lib -Wl,-rpath-link=/media/ware4/Entware.2020.09/staging_dir/target-arm_cortex-a9_glibc-2.23_eabi/opt/lib -Wl,--dynamic-linker=/opt/lib/ld-linux.so.3 -L/media/ware4/Entware.2020.09/staging_dir/toolchain-arm_cortex-a9_gcc-8.4.0_glibc-2.23_eabi/lib -Wl,--gc-sections' --without-http_upstream_zone_module --modules-path=/opt/lib/nginx --http-uwsgi-temp-path=/opt/var/lib/nginx/uwsgi --http-scgi-temp-path=/opt/var/lib/nginx/scgi

Description (last modified by garycnew@…)

There appears to be a bug in NGINX 1.19.2, which immediately sends a TCP RST/ACK after receiving a TLSv1.0 Client Hello from a Tor Relay ORPort self-test in TCP stream mode with a single Tor node in the NGINX upstream hash configuration. The same setup works fine with any Tor Relay requests over TLSv1.2 or TLSv1.3.

# cat nginx.conf
user nobody;
worker_processes auto;
worker_rlimit_nofile 7168;

events {
    worker_connections  3584;
}

stream {

    upstream application {
        hash $remote_addr consistent;
        server 192.168.0.21:9001 weight=4 max_fails=1 fail_timeout=10s;
    }

    server {
        listen                        xxx.xxx.xxx.xxx:443;

        proxy_pass                    application;
    }
}
# cat torrc 
Nickname xxxxxxxxxxxxxxxxx
ORPort xxx.xxx.xxx.xxx:443 NoListen
ORPort 192.168.0.21:9001 NoAdvertise
SocksPort 9050
SocksPort 192.168.0.21:9050
ControlPort 9051
ExitRelay 0
DirCache 0
MaxMemInQueues 192 MB
GeoIPFile /opt/share/tor/geoip
Log notice file /tmp/torlog
Log notice syslog
VirtualAddrNetwork 10.192.0.0/10
AutomapHostsOnResolve 1
TransPort 192.168.0.21:9040
DNSPort 192.168.0.21:9053
RunAsDaemon 1
DataDirectory /tmp/tor/torrc.d/.tordb
AvoidDiskWrites 1
User tor
ContactInfo tor-operator@your-emailaddress-domain

Interestingly, an external TLS scan of the NGINX listening port shows that it's capable of TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. However, in the particular scenario described above, NGINX immediately sends a TCP RST/ACK after receiving a TLSv1.0 Client Hello from a Tor Relay ORPort self-test. This has been validated with several packet traces and should be easily reproducible.
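
For reference, per-version support can be spot-checked from the outside with OpenSSL's s_client; the commands below are illustrative (the listening address is redacted, as above):

openssl s_client -connect xxx.xxx.xxx.xxx:443 -tls1
openssl s_client -connect xxx.xxx.xxx.xxx:443 -tls1_2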

I've confirmed that this issue is specific to NGINX by stopping NGINX and configuring a port forward in its place, which is successful.

This is a blocker for a High-Availability Tor Relay implementation using NGINX.

Respectfully,

Gary

P.S. I've confirmed that this issue occurs with HAProxy's TCP stream implementation as well, but we'd prefer to use NGINX.

Change History (17)

comment:1 by garycnew@…, 3 years ago

Description: modified (diff)

comment:2 by Maxim Dounin, 3 years ago

Priority: blocker → minor

There appears to be a bug in NGINX 1.19.2, which immediately sends a TCP RST/ACK after receiving a TLSv1.0 Client Hello

First of all, check the nginx error log. It is expected to contain details about the connection, and may contain additional information, such as errors encountered while handling the connection. Consider configuring the error log to use at least the info level.
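
For example, something like the following in nginx.conf (the log path here is illustrative):

error_log /opt/var/log/nginx/error.log info;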

Note though that in the provided configuration nginx does not handle SSL, but rather forwards everything to the upstream server configured. If the behaviour depends on the TLS protocol version, this is highly unlikely to be something related to nginx. Rather, consider checking how the backend server responds. A tcpdump between nginx and the backend and/or nginx debug log might be helpful to understand what goes on here.
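
For example, something like the following on the nginx host should capture the nginx-to-backend leg (the interface name is an assumption about your setup):

tcpdump -ni br0 -w nginx-to-backend.cap host 192.168.0.21 and port 9001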

Note well that the configure arguments provided suggest that you are using 3rd party patches for cross-compilation. There were multiple reports in the past involving these cross-compilation patches where it was clearly shown that the patches result in incorrect compilation. See #1928 for a recent example involving OpenWrt specifically. If you observe incorrect behaviour with cross-compiled nginx, consider checking whether you observe the same behaviour with native compilation, without any 3rd party patches.

in reply to:  2 comment:3 by garycnew@…, 3 years ago

Hi Maxim!

Thank you for your prompt reply to this ticket.

First of all, check the nginx error log. It is expected to contain details about the connection, and may contain additional information, such as errors encountered while handling the connection. Consider configuring the error log to use at least the info level.

I've been monitoring the nginx error log at its default level (notice), and there aren't any errors at start-up, only the occasional upstream error after running for an hour or more. I'll try raising the log level to info for additional verbosity.

Note though that in the provided configuration nginx does not handle SSL, but rather forwards everything to the upstream server configured. If the behaviour depends on the TLS protocol version, this is highly unlikely to be something related to nginx. Rather, consider checking how the backend server responds. A tcpdump between nginx and the backend and/or nginx debug log might be helpful to understand what goes on here.

As Tor generates a unique self-signed certificate each time it is started or restarted, it is impractical to use nginx's HTTP load balancing; thus, we opted to use nginx's TCP stream load balancing, with the understanding that it should simply proxy the original stream on to the upstream node. Through packet-traces, we've confirmed that nginx performs this successfully for TLSv1.2 and TLSv1.3 connections. The issue is specifically with nginx and TLSv1.0 connections.

You can see in the packet-traces that nginx receives the TLSv1.0 Client Hello, ACKs the client's previous ACK, and then immediately RSTs the connection. With a good TLSv1.2 or TLSv1.3 Client Hello, you can see nginx ACK, then open a connection to the upstream node sending the same Client Hello (same certificate), and successfully establish the connection. The same is not true of TLSv1.0 connections.

I'd be happy to provide the raw, unfiltered packet-traces, but your ticket attachment mechanism only permits maximum file sizes of 250KB, which is unrealistic for packet-traces. Is there another mechanism to provide you with the packet-traces?

Note well that the configure arguments provided suggest that you are using 3rd party patches for cross-compilation. There were multiple reports in the past involving these cross-compilation patches where it was clearly shown that the patches result in incorrect compilation. See #1928 for a recent example involving OpenWrt specifically. If you observe incorrect behaviour with cross-compiled nginx, consider checking whether you observe the same behaviour with native compilation, without any 3rd party patches.

The version of nginx we have installed is from the Entware repository, which is closely related to OpenWRT. If there is an issue with the Entware-compiled version of nginx, I will be willing to concede this point. Presently, given the circumstantial evidence (directly binding and port forwarding to Tor both successfully complete the Tor self-test), I am inclined to believe it is a bug in nginx.

I understand that the burden of proof is upon us to show that this issue is a bug in nginx, as much as it is your burden to show that it is not. All I ask is that you provide the same level of troubleshooting to prove or disprove that this issue is an nginx bug. I assume you have a Sandbox or the like to try and reproduce such issues with a natively compiled nginx implementation?

Please let us know the best way to provide you with our existing packet-traces to document the issue.

Respectfully,

Gary

comment:4 by Maxim Dounin, 3 years ago

I'd be happy to provide the raw, unfiltered packet-traces, but your ticket attachment mechanism only permits maximum file sizes of 250KB, which is unrealistic for packet-traces. Is there another mechanism to provide you with the packet-traces?

256KB is certainly more than enough for a packet trace with a single ClientHello and an RST response. If there is more than that, it might be a good idea to figure out how to isolate the problematic traffic, for example, by making sure the address/port you are looking at is not used for anything but the test. You can, however, use other resources to publish traces. In particular, GitHub might be a good solution.
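
For example, an oversized capture can usually be trimmed down to the relevant connection by re-reading it with a filter; the client address below is a placeholder:

tcpdump -r full.cap -w isolated.cap 'host 203.0.113.1 and port 443'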

I assume you have a Sandbox or the like to try and reproduce such issues with a natively compiled nginx implementation?

Certainly we have our development and testing sandboxes. Unfortunately, your issue description does not contain a way to reproduce the issue in an isolated environment, and the only way to observe it with some probability seems to be to set up a public Tor relay (not something we are familiar with or willing to do for tests, and running a sandboxed Tor relay does not seem to be trivial). And even with a Tor relay it is not clear how to check whether the issue is present. If you can provide a way to reproduce the issue without a public Tor relay, that would be awesome and would greatly simplify testing.

in reply to:  4 comment:5 by garycnew@…, 3 years ago

Maxim,

We have uploaded the nginx-tor-tcpdumps to github as you recommended.

https://github.com/garycnew/nginx-tor-tcpdumps

We created the nginx-tor-tcpdumps raw and unfiltered, capturing a sample of network traffic during the same time period in which the issue is observed, in order to avoid bias and provide a complete picture.

We use Wireshark on the PrimaryRouter packet-trace with a display filter of tls.handshake.type == 1 (Client Hello). We then follow a TCP stream related to a bad TLSv1 or good TLSv1.2/TLSv1.3 Tor Relay connection to identify the ip.addr and tcp.srcport of the connections in question. In the example of a bad TLSv1 connection, you will see that the connection is RST by nginx (198.91.60.78:443). In the example of a good TLSv1.2/TLSv1.3 connection, you will see that a Client Hello with the same self-signed certificate is received by nginx (198.91.60.78:443), and then a second Client Hello follows from the private gateway address (192.168.0.1:[highport]) and is proxied to the upstream node (192.168.0.21:9001). You can then use Wireshark on the UpstreamNode packet-trace with a display filter on the tcp.srcport of the private gateway address (192.168.0.1:[highport]) previously identified in the PrimaryRouter packet-trace, to validate that the connection is successfully established.
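
The Wireshark display filters used, for reference (field names per recent Wireshark; older versions use ssl.handshake.type instead of tls.handshake.type):

tls.handshake.type == 1
ip.addr == 198.91.60.78 && tcp.port == 443
tcp.srcport == [highport identified in the PrimaryRouter trace]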

Certainly we have our development and testing sandboxes. Unfortunately, your issue description does not contain a way to reproduce the issue in an isolated environment, and the only way to observe it with some probability seems to be to set up a public Tor relay (not something we are familiar with or willing to do for tests, and running a sandboxed Tor relay does not seem to be trivial). And even with a Tor relay it is not clear how to check whether the issue is present. If you can provide a way to reproduce the issue without a public Tor relay, that would be awesome and would greatly simplify testing.

To reproduce the issue, you would have to configure a public Tor Relay, for which I have provided the exact torrc configuration we are currently using (identifying information redacted). Configuring a Tor Relay is actually quite simple, especially when provided with a sample torrc configuration.

In the torlog, the following is observed when the Tor Relay is directly bound to 198.91.60.78:443 or port forwarded from 198.91.60.78:443 to 192.168.0.21:9001:

Aug 13 00:26:42.000 [notice] Self-testing indicates your ORPort 198.91.60.78:443 is reachable from the outside. Excellent. Publishing server descriptor.
Aug 13 00:27:49.000 [notice] Performing bandwidth self-test...done.

However, when nginx is bound to 198.91.60.78:443 and proxying to 192.168.0.21:9001, the following results:

Aug 13 01:01:46.000 [notice] Now checking whether IPv4 ORPort 198.91.60.78:443 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Aug 13 01:21:45.000 [warn] Your server has not managed to confirm reachability for its ORPort(s) at 198.91.60.78:443. Relays do not publish descriptors until their ORPort and DirPort are reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.

Hopefully, this provides sufficient detail of the issue, how to observe it in the referenced packet-traces, and the manner in which it can be reproduced.

If you can show that nginx is able to TCP-stream load balance the initial TLSv1 Tor self-test connections to an upstream Tor node, we will concede the issue and troubleshoot further in our environment.

Thank you for your time and assistance.

Respectfully,

Gary

comment:6 by Maxim Dounin, 3 years ago

Resolution: invalid
Status: new → closed

First connection attempt as seen in the nginx-to-backend dump (nginx-tor-self-test-UpstreamNode-192.168.0.1.highport-to-192.168.0.21.9001-raw-unfiltered.cap) is as follows (timestamps in GMT):

10:36:01.568098 IP 192.168.0.1.37050 > 192.168.0.21.9001: Flags [S], seq 403842462, win 5840, options [mss 1460,sackOK,TS val 9517211 ecr 0,nop,wscale 4], length 0
10:36:01.568123 IP 192.168.0.1.37050 > 192.168.0.21.9001: Flags [S], seq 403842462, win 5840, options [mss 1460,sackOK,TS val 9517211 ecr 0,nop,wscale 4], length 0
10:36:01.568237 IP 192.168.0.21.9001 > 192.168.0.1.37050: Flags [R.], seq 0, ack 403842463, win 0, length 0
10:36:01.568259 IP 192.168.0.21.9001 > 192.168.0.1.37050: Flags [R.], seq 0, ack 1, win 0, length 0

Clearly the connection is reset by the backend before any data are sent, immediately after the initial SYN packet. (Also note the duplicate packets; this might be an issue in your network configuration and might be related.)

Based on the timestamps, corresponding connection to nginx (as seen in the nginx-tor-self-test-PrimaryRouter-198.91.60.78.443-to-192.168.0.1.9001-raw-unfiltered.cap file) seems to be the following one:

10:36:01.504365 IP 96.249.235.98.50094 > 198.91.60.78.443: Flags [S], seq 4149382912, win 64240, options [mss 1460,sackOK,TS val 297907124 ecr 0,nop,wscale 7], length 0
10:36:01.504581 IP 198.91.60.78.443 > 96.249.235.98.50094: Flags [S.], seq 405236672, ack 4149382913, win 5792, options [mss 1460,sackOK,TS val 9517204 ecr 297907124,nop,wscale 4], length 0
10:36:01.576774 IP 96.249.235.98.50094 > 198.91.60.78.443: Flags [.], ack 1, win 502, options [nop,nop,TS val 297907198 ecr 9517204], length 0
10:36:01.577774 IP 96.249.235.98.50094 > 198.91.60.78.443: Flags [P.], seq 1:313, ack 1, win 502, options [nop,nop,TS val 297907198 ecr 9517204], length 312
10:36:01.577889 IP 198.91.60.78.443 > 96.249.235.98.50094: Flags [.], ack 313, win 429, options [nop,nop,TS val 9517211 ecr 297907198], length 0
10:36:01.580567 IP 198.91.60.78.443 > 96.249.235.98.50094: Flags [R.], seq 1, ack 313, win 429, options [nop,nop,TS val 9517211 ecr 297907198], length 0

Clearly the connection is reset by nginx because it was reset by the backend server (or the network layer). This does not seem to be an issue in nginx; rather, something is wrong with your configuration and/or backend software.

Since nginx-tor-self-test-PrimaryRouter-198.91.60.78.443-to-192.168.0.1.9001-raw-unfiltered.cap dump seems to contain all the packets to/from the server with nginx, here is the first connection as seen in the dump (starting at the first SYN packet to nginx as seen in the dump), along with intermediate packets to the backend:

$ TZ=GMT tcpdump -nr nginx-tor-self-test-PrimaryRouter-198.91.60.78.443-to-192.168.0.1.9001-raw-unfiltered.cap port 443 or port 9001
...
10:35:59.908888 IP 66.23.202.234.38466 > 198.91.60.78.443: Flags [S], seq 2222559361, win 64240, options [mss 1460,sackOK,TS val 1691223673 ecr 0,nop,wscale 7], length 0
10:35:59.909110 IP 198.91.60.78.443 > 66.23.202.234.38466: Flags [S.], seq 373993965, ack 2222559362, win 5792, options [mss 1460,sackOK,TS val 9517044 ecr 1691223673,nop,wscale 4], length 0
10:35:59.960604 IP 66.23.202.234.38466 > 198.91.60.78.443: Flags [.], ack 1, win 502, options [nop,nop,TS val 1691223725 ecr 9517044], length 0
10:35:59.960813 IP 66.23.202.234.38466 > 198.91.60.78.443: Flags [P.], seq 1:317, ack 1, win 502, options [nop,nop,TS val 1691223725 ecr 9517044], length 316
10:35:59.960951 IP 198.91.60.78.443 > 66.23.202.234.38466: Flags [.], ack 317, win 429, options [nop,nop,TS val 9517050 ecr 1691223725], length 0
10:35:59.961212 IP 192.168.0.1.37043 > 192.168.0.21.9001: Flags [S], seq 375859539, win 5840, options [mss 1460,sackOK,TS val 9517050 ecr 0,nop,wscale 4], length 0
10:35:59.961247 IP 192.168.0.1.37043 > 192.168.0.21.9001: Flags [S], seq 375859539, win 5840, options [mss 1460,sackOK,TS val 9517050 ecr 0,nop,wscale 4], length 0
10:35:59.961500 IP 192.168.0.21.9001 > 192.168.0.1.37043: Flags [R.], seq 0, ack 375859540, win 0, length 0
10:35:59.961500 IP 192.168.0.21.9001 > 192.168.0.1.37043: Flags [R.], seq 0, ack 1, win 0, length 0
10:35:59.961866 IP 198.91.60.78.443 > 66.23.202.234.38466: Flags [R.], seq 1, ack 317, win 429, options [nop,nop,TS val 9517050 ecr 1691223725], length 0
...

Again, clearly the connection to the backend server is reset by the backend server, and so nginx resets the connection with the client. This does not seem to be an nginx issue either.

Note well, in no particular order:

  • There is a lot of completely unrelated traffic in the dumps provided, such as SSH (port 22) traffic. Consider using relevant tcpdump expressions when dumping traffic, such as port 443 if you want to dump traffic to nginx and port 9001 when dumping traffic from nginx to the backend. To dump both client-to-nginx and nginx-to-backend traffic on the nginx server something like port 443 or port 9001 would be enough.
  • There seems to be a lot of unrelated traffic even on the ports used for tests. You may want to stop other traffic to the server when testing configuration, or at least use dedicated ports for tests.
  • When the backend resets the connection as in the above trace, there should be an error in the nginx logs at the "error" level, similar to the following:
    2021/08/24 17:44:30 [error] 18584#18584: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1:443, upstream: "127.0.0.1:9001", bytes from/to client:0/0, bytes from/to upstream:0/0
    

If you are not seeing such errors, this might indicate that you are looking at the wrong log.

Hope this helps.

Closing this, as nginx clearly behaves as it should in all the connections examined.

in reply to:  6 comment:7 by garycnew@…, 3 years ago

Maxim,

I believe this ticket was closed prematurely. The resets you referenced are due to the Tor application not yet being fully started. Moreover, you'll notice that the same string of resets subsides after the syslog packets show that the Tor application has finished starting. I've confirmed the same by increasing the Nginx log level to info.

The test is initiated by starting a tcpdump on the Primary Router, then starting a tcpdump on the Upstream Node, then starting Nginx on the Primary Router, and finally starting the Tor Application on the Upstream Node.

Again... The Tor Application works without issue when bound to the Public IP Address (198.91.60.78:443) or PortForwarded from 198.91.60.78:443 => 192.168.0.21:9001 with the following iptables rules:

iptables -A VSERVER -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.0.21:9001
iptables -I INPUT -p tcp --dport 9001 -j ACCEPT

It is only when Nginx is bound to 198.91.60.78:443 and TCP load balancing to the upstream node 192.168.0.21:9001 that the Tor self-check fails.

Are there any requirements for configuring the O/S when installing Nginx?

Respectfully,

Gary

comment:8 by garycnew@…, 3 years ago

Resolution: invalid
Status: closed → reopened

comment:9 by Maxim Dounin, 3 years ago

I believe this ticket was closed prematurely. The resets you referenced are due to the Tor application not yet being fully started. Moreover, you'll notice that the same string of resets subsides after the syslog packets show that the Tor application has finished starting.

Connection dumps shown above clearly demonstrate that nginx behaves as expected. If you think that these dumps are irrelevant and some different connections were rejected incorrectly, don't hesitate to show relevant dumps (and/or nginx debug logs) as originally suggested.

comment:10 by garycnew@…, 3 years ago

Maxim,

Is it possible to escalate this ticket to someone at Nginx who will take a deeper look into this issue? There is "clearly" a compatibility issue between Nginx and Tor Relays. It's one thing to actually attempt to reproduce the issue within Nginx's test environment, and another to take a cursory look at a couple of packet-captures and declare Nginx to be behaving as expected. The former proves Nginx/Tor compatibility, while the latter relies on circumstantial evidence. If Nginx is behaving as expected, perhaps the expected behavior is incompatibility with Tor.

Thank you for escalating this ticket.

Regards,

Gary

comment:11 by Maxim Dounin, 3 years ago

Resolution: invalid
Status: reopened → closed

Thanks for the feedback. So, to summarise, the following facts are known:

  • The Tor Relay self-test does not pass in your particular configuration, which involves multiple hosts, a complex network configuration, and proxying through nginx's stream module without any processing.
  • The same issue appears with HAProxy.
  • The network configuration is known to be broken and to produce duplicate packets.
  • The connection traces provided demonstrate that connections to the backend server (Tor Relay) are reset by the backend server. No other traces were provided.

As previously suggested, this does not seem to be a bug in nginx. Rather, this looks like an issue in your configuration and/or in Tor Relay. If you need help to find out what goes wrong in your configuration, consider asking for help in the mailing list.

comment:12 by garycnew@…, 3 years ago

Resolution: invalid
Status: closed → reopened

Maxim,

Points omitted from your facts-summary:

The test environment is simplified to a server running Nginx (Basic TCP Stream Config) and a host running Tor Relay (Basic Tor Relay Config). We have disabled all firewalls, so it is as vanilla as you can get.

The Tor Relay Self-Test successfully works in the same environment when directly bound to the Public IP Address or PortForwarded. However, it fails when Nginx TCP Streaming is substituted.

The connection traces provided demonstrate RSTs prior to the Tor Relay being fully started, and then normal upstream activity.

It is unknown whether there is an incompatibility issue between Nginx and Tor, as verification has never been performed within an Nginx-approved test environment to validate such compatibility.

We appreciate the mailing-list reference, but we would like this ticket escalated.

Thank you for your time and assistance.

Regards,

Gary

comment:13 by Maxim Dounin, 3 years ago

Resolution: invalid
Status: reopened → closed

Sorry, this is not how it works. If you need help with a configuration which doesn't work for some unknown reason, Trac is the wrong place to ask for help; it is for reporting bugs. To ask for help, use the mailing list instead. If additional details demonstrating that there is indeed a bug in nginx become available, this ticket can be reopened with those details.

comment:14 by mikhail.isachenkov, 3 years ago

Gary,

We are unable to reproduce this issue with official nginx and Tor packages. Unfortunately, we don't have OpenWRT in our infrastructure to build both nginx and Tor from sources and do further checking.

It's highly unlikely that the issue is related to nginx, so I'd recommend completely excluding Tor from the picture and using nginx (or any other web server that supports TLSv1.0) as the backend too, then testing that TLSv1.0 is actually working. Of course, it makes sense to build nginx without any 3rd party modules. You may add the following snippet to the nginx configuration and try to reproduce the issue (for example, with openssl or curl -kv).

http {
    server {
        listen 127.0.0.1:9001 ssl;
        ssl_protocols TLSv1;
        ssl_certificate /etc/nginx/example.crt;
        ssl_certificate_key /etc/nginx/example.key;

        location / {
            return 200 "OK\n";
        }
    }
}
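
For instance, with the snippet above in place, something like the following should exercise a TLSv1.0 handshake against the test backend (the certificate is self-signed, hence the verification-skipping flags; exact flags depend on your openssl/curl versions):

openssl s_client -connect 127.0.0.1:9001 -tls1
curl -kv --tlsv1.0 --tls-max 1.0 https://127.0.0.1:9001/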

Hope this helps.

in reply to:  14 comment:15 by garycnew@…, 3 years ago

@mikhail.isachenkov

We are unable to reproduce this issue with official nginx and Tor packages. Unfortunately, we don't have OpenWRT in our infrastructure to build both nginx and Tor from sources and do further checking.

Are you stating that you tried to reproduce the issue with "official nginx and Tor packages" and the Tor Relay self-test was successful with Nginx TCP-stream load balancing to it? We simply would like to know that there aren't any incompatibility issues between Nginx and Tor. It doesn't have to be with OpenWRT packages.

It's highly unlikely that the issue is related to nginx, so I'd recommend completely excluding Tor from the picture and using nginx (or any other web server that supports TLSv1.0) as the backend too, then testing that TLSv1.0 is actually working.

The purpose of this venture is to confirm that Nginx will TCP-stream to Tor, so removing Tor from the equation is only useful in determining whether the installed version of Nginx supports TLSv1.

Please validate that there are no incompatibility issues between Nginx and Tor using "official nginx and Tor packages", and we will be satisfied that it is not an Nginx bug.

Regards,

Gary

comment:16 by garycnew@…, 3 years ago

Resolution: invalid
Status: closed → reopened

Maxim,

After poring through debug logs (Nginx debug logs aren't very verbose in stream mode), it appears that the Tor Relay doesn't like that the upstream connection appears to originate from the internal interface address (192.168.0.1) of the Nginx server. The Tor Relay expects the upstream connection to originate from the remote address and refuses the connection.

With this information, we tried enabling the proxy_protocol directive within Nginx, but the Tor Relay doesn't seem to support PROXY protocol.
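
For reference, the attempt looked roughly like the following sketch of our stream server block (Tor refused the connection once the PROXY protocol header was sent):

    server {
        listen                        xxx.xxx.xxx.xxx:443;

        proxy_pass                    application;
        proxy_protocol                on;
    }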

We then attempted to configure Nginx with the proxy_bind directive using the variable $remote_addr, and while this appeared to change the originating address to what the Tor Relay was expecting, Nginx had issues binding $remote_addr on the internal interface of the Nginx server and failed.

Upon further research, it sounds like we may have to run Nginx with proxy_bind in transparent mode and make the necessary router and iptables changes, as sketched below.
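
Based on the nginx documentation, a minimal sketch of what this appears to involve follows; the fwmark and routing-table numbers are arbitrary, the nginx workers need sufficient privileges to bind non-local addresses, and the Upstream Node must route its replies back through the Nginx server:

    server {
        listen                        xxx.xxx.xxx.xxx:443;

        proxy_bind                    $remote_addr transparent;
        proxy_pass                    application;
    }

plus, on the Nginx server, something like:

    ip rule add fwmark 1 lookup 100
    ip route add local 0.0.0.0/0 dev lo table 100
    iptables -t mangle -A PREROUTING -p tcp -s 192.168.0.21 --sport 9001 -j MARK --set-mark 1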

It would have been helpful to know that transparency can be an issue with Nginx (as opposed to just lazily stating that Nginx is behaving as expected).

We'll attempt to implement the proxy_bind transparent directive in Nginx and let you know whether it remedies the issue.

Regards,

Gary

comment:17 by Maxim Dounin, 3 years ago

Resolution: invalid
Status: reopenedclosed

As previously suggested, if you need help with configuring nginx, consider asking for help in the mailing list.
