#2225 closed enhancement (wontfix)

location-matching on undecoded paths

Reported by: breunigs@… Owned by:
Priority: minor Milestone:
Component: nginx-core Version: 1.18.x
Keywords: location encoding decoding Cc:
uname -a: Linux nb-sbreunig 5.8.0-63-generic #71~20.04.1-Ubuntu SMP Thu Jul 15 17:46:08 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
nginx -V: nginx version: nginx/1.18.0 (Ubuntu)
built with OpenSSL 1.1.1f 31 Mar 2020
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-KTLRnK/nginx-1.18.0=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-compat --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-mail=dynamic --with-mail_ssl_module


Dear devs,

is there a way to configure nginx to location-match on undecoded URLs? Ideally, matching on "only semantically" decoded URLs would be possible. Currently, nginx decodes (and normalizes) everything, including characters that change the semantics of the URL. For example, this nginx configuration

location /aaa {
    return 412;
}

location /bbb {
    return 411;
}
will return 411 on this curl:

curl -vs localhost/aaa%2f..%2fbbb
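The behavior being described can be reproduced outside nginx: percent-decoding the path and then resolving dot-segments collapses /aaa%2f..%2fbbb down to /bbb. A minimal Python sketch of that two-step transformation (an illustration using the standard library, not nginx's actual implementation):

```python
from urllib.parse import unquote
import posixpath

raw = "/aaa%2f..%2fbbb"

# Step 1: percent-decoding turns %2f into a literal slash.
decoded = unquote(raw)

# Step 2: dot-segment normalization resolves the "..".
normalized = posixpath.normpath(decoded)

print(decoded)     # /aaa/../bbb
print(normalized)  # /bbb
```

Since the normalized path no longer begins with /aaa, the location /bbb block wins the match.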

The RFCs I found recommend treating these reserved characters specially if they change the semantics, but I can see how this would fail in many real world use cases. My use case is preventing directory traversal on a non-public nginx in a "security in depth" fashion.

A potential workaround is matching on $request_uri, but it is rather brittle to regex-match on a URL like that, plus it's of course much less performant:

if ($request_uri ~* "^[^?#]*%2F") {
    return 405;
}
Is there a better solution?

For completeness, the RFCs I am referring to:


Change History (1)

comment:1 by Maxim Dounin, 10 months ago

Resolution: wontfix
Status: new → closed

There is no concept of "only semantic" decoding in nginx. Rather, it provides a way to fully decode URLs matching typical filesystem semantics, and does location matching based on this decoded URI. If that's not enough for some reason, $request_uri can be used to inspect the original URI as sent by the client.

In your particular use case there seem to be two basic options:

  1. Reject URIs with any %2f as in the snippet you've provided.
  2. Proxy requests with normalized URI by using proxy_pass with a URI component.
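For option 2, a minimal sketch of such a configuration (the backend address is hypothetical): giving proxy_pass a URI component, here just the trailing slash, makes nginx forward the decoded and normalized URI instead of the raw one sent by the client.

```nginx
location / {
    # The trailing "/" is a URI component, so nginx replaces the part
    # of the request URI matching the location with it and forwards
    # the normalized, re-escaped URI to the backend.
    proxy_pass http://127.0.0.1:8080/;
}
```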

Note though that both approaches might break valid requests. See also ticket #786, which is somewhat relevant.

Note: See TracTickets for help on using tickets.