Opened 11 years ago

Last modified 12 months ago

#376 accepted enhancement

log file reopen should pass opened fd from master process

Reported by: Tiziano Müller
Owned by:
Priority: minor
Milestone:
Component: nginx-core
Version:
Keywords:
Cc:
uname -a: n/a
nginx -V: n/a

Description

When starting nginx, all log files (error_log, access_log) are created and opened by the master process, and the file handles are passed to the workers when forking.

On SIGUSR1 the master reopens the files, chowns them, and then the workers reopen the files themselves. This has several drawbacks:

  • It is inconsistent behaviour and rather surprising (a sudden change of ownership upon a signal). If you really want to do it this way, you should chown the files from the very beginning.
  • It gives the unprivileged nginx user read and write access to the current log files, which is bad from a security perspective, since the unprivileged user also needs to be able to change into/read the log directory.

A better solution may be to reopen the log files in the master process, as currently done, and then use the already available ngx_{read,write}_channel functions to pass the new file handles down to the workers.
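
For illustration, here is a minimal sketch of the mechanism such a scheme would rely on: passing an open descriptor over a Unix domain socketpair with sendmsg() and SCM_RIGHTS ancillary data, the same kernel facility the nginx channel code already relies on for passing descriptors between processes. The helper below is hypothetical and is not nginx's actual API.

    /* Minimal sketch: hand an open fd to another process over a Unix
     * domain socketpair using SCM_RIGHTS ancillary data.  Illustrative
     * only; this is not the nginx channel API. */

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int
    send_fd(int channel, int fd)
    {
        struct msghdr   msg;
        struct iovec    iov;
        char            byte = 0;
        union {
            struct cmsghdr  cm;
            char            space[CMSG_SPACE(sizeof(int))];
        } cmsg;

        /* at least one byte of regular data must accompany the control data */
        iov.iov_base = &byte;
        iov.iov_len = 1;

        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsg.space;
        msg.msg_controllen = sizeof(cmsg.space);

        cmsg.cm.cmsg_level = SOL_SOCKET;
        cmsg.cm.cmsg_type = SCM_RIGHTS;
        cmsg.cm.cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(&cmsg.cm), &fd, sizeof(int));

        /* the receiving process gets its own copy of the descriptor */
        return sendmsg(channel, &msg, 0) == -1 ? -1 : 0;
    }

Since SCM_RIGHTS duplicates the descriptor into the receiving process, the sender can close its copy after a successful send if it no longer needs it.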

Change History (11)

comment:1 by Maxim Dounin, 11 years ago

nginx -V: modified (diff)
Summary: Inconsistent log file handling → log file reopen should pass opened fd from master process
Type: defect → enhancement
uname -a: modified (diff)

Current behaviour is as follows:

  • Do not attempt to do anything with log ownership/permissions on normal startup.
  • Chown logs to the user nginx is configured to run workers as, on the logfile quick reopen signal (USR1).

This allows using any permissions desired as long as logfile quick reopen isn't used (e.g., a full configuration reload is used instead), and it ensures logfile quick reopen works as currently implemented. While the ownership change after USR1 might be a bit surprising to unfamiliar users, it is believed to be better than an unconditional ownership change in any case. The need for the execute bit on the log directory doesn't look like a problem either (note that read access isn't needed). If you are paranoid enough, you can use a full configuration reload instead.

On the other hand, passing opened file descriptors from the master process is certainly a better approach on platforms which support it, mostly because it needs less configuration. It has been a planned enhancement for a long time; let this ticket sit in trac as a reminder.
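
For completeness, the worker side of such a scheme could look roughly like the sketch below: it pulls one descriptor off the channel and returns it, and the caller might then dup2() it over the old log descriptor and close the extra copy. Again, this is a hypothetical helper, not nginx code.

    /* Sketch of the receiving end: pull one SCM_RIGHTS descriptor off the
     * channel socket.  Returns the new fd, or -1 on error.  Illustrative
     * only; not nginx code. */

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int
    recv_fd(int channel)
    {
        int              fd;
        char             byte;
        struct msghdr    msg;
        struct iovec     iov;
        struct cmsghdr  *cm;
        union {
            struct cmsghdr  cm;
            char            space[CMSG_SPACE(sizeof(int))];
        } cmsg;

        iov.iov_base = &byte;
        iov.iov_len = 1;

        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsg.space;
        msg.msg_controllen = sizeof(cmsg.space);

        if (recvmsg(channel, &msg, 0) == -1) {
            return -1;
        }

        cm = CMSG_FIRSTHDR(&msg);

        if (cm == NULL
            || cm->cmsg_level != SOL_SOCKET
            || cm->cmsg_type != SCM_RIGHTS)
        {
            return -1;
        }

        memcpy(&fd, CMSG_DATA(cm), sizeof(int));

        /* the caller can then, e.g., dup2(fd, old_log_fd) and close(fd) */
        return fd;
    }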

comment:2 by Maxim Dounin, 11 years ago

Status: new → accepted

comment:3 by MTecknology@…, 7 years ago

What would it take to get some attention drawn to this bug?

In Debian, we thought we had fixed one broken hack to address a CVE [1], but years later it was brought to our attention that we'd inadvertently created a bigger concern [2]. Now we've reverted to the old behavior, in effect re-opening the old CVE.

It appears this is not just a Debian problem. I've looked around at other distributions and haven't been able to find any that don't have some form of security concern.

I'll admit that most of the security concerns require at least one other vulnerability to be exploited, but web application vulnerabilities aren't exactly uncommon.

[1] https://security-tracker.debian.org/tracker/CVE-2013-0337
[2] https://security-tracker.debian.org/tracker/CVE-2016-1247

comment:4 by Jérôme Poulin, 7 years ago

To make matters worse, the log is always opened using O_CREAT, which prevents using AppArmor to restrict log file opening to append only.
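
Roughly speaking (the flags here are paraphrased from the behaviour described above, not quoted from the nginx sources), the difference is between the two calls sketched below: the first needs create permission in a confinement profile, while the second does not and could be allowed on an append-only basis.

    #include <fcntl.h>

    int
    main(void)
    {
        /* what a log (re)open amounts to today: creation is permitted,
         * so a confinement profile has to grant create + write access */
        int fd_create = open("/var/log/nginx/access.log",
                             O_WRONLY | O_APPEND | O_CREAT, 0644);

        /* what a worker could be confined to if it never had to create
         * the file itself, e.g. because it received an already-open
         * descriptor from the master as this ticket proposes */
        int fd_append = open("/var/log/nginx/access.log",
                             O_WRONLY | O_APPEND);

        (void) fd_create;
        (void) fd_append;
        return 0;
    }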

The workaround we're using is:

  • Make an AppArmor profile which removes the dac_override and chown capabilities from Nginx.
  • Set up logrotate to create files as root:adm/0640.
  • Set up logrotate to issue a SIGHUP instead of SIGUSR1.
  • chown/chmod /var/log/nginx
  • Enforce profile and restart Nginx.

However, this proves to be dangerous in case a configuration has been changed without a reload overnight. Other than that, the child processes can't write to any log.


comment:5 by Beuc@…, 4 years ago

Hi,
I'm part of the Debian LTS Team.
What is the nginx maintainers' stance on this issue?

comment:6 by Maxim Dounin, 4 years ago

This is going to be implemented at some point. It is relatively low priority, since there isn't much difference compared to having new log files properly pre-created during log rotation; the only noticeable benefit is that there would be no need to pre-create log files during rotation.

comment:7 by maxim, 3 years ago

Owner: set to Ruslan Ermilov
Status: accepted → assigned

comment:8 by Maxim Dounin, 12 months ago

Status: assigned → accepted

Just for the record, Ruslan's attempt to implement this revealed multiple edge cases, notably:

  • Obviously enough, this will work only on platforms which support passing file descriptors, so platforms without such support will need to keep the existing approach of reopening logs in the worker processes.
  • Sending descriptors from the master to worker processes might block, and this needs to be handled correctly without blocking the master process (see the sketch after this list).
  • File descriptors maintained by old worker processes might differ from the ones currently maintained by the master process and new worker processes (for example, if the configuration was changed and some logs were added or removed). At the same time, old worker processes might keep running for a long time, so not rotating logs in the old worker processes does not look like a viable solution. Proper handling will require some infrastructure to keep old logs open in the master process, plus tracking of the required logs on a per-worker basis.
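
To make the second point above concrete, here is a hedged sketch (assuming a non-blocking channel socket and a send_fd() helper like the one sketched in the ticket description; none of these names are nginx's) of how a send that would block could be deferred to the event loop instead of stalling the master:

    #include <errno.h>

    int send_fd(int channel, int fd);   /* hypothetical helper sketched above */

    typedef struct {
        int  channel_fd;    /* non-blocking socketpair end for one worker */
        int  pending_fd;    /* reopened log fd still waiting to be sent, or -1 */
    } worker_channel_t;

    /* called by the master when it wants to push a reopened log fd to a worker */
    void
    push_log_fd(worker_channel_t *wc, int log_fd)
    {
        if (send_fd(wc->channel_fd, log_fd) == 0) {
            return;                               /* delivered immediately */
        }

        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            /* the socket buffer is full: remember the descriptor and ask the
             * event loop to report writability instead of blocking here */
            wc->pending_fd = log_fd;
            /* ... register a write event on wc->channel_fd ... */
            return;
        }

        /* other errors: log them, fall back to an in-worker reopen, etc. */
    }

    /* called by the event loop once the channel becomes writable again */
    void
    channel_writable(worker_channel_t *wc)
    {
        if (wc->pending_fd != -1
            && send_fd(wc->channel_fd, wc->pending_fd) == 0)
        {
            wc->pending_fd = -1;
        }
    }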

This work was postponed for now.

comment:9 by Maxim Dounin, 12 months ago

Owner: Ruslan Ermilov removed
Status: accepted → assigned

comment:10 by Maxim Dounin, 12 months ago

Status: assigned → accepted

comment:11 by Maxim Dounin, 12 months ago

See also #1686, #1740, #2471.
