NGINX Best Practices


Need for Speed?

Since its first public release in 2004, NGINX has grown into the second most used web server in the world. This is quite impressive considering NGINX was born among several big players, including Apache, Microsoft IIS, and other popular web server software. By sharing best practices and a few tips today, we would like to bring NGINX newcomers up to speed. Even if you come from the Apache world, you will learn how to pair your Apache setup with NGINX, an extremely fast and high-performance web server. Let’s go:

  1. Make sure your OS is well supported by NGINX:
    In particular, double-check that NGINX supports an efficient event-polling system call on your operating system. This matters because the beauty of NGINX is its asynchronous architecture for event handling. (This design was created to address the C10k problem, literally, handling 10,000 clients simultaneously.) Under Windows, however, NGINX uses select() for event polling. While select() is non-blocking, it limits a process to roughly 1024 open file descriptors. Therefore, if you expect your NGINX server to be high-performance, Windows might not be your best environment to work with.
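    On Linux, for example, you can confirm that NGINX uses epoll in the “events” block. The snippet below is a minimal sketch; NGINX normally picks the best available method on its own, so the explicit “use” line is optional, and the connection limit is only an illustrative number.

      worker_processes auto;

      events {
          use epoll;                  # efficient event polling on Linux (kqueue on BSD/macOS)
          worker_connections 10240;   # raise this together with the OS open-file limit
      }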
  2. Improve disk I/O if possible:
    • You can turn off NGINX access logs and use client-side scripts such as Google Analytics to collect statistics. This way NGINX will not record every single request in your log file, which reduces disk I/O significantly. It is not recommended that you also turn off the error log, though. After all, errors are important information, and they don’t occur that often anyway.
    • Configure the “open file cache” if your system supports it. Note that the actual file content is not cached; NGINX caches the pointer to the file instead, that is, an inode in your filesystem. Why can caching inodes speed up disk I/O? Think of inodes as a back-of-the-book index, which helps you look up information quickly. A short example combining both ideas follows.
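    The snippet below is a minimal sketch of both bullets; the log path, cache sizes, and timeouts are illustrative values rather than tuned recommendations.

      http {
          access_log off;                        # rely on client-side analytics instead
          error_log  /var/log/nginx/error.log;   # keep error logging on

          # Caches file descriptors and metadata, not the file contents themselves.
          open_file_cache          max=10000 inactive=30s;
          open_file_cache_valid    60s;
          open_file_cache_min_uses 2;
          open_file_cache_errors   on;
      }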
  3. Improve network I/O if possible:
    Turn on the “gzip” module so you can compress data before sending it over the network. CPUs nowadays are so fast that you can even tell NGINX to use level 6 compression just to save a bit of bandwidth. If you are worried about hogging the CPU, use level 1 compression, since it has the best compression-to-time ratio.
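    A minimal sketch follows; the MIME type list is only an example and should match the content your site actually serves.

      http {
          gzip             on;
          gzip_comp_level  1;       # best compression-to-time ratio; bump to 6 if CPU is cheap
          gzip_min_length  1024;    # skip tiny responses where compression barely helps
          gzip_types       text/css application/javascript application/json text/plain;
          # text/html is always compressed once gzip is on, so it is not listed here.
      }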
  4. Tweak the HTTP keep alive setting:
    NGINX is so memory-efficient that it can reportedly maintain 10,000 idle HTTP keep-alive connections with only about 2.5 MB of memory. (Almost too good to be true, isn’t it?) This gives system administrators plenty of room to raise the keep-alive limits. The idea is to allow as many keep-alive connections as possible and to set the keep-alive timeout to a larger value as well. Why not set both to infinity? Because you have to think about malicious users: attackers can abuse an overly generous setting and possibly exhaust your server’s memory.
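    The values below are illustrative starting points rather than universal recommendations; tune them against your own traffic and memory budget.

      http {
          keepalive_timeout  65s;     # how long an idle keep-alive connection stays open
          keepalive_requests 1000;    # requests served per connection before it is closed
      }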
  5. Configure NGINX as a load balancer:
    This is easy to configure. The example below shows how NGINX can use round-robin or the client IP to achieve load balancing across backend servers.
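    The sketch below assumes two hypothetical backends, backend1.example.com and backend2.example.com; swap in your own hosts. Round-robin is NGINX’s default, and “ip_hash” switches to client-IP-based balancing.

      http {
          upstream backend_pool {
              # Round-robin is the default; uncomment "ip_hash" to pin each
              # client IP to the same backend (useful for sticky sessions).
              # ip_hash;
              server backend1.example.com;
              server backend2.example.com;
          }

          server {
              listen 80;
              location / {
                  proxy_pass http://backend_pool;
              }
          }
      }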
  6. Configure NGINX as a reverse proxy:
    Combining Apache with NGINX is a very popular setup. The idea is to have NGINX serve static files and let Apache handle dynamic content.
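    A minimal sketch, assuming Apache listens locally on port 8080 and your static files live under /var/www/html; adjust both to your environment.

      server {
          listen 80;
          root   /var/www/html;

          # NGINX serves static assets directly from disk.
          location ~* \.(css|js|png|jpg|gif|ico)$ {
              expires 30d;
          }

          # Everything else is handed off to Apache for dynamic content.
          location / {
              proxy_pass       http://127.0.0.1:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
          }
      }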
  7. Only enable the NGINX modules you need:
    In other words, turn off the modules you are not using. Keep your NGINX build simple and small. Not only does this save memory, it also improves overall security. For example, if you are not using NGINX as a load balancer, disable the “Upstream” balancing modules.
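    Modules are enabled or disabled when NGINX is compiled. The line below is only a sketch of the idea for source builds; the exact flag names vary by version, so check “./configure --help” before using them.

      ./configure --without-http_autoindex_module \
                  --without-http_ssi_module \
                  --without-http_upstream_ip_hash_module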
  8. Avoid NGINX configuration pitfalls:
    First of all, NGINX is not Apache. If you are new to NGINX’s configuration, you might want to check out the wiki first to get a glimpse of NGINX’s rich configuration options. The following are some quick tips for configuring NGINX (a short example pulling a few of them together follows the list):

    • Factor “root” out of individual location blocks.
    • Factor out “index” to the http block.
    • Avoid using “if” unless you really have to.
    • Use “try_files” if you want to check if a file exists.
    • Use “try_files” to serve static content first, before falling back to a dynamic handler.
    • Build “SCRIPT_FILENAME” from “$document_root$fastcgi_script_name” instead of hardcoding absolute paths.
    • Use “$request_uri” to avoid using regular expressions.
    • Use “rewrite” with “http://” to issue an absolute redirect.
    • Use “map” to customize your key-value pairs.
    • Use “stub_status” to monitor your server.
    • You can combine HTTP and HTTPS server blocks in NGINX.
    • Set “proxy_connect_timeout” higher under heavy load.
    • Clear browser cache before testing the new configuration.
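
    Below is a minimal sketch that ties a few of these tips together; the document root and the PHP-FPM socket path are assumptions for illustration, and the stock fastcgi_params file is assumed to be present.

      server {
          listen 80;
          root   /var/www/html;           # "root" declared once at the server level
          index  index.php index.html;

          location / {
              # "try_files" checks for a real file before falling back.
              try_files $uri $uri/ /index.php?$args;
          }

          location ~ \.php$ {
              include       fastcgi_params;
              # SCRIPT_FILENAME built from $document_root instead of a hardcoded path.
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_pass  unix:/run/php/php-fpm.sock;
          }
      }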


NGINX is the king of serving static content. Believe it or not, you don’t need to tweak much to optimize your NGINX server. The default configuration works quite well and gives you a sense of “you don’t pay for what you don’t use.” If you do need to customize your NGINX, please follow these best practices and remember NGINX’s underlying principle: lean and fast.

We hope you enjoyed the article. As a bonus, Monitis® offers enterprise-level solutions for monitoring your NGINX servers. You can get your free signup via the links below.

Useful Links:
• Integrate NGINX Monitoring into Monitis
• Monitis Free Registration
• Monitis Monitoring Platform