From Funtoo


nginx (pronounced "engine-x") is a Web and reverse proxy server for the HTTP, SMTP, POP3 and IMAP protocols. It focuses on high concurrency, high performance and low memory usage. Nginx delivers static content quickly with efficient use of system resources, and can deliver dynamic content over a network using FastCGI or SCGI handlers for scripts, uWSGI application servers, or the Phusion Passenger module (currently broken in Funtoo's nginx, but working under www-servers/tengine). Furthermore, it can serve as a very capable software load balancer. It uses an asynchronous event-driven approach to handling requests, which provides more predictable performance under load, in contrast to the Apache HTTP server model that uses a threaded or process-oriented approach. Nginx is licensed under a BSD-like license and runs on Unix, Linux, BSD variants, Mac OS X, Solaris, AIX and Microsoft Windows.

Emerging nginx

Prior to emerging nginx, be sure to do a world update, particularly if you have a new Funtoo system. This will force openssl to rebuild without the bindist option, which is necessary for nginx to work properly with SSL. nginx doesn't have dependency info to enforce this currently, so it is a manual process:

root # emerge -auDN @world

Once openssl is updated and rebuilt, you are ready to install nginx:

root # emerge -av nginx

Configuring with SSL

Since SSL is commonplace now, let's look at how to configure a site with SSL. Below is an ideal SSL configuration that should give you an A+ SSL rating under most tests:

   /etc/nginx/sites-available/ - ideal SSL configuration
server {
    listen 80;
    access_log off;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 default ssl;
    ssl on;
    # we will create these with certbot:
    ssl_certificate /etc/letsencrypt/live/;
    ssl_certificate_key /etc/letsencrypt/live/;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off; # Requires nginx >= 1.5.9
    ssl_stapling on; # Requires nginx >= 1.3.7
    ssl_stapling_verify on; # Requires nginx >= 1.3.7
    # this tells nginx to use google for DNS:
    resolver valid=300s;
    resolver_timeout 5s;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Content-Type-Options nosniff;

    # we will generate this file later
    ssl_dhparam /etc/nginx/dhparams.pem;

    root /home/myuser/public_html;
    index index.html index.php;
    access_log      /var/log/nginx/ main;
    error_log       /var/log/nginx/ info;
}


To generate dhparams.pem, required for the above nginx configuration, use the following commands:

root # cd /etc/nginx
root # openssl dhparam -out dhparams.pem 2048

To generate SSL certificates, we are going to use letsencrypt and certbot. To install certbot, do:

root # emerge certbot

Once installed, we will run certbot certonly to start the process of creating the certificate. You must make sure that the following are true:

  1. nginx is not running
  2. you have updated DNS so that your domain name points to your server's IP address
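
With nginx stopped, run certbot using its standalone built-in Web server. A quick sketch, assuming your domain is example.com (substitute your own domain):

root # certbot certonly --standalone -d example.com

certbot will place the generated certificate and key under /etc/letsencrypt/live/, which is where the nginx configuration above expects to find them.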

You will want to use certbot's built-in Web server. We will also want to check for certificate renewal at least once a day. To do this, perform the following steps to emerge fcron, which will allow us to run a renewal script every 24 hours:

root # emerge fcron

Now create a script in /root that contains the following contents:

/etc/init.d/nginx stop
/usr/bin/certbot renew
/etc/init.d/nginx start

This script stops nginx, then runs certbot renew, which sets up a temporary Web server and renews our SSL certificate, and then starts nginx again. The entire process typically takes only a few seconds, so it does not have a significant impact on your site's uptime, but it should still be scheduled to run during off hours. So, let's perform the following steps to make the script executable and then schedule it to run at 3 AM:

root # rc-update add fcron default
root # rc-update add nginx default
root # rc
root # chmod +x /root/
root # fcrontab -e

This will start an editor. Now add the following line to cron:

0 3 * * * /root/

Save the file. Our script is now scheduled to run at 3 AM.

Let's enable our site:

root # cd /etc/nginx/sites-enabled
root # ln -s ../sites-available/
root # rm localhost
root # /etc/init.d/nginx restart
root # su myuser
user $ cd
user $ mkdir public_html
user $ echo "hello world!" > public_html/index.html
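
To verify that everything is working, you can request the test page with curl; a quick sketch (the -k flag skips certificate verification, which is handy if your certificate is not yet fully in place):

user $ curl -k https://localhost/

This should return the hello world! test page.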

Advanced Topics

USE Expanded flags

Furthermore, you can select the nginx HTTP modules you would like to build in /etc/portage/make.conf via the NGINX_MODULES_HTTP variable.

nginx USE flags go into /etc/portage/package.use or /etc/portage/package.use/nginx, while the HTTP and MAIL module selections (NGINX_MODULES_HTTP and NGINX_MODULES_MAIL) are stored in /etc/portage/make.conf. Since you typically won't serve only static HTML files, but most commonly also PHP files and scripts, you should also install PHP with the fpm USE flag enabled, and xcache for caching content, which makes your nginx setup considerably faster. For xcache you need to set PHP_TARGETS="php5-3" in /etc/portage/make.conf.


root # echo "www-servers/nginx USE-FLAG-List" >> /etc/portage/package.use/nginx

This configuration removes modules used for rendering HTML, directory browsing and the like, then enables gzip and spdy for content delivery, since we're going to have tengine do the heavy lifting.

   /etc/portage/package.use/nginx - ssl/load balance only use flags
www-servers/nginx threads
   /etc/portage/make.conf - ssl/load balance only use flags
NGINX_MODULES_HTTP="access browser charset empty_gif fastcgi gzip limit_conn limit_req map proxy realip referer scgi split_clients secure_link spdy ssi ssl upstream_hash upstream_ip_hash upstream_keepalive upstream_least_conn userid uwsgi"


This configuration proxies to other web servers. In this example we have WEBrick running on port 3000 behind nginx, producing the live link http://localhost/rails.

   /etc/nginx/sites-available/localhost - rails or python configurations
server {
	location /rails/ {
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_pass; #for ruby on rails webrick
		#proxy_pass; #for python -m http.server
		#proxy_pass; #for other web servers like apache, lighttpd, tengine, cherokee, etc...
	}
}

Load Balancing
   /etc/nginx/sites-available/localhost - setup backend node pool, using host3 3x as much as the others. We'll also set xforward headers so back end servers see external ip addresses, not localhost or the ip of the load balancer.
upstream backend_nodes {
    server weight=3;
}

server {
    listen 80;

    location / {
        proxy_set_header HOST $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend_nodes;
    }
}

Passing Requests by Socket
   /etc/nginx/sites-available/localhost - Make www-servers/tengine do the html rendering work.
upstream backend_nodes {
    server unix:/var/run/tengine.sock;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend_nodes;
    }
}

Proxy Pass Buffering
   /etc/nginx/sites-available/localhost - buffer proxy pass so slow connections will release the backend node connection.
proxy_buffering on;
proxy_buffer_size 10k;
proxy_buffers 24 16k;
proxy_busy_buffers_size 16k;
proxy_max_temp_file_size 2048m;
proxy_temp_file_write_size 32k;

location / {
    proxy_pass http://backend_nodes;
}

Proxy Pass Caching
   /etc/nginx/sites-available/localhost - proxy pass cache configuration
proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=proxycache:8m max_size=50m;
proxy_cache_key "$scheme$request_method$host$request_uri$is_args$args";
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 10m;
server {
    location / {
        proxy_cache proxycache;
        proxy_cache_bypass $http_cache_control;
        add_header X-Proxy-Cache $upstream_cache_status;

        proxy_pass http://backend_nodes;
    }
}


Nginx does not natively support PHP, so we delegate that responsibility to php-fpm.

   /etc/nginx/sites-available/localhost - fpm configuration
server {
	index index.php index.cgi index.htm index.html;
	location ~ \.php$ {
		include fastcgi.conf;
		# adjust this to match your php-fpm listen address or socket:
		fastcgi_pass 127.0.0.1:9000;
	}
}

PHP Caching

   /etc/nginx/sites-available/localhost - fpm cache configuration
fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=MYAPP:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
server {
	location ~ \.php$ {
		fastcgi_cache MYAPP;
		fastcgi_cache_valid 200 60m;
	}
}

See the nginx documentation for more information on PHP caching.

Location Processing Order

One often confusing aspect of nginx configuration is the order in which it processes location directives. This section is intended to clarify the confusion and help you to write secure nginx location directives.

Two basic types of Location directives

There are two basic types of location directives. The first is called a "conventional string", and looks something like this:

location /foo { deny all; }

The second basic type of location directive is a regex, or regular expression block. In its most basic form, it looks like this, with a "~" and then a regular expression that is matched against the request path. "^" can be used to match the beginning of the request path, and "$" can be used to match the end of the request path. If you need to match a ".", you must escape it as "\." as per regular expression matching rules:

location ~ \.php$ { blah; }

The basic algorithm

Nginx uses a special algorithm to find the proper location string to match the incoming request. The basic concept to remember is that conventional string directives are placed in one "bucket", and regular expression strings are placed in another "bucket". Nginx will use the first regular expression match that it finds, scanning the file from top to bottom. If no matching regular expression is found, nginx will look in its "conventional string" bucket and try to find a match there. In the case of conventional string matches, the most specific match wins; in other words, the one that matches the greatest number of characters in the request path will be used.
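
As an illustration of this algorithm, consider a hypothetical configuration (the paths are made up for this example):

location /images/ { deny all; }
location /images/archive/ { deny all; }
location ~ \.png$ { blah; }

A request for /images/archive/photo.png is matched against the regex bucket first: "~ \.png$" matches, so that block is used. If the regex block were removed, nginx would fall back to the conventional string bucket and choose "/images/archive/", the most specific prefix match, over "/images/".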

This is the foundation for nginx location processing, so always use these rules as a starting point for understanding location matching order. Nginx then provides various sub-types of location directives which modify this default behavior in a number of ways. This will be covered in the next section.

Advanced Location Processing

Always use the location processing logic described in the previous section as the foundation for understanding how nginx finds a matching location directive, and then once you are comfortable with how this works, read about these more advanced directives and understand how they fit into nginx's overall logic.

= (equals) Location

One advanced location directive is the "=" location, which can be considered a variant of a "conventional string" directive. "=" directives are searched before all other directives, and if a match is found, the corresponding location block is used immediately. A "=" location must match the requested path exactly and completely. For example, the following location block will match only the request /foo/bar, but not /foo/bar/oni.html:

location = /foo/bar { deny all; }

~* (case-insensitive regex) Location

A "~*" regex match is just like a regular "~" regex match, except matches will be performed in a case-insensitive manner. "~*" location directives, being regex directives, fall into the regex "bucket" and are processed along other regex directives. This means that they are processed in the order they appear in your configuration file and the first match will be used -- assuming no "=" directives match.
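
For example, a hypothetical case-insensitive match for image extensions:

location ~* \.(gif|jpg|png)$ { expires 30d; }

This block matches /logo.png as well as /LOGO.PNG, whereas the plain "~" form would match only the lowercase variants.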

^~ (short-circuit conventional string) Location

You may think that a "^~" location is a regex location, but it is not. It is a variant of a conventional string location. If you recall, nginx normally finds the most specific conventional string match, and then still consults the regex bucket, preferring a regex match if one exists. The "^~" modifier short-circuits this process: if the most specific conventional string match for the request path is a "^~" location, nginx will apply that match immediately and will not check any regex locations for that request.
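
A hypothetical example of this short-circuit behavior:

location ^~ /static/ { root /var/www; }
location ~ \.php$ { blah; }

A request for /static/tool.php matches the "^~ /static/" prefix, so nginx applies that block immediately and never consults the regex bucket; the .php regex location is not used for this request.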

Ebuild Update Protocol

To work on a new version of the ebuild, perform the following steps.

First, temporarily set the following settings in /etc/make.conf:


This will enable all available modules for nginx.

Now, create a new version of the ebuild in your overlay, and look at all the modules listed at the top of the ebuild. Visit the URLs in the comments above each one and ensure that the latest versions of each are included. Now run ebuild nginx-x.y.ebuild clean install to ensure that all modules patch/build properly. Basic build testing is now complete.



Troubleshooting

A 502 Bad Gateway error occurs when nginx is running but php-fpm has not been started.