Nginx reverse proxy and load balancing


1 Environment

1.1 System environment
[root@nginx-cache-1-1 nginx]# cat /etc/redhat-release 
CentOS release 5.8 (Final)
[root@nginx-cache-1-1 ~]# uname -r
2.6.18-308.el5
[root@nginx-cache-1-1 ~]# uname -m
x86_64
[root@nginx-cache-1-1 ~]# uname -n
nginx-cache-1-1
1.2 Software requirements
Software:
nginx-1.2.1.tar.gz
pcre-8.11.tar.gz
Address:

ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/
2 Installing Nginx
2.1 Installing PCRE
The installation command:
cd /home/start/tools
tar zxf pcre-8.11.tar.gz
cd pcre-8.11
./configure
make && make install
cd ../
The installation process:
[root@nginx-cache-1-1 tools]# tar zxf pcre-8.11.tar.gz
[root@nginx-cache-1-1 tools]# cd pcre-8.11
[root@nginx-cache-1-1 pcre-8.11]# ./configure
[root@nginx-cache-1-1 pcre-8.11]# make && make install
2.2 Installing Nginx
1)The installation command:
useradd -M -s /sbin/nologin www
tar zxf nginx-1.2.1.tar.gz
cd nginx-1.2.1
./configure \
--user=www \
--group=www  \
--prefix=/application/nginx-1.2.1 \
--with-pcre  \
--with-http_stub_status_module \
--with-http_ssl_module

make && make install
cd ..
ln -s /application/nginx-1.2.1 /application/nginx
echo '/usr/local/lib' >>/etc/ld.so.conf  
Note: if this step is skipped, nginx fails to start with: nginx: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory
tail -1 /etc/ld.so.conf                
ldconfig 
###The following commands are optional###
echo 'export PATH=$PATH:/application/nginx/sbin' >>/etc/profile
source /etc/profile
echo 'start for nginx by start 2012-01-26' >>/etc/rc.local
echo '/application/nginx/sbin/nginx' >>/etc/rc.local
tail -2 /etc/rc.local
2)The installation process:
[root@nginx-cache-1-1 tools] useradd -M -s /sbin/nologin www  <==Add a system account for running Nginx
[root@nginx-cache-1-1 tools] tar zxf nginx-1.2.1.tar.gz  <==Extract the Nginx source
[root@nginx-cache-1-1 tools] cd nginx-1.2.1
[root@nginx-cache-1-1 nginx-1.2.1] ./configure \   <==Configure, then compile and install
--user=www \
--group=www  \
--prefix=/application/nginx-1.2.1 \
--with-pcre  \
--with-http_stub_status_module \
--with-http_ssl_module 
[root@nginx-cache-1-1 nginx-1.2.1] make && make install 
[root@nginx-cache-1-1 nginx-1.2.1] ln -s /application/nginx-1.2.1 /application/nginx  <==Create a symlink to make future upgrades easier
[root@nginx-cache-1-1 nginx-1.2.1] echo '/usr/local/lib' >>/etc/ld.so.conf
[root@nginx-cache-1-1 nginx-1.2.1] tail -1 /etc/ld.so.conf  <==Check that the line was added
/usr/local/lib
[root@nginx-cache-1-1 nginx-1.2.1] ldconfig  
[root@nginx-cache-1-1 nginx-1.2.1] cd ..
[root@nginx-cache-1-1 nginx-1.2.1] echo 'export PATH=$PATH:/application/nginx/sbin' >>/etc/profile <==Add the Nginx binary directory to the system PATH
[root@nginx-cache-1-1 nginx-1.2.1] source /etc/profile  <==Make the variable take effect
[root@nginx-cache-1-1 nginx-1.2.1] echo 'start for nginx by start 2012-01-26' >>/etc/rc.local
[root@nginx-cache-1-1 nginx-1.2.1] echo '/application/nginx/sbin/nginx' >>/etc/rc.local
Explanation of parameters:
--prefix=/application/nginx-1.2.1  <==Specify the installation location
--with-http_stub_status_module  <==Enable the stub_status ("server status") page
--with-http_ssl_module  <==Enable SSL support for handling HTTPS requests; requires OpenSSL
--with-pcre  <==Support for regular expressions

The following two parameters are not used in this article:
--with-mail  <==Enable the IMAP4/POP3/SMTP proxy module
--with-http_realip_module  <==Restore the client's real IP from a header set by a front-end proxy
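Although --with-http_realip_module is not used in this article, to the best of my knowledge its typical use on a backend web server looks like the sketch below (the proxy address is hypothetical):

```nginx
# Sketch only: on a backend nginx built with --with-http_realip_module,
# restore the real client IP from the header set by the front-end proxy.
# 192.168.18.210 is a hypothetical address of the front-end proxy.
set_real_ip_from  192.168.18.210;    # trust X-Forwarded-For only from this proxy
real_ip_header    X-Forwarded-For;   # header to take the client IP from
```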

2.3 Start and check
[root@nginx-cache-1-1 tools]# nginx   <==Works because the Nginx binary directory was added to the system PATH
[root@nginx-cache-1-1 tools]# netstat -lnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State      
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      
tcp        0      0 :::22                       :::*                        LISTEN  
3 Nginx load balancing

Tip: the default Nginx main configuration file is used here, with only the proxy parameters added to it.

3.1 Proxy parameters

client_max_body_size     300m;
client_body_buffer_size  128k;
proxy_connect_timeout    600;
proxy_read_timeout       600;
proxy_send_timeout       600;
proxy_buffer_size        16k;
proxy_buffers            4 32k;
proxy_busy_buffers_size 64k;
Parameter interpretation:
#Maximum size of a single request body accepted from a client
client_max_body_size     300m;  
#Buffer size for client request bodies; the body is saved locally first and then passed on to the backend
client_body_buffer_size  128k;  
#Timeout for establishing the connection (handshake) with the backend server
proxy_connect_timeout    600;  
#Timeout waiting for the backend's response after the connection succeeds; the request is already queued on the backend at this point
proxy_read_timeout       600;  
#Timeout for the backend to send its data; all data must be transmitted within this time
proxy_send_timeout       600;  
#Buffer for the first part of the proxied response, which holds the header information Nginx processes; it only needs to be large enough for the headers
proxy_buffer_size        16k;  
#Number and size of the buffers Nginx uses for a single connection
proxy_buffers            4 32k;  
#Buffer space that may be busy sending to the client while the system is loaded; the official recommendation is the proxy_buffers size * 2
proxy_busy_buffers_size  64k; 
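To avoid repeating these eight lines in every virtual host, a common pattern is to keep them in a separate file and include it where needed; the file name proxy.conf below is an assumption, not from the original article:

```nginx
# Hypothetical file: /application/nginx/conf/proxy.conf
client_max_body_size     300m;
client_body_buffer_size  128k;
proxy_connect_timeout    600;
proxy_read_timeout       600;
proxy_send_timeout       600;
proxy_buffer_size        16k;
proxy_buffers            4 32k;
proxy_busy_buffers_size  64k;
```

Each proxying location can then load them with `include proxy.conf;` next to its `proxy_pass` line.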
3.2 The upstream module
3.2.1 Syntax
The official address:
Official note: server groups defined by the upstream module are referenced by the proxy_pass, fastcgi_pass, and memcached_pass directives.
The ngx_http_upstream_module module allows to define groups of servers that can be referenced from the proxy_pass, fastcgi_pass, and memcached_pass directives.
Official sample:
upstream backend {
   server backend1.example.com       weight=5;  <==A bare domain name defaults to port 80; weight sets the weight, and the greater the value, the more requests the server receives
   server backend2.example.com:8080;  <==Domain name plus port; forwards to the specified port on the backend
   server unix:/tmp/backend3;  <==A UNIX-domain socket path (its specific usage is not covered here)
Tip: if a server is given as a domain name, the host needs a reachable DNS server or an entry in the hosts file; a server can also be written directly as an IP or IP plus port:
server 192.168.1.2;
server 192.168.1.3:8080;
   server backup1.example.com:8080   backup;  <==Backup server, enabled only when the servers above are unreachable; similar to backup in HAProxy
   server backup2.example.com:8080   backup;
}
3.2.2 Upstream parameters
The official text:
weight=number
sets a weight of the server, by default 1.
Sets the weight of the server; the default value is 1. The greater the value, the more requests are forwarded to that server.
max_fails=number
sets a number of unsuccessful attempts to communicate with the server during a time set by the fail_timeout parameter after which it will be considered down for a period of time also set by the fail_timeout parameter. By default, the number of unsuccessful attempts is set to 1. A value of zero disables accounting of attempts. What is considered to be an unsuccessful attempt is configured by the proxy_next_upstream, fastcgi_next_upstream, and memcached_next_upstream directives. The http_404 state is not considered an unsuccessful attempt.
The number of failed attempts to reach a backend host before Nginx considers it down. This value works together with the proxy_next_upstream, fastcgi_next_upstream, and memcached_next_upstream directives: when the backend returns one of the status codes defined there (such as 404, 502, or 503), the request is forwarded to a backend that is working normally. The default for max_fails is 1.
fail_timeout=time
sets both the time during which the specified number of unsuccessful attempts to communicate with the server should happen for the server to be considered down, and the period of time the server will then be considered down. By default, the timeout is set to 10 seconds.
After the number of failures defined by max_fails, this is the interval before the next check; the default is 10s.
backup
marks the server as a backup server. It will be passed requests when the primary servers are down.
This marks the server as a backup: requests are forwarded to it only when all the primary servers are down.
down
marks the server as permanently down; used along with the ip_hash directive.
This marks the server as permanently unavailable; the parameter is only used together with ip_hash.
Example:
upstream backend {
   server backend1.example.com     weight=5;  <==For a single server, setting a weight is unnecessary
   server 127.0.0.1:8080           max_fails=5 fail_timeout=10s; <==After 5 detected failures, check again every 10s; works together with the proxy/fastcgi/memcached_next_upstream directives
   server unix:/tmp/backend3;
   server backup1.example.com:8080 backup;  <==Hot-standby server
}
max_fails=5 fail_timeout=10s
After the nginx configuration is reloaded, or while the backend is detected as healthy: if proxy_next_upstream defines 502 as a failure and the backend starts returning 502, Nginx probes it according to the value of max_fails. With max_fails=5 it tries 5 times; if all 5 attempts return 502, it waits fail_timeout (10s) and then checks again, once every 10s. As long as the backend keeps returning 502, and neither the configuration is reloaded nor the site recovers, the backend is checked only once every 10s.
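As a sketch, the interaction described above corresponds to a configuration like this (the addresses reuse the health-check example from section 4.5):

```nginx
upstream php {
    # after 5 failed attempts the server is considered down for 10s,
    # then it is probed again once every 10s
    server 10.0.11.83:80 max_fails=5 fail_timeout=10s;
    server 10.0.11.82:81;   # requests fail over to this server meanwhile
}
server {
    listen 80;
    location / {
        proxy_pass http://php;
        # a 502 from the backend counts as an unsuccessful attempt
        proxy_next_upstream error timeout http_502;
    }
}
```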
Test results are shown in Appendix 5
3.2.3 Scheduling algorithms
1)Round robin (default)
Each request is distributed to a different backend in turn, equivalent to the LVS rr algorithm. If a backend server is down, it is skipped and the request is allocated to the next server (by default only port connectivity, e.g. port 80, is checked; backend errors such as 502, 404, 403, or 503 are returned directly to the user).
2)Weight
Round robin plus a specified weight (the default is rr+weight). Weighted round robin distributes requests in proportion to the weights: the greater the weight, the more requests are forwarded. Setting the weight value according to each server's configuration and performance effectively solves the problem of allocating load to older servers.
Example:
The back-end server configuration: 192.168.1.2 E5520*2 CPU, 8G memory
The back-end server configuration: 192.168.1.3 Xeon (TM) 2.80GHz * 2, 4G memory
Suppose that of every 30 requests arriving at the front end, we want 20 to go to 192.168.1.3 and the remaining 10 to 192.168.1.2. The configuration is:
upstream engine {
server 192.168.1.2 weight=1;
server 192.168.1.3 weight=2;
}
3)ip_hash
Each request is distributed according to a hash of the client's IP: when a new request arrives, its client IP is hashed, and as long as the IP is the same, the hash value is the same, so the request is assigned to the same server. This algorithm solves session-affinity problems, but it can also cause uneven distribution, i.e. load balancing cannot be guaranteed.
Note: nginx must be the front-most server, and it must connect directly to the backend application servers
Example:
upstream engine {
ip_hash;
server 192.168.1.2:80;
server 192.168.1.3:8080;
}
4)fair (third-party module, not tested here)
Requests are assigned according to the backend servers' response times; servers with shorter response times are given priority.
Example:
upstream engine {
server 192.168.1.2;
server 192.168.1.3;
fair;
}
5)url_hash (third-party module, not tested here)
Requests are distributed according to a hash of the accessed URL, so that each URL is always directed to the same backend server, which makes backend caching more effective. Add a hash statement in the upstream block; weight and other parameters cannot be written in the server lines. hash_method selects the hash algorithm used.
upstream engine {
server squid1:3128;
server squid2:3128;
hash $request_uri;
hash_method crc32;
}
3.3 The proxy_pass directive
3.3.1 official documents:

3.3.2 official definition:
This module makes it possible to transfer requests to another server.
This module can forward requests to another server.
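A minimal sketch (the upstream name engine follows the examples elsewhere in this article):

```nginx
location / {
    # forward every request to the group defined by "upstream engine { ... }"
    proxy_pass http://engine;
}
```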
3.4 The location directive
The location directive is one of the important directives in NginxHttpCoreModule. location is simple and commonly used.
The location directive is used to match the request URI. The syntax is /uri/, which can be a literal string or a regular expression; a regular expression must be preceded by one of the prefixes below.
3.4.1 Basic syntax
Syntax:
location [=|~|~*|^~|@] /uri/ {···}
Interpretation:
[ = ] Exact match; if found, the search stops immediately and the request is processed at once (highest priority)
[ ~ ] Case-sensitive regular-expression match
[ ^~ ] Prefix string match, not a regular-expression match; if it matches, regular expressions are not checked
[ ~*] Case-insensitive regular-expression match
[ @ ] Defines a named location, generally used only for internally redirected requests
The matching process
Literal string locations are checked first, and an exact match is used immediately. Regular expressions are then checked in the order they appear, and the first match stops the search. If no regular expression matches, the result of the string search is used; when both a string and a regular expression match, the regular expression has the higher priority.

Tip: the matching order was not tested for this article; only "exact match has the highest priority" is summed up here. Readers who need the exact order can test it themselves.
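As an untested sketch of the priority rules above (the paths are hypothetical), for a request to /images/logo.png the locations would be considered roughly in this order:

```nginx
location = /images/logo.png { }   # exact match: chosen immediately, highest priority
location ^~ /images/ { }          # prefix match that suppresses the regex checks
location ~* \.png$ { }            # case-insensitive regex: wins over plain prefixes
location / { }                    # plain prefix match: used only as a last resort
```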
4 Nginx proxying
4.1 L4 load balancing
4.1.1 Upstream configuration (no spare machines, so multiple ports on one machine are used for the demo)
1)Weighted round robin
upstream engine {
        server 192.168.18.211:80  weight=2;
        server 192.168.18.211:81  weight=3;
        server 192.168.18.211:82  weight=4;
}
2)ip_hash
upstream engine {
        server 192.168.18.211:80;
        server 192.168.18.211:81;
        server 192.168.18.211:82  down;
        ip_hash;
}
4.1.2 Server configuration
server {
      listen 80;
      server_name nginx.san.com;
      location / {
      proxy_pass http://engine;
}
}
4.2 L7 load balancing
4.2.1 Forwarding based on the URI
4.2.1.1 Upstream configuration
upstream nginx {
        server 192.168.18.211:80;
}
upstream php {
        server 192.168.18.211:81;
}
upstream java {
        server 192.168.18.211:82;
}
4.2.1.2 virtual host configuration
server {
      listen 80;
      server_name nginx.san.com;
      location /nginx/ {
      proxy_pass http://nginx/;
}
      location /php/ {
      proxy_pass http://php/;
}
      location /java/ {
      proxy_pass http://java/;
}
}
Tip: the trailing "/" in proxy_pass was not tested for this article; interested readers can test it themselves!
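For reference, to the best of my knowledge the trailing "/" does change behavior (not verified in the original article): when proxy_pass carries a URI part, the matched location prefix is replaced by it; without one, the request URI is passed to the backend unchanged.

```nginx
location /nginx/ {
    # with a URI part ("/"), the prefix /nginx/ is replaced:
    # a request for /nginx/a.html reaches the backend as /a.html
    proxy_pass http://nginx/;
}
location /php2/ {
    # hypothetical location; without a URI part the request URI is kept:
    # a request for /php2/a.html reaches the backend as /php2/a.html
    proxy_pass http://php;
}
```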
4.2.2 Forwarding by file extension (requires regular-expression support)
4.2.2.1 Upstream configuration
upstream nginx {
        server 192.168.18.211:80;
}
upstream php {
        server 192.168.18.211:81;
}
upstream java {
        server 192.168.18.211:82;
}
4.2.2.2 virtual host configuration
server {
      listen 80;
      server_name nginx.san.com;
      location ~* /(.*\.jpg)$ {
      proxy_pass http://nginx/$1;
}
      location ~* /(.*\.gif)$ {
      proxy_pass http://php/$1;
}
      location ~* /(.*\.png)$ {
      proxy_pass http://java/$1;
}
}
4.3 Nginx SSL
The configuration file (serving multiple certificates from one server was not tested)
server {
      listen 443 ssl;   <==Specifies the HTTPS port; the "ssl" keyword must be present or nginx reports an error
      server_name sso.eefocus.com;
      #access_log logs/sss-access.log;
      #error_log logs/ssl-error.log;
      ##SSL cert files##
      ssl_certificate  /usr/local/nginx/ssl/ee_com.crt;
      #ssl_certificate_key /usr/local/nginx/ssl/ees_com.key;
      ssl_certificate_key /usr/local/nginx/ssl/ee_com-nopass.key;
      #keepalive_timeout 60;
location / {
        proxy_pass ;
        proxy_set_header X-Forwarded-For $remote_addr;
        #proxy_set_header X-Forwarded-Proto https;
}
}
4.4 Recording the client IP in WEB logs
4.4.1 X-Forwarded-For field
Interpretation (Wikipedia):
X-Forwarded-For (XFF) is an HTTP request header field used to identify the original client IP address of a request that reaches the web server through an HTTP proxy or load balancer.
Address: http://zh.wikipedia.org/wiki/X-Forwarded-For
4.4.2 Apache
4.4.2.1 Nginx configuration
location ~* /(.*\.png)$ {
      proxy_pass http://java/$1;
      proxy_set_header X-Forwarded-For $remote_addr;
}
4.4.2.2 Apache log format configuration (the same applies behind HAProxy)
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
Notes:
1 Note the braces and the trailing "i"
2 The value inside the braces is the header name, "X-Forwarded-For"; it can be whatever name is set in Nginx, and it is case-insensitive
4.4.3 Nginx(WEB)
4.4.3.1 Nginx(Proxy)
Example:
location ~* /(.*\.jpg)$ {
      proxy_set_header X-Forwarded-For $remote_addr; 
      proxy_pass http://nginx/$1;
}
4.4.3.2 Nginx(WEB)Log format configuration
log_format  main  '$http_x_forwarded_for - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" ';
4.5 Nginx health check
4.5.1 upstream configuration (details covered above)
upstream php {
        server 10.0.11.82:81 weight=1;
        server 10.0.11.83:80 weight=3 max_fails=5 fail_timeout=10s;
}
4.5.2 server configuration
server {
      listen 80;
      server_name php.san.com;
      location / {
      proxy_pass http://php;
      proxy_set_header X-Forwarded-For $remote_addr;
      proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
}
}


Posted by Saxon at February 12, 2014 - 12:57 AM