The nginx connection pool


1. Configuration


Module: EventsModule

Syntax: worker_connections number;

Default: worker_connections 512;

The worker_connections directive lives in the events section; together with worker_processes (in the main section) it determines the maximum number of concurrent connections the server can handle:

max clients = worker_processes * worker_connections

In a reverse proxy environment each request occupies two connections (one to the client, one to the upstream), so the maximum number of concurrent clients becomes:

max clients = worker_processes * worker_connections / 2

Configuration example:

worker_processes  12;

events {
    use epoll;
    worker_connections  2048000;
}

2. Principle

Within a single worker process, upstream and downstream connections share one connection pool; the pool size and the number of processes are both set in the configuration.

3. Data structure

In nginx, a TCP connection is wrapped in an ngx_connection_t, which bundles the connection's socket with its read and write events. Through this wrapper, nginx conveniently handles everything related to a connection, for example establishing it and sending and receiving data. The structure looks like this:


struct ngx_connection_s {
    void               *data;
    ngx_event_t        *read;
    ngx_event_t        *write;
    ngx_socket_t        fd;
    ...
    ngx_queue_t         queue;
    ...
};

The connection pool's data structure is as follows: cycle->connections points to the array of connections, and the idle connections are threaded through that same array as a singly linked list, with free_connections pointing at its head. Two properties make this design quite elegant:

1. It is backed by an array rather than a separately allocated queue or stack: simple to implement, randomly accessible, and very convenient to initialize.

2. Every operation touches only the list head node: no memory copies, and each operation runs in O(1).

The pool is created and initialized in src/event/ngx_event.c:ngx_event_process_init:

ngx_event_process_init(ngx_cycle_t *cycle)
{
    ...
    cycle->files_n = (ngx_uint_t) rlmt.rlim_cur;

    cycle->files = ngx_calloc(sizeof(ngx_connection_t *) * cycle->files_n,
                              cycle->log);
    ...
    cycle->connections =
        ngx_alloc(sizeof(ngx_connection_t) * cycle->connection_n, cycle->log);
    ...
    i = cycle->connection_n;
    next = NULL;

    do {
        i--;

        c[i].data = next;
        c[i].read = &cycle->read_events[i];
        c[i].write = &cycle->write_events[i];
        c[i].fd = (ngx_socket_t) -1;

        next = &c[i];
    } while (i);

    cycle->free_connections = next;
    cycle->free_connection_n = cycle->connection_n;
    ...
}


4. Basic operations



ngx_get_connection

Takes a connection from free_connections and initializes it: free_connections points at the head of the idle list, and that head node is returned. At the same time, cycle->files[fd] is made to point at the returned node.


ngx_close_connection

Closes a connection: it does the cleanup work, then calls ngx_reusable_connection(c, 0) and ngx_free_connection to return the connection to free_connections. It can be thought of as the inverse operation of ngx_get_connection.


ngx_reusable_connection(ngx_connection_t *c, ngx_uint_t reusable)

reusable = 1: put the connection into the reusable queue
reusable = 0: remove it from the queue


ngx_free_connection

Returns a connection that is no longer in use to free_connections.


ngx_drain_connections

When ngx_get_connection cannot find an idle connection (i.e., under high concurrency the connections are used up), ngx_drain_connections is called to release long-lived (keepalive) connections: they are taken off the reusable queue and returned to free_connections, after which the get is retried.


When fetching a response from an upstream, nginx calls ngx_event_connect_peer to establish the connection; it in turn calls ngx_get_connection to obtain a connection structure and then performs the connect:

ngx_event_connect_peer(ngx_peer_connection_t *pc)
{
    ...
    s = ngx_socket(pc->sockaddr->sa_family, SOCK_STREAM, 0);
    ...
    c = ngx_get_connection(s, pc->log);
    ...
    rc = connect(s, pc->sockaddr, pc->socklen);
    ...
}

5. The accept_mutex lock mechanism

To fully understand this mechanism, let's first review how the listening socket is set up:

1. The master process binds the port and listens, producing a listening socket.

2. The master process forks the worker processes; each worker inherits the listening socket and accepts requests on it.

All workers inherit the same listening socket, so when a connection arrives, every idle process competes for it. Which worker should handle the connection?

If one process wins accept much more often than the others, its idle connections run out quickly. Without some control in advance, when that process accepts a new TCP connection but can get no idle connection slot, and cannot hand the connection to another process, the TCP connection goes unhandled and is dropped. That is hardly fair.

We need a mechanism that assigns each request to exactly one worker, i.e., only one process gets to accept a given connection, while every process has a roughly equal chance of handling connections.

Let's look at how nginx does this.

The listening socket is created in the master process:

    if (ngx_open_listening_sockets(cycle) != NGX_OK) {
        goto failed;
    }

In src/core/ngx_connection.c:ngx_open_listening_sockets:

ls = cycle->listening.elts;

s = ngx_socket(ls[i].sockaddr->sa_family, ls[i].type, 0);

if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR,
               (const void *) &reuseaddr, sizeof(int))
    == -1)
{
    ...
}

if (bind(s, ls[i].sockaddr, ls[i].socklen) == -1) { ...

if (listen(s, ls[i].backlog) == -1) { ...

ls[i].listen = 1;
ls[i].fd = s;

The worker process inherits the master's listening socket and loops monitoring network events; the accept logic below is from src/event/ngx_event.c:ngx_process_events_and_timers:









if (ngx_use_accept_mutex) {
    if (ngx_accept_disabled > 0) {
        ngx_accept_disabled--;

        /* ngx_accept_disabled is computed in
         * src/event/ngx_event_accept.c:ngx_event_accept():
         *     ngx_accept_disabled = ngx_cycle->connection_n / 8
         *                           - ngx_cycle->free_connection_n;
         * Its initial value is 0, and it is updated each time a new
         * connection is created. When it is positive, fewer than 1/8
         * of the connections are still free, i.e. this worker already
         * holds quite a few connections, so it gives up one chance to
         * compete for the lock. */

    } else {
        if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
            return;
        }

        /* ngx_trylock_accept_mutex tries to take the lock: on success
         * the global ngx_accept_mutex_held is set to 1, otherwise 0. */

        if (ngx_accept_mutex_held) {
            flags |= NGX_POST_EVENTS;

            /* The process holding the accept lock only posts events to
             * a queue while handling them, processing them afterwards,
             * so that the lock can be released as soon as possible. */

        } else {
            /* A process that did not get the lock need not split event
             * handling into two steps, but it shortens its timer to
             * ngx_accept_mutex_delay so it can compete for the lock
             * again soon. */

            if (timer == NGX_TIMER_INFINITE
                || timer > ngx_accept_mutex_delay)
            {
                timer = ngx_accept_mutex_delay;
            }
        }
    }
}

delta = ngx_current_msec;


The thundering herd


Posted by Oswald at February 25, 2014 - 2:59 AM