The process model and event-handling mechanism of the nginx architecture


A study of the nginx platform (30%)

A first look at the nginx architecture (100%)

As everyone knows, nginx is famous for its high performance, and that performance is inseparable from its architecture. So what does nginx actually look like inside? In this section we take our first look at the nginx framework.
After nginx starts, on a UNIX system it runs in the background as a daemon. The background processes consist of one master process and several worker processes. We can also turn daemon mode off manually so that nginx runs in the foreground, in which case nginx runs as a single process. Obviously we would never do this in production: turning off the daemon is only for debugging, and in later chapters we will explain in detail how to debug nginx. So, as we can see, nginx works in a multi-process fashion. nginx also supports a multi-threaded mode, but the mainstream mode, and the default, is still multi-process. The multi-process model brings many advantages, so this chapter focuses mainly on nginx's multi-process model.
As just mentioned, after nginx starts there is a master process and a set of worker processes. The master process mainly manages the workers: it receives signals from the outside world, forwards signals to the worker processes, and monitors their running state; when a worker exits abnormally, the master automatically starts a new one. The actual network events are handled in the worker processes. The workers are peers: they compete equally for requests from clients and are independent of each other. A request is handled entirely within one worker process, and one worker cannot handle another worker's request. The number of workers is configurable and is generally set to the number of CPU cores on the machine, for reasons inseparable from nginx's process model and event-handling model. The process model of nginx can be pictured as follows:
(figure: the nginx process model)

After nginx has started, how do we operate it? From the above we can see that the master manages the worker processes, so it is enough to communicate with the master process. The master receives signals from outside and acts according to the signal, so to control nginx we only need to send the master process a signal via kill. For example, kill -HUP pid tells nginx to restart gracefully; we usually use this signal to restart nginx or reload its configuration, and because the restart is graceful, service is not interrupted. What does the master process do when it receives HUP? First it reloads the configuration file, then it starts new worker processes and sends a signal to all the old workers telling them they can retire honorably. The new workers begin accepting new requests, while the old workers, upon receiving the signal from the master, stop accepting new requests, finish all pending requests currently in the process, and then exit. Of course, sending signals to the master directly is the old way of operating nginx; since version 0.8, nginx has provided a series of command-line options to make management easier. For example, ./nginx -s reload restarts nginx, and ./nginx -s stop stops it. How does this work? Taking reload as an example: when the command is executed, a new nginx process is started; it parses the reload argument, understands that our purpose is to have nginx reload its configuration file, sends the appropriate signal to the master process, and from there everything proceeds exactly as if we had sent the signal to the master ourselves.
Now we know what nginx is doing when we operate on it. So how do the worker processes handle requests? As mentioned above, workers are equal, and each process has the same opportunity to handle a request. When nginx serves HTTP on port 80 and a connection request arrives, every worker could potentially handle it. How does that work? Each worker is forked from the master process. In the master, the listening socket is set up first, and then the workers are forked, so every worker can call accept on that socket (the workers do not each have a different socket: they all monitor the same IP address and port on the same shared listening socket, which the network stack allows). Generally speaking, when a connection comes in, every process blocked in accept on that socket is notified, but only one of them can accept the connection; the others fail. This is the well-known "thundering herd" problem. nginx does not ignore it: it provides accept_mutex, which, as the name suggests, is a shared lock around accept. With the lock, only one process at a time tries to accept a connection, so there is no thundering herd. accept_mutex is a controllable option that we can switch off; it is on by default. When a worker has accepted a connection, it reads the request, parses it, processes it, generates the response data and returns it to the client, and finally closes the connection; that is a complete request. As we can see, a request is handled entirely by one worker, and only within that one worker.
So what are the benefits of the nginx process model? Quite a few, of course. First, each worker is an independent process and needs no locks for its own work, which saves the locking overhead and makes both programming and troubleshooting much easier. Second, because the processes are independent, they do not affect each other: if one process dies, the others keep working and service is not interrupted, while the master quickly starts a new worker. An abnormal worker exit almost certainly means a bug in the program, and it causes all requests in that worker to fail, but it does not affect the requests in the other workers, which reduces the overall risk. There are many more advantages that you will come to appreciate gradually.
We have said a lot about the nginx process model; now let us look at how nginx handles events.
Some readers may ask: nginx uses worker processes to handle requests, and each worker has only one main thread, so the concurrency it can handle must be very limited; how can a handful of workers achieve high concurrency? In fact, this is where nginx is clever: nginx processes requests in an asynchronous, non-blocking way, which lets it handle tens of thousands of requests simultaneously. Compare this with the common working mode of Apache (Apache also has an asynchronous non-blocking variant, but it conflicts with some of Apache's own modules, so it is not commonly used): there, each request occupies a worker thread exclusively, so when concurrency reaches the thousands, thousands of threads are handling requests at once. For the operating system this is a real challenge: the memory footprint of all those threads is huge, the context switching between threads imposes a high CPU overhead, performance naturally stalls, and the overhead buys nothing.
Why can nginx handle requests asynchronously and without blocking, and what do "asynchronous" and "non-blocking" actually mean here? Let us go back to the beginning and look at a complete request: the request arrives, a connection is established, data is received, and data is sent back. At the system level these are read and write events, and when the read or write event is not ready, it cannot be acted upon. With a blocking call, if the event is not ready we can only wait in the kernel until it is, giving up the CPU; for a single-threaded worker this is clearly inappropriate: when there are many network connections, everybody ends up waiting, the CPU sits idle with nothing to run, utilization cannot rise, and high concurrency is out of the question. You might say: then just add processes; but how would that differ from Apache's threading model? Be careful not to add pointless context switches. So in nginx, blocking system calls are the biggest taboo. If we do not block, we must be non-blocking: a non-blocking call returns EAGAIN immediately when the event is not ready, telling us "not ready yet, come back later". Fine; a while later we check the event again, and in between we can do other work, until eventually the event is ready. But although we no longer block, we now have to keep coming back to poll the state of the event; we can do more things, but the polling overhead is not small. Hence the asynchronous non-blocking event-handling mechanisms, embodied in system calls such as select/poll/epoll/kqueue. They provide a way to monitor many events at once; the call itself blocks, but we can give it a timeout, and within that time, if any event becomes ready, it returns.
This mechanism solves both of the problems above. Taking epoll as an example (in what follows we use epoll as a representative of this family of calls): when an event is not ready, we put it into epoll; when it becomes ready, we do the read or write; when the read or write returns EAGAIN, we add it back into epoll. Thus, as long as any event is ready we handle it, and only when no event is ready do we wait inside epoll. In this way we can handle a large amount of concurrency. Of course, the concurrent requests here are requests in flight: there is only one thread, so only one request can be processed at any instant; the thread simply keeps switching between requests, and each switch happens because an asynchronous event is not ready and the request voluntarily yields. There is no cost to this switching; you can think of it as looping over the ready events, which is in fact what it is. Compared with multi-threading, this style of event handling has great advantages: no thread needs to be created per request, very little memory is consumed per request, and there is no context switching, so event handling is extremely lightweight. More concurrency does not lead to wasted resources (context switches); it just takes a bit more memory. I once tested the connection limit: on a machine with 24G of memory, I handled over 2,000,000 concurrent requests. Network servers today generally work this way, and this is the main reason for nginx's high performance.
We said earlier that the recommended number of workers is the number of CPU cores, and here it is easy to see why: more workers would only make processes compete for CPU resources and cause unnecessary context switches. Moreover, to make better use of multi-core hardware, nginx provides a CPU affinity binding option, so that a worker can be bound to one core, avoiding the cache misses caused by a process migrating between cores. Small optimizations like this are very common in nginx, and they show the effort the author has put into it. For example, when nginx compares 4-byte strings, it converts the 4 characters into an int and compares that, to reduce the number of CPU instructions, as we will see later.
Now we know why nginx chose this process model and this event model. For a basic web server, there are usually three kinds of events: network events, signals, and timers. From the explanation above we know that network events are handled very well by the asynchronous non-blocking approach. But how are signals and timers handled?
First, signal handling. For nginx, certain specific signals carry specific meanings. A signal interrupts whatever the program is currently doing and, after the state change, execution continues. If a system call was in progress, it may fail and need to be restarted. On signal handling in general you can consult a specialized text; here we only note the nginx case: if nginx is waiting for events (in epoll_wait) and the program receives a signal, then after the signal handler finishes, epoll_wait returns an error, and the program simply calls epoll_wait again.
Now for timers. epoll_wait and similar functions take a timeout when they are called, and nginx uses this timeout to implement timers. The timer events inside nginx are kept in a min-heap; each time before entering epoll_wait, nginx takes the soonest timer event from the heap and, after computing the remaining interval, passes it to epoll_wait as the timeout. So when there are no events and no interrupting signals, epoll_wait times out, which means a timer event is due. At that point, nginx checks all timed-out events, marks their status as timed out, and then handles the network events. Therefore, when we write nginx code, the first thing a network event callback usually does is check for timeout, and only then handle the network event.
We can summarize nginx's event-handling model with pseudocode:

while (true) {
    for t in run_tasks:
        t.handler();

    update_time(&now);

    timeout = ETERNITY;
    for t in wait_tasks: /* sorted by due time already */
        if (t.time <= now) {
            t.timeout_handler();
        } else {
            timeout = t.time - now;
            break;
        }

    nevents = poll_function(events, timeout);

    for i in nevents:
        task t;
        if (events[i].type == READ) {
            t.handler = read_handler;
        } else { /* events[i].type == WRITE */
            t.handler = write_handler;
        }
        run_tasks_add(t);
}
So far we have covered the process model and the event model, including network events, signals, and timer events.

Basic concepts of nginx (100%)


connection

In nginx, a connection is the encapsulation of a TCP connection, including the connection's socket and its read and write events. Using nginx's connection encapsulation, we can conveniently handle connection-related matters, such as establishing a connection and sending and receiving data. nginx's HTTP request handling is built on top of connection, which is why nginx can serve not only as a web server but also as a mail server. With the connection layer provided by nginx, we can in principle talk to any backend service.
Let us follow the life cycle of a TCP connection to see how nginx handles one. First, when nginx starts, it parses the configuration file to obtain the ports and IP addresses it must listen on; then, in the master process, it initializes the listening sockets (create the socket, set options such as SO_REUSEADDR, bind to the specified IP address and port, and listen) and forks the child processes, which then compete to accept new connections. At this point, a client can initiate a connection to nginx. When the three-way handshake between the client and nginx completes and the connection is established, one of the child processes will accept successfully, obtain the socket of the established connection, and create nginx's encapsulation of the connection, namely the ngx_connection_t structure. It then sets the read and write event handlers and adds the read and write events so that data can be exchanged with the client. Finally, either nginx or the client closes the connection, and the connection dies a natural death.
Of course, nginx can also act as a client, requesting data from other servers (as the upstream module does); the connections it creates to other servers are likewise encapsulated in ngx_connection_t. As a client, nginx first obtains an ngx_connection_t structure, then creates a socket and sets its properties (such as non-blocking); then it adds the read and write events, calls connect/read/write to drive the connection, and finally closes the connection and releases the ngx_connection_t.
In nginx, each process has an upper limit on the number of connections, and this limit is not the same as the system limit on file descriptors. In the operating system, ulimit -n gives the maximum number of file descriptors a process may open (nofile). Since every socket connection consumes one fd, the fd limit caps the number of connections our process can have, and thereby directly limits the maximum concurrency our program can support: once the fds are used up, creating a new socket will fail. However, nginx's limit on the number of connections has no direct relationship with nofile; it can be larger or smaller than nofile. nginx sets the maximum number of connections each worker process may use through worker_connections. In its implementation, nginx manages connections with a connection pool: each worker process has its own pool, whose size is worker_connections. What the pool holds is not actual established connections, but an array of worker_connections ngx_connection_t structures. nginx keeps all the free ngx_connection_t on a free_connections list; each time a connection is needed, one is taken from the free list, and when it is no longer used, it is put back onto the free list.
Many people misunderstand the worker_connections parameter, thinking it is the maximum number of connections nginx can accept. In fact, the value is the maximum number of connections each worker process can create; therefore, the maximum number of connections one nginx instance can establish is worker_connections * worker_processes. That figure applies to connections: for HTTP requests to local resources, the maximum supported concurrency is worker_connections * worker_processes; but if nginx acts as a reverse proxy, the maximum concurrency is worker_connections * worker_processes / 2, because as a reverse proxy each concurrent request establishes one connection with the client and one with the backend service, occupying two connections. For example, with worker_processes 4 and worker_connections 1024, the instance can hold up to 4096 connections, which as a reverse proxy means roughly 2048 concurrent requests.
Now, we said earlier that when a client connection arrives, multiple idle processes compete for it, and it is easy to see that this competition can be unfair: if one process gets more chances to accept, its idle connections will run out quickly; if nothing is done in advance, then when that process accepts a new TCP connection and has no idle ngx_connection_t left, and it cannot hand the connection over to another process, the TCP connection goes unserved and is dropped. Clearly this is unfair: some processes have free connection slots but get no chance to handle the connection, while another process artificially drops connections because it has no free slots. So how does nginx solve this problem? First, the accept_mutex option must be on; then only the process holding accept_mutex adds the accept event, that is, nginx controls whether a process registers for accept at all. nginx uses a variable named ngx_accept_disabled to control whether a worker competes for the accept_mutex lock. In the first piece of code below, ngx_accept_disabled is computed: its value is one eighth of the total connections of a single nginx process, minus the number of remaining free connections. A pattern emerges: when the remaining free connections drop below one eighth of the total, the value becomes greater than 0, and the fewer free connections remain, the larger it gets. Looking at the second piece of code: when ngx_accept_disabled is greater than 0, the worker does not try to acquire the accept_mutex lock, and ngx_accept_disabled is decremented by one; so each time execution reaches this point, it is decremented, until it falls below 0. Not acquiring the accept_mutex lock is equivalent to ceding the chance to accept connections: clearly, the fewer free connections a worker has, the larger ngx_accept_disabled is and the more opportunities it cedes, so the other processes get a greater chance to acquire the lock. By not accepting, a worker keeps its own connection usage under control while the connection pools of the other processes are put to use; this is how nginx balances connections among its processes.
ngx_accept_disabled = ngx_cycle->connection_n / 8
                      - ngx_cycle->free_connection_n;

if (ngx_accept_disabled > 0) {
    ngx_accept_disabled--;

} else {
    if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
        return;
    }

    if (ngx_accept_mutex_held) {
        flags |= NGX_POST_EVENTS;

    } else {
        if (timer == NGX_TIMER_INFINITE
            || timer > ngx_accept_mutex_delay)
        {
            timer = ngx_accept_mutex_delay;
        }
    }
}
That concludes this first introduction to connections. The purpose of this chapter is to introduce basic concepts, so it is enough to know what a connection is in nginx; the more advanced usage of connections, their implementation, and their use will get a dedicated chapter on connections and events in the later part on module development.


request

In this section we talk about requests. In nginx a request means an HTTP request, and the corresponding data structure in nginx is ngx_http_request_t. ngx_http_request_t is the encapsulation of an HTTP request. As we know, an HTTP request consists of a request line, request headers, and a request body, and, on the way back, a response line, response headers, and a response body.
HTTP is a typical request-response network protocol, and it is a text protocol, so analyzing the request line and request headers, and emitting the response line and response headers, is usually done line by line. If we were writing an HTTP server ourselves, then typically, after a connection is established and the client has sent a request, we would read a line of data and parse out the method, URI, and http_version contained in the request line. Then we would process the request headers line by line, and based on the method and the request headers decide whether there is a request body and how long it is, then read the request body. Having obtained the full request, we would process it to produce the output data, then generate the response line, response headers, and response body. With the response sent to the client, a complete request has been handled. Of course, this is the simplest webserver approach, and nginx in fact does the same, with some small differences; for example, nginx begins processing the request as soon as the request headers have been read. nginx uses ngx_http_request_t to store data related to parsing the request and to producing the response.
Now, briefly, how nginx handles a complete request. For nginx, a request starts in ngx_http_init_request; in this function, the read event handler is set to ngx_http_process_request_line, so the next network event will be handled by ngx_http_process_request_line. As the name suggests, it processes the request line; consistent with what was said before, the first thing to do is indeed to process the request line. It reads request data via ngx_http_read_request_header, then calls the ngx_http_parse_request_line function to parse the request line. For efficiency, nginx uses a state machine to parse the request line, and when comparing the method it does not compare strings directly; instead it loads the four characters into an integer and compares once, to reduce the number of CPU instructions, as mentioned earlier. Many people know that a request line carries the method, URI, and version, but may not know that the request line can also contain a host. For example, a request line such as "GET http://example.com/uri HTTP/1.0" is legitimate, and its host is example.com; in this case, nginx ignores the Host header among the request headers and instead uses the host from the request line to look up the virtual server. In addition, HTTP/0.9 does not support request headers at all, so that case needs special treatment; accordingly, when request headers are parsed later, the protocol version is 1.0 or 1.1. The parameters parsed from the request line are saved into the ngx_http_request_t structure.
Having parsed the request line, nginx sets the read event handler to ngx_http_process_request_headers, and subsequent request data is read and parsed in ngx_http_process_request_headers. This function reads the request headers; like the request line, it calls ngx_http_read_request_header to read data and ngx_http_parse_header_line to parse one header line, and the parsed headers are saved into the headers_in field of ngx_http_request_t; headers_in is a linked list holding all the request headers. Some HTTP headers need special processing; these headers and their processing functions are stored in a mapping table, namely ngx_http_headers_in, from which a hash table is generated at initialization time. Whenever a request header has been parsed, it is looked up in this hash table, and if found, the corresponding function is called to process that header. For example, the processing function for the Host header is ngx_http_process_host.
When nginx parses two consecutive line endings, marking the end of the request headers, it invokes ngx_http_process_request to handle the request. ngx_http_process_request sets both the read and the write event handler of the current connection to ngx_http_request_handler, and then calls ngx_http_handler to really start processing a complete HTTP request. It may seem strange that read and write events share ngx_http_request_handler; in fact, inside this function, depending on whether the current event is a read event or a write event, the read_event_handler or write_event_handler of ngx_http_request_t is called. Since at this point the request headers have been read completely, and, as said before, nginx's approach is not to read the request body yet, read_event_handler is set here to ngx_http_block_reading, which reads no data. The real processing of data begins inside ngx_http_handler: this function sets write_event_handler to ngx_http_core_run_phases and then executes the ngx_http_core_run_phases function. ngx_http_core_run_phases executes the multiple phases of request processing: nginx divides the handling of an HTTP request into several phases, and this function runs those phases to produce data. Because ngx_http_core_run_phases ultimately produces data, it is easy to see why the write event handler is set to ngx_http_core_run_phases. Here I have only sketched the calling logic briefly; what matters is to understand that in the end, ngx_http_core_run_phases is called to handle the request, and that the produced response headers are placed in headers_out of ngx_http_request_t; this part will be covered in the chapter on request processing. The various phases nginx runs on the request eventually call the filters to filter the data and massage it, for tasks such as chunked transfer encoding or gzip compression.
The filters include header filters and body filters, which process the response headers and the response body respectively. The filters form linked lists: there is a chain of header filters and a chain of body filters; first all the header filters are executed, then all the body filters. The last header filter, ngx_http_header_filter, traverses all the response headers, serializes them into one contiguous piece of memory, and then calls ngx_http_write_filter to output it. ngx_http_write_filter is also the last body filter, so nginx first runs the body through the whole series of body filters and finally calls ngx_http_write_filter for output.
It should be noted here that nginx stores the request headers in a buffer whose size is set by the configuration directive client_header_buffer_size. If the request headers are too big to fit in this buffer, nginx allocates a new, larger buffer for them, whose size can be set with large_client_header_buffers; this directive configures a group of buffers, e.g. "large_client_header_buffers 4 8k" means four buffers of 8k each can be used. Note that, to preserve the integrity of the request line or of a request header, each must be kept in one contiguous piece of memory; therefore, a complete request line or a complete request header can only be saved in a single buffer. Consequently, if the request line exceeds the size of one buffer, a 414 error is returned, and if a single request header is larger than one buffer, a 400 error is returned. Understanding the meaning of these parameters and nginx's actual behavior lets us, in a real application scenario, tune them to actual demand and optimize our program.
Processing flow chart: (figure: request processing)

All of the above is the life cycle of an HTTP request in nginx. Next we look at some concepts associated with requests.


keepalive

Of course, in nginx, long (keep-alive) connections are supported for both HTTP/1.0 and HTTP/1.1. What is a long connection? We know that HTTP requests ride on top of TCP, so before a client makes a request, it must establish a TCP connection with the server, and each TCP connection requires a three-way handshake; if the network between client and server is poor, these three exchanges cost more and more time, and they also generate network traffic. When a connection is closed there are four more exchanges, though those happen after the response and matter less for user experience. Since HTTP is a request-response protocol, if we can know the body length of each request and response, then we can execute multiple requests over one connection: this is the so-called long connection. The precondition is that we can first determine the body lengths of the request and the response. For requests: if the current request needs a body, as a POST request does, nginx requires the client to specify Content-Length in the request headers to indicate the size of the body; otherwise it returns a 400 error. That is, the request body length is determinate; what about the response body length? Let us first look at how the HTTP protocol determines the response body length:
For HTTP/1.0, if the response has a Content-Length header, the client knows the length of the body from it; after receiving that much data, the client considers the request complete. If there is no Content-Length header, the client keeps receiving data until the server actively closes the connection, which signals that the body has been fully received. For HTTP/1.1, if the response header Transfer-Encoding is chunked, the body is a streamed output divided into a number of blocks, each of which begins with its own length, so the body length need not be specified in advance. If the transfer is not chunked and there is a Content-Length, the client receives data according to Content-Length. Otherwise, if it is not chunked and there is no Content-Length, the client receives data until the server closes the connection. From the above we can see that, except for HTTP/1.0 without Content-Length and HTTP/1.1 non-chunked without Content-Length, the body length is knowable; in those knowable cases, the server can consider a long connection after it finishes writing the body. Whether a long connection is used is further constrained: if the client's request headers carry "Connection: close", the client wants the connection closed; if they carry "Connection: keep-alive", the client wants it kept open; if the request has no Connection header, then by protocol default HTTP/1.0 is silently taken as close and HTTP/1.1 defaults to keep-alive. If the outcome is keepalive, then after writing out the response body, nginx sets the keepalive attribute on the current connection and waits for the client's next request. Of course, nginx cannot wait forever: if the client never sends data, should it occupy this connection indefinitely?
So, when nginx waits in keepalive for the next request, there is a maximum waiting time, configured with the keepalive_timeout option. If it is configured as 0, keepalive is off: regardless of whether the HTTP version is 1.0 or 1.1, and regardless of whether the client's Connection header says close or keep-alive, the connection will be forcibly closed.
If the server finally decides to keep the connection alive, the response headers will include a Connection header with the value "Keep-Alive"; otherwise the value is "Close". If the Connection value is close, nginx actively closes the connection after sending the response data. So, for nginx under a relatively heavy request load, turning keepalive off will ultimately produce more sockets in the time-wait state. Generally speaking, when a client repeatedly needs to access the same server, turning keepalive on is a great advantage, for example for an image server, since a web page usually contains many images; turning on keepalive also reduces the number of time-wait sockets.


HTTP/1.1 introduced a new feature: pipelining. What is pipelining? It is a pipelined mode of operation and can be seen as a refinement of keepalive, because pipelining is also based on a long connection and its purpose is to carry multiple requests over one connection. With plain keepalive, if a client submits several requests, the second can only be sent after the response to the first has been received in full; that is, the requests are serialized, one after another. Note that a complete request cycle includes sending the request, processing it, and sending the response. With pipelining, the client does not have to wait for the first request to be processed before sending the second. Since a TCP connection is full duplex, sending and receiving can happen at the same time, so the client can send several request headers back to back, the server processes them in order, and the client receives the responses in order; multiple requests are thus in flight at the same time. Nginx supports pipelining directly; however, nginx does not process the pipelined requests in parallel. It still handles them one by one, only while the first request is being processed the client may already have sent the second. What pipelining saves nginx, after processing one request, is the wait for the next request's header data. Nginx's approach is in fact very simple: as mentioned earlier, nginx reads data into a buffer, so after it finishes processing a request, if data remains in the buffer, it treats that data as the beginning of the next request and processes it immediately; otherwise it goes back to keepalive waiting.


lingering_close means, literally, delayed close: when nginx is about to close a connection, it does not close it immediately but waits for a while before really closing it. Why? Consider this scenario. Nginx receives a request from a client, but due to a client or server error it must respond with an error message right away, and in most such cases nginx then needs to close the current connection. If the client is still sending data, or its data has not yet reached the server, and the server closes the connection anyway, the client's in-flight data will be answered with an RST packet. Once the client receives the RST, it will not deliver the server's pending data to the application; that is, the client never gets to read the error message the server sent. The client is left thinking: what a bully this server is, resetting my connection out of nowhere without even an error message.
The key point in this scenario is that the server sends the client an RST, causing the server's own response to be discarded on the client side. So the key to the fix is to keep the server from sending an RST. Thinking about it, we send an RST because we close the whole connection, and we close the connection because we no longer want to handle it and will produce no more data. For a full-duplex TCP connection, it is enough to close only the write side; reads can continue, and we simply throw away anything we read. Then, when the client sends data after our close, it will not receive an RST. Of course, we must eventually close the read side too, so we set a timeout: after it expires we close the read end, and whatever the client sends afterwards is no longer our concern; by then the client has had ample time to read the error message, and if it still has not, that is its own bad luck. A well-behaved client, after reading the response, will close the connection itself, and the server then closes its read end within the timeout. This is lingering_close. The protocol stack provides the SO_LINGER option, which can be configured to handle a similar case, but nginx implements lingering_close on its own. The point of lingering_close is to drain the rest of the client's data, so nginx applies a read timeout, set with the lingering_timeout directive; if no data arrives within lingering_timeout, the connection is closed directly. Nginx also bounds the total draining time with lingering_time: after closing the write side of the socket, the connection is retained for at most lingering_time, within which the client must send out all its remaining data, otherwise nginx closes the connection once that time is up. Finally, whether this behavior is enabled at all is configured with the lingering_close directive. So, in actual applications, should we enable lingering_close?
There is no fixed recommendation here; as Maxim Dounin put it, the main benefit of lingering_close is better compatibility with clients, at the cost of extra resources (the connection stays occupied longer).
In this section we introduced the basic concepts of connections and requests in nginx; in the next section we discuss the basic data structures.

The basic data structures (20%)

In the ultimate pursuit of efficiency, the author of nginx implemented many characteristically nginx-flavored data structures and utility functions himself. For example, nginx provides a string type that carries its length, and ngx_copy, a copy function optimized according to compiler options. So, when writing nginx modules, we should try to call the nginx API, even though some of those APIs are merely macro wrappers around glibc. This section introduces strings, lists, buffers, chains and other basic data structures, the associated APIs, and the points that need attention when using them.


In the nginx source directory, src/core/ngx_string.h|c contains the string type and the APIs that wrap string operations. Nginx provides a length-carrying string structure, ngx_str_t, whose prototype is as follows:
typedef struct {
    size_t      len;
    u_char     *data;
} ngx_str_t;

As the structure shows, data points to the first character of the string, and the end of the string is determined by len, not by a terminating '\0'. So when writing nginx code, string handling differs greatly from what we are used to; just remember that strings are not '\0'-terminated, and operate on them with the APIs nginx provides. What does nginx gain from this? First, the length is given directly by len, which reduces repeated computation of string lengths. Second, nginx can reference a sub-range of existing memory: data can point anywhere and len marks the end, without copying the string (if a trailing '\0' were required but the original memory could not be modified, a copy would potentially be needed). Among the members of ngx_http_request_t we can find many examples of strings that merely reference memory: request_line, uri, args and so on all point into the buffer created when the request data was received; uri and args need no copies of their own. This eliminates a great deal of needless allocation and copying. Precisely because of this, when modifying a string you must consider whether the string may be modified at all, and whether modifying it affects other references to the same memory; the ngx_unescape_uri function discussed later illustrates this point. One problem does arise from nginx strings: most of the system APIs provided by glibc expect '\0'-terminated strings, so we cannot pass str->data to them directly. The usual practice is to allocate str->len + 1 bytes of memory, copy the string, and set the last byte to '\0'.
A hackier approach is to back up the byte one past the end of the string, set it to '\0', make the call, and then restore the backup; but the preconditions are that you are sure that byte may be modified and that it lies within allocated memory (no out-of-bounds access), so this is generally not recommended. Next, let us look at the string-related APIs nginx provides.

ngx_string(str)

Initializes an ngx_str_t from str, which must be a constant (literal) string; it can generally only be used to initialize a string variable at declaration.

ngx_null_string

Declares a variable initialized to the empty string: length 0, data NULL.
ngx_str_set(str, text)

Sets the string str to text; text must be a constant (literal) string.

ngx_str_null(str)

Sets the string str to the empty string: length 0, data NULL.
The above four macros must be used with care. ngx_string and ngx_null_string expand to brace initializers, so they can only be used to initialize a variable at declaration, not in ordinary assignment:
ngx_str_t str = ngx_string("hello world");
ngx_str_t str1 = ngx_null_string;

Used like this, there will be compile errors:
ngx_str_t str, str1;
str = ngx_string("hello world");    // Compile error
str1 = ngx_null_string();                // Compile error

In such cases, call ngx_str_set and ngx_str_null instead:
ngx_str_t str, str1;
ngx_str_set(&str, "hello world");
ngx_str_null(&str1);

But note that when calling ngx_string and ngx_str_set, the string passed in must be a constant literal, not a pointer; otherwise you will get baffling results, because these macros compute the length as sizeof(text) - 1, which yields the string length for a literal but the pointer size minus one for a pointer. For example:
ngx_str_t str;
u_char *a = (u_char *) "hello world";
ngx_str_set(&str, a);    // wrong: str.len becomes sizeof(a) - 1, not 11

void ngx_strlow(u_char *dst, u_char *src, size_t n);

Converts the first n characters of src to lowercase and stores them in dst; the caller must ensure that dst has space for at least n bytes. The operation does not modify the original string. To lowercase a string in place, pass it as both source and destination:
ngx_str_t str = ngx_string("hello world");
ngx_strlow(str.data, str.data, str.len);

ngx_strncmp(s1, s2, n)

Case-sensitive comparison of two strings, comparing only the first n characters (a macro wrapping strncmp).
ngx_strcmp(s1, s2)

Case-sensitive comparison of two NUL-terminated strings, without a length argument.
ngx_int_t ngx_strcasecmp(u_char *s1, u_char *s2);

Case-insensitive comparison of two NUL-terminated strings, without a length argument.
ngx_int_t ngx_strncasecmp(u_char *s1, u_char *s2, size_t n);

Case-insensitive comparison of two strings, comparing only the first n characters.
u_char * ngx_cdecl ngx_sprintf(u_char *buf, const char *fmt, ...);
u_char * ngx_cdecl ngx_snprintf(u_char *buf, size_t max, const char *fmt, ...);
u_char * ngx_cdecl ngx_slprintf(u_char *buf, u_char *last, const char *fmt, ...);

The above three functions format strings into buf. ngx_snprintf bounds the space of buf with its second parameter, max; ngx_slprintf bounds it with last (a pointer one past the end of buf). ngx_sprintf performs no bounds check at all, so the second or third function is recommended; ngx_sprintf is dangerous and prone to buffer overflows. In this family of functions, besides being compatible with most of the glibc conversion characters, nginx adds some convenient conversions of its own for nginx types, for example %V for ngx_str_t. The full list, from ngx_string.c in the nginx source:
 * supported formats:
 *    %[0][width][x][X]O        off_t
 *    %[0][width]T              time_t
 *    %[0][width][u][x|X]z      ssize_t/size_t
 *    %[0][width][u][x|X]d      int/u_int
 *    %[0][width][u][x|X]l      long
 *    %[0][width|m][u][x|X]i    ngx_int_t/ngx_uint_t
 *    %[0][width][u][x|X]D      int32_t/uint32_t
 *    %[0][width][u][x|X]L      int64_t/uint64_t
 *    %[0][width|m][u][x|X]A    ngx_atomic_int_t/ngx_atomic_uint_t
 *    %[0][width][.width]f      double, max valid number fits to %18.15f
 *    %P                        ngx_pid_t
 *    %M                        ngx_msec_t
 *    %r                        rlim_t
 *    %p                        void *
 *    %V                        ngx_str_t *
 *    %v                        ngx_variable_value_t *
 *    %s                        null-terminated string
 *    %*s                       length and string
 *    %Z                        '\0'
 *    %N                        '\n'
 *    %c                        char
 *    %%                        %
 *  reserved:
 *    %t                        ptrdiff_t
 *    %S                        null-terminated wchar string
 *    %C                        wchar

A special reminder: for the conversion we use most often, %V for ngx_str_t, the argument passed must be a pointer to the structure, otherwise the program will coredump. This is the mistake we make most easily. For example:
ngx_str_t str = ngx_string("hello world");
u_char buffer[1024];
ngx_snprintf(buffer, 1024, "%V", &str);    // note: pass the address of str

void ngx_encode_base64(ngx_str_t *dst, ngx_str_t *src);
ngx_int_t ngx_decode_base64(ngx_str_t *dst, ngx_str_t *src);

These two functions implement base64 encoding and decoding. The caller must ensure that dst has enough space to hold the result; if the exact size is unknown, the macros ngx_base64_encoded_length and ngx_base64_decoded_length can be used to compute the maximum space needed.
uintptr_t ngx_escape_uri(u_char *dst, u_char *src, size_t size,
    ngx_uint_t type);

Escapes src, in a way selected by type. If dst is NULL, the function returns the number of characters that need escaping, from which the required space can be computed. type can be:
#define NGX_ESCAPE_URI         0
#define NGX_ESCAPE_ARGS        1
#define NGX_ESCAPE_HTML        2
#define NGX_ESCAPE_REFRESH     3

void ngx_unescape_uri(u_char **dst, u_char **src, size_t size, ngx_uint_t type);

Unescapes src. type can take one of three values: 0, NGX_UNESCAPE_URI, or NGX_UNESCAPE_REDIRECT. If it is 0, every escaped character in src is decoded. With NGX_UNESCAPE_URI and NGX_UNESCAPE_REDIRECT, decoding stops at the first '?' encountered and the remaining characters are ignored. The difference between the two is that NGX_UNESCAPE_URI decodes every character that needs decoding, while NGX_UNESCAPE_REDIRECT decodes only non-visible characters.
uintptr_t ngx_escape_html(u_char *dst, u_char *src, size_t size);

Escapes characters that are special in HTML.
Of course, I have only introduced some of the most commonly used APIs here; you may well be familiar with more. In actual use, when you run into something you do not understand, the fastest and most direct approach is to read the source: look at how the API is implemented, or at how nginx itself calls it. The code is the best documentation.


Ngx_pool_t is a very important data structure, used in many key places; many important data structures are in turn using it. What exactly is it? Simply put, it provides a mechanism that helps manage a series of resources (memory, files, and so on), so that the use and release of those resources happen in a unified way: while using them, you need not think about releasing each resource individually, or fear that some release was forgotten.
Take memory management as an example: whenever we need memory, we obtain it from a ngx_pool_t object, and at the end we destroy that ngx_pool_t object, at which point all of that memory is released. We no longer need to pair malloc with free, nor worry about whether some malloc'd block went unfreed, because when the ngx_pool_t object is destroyed, all memory allocated from it is released with it.
Another example is files: suppose we open a series of files, all of which must eventually be closed. We register each file with a ngx_pool_t object, and when the ngx_pool_t object is destroyed, all of those files are closed.
From these two examples we can see that with ngx_pool_t, all resources are released together when the object is destroyed. This brings a problem: the life cycle of each resource (the time it stays occupied) coincides with that of the ngx_pool_t (although ngx_pool_t does provide a small number of operations for releasing resources early). From a pure performance standpoint, this is not optimal. Suppose we need three resources A, B and C; once we finish using B, A will not be used again, and by the time we use C, neither A nor B is needed. Without ngx_pool_t, we might acquire A, use it, and release it; then acquire B, use it, and release it; and finally acquire, use, and release C. With one ngx_pool_t managing all three, the releases of A, B and C all happen at the end, after C is finished. Admittedly, the program's resource usage over that period is higher; but it also relieves the programmer of managing three separate resource life cycles. There is a gain here and a loss there. It is a matter of choice, of what you care about more in the specific situation.
A typical ngx_pool_t usage scenario can be seen in nginx itself: for each HTTP request nginx processes, it creates a ngx_pool_t object and associates it with the HTTP request; all resources needed during processing are obtained from that ngx_pool_t object, and when the request has been processed, all resources acquired along the way are released together with the associated ngx_pool_t object.
The structures and operations related to ngx_pool_t are defined in the src/core/ngx_palloc.h|c files.
typedef struct ngx_pool_s        ngx_pool_t;

struct ngx_pool_s {
    ngx_pool_data_t       d;
    size_t                max;
    ngx_pool_t           *current;
    ngx_chain_t          *chain;
    ngx_pool_large_t     *large;
    ngx_pool_cleanup_t   *cleanup;
    ngx_log_t            *log;
};

From the perspective of a general user of ngx_pool_t, there is no need to understand the role of each field in the structure, so they are not explained in detail here; where the use of some operation requires it, the relevant field will be described.
Below we explain the operations on ngx_pool_t.
ngx_pool_t *ngx_create_pool(size_t size, ngx_log_t *log);

Creates a pool whose initial node is size bytes, with log as the log object for subsequent operations on the pool. The choice of size needs some explanation: size must be less than or equal to NGX_MAX_ALLOC_FROM_POOL and must be greater than sizeof(ngx_pool_t).
Choosing a value larger than NGX_MAX_ALLOC_FROM_POOL is wasteful, because the space beyond that limit will not be used (this applies only to the first memory block managed by the ngx_pool_t object; if the free space of the first block runs out, subsequent allocations get freshly allocated blocks).
Choosing a value smaller than sizeof(ngx_pool_t) makes the program crash, because the initial memory block needs part of its space to store the ngx_pool_t structure itself.
When a ngx_pool_t object is created, its max field is set to the smaller of size - sizeof(ngx_pool_t) and NGX_MAX_ALLOC_FROM_POOL. Later allocations from the pool come first out of the first memory block; once it is used up, if allocation is to continue, memory must be requested from the operating system. When the requested size is less than or equal to max, a newly allocated memory block is linked into the list managed by the d field (actually d.next); when the requested block is larger than max, the memory requested from the system is linked into the list managed by the large field. We will call these the large-block chain and the small-block chain.
void *ngx_palloc(ngx_pool_t *pool, size_t size);

Allocates a block of size bytes from the pool. Note that the start address of memory allocated by this function is aligned to NGX_ALIGNMENT. Alignment increases the system's processing speed but wastes a little memory.
void *ngx_pnalloc(ngx_pool_t *pool, size_t size);

Allocates a block of size bytes from the pool, but unlike the function above, the memory allocated by this function is not aligned.
void *ngx_pcalloc(ngx_pool_t *pool, size_t size);

This function allocates size bytes and zeroes the allocated block before returning it. Internally it is implemented by calling ngx_palloc.
void *ngx_prealloc(ngx_pool_t *pool, void *p, size_t old_size, size_t new_size);

Reallocates the block of memory pointed to by p. If p is NULL, a new block of new_size bytes is simply allocated.
If p is not NULL, a new block is allocated, the contents of the old memory are copied into the new block, and the old memory at p is then released (whether the old block can actually be released depends on the circumstances; the details are not covered here).
This function is implemented using ngx_palloc.
void *ngx_pmemalign(ngx_pool_t *pool, size_t size, size_t alignment);

Allocates a block of size bytes aligned to the specified alignment. The memory requested here is placed on the large-block chain regardless of its size.
ngx_int_t ngx_pfree(ngx_pool_t *pool, void *p);

Releases a block of memory on the large-block chain, that is, one managed by the list in the large field. The implementation of this function traverses the large-block list in order, so its efficiency is low. If the block is found in the list, it is released and NGX_OK is returned; otherwise NGX_DECLINED is returned.
Because this operation is inefficient, do not call it unless it is really necessary, that is, unless the memory is very large and ought to be released promptly; ordinarily there is no need, since it will be released anyway when the pool is destroyed.
ngx_pool_cleanup_t *ngx_pool_cleanup_add(ngx_pool_t *p, size_t size);

ngx_pool_t manages a special list through its cleanup field; each node of the list records a special resource that needs releasing, and how that resource is released is described by the node itself. This provides a great deal of flexibility: it means ngx_pool_t can manage not only memory but, through this mechanism, any resource that must be released, for example closing a file or deleting a file. Let us look at the type of each node in this list:
typedef struct ngx_pool_cleanup_s  ngx_pool_cleanup_t;
typedef void (*ngx_pool_cleanup_pt)(void *data);

struct ngx_pool_cleanup_s {
    ngx_pool_cleanup_pt   handler;
    void                 *data;
    ngx_pool_cleanup_t   *next;
};

data: the resource this node corresponds to. handler: a function pointer pointing to the function that releases the resource referenced by data; the function's only parameter is data. next: points to the next element of the list. Seeing this, the usage of the ngx_pool_cleanup_add function should already be fairly clear; but what is the size parameter for? size is the size of the storage that the data field will point to.
For example, suppose we need a file deleted at the end. When calling this function, we specify size as the size of the string storing the file name; the call adds a node to the cleanup list and returns the newly added node. We then copy the file name into the memory pointed to by that node's data field, and set its handler field to a function that deletes the file (whose prototype must, of course, match void (*ngx_pool_cleanup_pt)(void *data)).
void ngx_destroy_pool(ngx_pool_t *pool);

This function releases all the memory held by the pool, and for each element of the list managed by the cleanup field, calls the function its handler field points to, thereby releasing all the resources the pool manages. Afterwards, the ngx_pool_t pointed to by pool is itself released and may no longer be used.
void ngx_reset_pool(ngx_pool_t *pool);

This function releases all the memory on the pool's large-block chain, and marks the memory blocks of the small-block chain as unused again. It does not process the items on the cleanup list.


Ngx_array_t is the array structure used inside nginx. Its design is much like the built-in C arrays everyone knows: the data storage area is a contiguous chunk of memory. But besides the data area, the array also holds some metadata describing it. Let us look at the details starting from the definition. ngx_array_t is defined in src/core/ngx_array.c|h.
typedef struct ngx_array_s       ngx_array_t;
struct ngx_array_s {
    void        *elts;
    ngx_uint_t   nelts;
    size_t       size;
    ngx_uint_t   nalloc;
    ngx_pool_t  *pool;
};

elts: points to the actual data storage area. nelts: the actual number of elements in the array. size: the size of a single element, in bytes. nalloc: the capacity of the array, i.e. the maximum number of elements it can store without triggering expansion. When nelts reaches nalloc and another element is to be stored, the array expands: its capacity is doubled, which in fact means allocating a new block of memory twice the size of the old one and copying the original data into the new block. pool: the memory pool the array uses for allocation. Next, the functions that operate on ngx_array_t.
ngx_array_t *ngx_array_create(ngx_pool_t *p, ngx_uint_t n, size_t size);

Creates a new array object and returns it.
p: the memory pool the array uses for allocation. n: the initial capacity of the array, i.e. the number of elements it can hold without expansion. size: the size of a single element, in bytes.
void ngx_array_destroy(ngx_array_t *a);

Destroys the array object and returns the corresponding memory to the memory pool. Note that after this call, the fields of the array object are not cleared; so even though the fields of a still hold meaningful values, the object must never be used again, unless it is reinitialized with the ngx_array_init function.
void *ngx_array_push(ngx_array_t *a);

Appends one new element to array a and returns a pointer to it. Cast the returned pointer to the concrete type, and then assign to the new element itself or to its fields (if the element type is a complex one).
void *ngx_array_push_n(ngx_array_t *a, ngx_uint_t n);

Appends n elements to array a and returns a pointer to the position of the first of these appended elements.
static ngx_inline ngx_int_t ngx_array_init(ngx_array_t *array, ngx_pool_t *pool, ngx_uint_t n, size_t size);

If an array object is allocated on the heap, then after destroying it with ngx_array_destroy, you can call this function if you want to use it again.
If an array object is allocated on the stack, this function must be called to initialize it before it can be used.
A caveat: when the array expands, the old memory is not released, which wastes memory. It is therefore best to plan the array's capacity well in advance, fixing it at creation or initialization time, to avoid repeated expansion and the waste it causes.


Ngx_hash_t is nginx's own hash table implementation, defined and implemented in src/core/ngx_hash.h|c. The implementation matches the hash tables described in data structure textbooks. Among the common techniques for resolving collisions (linear probing, quadratic probing, and chaining), ngx_hash_t uses the most common one, chaining, which is also the method used by the hash tables in the STL.
But the ngx_hash_t implementation has several notable characteristics of its own:
Unlike other hash tables, which allow inserting and deleting elements, ngx_hash_t can only be built once: initialization constructs the hash table, after which elements can be neither deleted nor inserted. Furthermore, the "chains" of ngx_hash_t are not really linked lists but contiguous stretches of storage, rather like arrays. This is because at initialization time ngx_hash_t goes through a precomputation phase in which the number of elements falling into each bucket is counted in advance, so the size of each bucket is known up front; no linked list is then needed, a contiguous stretch of storage suffices, which also saves memory to a certain extent. From this description we can see that using ngx_hash_t is actually very simple: two steps, first initialize it, then look things up in it. Let us look at the details.
The initialization of ngx_hash_t.
ngx_int_t ngx_hash_init(ngx_hash_init_t *hinit, ngx_hash_key_t *names,
    ngx_uint_t nelts);

First, the initialization function. Its first parameter, hinit, is a collection of initialization parameters. names is the array of all the keys with which the ngx_hash_t is initialized, and nelts is the number of keys. Let us look first at the ngx_hash_init_t type, which supplies the basic information needed to initialize a hash table.
typedef struct {
    ngx_hash_t       *hash;
    ngx_hash_key_pt   key;

    ngx_uint_t        max_size;
    ngx_uint_t        bucket_size;

    char             *name;
    ngx_pool_t       *pool;
    ngx_pool_t       *temp_pool;
} ngx_hash_init_t;

hash: if this field is NULL, then after the initialization function is called it points to the newly created hash table; if the field is not NULL, then at the start all data is inserted into the hash table this field points to. key: the hash function that generates a hash value from a string; the nginx source provides a default, ngx_hash_key_lc. max_size: the maximum number of buckets in the hash table. The larger this field, the lower the probability that stored elements collide, the fewer elements in each bucket, and the faster queries become; of course, a larger value also wastes more memory (actually, not that much is wasted). bucket_size: the maximum size of each bucket, in bytes. If, when initializing a hash table, some bucket cannot hold all the elements belonging to it, hash initialization fails. name: the name of the hash table. pool: the memory pool the hash table uses for allocation. temp_pool: the temporary pool the hash table uses; it can be released and destroyed once initialization is complete. Below is the structure of the array that stores the hash keys.
typedef struct {
    ngx_str_t         key;
    ngx_uint_t        key_hash;
    void             *value;
} ngx_hash_key_t;

The meanings of key and value are obvious and need no explanation. key_hash is the hash value computed from key with the hash function. Having analyzed these two structures, I think everyone should already see how this function is used. The function returns NGX_OK when it initializes a hash table successfully, otherwise NGX_ERROR.
void *ngx_hash_find(ngx_hash_t *hash, ngx_uint_t key, u_char *name, size_t len);

Looks up the value corresponding to a key in hash. Here key is actually the hash value computed from the real key (the name), and len is the length of name.
If the lookup succeeds, a pointer to the value is returned; otherwise NULL.


To handle the matching of domain names containing wildcards, nginx implements the ngx_hash_wildcard_t hash table, which supports two kinds of wildcard domain names. One kind has the wildcard in front, for example "*.abc.com"; the asterisk may also be omitted, writing it directly as ".abc.com". Such a key can match domain names like www.abc.com and qqq.www.abc.com. The other kind has the wildcard at the end, for example "mail.xxx.*"; note in particular that a trailing wildcard, unlike a leading one, may not be omitted. Such a key can match domain names like mail.xxx.com, mail.xxx.com.cn and mail.xxx.net.
One point to note: a hash table of type ngx_hash_wildcard_t can contain only keys with the wildcard in front, or only keys with the wildcard at the end; it cannot mix the two kinds of key. Variables of type ngx_hash_wildcard_t are constructed with the ngx_hash_wildcard_init function, and queries are performed with ngx_hash_find_wc_head or ngx_hash_find_wc_tail. ngx_hash_find_wc_head queries a hash table whose keys have the wildcard in front, and ngx_hash_find_wc_tail queries one whose keys have the wildcard at the end.
These functions are described in detail below.
ngx_int_t ngx_hash_wildcard_init(ngx_hash_init_t *hinit, ngx_hash_key_t *names,
    ngx_uint_t nelts);

This function builds a hash table whose keys can contain wildcards.
hinit: the collection of parameters used to build the wildcard hash table; for the meaning of each parameter, see the description of the ngx_hash_init_t type under the ngx_hash_init function.
names: the array of ngx_hash_key_t structures holding all of the wildcard keys. Note in particular that the keys here must already be preprocessed. For example, "*.abc.com" or ".abc.com" is preprocessed into "com.abc.", and "mail.xxx.*" is preprocessed into "mail.xxx.". Why is that? Here is a brief description of how the wildcard hash table is implemented. Building this kind of hash table actually builds a "chain" of hash tables, linked together through the values of the keys. For example, for "*.abc.com", two hash tables are built: the first has an entry "com" whose value contains a pointer to the second hash table, and the second has an entry "abc" whose value contains a pointer to the value stored for "*.abc.com". During a query, say for www.abc.com, "com" is looked up first; through that entry the second-level hash table is found; in the second-level table, "abc" is looked up next; and so on, until at some level the value found is a real value rather than a pointer to a next-level hash table, at which point the query ends. One thing needs special attention: the value pointers in the elements of the names array (the locations of the real values) must be divisible by 4, that is, aligned on a multiple of 4. This is because the low two bits of the value are used internally, so they must be 0. If this condition is not met, hash table queries return incorrect results.
nelts: the number of elements in the names array.
This function returns NGX_OK on success, NGX_ERROR otherwise.
void *ngx_hash_find_wc_head(ngx_hash_wildcard_t *hwc, u_char *name, size_t len);

This function queries a hash table containing keys with leading wildcards.
hwc: pointer to the hash table object.
name: the domain name to query, for example: www.abc.com.
len: the length of name.
This function returns the value matched by the wildcard; if nothing is found, it returns NULL.
void *ngx_hash_find_wc_tail(ngx_hash_wildcard_t *hwc, u_char *name, size_t len);

This function queries a hash table containing keys with trailing wildcards. For the parameters and return value, see the description of the previous function.


For the combined-type hash table, the following type is defined:
typedef struct {
    ngx_hash_t            hash;
    ngx_hash_wildcard_t  *wc_head;
    ngx_hash_wildcard_t  *wc_tail;
} ngx_hash_combined_t;

As the definition shows, this type actually contains three hash tables: an ordinary hash table, a leading-wildcard hash table, and a trailing-wildcard hash table.
The purpose of this type, as provided by nginx, is to serve as a convenient container for the three kinds of hash table. When a hash table is built from a set of keys containing both wildcard and non-wildcard keys, it can be queried in a convenient way, without having to consider which type of hash table a given key should be looked up in.
To build such a combined hash table, first define a variable of this type, then build each of its three sub hash tables separately.
For queries on this type of combined hash table, nginx provides a convenience function, ngx_hash_find_combined.
void *ngx_hash_find_combined(ngx_hash_combined_t *hash, ngx_uint_t key,
u_char *name, size_t len);

This function queries each sub hash table of the combined hash table in turn to see whether there is a match, and returns the result as soon as one is found. In other words, if there is more than one possible match, only the first match found is returned.
hash: the combined hash table object.
key: the hash value computed from name.
name: the actual content of the key.
len: the length of name.
Returns the query result, or NULL if nothing is found.


As we have seen, when building a ngx_hash_wildcard_t, the wildcard keys need to be preprocessed, which is rather troublesome. Moreover, when a set of keys contains both keys without wildcards and keys with wildcards, we need to build up to three hash tables: one containing the ordinary keys, one leading-wildcard hash table, and one trailing-wildcard hash table (or the three can be combined into one ngx_hash_combined_t hash table). To make building these hash tables convenient, nginx provides the ngx_hash_keys_arrays_t type.
This type and its related operation functions are defined in src/core/ngx_hash.h|c. First, let's look at the definition of the type.
typedef struct {
    ngx_uint_t        hsize;

    ngx_pool_t       *pool;
    ngx_pool_t       *temp_pool;

    ngx_array_t       keys;
    ngx_array_t      *keys_hash;

    ngx_array_t       dns_wc_head;
    ngx_array_t      *dns_wc_head_hash;

    ngx_array_t       dns_wc_tail;
    ngx_array_t      *dns_wc_tail_hash;
} ngx_hash_keys_arrays_t;

hsize: the number of buckets in the hash tables that will be built. The structures holding the construction information for all three kinds of hash table use this parameter.
pool: the pool used to build the hash tables.
temp_pool: a temporary pool that may be used while building this type as well as the final three hash tables. This temp_pool can be destroyed once construction is finished; it only holds memory for temporary use.
keys: the array storing all of the non-wildcard keys.
keys_hash: a two-dimensional array; the first dimension represents the bucket index, so keys_hash[i] stores all the keys whose hash value modulo hsize equals i. Suppose there are 3 keys, key1, key2, and key3, and their hash values modulo hsize all equal i; then these three keys are stored in keys_hash[i][0], keys_hash[i][1], and keys_hash[i][2]. This field is used while adding keys to save them and check for conflicts, that is, for duplicate keys.
dns_wc_head: stores the preprocessed values of the keys with leading wildcards. For example, "*.abc.com" is preprocessed into "com.abc." and stored in this array.
dns_wc_tail: stores the preprocessed values of the keys with trailing wildcards. For example, "mail.xxx.*" is preprocessed into "mail.xxx." and stored in this array.
dns_wc_head_hash: used while adding keys to save the leading-wildcard keys and check for conflicts, that is, for duplicates.
dns_wc_tail_hash: used while adding keys to save the trailing-wildcard keys and check for conflicts, that is, for duplicates.
After defining a variable of this type and assigning its pool and temp_pool fields, you can call the ngx_hash_add_key function to add all the keys to this structure. The function automatically classifies and checks ordinary keys, keys with leading wildcards, and keys with trailing wildcards, and stores their values in the corresponding fields. Then, by checking whether the three arrays keys, dns_wc_head, and dns_wc_tail are empty, you can decide whether to build an ordinary hash table, a leading-wildcard hash table, and a trailing-wildcard hash table (and use these three arrays when building the three kinds of hash table).
After the three hash tables have been built, they can be combined into one ngx_hash_combined_t object and queried with ngx_hash_find_combined; or the three hash tables can be kept as three independent variables, with the caller deciding when to query which hash table.
ngx_int_t ngx_hash_keys_array_init(ngx_hash_keys_arrays_t *ha, ngx_uint_t type);

Initializes this structure, mainly the fields of type ngx_array_t; returns NGX_OK on success.
ha: pointer to the structure object.
type: this field takes one of 2 values, NGX_HASH_SMALL or NGX_HASH_LARGE, specifying the scale of the hash table that will be built. NGX_HASH_SMALL means a smaller bucket size and fewer array elements; NGX_HASH_LARGE means the opposite.
ngx_int_t ngx_hash_add_key(ngx_hash_keys_arrays_t *ha, ngx_str_t *key,
void *value, ngx_uint_t flags);

This function is generally called in a loop to add a group of key/value pairs to the structure. It returns NGX_OK on success, and NGX_BUSY if the key is a duplicate.
ha: pointer to the structure object.
key: the key to add; the parameter name is self-explanatory.
value: the value associated with the key; likewise self-explanatory.
flags: two flags can be set, NGX_HASH_WILDCARD_KEY and NGX_HASH_READONLY_KEY; to set both at the same time, combine them with the logical or operator. When NGX_HASH_READONLY_KEY is set, the key is not converted to lowercase when its hash value is computed; otherwise it is. When NGX_HASH_WILDCARD_KEY is set, the key may contain wildcards and will be processed accordingly. If neither flag is needed, pass 0.
For an example of how to use this data structure, see the ngx_http_server_names function in src/http/ngx_http.c.
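Putting the pieces together, a typical construction sequence can be sketched in C-like pseudocode. Error handling, the setup of hinit, and the exact flag values are elided, and the variable names are invented for this sketch; it follows the pattern of ngx_http_server_names only in outline.

```
/* C-like pseudocode -- not compilable as-is */
ngx_hash_keys_arrays_t  ha;

ha.pool = pool;
ha.temp_pool = temp_pool;
ngx_hash_keys_array_init(&ha, NGX_HASH_SMALL);

for (each key/value pair to index) {
    ngx_hash_add_key(&ha, &key, value, NGX_HASH_WILDCARD_KEY);
}

if (ha.keys.nelts) {            /* ordinary keys present */
    ngx_hash_init(&hinit, ha.keys.elts, ha.keys.nelts);
}
if (ha.dns_wc_head.nelts) {     /* leading-wildcard keys present */
    ngx_hash_wildcard_init(&hinit, ha.dns_wc_head.elts,
                           ha.dns_wc_head.nelts);
}
if (ha.dns_wc_tail.nelts) {     /* trailing-wildcard keys present */
    ngx_hash_wildcard_init(&hinit, ha.dns_wc_tail.elts,
                           ha.dns_wc_tail.nelts);
}
```

Each branch only runs when the corresponding array is non-empty, which is exactly the "check the three arrays, then decide which hash tables to build" procedure described above.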


The filter modules of nginx process data passed over from other filter modules or from handler modules (in fact, the HTTP response that needs to be sent to the client). This data is passed along in the form of a linked list (ngx_chain_t). Moreover, the data may be passed in several batches, that is, the filter function is called repeatedly, each time with a different ngx_chain_t.
This structure is defined in src/core/ngx_buf.h|c. Let's look at the definition of ngx_chain_t.
struct ngx_chain_s {
    ngx_buf_t    *buf;
    ngx_chain_t  *next;
};

It has 2 fields: next points to the next node in the list, and buf points to the actual data. Appending a node to this list is therefore very easy: point the next pointer of the last element at the new node, and set the new node's next to NULL.
ngx_chain_t *ngx_alloc_chain_link(ngx_pool_t *pool);

This function creates a ngx_chain_t object and returns a pointer to it, or NULL on failure.
#define ngx_free_chain(pool, cl)                                             \
    cl->next = pool->chain;                                                  \
    pool->chain = cl

This macro releases one ngx_chain_t object. If you want to release a whole chain, iterate over the linked list and apply this macro to each node.
Be careful: releasing a ngx_chain_t does not really free memory; it merely hangs the object on a field of the corresponding pool object called chain. The next time an object of type ngx_chain_t is allocated from this pool, the first element is quickly taken off pool->chain and returned; of course, if that chain is empty, the object is really allocated from the pool with the ngx_palloc function.


The actual data of each node of the ngx_chain_t list is a ngx_buf_t. This structure is in fact an abstract data type that represents a piece of data: the data may point to a buffer in memory, may point to part of a file, or may be pure metadata (the purpose of the metadata is to tell the modules reading the list to treat the data differently).
The data structure is defined in the src/core/ngx_buf.h|c file. Let's look at its definition.
struct ngx_buf_s {
    u_char          *pos;
    u_char          *last;
    off_t            file_pos;
    off_t            file_last;

    u_char          *start;         /* start of buffer */
    u_char          *end;           /* end of buffer */
    ngx_buf_tag_t    tag;
    ngx_file_t      *file;
    ngx_buf_t       *shadow;

    /* the buf's content could be changed */
    unsigned         temporary:1;

    /*
     * the buf's content is in a memory cache or in a read only memory
     * and must not be changed
     */
    unsigned         memory:1;

    /* the buf's content is mmap()ed and must not be changed */
    unsigned         mmap:1;

    unsigned         recycled:1;
    unsigned         in_file:1;
    unsigned         flush:1;
    unsigned         sync:1;
    unsigned         last_buf:1;
    unsigned         last_in_chain:1;

    unsigned         last_shadow:1;
    unsigned         temp_file:1;

    /* STUB */ int   num;
};

pos: when the data the buf refers to is in memory, pos points to the start of this piece of data.
last: when the data the buf refers to is in memory, last points to the end of this piece of data.
file_pos: when the data the buf refers to is in a file, file_pos is the offset into the file at which this piece of data starts.
file_last: when the data the buf refers to is in a file, file_last is the offset into the file at which this piece of data ends.
start: when the buf refers to data in memory, a whole block of memory may be referenced by more than one buf (for example, when some data is inserted into the middle of existing data, the original data has to be split). The start and end of such bufs point to the beginning and the end of this memory block, while pos and last point to the beginning and the end of the data this particular buf actually covers.
end: see the explanation of start.
tag: actually a pointer of type void*; the user can associate an arbitrary object with it, as long as it is meaningful to the user.
file: when the content of the buf is in a file, the file field points to the corresponding file object.
shadow: when one buf is a complete copy of all the fields of another buf, the two bufs actually point to the same block of memory, or the same part of the same file, and the shadow fields of the two bufs point at each other. For such a pair of bufs, special care is needed at release time: it must be decided in advance which one releases the resources, because releasing them twice may cause a crash!
temporary: 1 means the content of this buf is in a block of memory created by the user, and it may be changed during filter processing without causing problems.
memory: 1 means the content of this buf is in memory, but the content must not be changed by the filter processing.
mmap: 1 means the content of this buf is in memory mapped from a file with mmap, and the content must not be changed by the filter processing.
recycled: can be recycled, that is, this buf may be released. This field is usually used together with the shadow field: for a buf created with the ngx_create_temp_buf function that is also the shadow of another buf, this field can indicate that the buf may be released.
in_file: 1 means the content of this buf is in a file.
flush: when a chain containing a buf whose flush field is set to 1 is encountered, the data of this chain is output even if it is not the last data (last_buf set, signalling that all the content to output is finished); it is not subject to the postpone_output configuration limit, but it is still subject to the sending rate and other conditions.
sync:
last_buf: data may be passed to a filter in multiple chains; this field being 1 indicates that this is the last buf.
last_in_chain: this buf is the last one in the current chain. Note in particular that a buf with last_in_chain set is not necessarily last_buf, but a buf with last_buf set must be last_in_chain. This is because data is passed to a filter module over multiple chains.
last_shadow: when creating a shadow of a buf, last_shadow is usually set to 1 on the newly created buf.
temp_file: due to memory constraints, the content of a buf sometimes needs to be written to a temporary file on disk; this flag is then set.
To create such an object, it can be allocated directly from a ngx_pool_t, after which the fields are assigned as needed. The following 2 macros can also be used:
#define ngx_alloc_buf(pool)  ngx_palloc(pool, sizeof(ngx_buf_t))
#define ngx_calloc_buf(pool) ngx_pcalloc(pool, sizeof(ngx_buf_t))

These two macros are used like functions and are self-explanatory.
To create a buf whose temporary field is 1 (that is, whose content can be modified by subsequent filter modules), the function ngx_create_temp_buf can be used directly.
ngx_buf_t *ngx_create_temp_buf(ngx_pool_t *pool, size_t size);

This function creates a ngx_buf_t object and returns a pointer to it; it returns NULL on failure.
For the created object, its start and end point to the beginning and the end of the newly allocated memory. At the beginning, pos and last both point to the start of that memory, so subsequent operations can store data in this newly allocated memory.
pool: the pool used to allocate the buf and the memory the buf uses.
size: the size of the memory the buf uses.
To make ngx_buf_t convenient to use, nginx defines the macros below.
#define ngx_buf_in_memory(b)        (b->temporary || b->memory || b->mmap)

Returns whether the content of this buf is in memory.
#define ngx_buf_in_memory_only(b)   (ngx_buf_in_memory(b) && !b->in_file)

Returns whether the content of this buf is only in memory, and not in a file.
#define ngx_buf_special(b)                                                   \
    ((b->flush || b->last_buf || b->sync)                                    \
     && !ngx_buf_in_memory(b) && !b->in_file)

Returns whether this buf is a special buf, one that carries only special flags and contains no real data.
#define ngx_buf_sync_only(b)                                                 \
    (b->sync                                                                 \
     && !ngx_buf_in_memory(b) && !b->in_file && !b->flush && !b->last_buf)

Returns whether this buf is a special buf that carries only the sync flag and contains no real data.
#define ngx_buf_size(b)                                                      \
    (ngx_buf_in_memory(b) ? (off_t) (b->last - b->pos):                      \
                            (b->file_last - b->file_pos))

Returns the size of the data contained in the buf, regardless of whether the data is in a file or in memory.


As its name suggests, ngx_list_t looks like a list data structure. That statement is both right and wrong. It is right because ngx_list_t has some characteristics of a list: elements can be added, it grows on demand, and unlike an array type it is not limited by an initially fixed capacity; and, like the list data structures we commonly see, its internal implementation uses a linked list.
So how does it differ from the list built on an ordinary linked list? The difference is in its nodes: unlike a common list node, which holds only a single element, a ngx_list_t node is actually a fixed-size array.
At initialization time, we need to specify the size of each element and the capacity of each node's array. When adding an element to the list, it is appended to the array of the tail node; if that node's array is full, a new node is added to the list.
Having read this far, everyone should have a basic understanding of this list structure. It doesn't matter if not; let's look at its concrete definition. These definitions and the related operation functions are in the src/core/ngx_list.h|c file.
typedef struct {
    ngx_list_part_t  *last;
    ngx_list_part_t   part;
    size_t            size;
    ngx_uint_t        nalloc;
    ngx_pool_t       *pool;
} ngx_list_t;

last: points to the last node of this list.
part: the first node of the linked list, which stores the actual elements.
size: the memory size required by each element stored in the list.
nalloc: the capacity of the fixed-size array contained in each node.
pool: the pool this list uses to allocate memory.
Good, now let's look at the definition of each node.
typedef struct ngx_list_part_s  ngx_list_part_t;
struct ngx_list_part_s {
    void             *elts;
    ngx_uint_t        nelts;
    ngx_list_part_t  *next;
};

elts: the start address in memory of this node's elements.
nelts: the number of elements this node already holds. This value cannot exceed the nalloc field of the list header of type ngx_list_t.
next: points to the next node.
Now let's look at the operation functions provided.
ngx_list_t *ngx_list_create(ngx_pool_t *pool, ngx_uint_t n, size_t size);

This function creates a ngx_list_t object and allocates the memory in which the first node stores its elements.
pool: the pool used to allocate memory.
n: the length of the fixed-size array in each node.
size: the size of each stored element.
Return value: a pointer to the created ngx_list_t object on success, NULL on failure.
void *ngx_list_push(ngx_list_t *list);

This function appends a new element at the tail of the given list and returns a pointer to the new element's storage space. If the append fails, it returns NULL.
static ngx_inline ngx_int_t
ngx_list_init(ngx_list_t *list, ngx_pool_t *pool, ngx_uint_t n, size_t size);

This function is used when an object of type ngx_list_t already exists but the memory for its first node's element storage has not been allocated; calling it allocates the element storage for the first node of the list.
So when can there be an object of type ngx_list_t whose first node's element storage has not yet been allocated? When the variable of type ngx_list_t was not created by calling the ngx_list_create function. For example, if a member variable of a structure is of type ngx_list_t, then when an object of that structure type is created, the member variable is created along with it, but the element storage of its first node is not allocated.
In short, if a variable of type ngx_list_t was not created by calling ngx_list_create, then you must call this function to initialize it; otherwise, appending elements to the list may cause unpredictable behavior, or the program may crash!

Posted by Jacob at December 06, 2013 - 11:13 PM