First, we will run a simple layer-4 load balancing experiment. It does not involve many load balancing algorithms and uses only the default round-robin algorithm. In the layer-7 load balancing experiment that follows, we will focus on testing different load balancing strategies and complete the related experiments there.
First, add the following stream block configuration to nginx.conf:
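A minimal sketch of such a stream block is shown below. Only the ports 3000/3001 and the weight of 2 come from the description that follows; the upstream name `backend`, the listen port 80, and the loopback addresses are illustrative assumptions.

```nginx
stream {
    upstream backend {
        server 127.0.0.1:3000 weight=2;  # receives roughly 2 of every 3 connections
        server 127.0.0.1:3001;           # default weight is 1
    }

    server {
        listen 80;              # assumed front-end TCP port
        proxy_pass backend;     # layer-4 (TCP) forwarding to the upstream group
    }
}
```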
The above configuration simulates two upstream servers on ports 3000 and 3001, specifies their addresses in the upstream block, and sets the weight of the first one to 2. Because the weighted round-robin algorithm is used by default and the default server weight is 1, setting the weight to 2 means that two out of every three requests are forwarded to port 3000 and one to port 3001, which the following tests also verify.
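A rough way to verify this, assuming two simple HTTP services are already listening on ports 3000 and 3001 and each replies with something that identifies its port:

```bash
# Send a few requests through the Nginx stream proxy (assumed to listen on port 80);
# with weight=2 on the 3000 backend, about two of every three responses
# should come from port 3000 and one from port 3001.
for i in 1 2 3; do curl -s http://127.0.0.1/; done
```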
The layer-7 setup is in fact similar to the layer-4 configuration. In addition to the most basic test, we will also try several of the load balancing strategies mentioned above to become more familiar with load balancing configuration in Nginx.
Add the following http block to nginx.conf:
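A minimal sketch of the http-level configuration the text refers to is shown below. Only the ports 8000/8001/8002 and the commented-out ip_hash and hash user_$arg_username directives come from the article; the upstream name `web_servers`, the listen port 80, and the loopback addresses are assumptions.

```nginx
http {
    upstream web_servers {
        # ip_hash;                    # uncomment to pin clients by source IP
        # hash user_$arg_username;    # uncomment to hash on the username argument
        server 127.0.0.1:8000;        # all three servers keep the default weight of 1
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://web_servers;   # layer-7 (HTTP) forwarding
        }
    }
}
```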
In the above configuration, we simulate three upstream servers on ports 8000, 8001, and 8002. By default the round-robin load balancing algorithm is used and all three servers have a weight of 1, so after issuing the HTTP requests below we can see that the requests forwarded by Nginx are evenly distributed across the three servers.
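A simple check (again assuming each backend identifies itself in its response):

```bash
# With the default round-robin and equal weights, repeated requests should
# rotate evenly across the 8000, 8001, and 8002 backends.
for i in $(seq 1 6); do curl -s http://127.0.0.1/; done
```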
Now uncomment the ip_hash directive. By default it uses the client's IP address as the hash key. Restart the Nginx service and run the following command-line operations:
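For example, something along these lines:

```bash
# Reload Nginx so the uncommented ip_hash directive takes effect.
nginx -s reload

# With ip_hash, all requests from the same client IP should now be
# answered by the same backend port.
for i in $(seq 1 5); do curl -s http://127.0.0.1/; done
```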
Next, comment out ip_hash again and uncomment the line hash user_$arg_username. The hash directive lets us hash on a key of our own choosing and then select the upstream server according to the hash value. See the following Linux commands for the test:
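A sketch of such a test, with the usernames chosen purely for illustration:

```bash
# Reload Nginx so the hash user_$arg_username directive takes effect.
nginx -s reload

# The backend is chosen by hashing the username query argument, so the
# same username always maps to the same backend.
curl -s "http://127.0.0.1/?username=alice"
curl -s "http://127.0.0.1/?username=alice"   # same backend as above
curl -s "http://127.0.0.1/?username=bob"     # may map to a different backend
```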
Here we can see that when the request carries a username parameter, the hash algorithm configured in Nginx uses that parameter as the key and maps the request to an upstream server according to the hash result. Requests with the same username always reach the same upstream server; only when the value of username changes can the response come from a different server.
Today we completed several test experiments covering Nginx's layer-4 and layer-7 load balancing. This capability sees heavy use in microservice deployments: to keep services highly available, high-traffic companies often scale a service horizontally and deploy identical instances on multiple hosts, and that is where load balancing comes into play. Nginx provides a complete load balancing feature set and a variety of load balancing algorithms that satisfy the needs of most organizations; if they are not enough, custom modules can be developed and integrated into Nginx to meet specific requirements. Nginx is therefore well worth studying in depth.