Often you need to load balance your application in order to scale out and improve performance, and NGINX can do just that: it can distribute your traffic across several application servers.
NGINX supports several load balancing methods:
- round robin – requests are distributed across the servers in turn
- least connected – each request is sent to the server with the least number of active connections
- ip hash – a hash function on the client's IP address determines which server receives the request
- weighted – requests are distributed according to the weights specified for the servers
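Jumping ahead slightly, each of these methods is enabled with a single directive inside an upstream block (covered in detail below). As a sketch, here is how the ip hash method could be turned on; the server names are placeholders:

```nginx
http {
    upstream api {
        # ip_hash pins each client (keyed on its IP address) to the same
        # backend server, which gives you sticky sessions for free
        ip_hash;
        server address-of-your-application-backend-1.com;
        server address-of-your-application-backend-2.com;
    }
}
```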
I'll elaborate by describing an architecture in which a set of static content is served by NGINX on a domain name, and that static content has certain dynamic aspects which are loaded from your application backend. The application backend is what will be load balanced here.
Let's start with the static content and assume it is available at /path/to/your/content. A sample NGINX configuration for this would look something like
server {
    server_name example.com;
    root /path/to/your/content;
    index index.html;
    include /etc/nginx/mime.types;
}
where the domain example.com points to our static content available at /path/to/your/content.
Now, in order to render your dynamic content, your static files need a way to specify where that content is served from. You could hard-code the API details there, but let's do it via the NGINX configuration instead. We'll make the API available on the same domain as configured above, under the URI /api. One way of doing this is to add a location directive to your server {} block that offloads all traffic under that URI to your application backend. The configuration could look something like
location /api/ {
    proxy_pass http://api/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
proxy_pass – passes the request on to the proxied server (here, the api upstream)
proxy_http_version – sets the HTTP protocol version used for proxying to 1.1
proxy_set_header – redefines or adds request header fields passed to the proxied server
But how do you tell NGINX where /api points to? This is done using the upstream directive, which has to be configured inside the http {} block, usually located in the nginx.conf file. A sample configuration looks something like this
http {
    upstream api {
        server address-of-your-application-backend.com;
    }
}
Once that is done, all traffic to http://example.com/api will be proxied to the server specified in the upstream configuration.
For example, say the information about a user of your application is available from your application backend at the /user/:id API. That information can now be accessed at http://example.com/api/user/1.
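The trailing slashes in the location and proxy_pass directives make this mapping work: when proxy_pass is given with a URI part, NGINX replaces the matched location prefix with it before forwarding the request.

```nginx
location /api/ {
    # the matched prefix /api/ is replaced by the URI part of proxy_pass,
    # so a request for /api/user/1 reaches the upstream as /user/1
    proxy_pass http://api/;
}
```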
This is where things start getting interesting: the upstream block is also where the load balancing instructions are specified. You can specify multiple servers inside the upstream block
http {
    upstream api {
        server address-of-your-application-backend-1.com;
        server address-of-your-application-backend-2.com;
        server address-of-your-application-backend-3.com;
    }
}
Notice that no load balancing method is specified in the configuration above, so it defaults to round robin: all requests for dynamic data are now routed to the servers listed in the upstream block in round-robin fashion.
A load balancing method can be specified inside the upstream block like this
http {
    upstream api {
        least_conn;
        server address-of-your-application-backend-1.com;
        server address-of-your-application-backend-2.com;
        server address-of-your-application-backend-3.com;
    }
}
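The weighted distribution mentioned earlier works the same way: weights are set per server and can be combined with the other methods. A sketch, reusing the placeholder server names from above:

```nginx
http {
    upstream api {
        # with these weights, backend-1 receives roughly three out of
        # every five requests, the other two one each
        server address-of-your-application-backend-1.com weight=3;
        server address-of-your-application-backend-2.com;
        server address-of-your-application-backend-3.com;
    }
}
```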
Hope this helps in better understanding NGINX load balancing and how it can be used as part of your application architecture.
This article was first published on the Knoldus blog.