In this tutorial, you will learn what a load balancer does and how to use it. Load balancing lets you distribute traffic between multiple instances and is a perfect way to horizontally scale your application.
- A Fuga account.
- Two instances (external instances are also possible).
Create a load balancer.
Go to the Fuga Dashboard, select ‘Networking’ on the left side of your screen, then click on ‘Load Balancer’.
Select the Medium size load balancer (the other options, Small and Large, are coming soon).
Under 'Pool members', select the two instances you want the traffic divided between. Below that, you can also add an external IP address.
After selecting the instances (or an external IP address), select the Protocol you would like to use for traffic balancing.
In this tutorial, we use HTTP to balance the traffic between two web servers. For the load balancer port type, you have the option to create a certificate for your load balancer. This option is only available for terminated HTTPS: the load balancer uses the SSL certificate to perform SSL offloading, and communicates with the backend over plain HTTP. For the members, select HTTP and port 80.
After configuring the load balancer, you can select the balancing Method:
- Round Robin: requests are routed to each server in turn, so they are distributed evenly in order of arrival.
- Least Connections: each request is sent to the server with the fewest active connections at that moment.
- Source IP: a server is selected for a client's first request, and the source IP address (client IP address) is used to route subsequent requests from that same client to the same server.
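The three methods can be illustrated with a small simulation. This is only a sketch of the selection logic with hypothetical member IPs, not how the load balancer is actually implemented:

```python
from itertools import cycle

servers = ["10.0.0.11", "10.0.0.12"]  # hypothetical pool member IPs

# Round Robin: hand requests to each server in turn.
rr = cycle(servers)
rr_choices = [next(rr) for _ in range(4)]
print(rr_choices)  # alternates between the two members

# Least Connections: pick the member with the fewest active connections.
active = {"10.0.0.11": 3, "10.0.0.12": 1}
least = min(active, key=active.get)
print(least)  # "10.0.0.12"

# Source IP: derive the member from the client address, so the same
# client always lands on the same server.
def pick_by_source_ip(client_ip):
    return servers[hash(client_ip) % len(servers)]

assert pick_by_source_ip("203.0.113.7") == pick_by_source_ip("203.0.113.7")
```

Note the trade-off: Round Robin ignores load, Least Connections adapts to it, and Source IP sacrifices even distribution for session stickiness.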
After you have selected the method of your load balancer, four Monitoring options remain. With these settings, you determine how the backend servers are checked for availability. When a server fails the specified number of checks, it is removed from the pool until it passes the checks again.
- Protocol: the protocol used to check whether the backend server is available.
- Timeout (seconds): the time a health check is allowed to take before it is considered failed.
- Delay (seconds): the interval, in seconds, between consecutive health checks.
- Max retries: the number of consecutive health checks a backend server must fail before it is marked as down, or must pass before it is marked as up again.
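The monitoring settings work together roughly like the toy loop below. This is a simplified sketch: `check` stands in for whatever request the chosen protocol makes, and a real health monitor tracks more state than this:

```python
import time

def monitor(check, delay=5.0, timeout=3.0, max_retries=3):
    """Toy health monitor: marks a member down after `max_retries`
    consecutive failed checks, and up again after `max_retries`
    consecutive successful ones."""
    failures = successes = 0
    healthy = True
    while True:
        ok = check(timeout)   # e.g. an HTTP GET that must answer within `timeout`
        if ok:
            failures, successes = 0, successes + 1
            if not healthy and successes >= max_retries:
                healthy = True       # passed enough checks: back in the pool
        else:
            successes, failures = 0, failures + 1
            if healthy and failures >= max_retries:
                healthy = False      # failed too often: removed from the pool
        yield healthy
        time.sleep(delay)            # wait `delay` seconds before the next check
```

For example, with `max_retries=3`, a member that fails three checks in a row is taken out of rotation, and only rejoins after three consecutive successful checks.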
After configuring the load balancer, click on Create load balancer. Deployment takes a few minutes. After deployment, you can still add new instances to the load balancer and adjust the health monitoring.
In this tutorial, you have learned how to create and configure a load balancer and how to attach instances to it.