How to create an HA VIP with Corosync on OpenStack

Running applications HA (Highly Available) has become the norm. One way of doing this is by deploying your applications in Kubernetes. Exposing your application endpoint is typically done using a load balancer; in Kubernetes you can run an Ingress for this. However, this still makes the node running the Ingress pod a single point of failure.

A solution to this problem is running Ingress pods on multiple nodes and letting these nodes share a VIP (Virtual IP). This is where Corosync comes in. The VIP lives on one of the nodes at a time. When that node fails, Corosync makes sure one of the other nodes takes over the VIP.

In this tutorial, we’re not going to install an entire Kubernetes cluster. Instead, as a proof of concept, we’re going to install three nodes with the Nginx web server.

The VIP and the instances are going to live in the range 10.0.0.0/24. We’re reserving 10.0.0.2-10.0.0.200 for instances and 10.0.0.201-10.0.0.255 for VIPs. We’re using just one VIP in this tutorial, which will be 10.0.0.201. In order to give the VIP an externally reachable address, we’ll make a Neutron port for the VIP and attach a floating IP address to it.

Requirements

A Fuga Cloud account
The OpenStack CLI, configured for your project

Configure virtual network

We need to create a network with a subnet in the range 10.0.0.0/24. We’re going to reserve everything above 10.0.0.200 for VIP addresses.

openstack network create vip-net
openstack subnet create vip-subnet --network vip-net --allocation-pool start=10.0.0.2,end=10.0.0.200 --subnet-range 10.0.0.0/24

We need to attach the subnet to your project’s router. To get a list of your routers, type:

openstack router list

Giving a list that looks like this:

+--------------------------------------+----------------------------------------+--------+-------+-------------+------+----------------------------------+
| ID                                   | Name                                   | Status | State | Distributed | HA   | Project                          |
+--------------------------------------+----------------------------------------+--------+-------+-------------+------+----------------------------------+
| edc48681-b094-4b8e-9b0e-d6452f86f5ce | router-internal-to-external-my-project | ACTIVE | UP    | None        | None | c7a894e64e9a495082f1170fbdde3aa3 |
+--------------------------------------+----------------------------------------+--------+-------+-------------+------+----------------------------------+

Use the router ID in the first column to attach the new subnet to the router, giving us access to the outside world.

openstack router add subnet edc48681-b094-4b8e-9b0e-d6452f86f5ce vip-subnet
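If you want to double-check that the subnet is now attached, you can inspect the router; vip-subnet should show up among its interfaces:

openstack router show edc48681-b094-4b8e-9b0e-d6452f86f5ce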

Security groups

We need to set some firewall rules in order to access our instances and to make sure Corosync can form a cluster.

openstack security group create vip
openstack security group rule create vip --remote-ip 0.0.0.0/0 --protocol tcp --dst-port 22:22 --description "Allow incoming SSH traffic"
openstack security group rule create vip --remote-ip 0.0.0.0/0 --protocol tcp --dst-port 80:80 --description "Allow incoming HTTP traffic"
openstack security group rule create vip --remote-group vip --protocol udp --dst-port 5405:5405 --description "Allow internal Corosync traffic"
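To verify that the rules were created, you can list them for the group:

openstack security group rule list vip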

Launch Instances

Now that the network is in place, we need to launch the instances we’re going to run Corosync on.

To get access to the instances after we’ve launched them, we need to pass the name of your SSH key with --key-name. If you don’t have a keypair yet, see Configure secure access for instances.

openstack keypair list

This gives a result that should look like this:

+----------------------+-------------------------------------------------+
| Name                 | Fingerprint                                     |
+----------------------+-------------------------------------------------+
| my-key               | a1:71:e0:c4:20:7f:90:c3:5f:67:a9:8e:78:7c:64:71 |
+----------------------+-------------------------------------------------+
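If the list comes back empty, you can upload an existing public key first (adjust the path if your key lives somewhere else):

openstack keypair create --public-key ~/.ssh/id_rsa.pub my-key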

In the following command, use the name of your key instead of my-key.

openstack server create --image "Ubuntu 18.04 LTS - Bionic Beaver - 64-bit - Fuga Cloud Based Image" --flavor "c1.small" --min 3 --max 3 --network vip-net --security-group vip --key-name my-key vip-instance
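With --min 3 --max 3 the instances will be named vip-instance-1, vip-instance-2 and vip-instance-3. Give them a minute to build, then check that all three are ACTIVE:

openstack server list --name vip-instance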

Configure Ports

We now need to do some configuration on the network ports of the instances. By default, OpenStack will drop traffic not belonging to the designated IP of an instance, so we need to allow the instances to use the VIP on the subnet they’re on. We’ll start with vip-instance-1.

openstack port list --server vip-instance-1

Giving a result like this:

+--------------------------------------+------+-------------------+--------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                       | Status |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------+--------+
| 5ef652b2-0eaa-4a8f-8b9d-53c6fe89e009 |      | fa:16:3e:2a:f3:02 | ip_address='10.0.0.4', subnet_id='be475910-de5a-456f-a073-d3e6f8f4c84e'  | ACTIVE |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------+--------+

Now let’s add our VIP as an allowed address:

openstack port set 5ef652b2-0eaa-4a8f-8b9d-53c6fe89e009 --allowed-address ip-address=10.0.0.201

While we’re at it, let’s also attach a floating IP address to the port, so we can reach the instance.

openstack floating ip create external --port 5ef652b2-0eaa-4a8f-8b9d-53c6fe89e009

Do the same for the ports of vip-instance-2 and vip-instance-3.
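If you prefer to script those two steps, a small loop along these lines should work (a sketch that assumes each instance has exactly one port):

for server in vip-instance-2 vip-instance-3; do
  # look up the instance's port (assumes one port per instance)
  port_id=$(openstack port list --server "$server" -f value -c ID)
  # allow the VIP on the port and give the instance a floating IP
  openstack port set "$port_id" --allowed-address ip-address=10.0.0.201
  openstack floating ip create external --port "$port_id"
done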

Now we need to create a port for the VIP, so we can bind a floating IP and security group to it:

openstack port create vip-1 --network vip-net --fixed-ip ip-address=10.0.0.201 --security-group vip

Copy the newly created port’s ID and use it to attach a floating IP:

openstack floating ip create external --port ef2cd8eb-18fa-4bfa-b1c1-71b067a700f4

Note down the floating IP; this is the VIP’s floating IP and we’ll use it to access our test application later.
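If you lose track of it, you can always look it up again via the VIP port:

openstack floating ip list --port ef2cd8eb-18fa-4bfa-b1c1-71b067a700f4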

Configure Corosync

Now that we have the OpenStack part of things configured we can focus on getting Corosync up and running.

Log in to your instances using the floating IPs you attached to them.

ssh ubuntu@<instance-floating-ip>

On all three nodes, install Pacemaker (which pulls in Corosync as a dependency) and crmsh, the CRM shell used to manage the cluster:

sudo apt update
sudo apt install pacemaker crmsh

Open the Corosync configuration in your favorite editor (which is obviously VIM) and, in the interface section of the totem block, set bindnetaddr to the network address of vip-subnet (10.0.0.0).

sudo vim /etc/corosync/corosync.conf

Which looks like this:

totem {
    ...
    interface {
        ...
        bindnetaddr: 10.0.0.0
        ...
    }
    ...
}

Check for syntax errors by running Corosync in the foreground (stop it with Ctrl+C afterwards):

sudo corosync -f

If everything is OK, restart corosync.

sudo systemctl restart corosync
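Before moving on to Pacemaker, you can check whether Corosync itself sees all three members and has quorum:

sudo corosync-quorumtool -s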

The rest of the commands can be run on just one instance of the cluster; I’m using vip-instance-1.

You can check the state of the Corosync cluster with:

sudo crm_mon -1

The cluster first needs to form, so give it some time. After about a minute you should see this:

Stack: corosync
Current DC: vip-instance-3 (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Wed Jun 20 15:33:37 2018
Last change: Wed Jun 20 15:32:19 2018 by hacluster via crmd on vip-instance-3

3 nodes configured
0 resources configured

Online: [ vip-instance-1 vip-instance-2 vip-instance-3 ]

No active resources

We’re going to disable STONITH (Shoot The Other Node In The Head) because configuring it is outside the scope of this tutorial.

sudo crm configure property stonith-enabled=false

Now check the cluster configuration.

sudo crm_verify -LV

This should give an empty output.
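You can also review the full cluster configuration, including the property we just set:

sudo crm configure show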

Add the VIP to Corosync

Now that our Corosync cluster is up and running, we need to add our VIP as a Corosync service.

sudo crm configure primitive VIP ocf:heartbeat:IPaddr2 params ip=10.0.0.201 nic=ens3 op monitor interval=10s

The VIP should now be added as a service. Check with sudo crm_mon -1.

Stack: corosync
Current DC: vip-instance-3 (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Wed Jun 20 15:44:04 2018
Last change: Wed Jun 20 15:43:28 2018 by root via cibadmin on vip-instance-1

3 nodes configured
1 resource configured

Online: [ vip-instance-1 vip-instance-2 vip-instance-3 ]

Active resources:

VIP (ocf::heartbeat:IPaddr2): Started vip-instance-1
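If you want to try a controlled failover without shutting anything down, crmsh can move the resource by hand; unmove it afterwards to remove the temporary location constraint that move creates:

sudo crm resource move VIP vip-instance-2
sudo crm resource unmove VIP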

Install the web server

Let’s install Nginx on our instances, so we can test our public VIP. Do the following on all three instances:

sudo apt update
sudo apt install nginx
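Optionally, to make it obvious which instance is serving, you could replace the default page with the instance’s hostname (purely a convenience, not required for the test):

echo "$(hostname)" | sudo tee /var/www/html/index.html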

Test it

Let’s put it to the test. Browse to the floating IP you attached to the VIP port. You should see the Nginx welcome page.

[Image: the Nginx welcome page]

In my case, the VIP lives on vip-instance-1, so I’m bringing it down (from my own machine, using the OpenStack CLI):

openstack server stop vip-instance-1

The Nginx webpage should still be available, and running sudo crm_mon -1 on one of the remaining instances should show that the VIP has moved:

Stack: corosync
Current DC: vip-instance-3 (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Fri Jun 22 09:00:25 2018
Last change: Wed Jun 20 15:43:28 2018 by root via cibadmin on vip-instance-1

3 nodes configured
1 resource configured

Online: [ vip-instance-2 vip-instance-3 ]
OFFLINE: [ vip-instance-1 ]

Active resources:

VIP (ocf::heartbeat:IPaddr2): Started vip-instance-2
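Once you’re done testing, don’t forget to bring the stopped instance back up:

openstack server start vip-instance-1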