
How to make your Kubernetes pods and services accessible to external networks

Estimated time to read: 4 minutes

In this tutorial, we will show you how to set up Kubernetes' external load balancer feature using OpenStack LBaaS v2. If you expose a service of type "LoadBalancer" in Kubernetes, a load balancer will be created automatically, making your pods and services accessible to external networks.

Introduction

A Kubernetes cluster, consisting of masters and minions, is connected to a private network, which is connected via a router to the internet. This way all the nodes can access each other and the internet.

All the pods and services created in the cluster are connected to a private container network. This is an overlay network that runs on top of the private subnet. Pods and services are assigned IP addresses from this container network so that they can access each other. The problem is that these IP addresses are not accessible from external networks, such as the internet.

To make pods accessible to external networks, Kubernetes provides the external load balancer feature. You enable it by specifying type: "LoadBalancer" in the service manifest. Once the external load balancer has been created, the service gets an external IP address in addition to its internal IP on the container network.
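
As an illustration, a minimal service manifest of this type could look like the sketch below; the name my-app, the selector label, and the ports are placeholders for your own workload.

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080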

Please note that Kubernetes automatically uses the security group 'default'; this cannot be changed. Don't use this security group for your own firewall rules when setting up your platform: leave it with only its default rules and attach it to your Kubernetes nodes.

A security group will automatically be created for the ingress traffic. The load balancer will automatically be added to this security group and to the security group 'default'.

Prerequisites:

  • A Fuga Cloud account
  • OpenStack API credentials
  • OpenStack CLI tools installed
  • A Kubernetes cluster (minimum one master on Fuga Release 2)
  • All Kubernetes nodes added to the security group 'default'
  • Please note: don't use this security group for other firewall configurations; leave the 4 default rules in place and don't adjust them.
  • SSH access to the Kubernetes master

API credentials

Create your API credentials in the Fuga dashboard; we will need them later when we configure our Kubernetes cluster. When you are logged in to the dashboard, go to Account → Access. Copy your password and store it somewhere safe; it will only be shown once.

Step 1: Looking up your config values

You'll need some information about your Fuga Cloud platform to configure your Kubernetes cluster. We are going to get this information using the OpenStack CLI tools.

Your domain ID and project ID can be found in the OpenRC or clouds.yaml file that you use to connect to Fuga Cloud. The variables can be found under:

os_project_id="<PROJECT_ID>"
os_project_domain_id="<DOMAIN_ID>"

We need the Subnet ID of the subnet where the load balancer and Kubernetes nodes live.

openstack subnet list
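
If the list is long, you can limit the output to the columns you need (the -c option is standard in the OpenStack CLI):

openstack subnet list -c ID -c Name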

We'll need the ID of the "Public" network.

openstack network list
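
The external network can usually be filtered directly, so you don't have to scan the whole list:

openstack network list --external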

Step 2: Creating a cloud config

Log in to the Kubernetes master node with SSH. We are going to create a cloud config file so that Kubernetes knows which cloud we want to use. Use your favorite editor, for example nano or vim:

nano /etc/kubernetes/cloud.conf

Copy and paste the following config and replace the values between <> with the values you looked up above.

[Global]
username=<OPENSTACK_API_USERNAME>
password=<OPENSTACK_API_PASSWORD>
auth-url=https://identity.api.ams.fuga.cloud:443/v3
tenant-id=<PROJECT_ID>
domain-id=<DOMAIN_ID>

[LoadBalancer]
subnet-id=<SUBNET_ID>
floating-network-id=<PUBLIC_NETWORK_ID>
manage-security-groups=true
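
Since this file contains your API password, it is a good idea to restrict its permissions, for example:

chmod 600 /etc/kubernetes/cloud.conf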

Step 3: Add your cloud.conf to the kube-controller-manager

We are going to add your newly created cloud.conf to the kube-controller-manager, so it knows about the new configuration.

nano /etc/kubernetes/manifests/kube-controller-manager.yaml

Add the following flags under the command: section:

- --cloud-provider=openstack
- --cloud-config=/etc/kubernetes/cloud.conf

Also, add this extra volumeMount:

- mountPath: /etc/kubernetes/cloud.conf
  name: k8s-cloud
  readOnly: true

Add an extra volume entry:

- hostPath:
    path: /etc/kubernetes/cloud.conf
    type: FileOrCreate
  name: k8s-cloud
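
Putting these three changes together, the relevant parts of a kubeadm-generated kube-controller-manager.yaml would look roughly like the sketch below; your existing flags, mounts, and volumes (hinted at with comments) stay in place.

spec:
  containers:
  - command:
    - kube-controller-manager
    # ... existing flags ...
    - --cloud-provider=openstack
    - --cloud-config=/etc/kubernetes/cloud.conf
    volumeMounts:
    # ... existing volumeMounts ...
    - mountPath: /etc/kubernetes/cloud.conf
      name: k8s-cloud
      readOnly: true
  volumes:
  # ... existing volumes ...
  - hostPath:
      path: /etc/kubernetes/cloud.conf
      type: FileOrCreate
    name: k8s-cloud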

Step 4: Add your cloud.conf to the kubelet configuration

We are going to add the newly created cloud.conf to the kubelet configuration so that the kubelet can use our new configuration.

nano /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Add the following flags to the KUBELET_CONFIG_ARGS environment variable in this service drop-in:

--cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf
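
On a typical kubeadm installation this variable already contains --config=/var/lib/kubelet/config.yaml; assuming that default, the resulting line would look like this:

Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf"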

Finally, we need to restart our kubelet process:

systemctl daemon-reload
systemctl restart kubelet
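
To check that the kubelet and the kube-controller-manager came back up with the new configuration, you can run for example:

systemctl status kubelet
kubectl -n kube-system get pods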

There is a chance that Kubernetes has removed all security group rules from the 'default' security group. Check under 'Manage rules' in the dashboard whether they still exist. If they are gone, you can re-add them with these commands:

openstack security group rule create default --ingress --ethertype IPv4 --protocol any --remote-group default
openstack security group rule create default --ingress --ethertype IPv6 --protocol any --remote-group default
openstack security group rule create default --egress --ethertype IPv4 --protocol any --remote-ip 0.0.0.0/0
openstack security group rule create default --egress --ethertype IPv6 --protocol any --remote-ip ::/0
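
You can verify the result with:

openstack security group rule list default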

This extra security group rule will open up the SSH port:

openstack security group rule create default --ingress --ethertype IPv4 --dst-port 22:22 --protocol tcp --remote-ip 0.0.0.0/0
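
To test the whole setup, you can expose a small deployment as a LoadBalancer service and wait for an external IP to appear; the deployment name and image below are only examples:

kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
kubectl get service lb-test -w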

Conclusion

You have learned how to set up the external load balancer feature in Kubernetes. If you completed the tutorial, you now have a Kubernetes cluster that uses OpenStack's LBaaS v2 as its external load balancer, making your pods reachable from external networks.