Lightweight Kubernetes cluster using kind and Ansible

A few days ago I was pointed to the kind project, a lightweight way to build a Kubernetes cluster on top of Docker for fast and easy testing of Kubernetes deployments. I’ve written an Ansible role that installs kind and creates / deletes clusters as you need.

Kind stands for “Kubernetes in Docker” and does exactly that. It enables you to build a whole Kubernetes cluster, with multiple control-plane and worker nodes, as separate containers within Docker. The simplicity of kind helps you as a developer: you can build and delete Kubernetes testing environments quickly and easily. You can find more information in the official kind documentation.
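To give you an idea of what the role automates, this is roughly what working with the kind CLI directly looks like (the cluster name test is just an example):

# Create a single-node cluster named "test"
kind create cluster --name test

# List all running kind clusters
kind get clusters

# Tear the cluster down again
kind delete cluster --name test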

Although kind is quite easy to install, there are some prerequisites that are not explicitly named in the documentation (e.g. you obviously need Docker installed). That’s the reason I wrote a small Ansible role that does the installation for you, as well as creating and deleting clusters as you like (with an adjustable number of nodes). You can find this Ansible role in my GitHub repository.
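For illustration, this is the kind of prerequisite check the role saves you from doing by hand (a plain shell check, not part of the role itself):

# Verify that the Docker daemon is installed and reachable
docker info > /dev/null 2>&1 && echo "Docker is ready" || echo "Docker is missing or not running"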

As you might know from my other Ansible roles, I like to work with TAGs in my roles to make it adjustable what you actually want the role to do. You then just need to call the Ansible playbook (you can also find it in the repository) with the relevant TAG(s). I will give you some example calls down below.

There are three tasks that the “ansible-kind” role can do for you:

  • Install kind (TAG install)
  • Create cluster (TAG create)
  • Delete cluster (TAG delete)

The only thing you need to run this playbook / role is Ansible installed on a system in your network. This can be either local or remote, it doesn’t matter. At the time of writing, the install task only supports Debian based target systems (e.g. Ubuntu).
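If Ansible itself is still missing on your control machine, getting it onto a Debian based system is a one-liner (shown here with apt; installing via pip works just as well):

# Install Ansible from the distribution repositories
sudo apt update && sudo apt install -y ansible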

There are some adjustments you can make by setting the corresponding variables to your needs. Here are all the role’s variables (all optional):

  • kind_version: Specify the kind version you want to install (e.g. v0.12.0 or v0.10.0). You can also specify latest. Default: latest. Relevant for TAG: install
  • kind_install_dir: Installation destination for the kind binary. Default: /usr/local/bin. Relevant for TAGs: install, create, delete
  • cluster_name: Specify your kind cluster name. Be aware, a cluster name must be unique. Default: kind. Relevant for TAGs: create, delete
  • control_nodes: Number of control-plane nodes that will be created using this role. Default: 1. Relevant for TAG: create
  • worker_nodes: Number of worker nodes that will be created using this role. Default: 2. Relevant for TAG: create
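All of these variables can be set on the command line with -e, as the examples below show. If you prefer, Ansible also accepts them from a YAML file; the file name myvars.yml is just an example:

# myvars.yml could contain e.g. control_nodes, worker_nodes and cluster_name
ansible-playbook ansible-kind.yml -i hosts --tags "create" -e @myvars.yml -e "HOSTS=vmkind" -u thedatabaseme -k -K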

To get it all going, clone the Git repository like this:

git clone https://github.com/thedatabaseme/ansible-kind.git

Obviously, the system you want to set all of this up on needs to be in your Ansible inventory. In this demonstration, my system’s name is vmkind.
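A minimal hosts inventory for this demonstration could look like this (an ungrouped entry is enough, assuming vmkind is resolvable from your control machine):

# Create a simple INI style inventory with one host
cat > hosts <<'EOF'
vmkind
EOF

Then you’re all good to start. Install kind and all of its prerequisites with this simple ansible-playbook command.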

ansible-playbook ansible-kind.yml -i hosts --tags "install" -e "HOSTS=vmkind" -u thedatabaseme -k -K

You will be asked for the user’s password and also for the become (sudo) password (requested by -k and -K). If you have SSH key authentication enabled for your user, you of course don’t need -k. As you can see, we haven’t specified any of the variables from the list above, so everything is done with the default configuration.
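If you want to get rid of the password prompt, copying your public key to the host once is enough (standard OpenSSH tooling, nothing specific to this role):

# Copy your SSH public key to the target host; afterwards -k can be dropped
ssh-copy-id thedatabaseme@vmkind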

Let’s do the next step and create a kind cluster on the host vmkind.

ansible-playbook ansible-kind.yml -i hosts --tags "create" -e "HOSTS=vmkind control_nodes=2 worker_nodes=3 cluster_name=mytestcluster" -u thedatabaseme -k -K

This will create a cluster with 2 control-plane nodes and 3 worker nodes. What makes it really easy to use is that a so called kubeconfig is created automatically and set as the default for the user we specified in the playbook call. Since ansible-kind also installs kubectl, we can SSH to the server, get an overview of the cluster and manage it using kubectl. All of this is done in 1-2 minutes at most.
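For example, right after the playbook has finished, you can check the cluster like this (kind derives the node names from the cluster name, so expect names like mytestcluster-control-plane, mytestcluster-control-plane2 and mytestcluster-worker through mytestcluster-worker3):

# SSH to the host and list the cluster nodes
ssh thedatabaseme@vmkind
kubectl get nodes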

You can still use the kind binary to manage your created cluster or get an overview of all running clusters. For instance:

> kind get clusters
mytestcluster
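Since every node is just a Docker container, the plain Docker tooling shows them as well:

# Each kind node runs as a container; filter by the cluster name
docker ps --filter "name=mytestcluster"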

When we’re done with testing, or if we have screwed up our test cluster, it’s time to delete it. This can be done like this:

ansible-playbook ansible-kind.yml -i hosts --tags "delete" -e "HOSTS=vmkind cluster_name=mytestcluster" -u thedatabaseme -k -K

Now all is gone. Let me mention a few more things that are cool about ansible-kind. You are not limited to one TAG per playbook call; you can combine them as you like (although combining create and delete makes little sense, since the cluster you create would instantly get deleted again). This means you can have kind installed on a blank system and a cluster built up, all within one playbook call. The only thing you need to do is provide a comma separated list of TAGs in the --tags parameter.
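So, for example, installing kind on a blank system and building a cluster in one go looks like this:

ansible-playbook ansible-kind.yml -i hosts --tags "install,create" -e "HOSTS=vmkind cluster_name=mytestcluster" -u thedatabaseme -k -K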

Also, sometimes it makes sense to install kind on your local system for testing purposes. You can do so by providing localhost as the value of the HOSTS variable.
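A local call could then look like this (for the implicit localhost, Ansible should use a local connection, so -u and -k are not needed; -K still covers sudo):

ansible-playbook ansible-kind.yml -i hosts --tags "install,create" -e "HOSTS=localhost" -K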

I also plan to add some Vagrant plans to the repository soon, which will make it even easier to build a completely separate VM where kind gets installed. You can then do your testing there and just wreck the whole box after you’re finished, all without leaving any traces on your precious client.

Philip
