About a year ago, I didn’t want to have anything to do with Docker or Kubernetes. I admit, I didn’t like the concept. All this fancy new stuff — where’s the use in it anyway, and why shouldn’t I stick to my beloved virtual machines? No one could convince me otherwise, or even explain a really good use case to me. So what happened that changed my mind?
I changed my job. And with it came the need to work with Kubernetes and the whole cloud thing. I don’t want to moan, but it was often a valley of tears, and sometimes it still is. But after that year, I now like Kubernetes most of the time (even if I have the feeling that it’s a one-sided love story 🙂 ). So I’ll give it a try and provide you six reasons why you should give Kubernetes a try.
Think big about the number of different workloads you want to handle on a Kubernetes cluster. The sky is the limit. As a system provider, you may want to use a huge datacenter full of servers as a shared resource pool for all of your customers. You spin up VMs, add them to a Kubernetes cluster, and everyone is happy. Free yourself from the idea of building small, separated server farms for every customer, each one with its own IP range and hypervisor installed. That’s not how it works. That’s not how cloud works. But that’s a different topic.
As a customer, you want to get your workload running. You want to deploy it fast, you want its availability to be high, and you want its performance to stay constant, regardless of how many hundreds of services you’re deploying. Kubernetes does all of this for you.
The new container world runs on microservices. For someone who knows huge business warehouse software, it’s hard to imagine that modern software tends to get smaller. At the very least, it’s divided into smaller components that each scale on their own and work together in the end, but are mostly independent of each other. Think of a web-based application that uses message queues and a sharded database in the background. Each component needs to be available and needs to scale independently. You can scale up just your web-serving component when a high-traffic period shows up.
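As a sketch (the names and the image are made up for illustration), the web-serving component of such an application could be an ordinary Kubernetes Deployment whose replica count you tune independently of the queue and the database:

```yaml
# Hypothetical web frontend of a larger application; the message queue
# and the sharded database would run as their own Deployments or
# StatefulSets and scale on their own.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3          # bump this value alone when traffic spikes
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example.com/web-frontend:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

Scaling just this tier is then a one-liner, e.g. `kubectl scale deployment web-frontend --replicas=10`, without touching the other components.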
I admit that there is still a huge amount of software out there that is not a good fit when talking about microservices. But modern software is! I don’t want to argue that a good old Oracle database, with its 256 GB of memory and every piece of functionality built into one big balloon (called a monolith), scales and performs great on Kubernetes. It’s probably not the best idea to run such a beast on Kubernetes.
The Kubernetes ecosystem is huge. I feel that for every question there are at least three tools or APIs (think of them as add-ons to Kubernetes) that will do the job. And the best part? It’s mostly open source. Ever heard of GitOps (if not, here’s an article about ArgoCD), Ingress controllers or Operators? These are all concepts within Kubernetes that will do the boring stuff for you, and some of them act more cleverly than some sysadmins I know (just kidding). Write some code that declares what you need, and I guarantee you, there will be some addition to Kubernetes that will do the right thing with it.
It’s more than cluster software
A typical cluster software monitors a piece of software using some metrics. If the software dies or one of the resources it depends on becomes unavailable, it will try to move your precious software, with all the needed infrastructure (disk volumes, IPs…), to another piece of hardware and bring it back up there. Kubernetes does something similar, but it also does a lot more than that. First of all, Kubernetes will not only ensure that a cluster component (a Pod) is available. It can also ensure that it has replicas and that these replicas are distributed across multiple availability zones of your datacenter. It also load-balances traffic over these replicas if you want it to. It handles the network traffic within the cluster for you. It knows about infrastructure components like disk volumes and so on. It’s really self-healing.
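That replica distribution can be expressed declaratively. Here is a minimal sketch (names and image are hypothetical) that spreads replicas over zones via the well-known `topology.kubernetes.io/zone` node label and puts a Service in front of them:

```yaml
# Sketch: spread three replicas across availability zones.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                                 # at most 1 replica difference between zones
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: api
      containers:
      - name: api
        image: example.com/api:1.0                 # placeholder image
---
# A Service load-balances traffic over whatever replicas are ready.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8080
```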
Kubernetes brings some really great autoscaling features with it. When you hit limits in available resources, you can configure Kubernetes to automatically add a new cluster node, and even better, it will also reduce the number of nodes when the workload on them drops. On many cloud providers’ Kubernetes platforms, this feature is available. Think of huge host environments with a hypervisor: new VMs are spawned on demand, added to the cluster, and Kubernetes deploys workloads onto them as needed.
Another autoscaler is the (horizontal or vertical) Pod autoscaler. It enables you to run as many Pods as needed, with as many resources as needed, to handle the workload of a service.
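As a minimal sketch, a horizontal Pod autoscaler using the standard `autoscaling/v2` API (the target Deployment name here is hypothetical) that keeps average CPU utilization around 70% with between 2 and 10 Pods could look like this:

```yaml
# Sketch of a HorizontalPodAutoscaler: Kubernetes adjusts the replica
# count of the referenced Deployment to hold average CPU near the target.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend      # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Note that resource-based metrics require `resources.requests` to be set on the target Pods, since utilization is computed relative to the requested amount.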
I don’t know of any cluster software that does this for you.
It’s easy, right?
No, it’s not. Getting an idea of the concept alone took me weeks, if not months. The opposite is the case, in my opinion. But why is this a good reason to give Kubernetes a try? Because it’s demanding! Isn’t that the reason why we do IT stuff? It really brings a gleam to my eyes when I’ve spent hours tinkering with the deployments of several components, gluing them together, only to see in the end that, magically, things start to happen. This must be what it felt like to see a car drive for the first time, I imagine. Lights are blinking, steam is rising, bits are sweating, and a small container starts to do its work together with dozens of other small containers.
So what are you waiting for? Set up a Kubernetes cluster in your homelab and get your hands dirty… I should become a motivation coach 😉