Kubernetes: What and When
Kubernetes! Is it a cleaning product? Is it a Scandinavian Christmas pastry? Is it a Pokémon?
In today’s episode, guest chef Luiz Felipe Garcia joins us to explain what Kubernetes is, and why you might want to use it in deploying your applications. You’ll learn some signs to look for that indicate Kubernetes might be worth looking into, and some elements of development you may need to rethink when using it. Enjoy!
Video transcript & code
Kubernetes: What and When
Today we're going to demystify the word Kubernetes and discover when you should consider it. This episode will not be a Kubernetes tutorial. Instead we’re going to talk about what it is, and whether you should consider using it on your team.
Kubernetes starts off with the assumption that you have “containerized” your application. In other words, you are deploying your apps in the form of Docker or rkt containers.
Kubernetes gives us a way to manage or “orchestrate” those containers. One way I like to describe it is to turn your infrastructure into an API.
Kubernetes abstracts away the details of running containers, giving developers an easy, standard way of deploying applications.
Building your own cluster is a whole other topic, but using one is surprisingly simple.
At the top level Kubernetes consists of an API, kubelets and controllers.
The API resources are used both by you and by Kubernetes' internal systems to reflect the configuration and status of your infrastructure.
The kubelet runs on each machine that is part of the cluster and constantly talks to the API, updating the host's state.
The controllers observe the API resources and make changes to the infrastructure, always trying to keep the two in sync.
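To make that control-loop idea concrete, here is a toy Ruby sketch (illustrative only, not actual Kubernetes code): a controller repeatedly diffs the desired state recorded in the API against the observed state and emits the actions needed to converge them.

```ruby
# Toy sketch of a Kubernetes-style reconciliation loop.
# "desired" plays the role of the API resources; "observed" is what's running.
def reconcile(desired, observed)
  actions = []
  # Start anything that is desired but not yet running (or under-replicated).
  desired.each do |name, want|
    have = observed.fetch(name, 0)
    actions << [:start, name] if have < want
  end
  # Stop anything running that is no longer desired.
  observed.each do |name, _have|
    actions << [:stop, name] unless desired.key?(name)
  end
  actions
end

desired  = { "sinatra-app" => 1 }  # what the API says we want
observed = {}                      # what the kubelets report is running

reconcile(desired, observed)  # => [[:start, "sinatra-app"]]
```

A real controller runs this loop continuously, so a crashed pod simply shows up as a diff on the next pass and gets restarted.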
require 'sinatra'

set :bind, '0.0.0.0'

get '/' do
  'Hello from Kubernetes'
end
Let's use an example with the basic building block of Kubernetes to run a Sinatra application.
A good old hello world example should be enough.
We'll bind to all interfaces, since this app will run in a container.
And respond to the root path with our hello world.
FROM ruby:2.5
RUN gem install sinatra
COPY app.rb /
EXPOSE 4567
CMD ["ruby", "/app.rb"]
To create our container, we'll use this simple Dockerfile.
We inherit from the official Ruby image, …
…install Sinatra, …
…copy our app file, …
…expose Sinatra's default server port, …
…and set the default image command to run our application.
Once we have that we need to build and publish the image to a registry that is accessible from the cluster.
For a simple demo, you can even use the public Docker registry.
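The exact commands depend on your registry, but assuming the Docker Hub username `myuser` (a hypothetical placeholder), the build-and-push step might look something like this:

```shell
# Build the image from the Dockerfile above, tagging it for the registry.
# "myuser/my-sinatra-app-container-image" is a placeholder; use your own name.
docker build -t myuser/my-sinatra-app-container-image .

# Log in, then push so the cluster's nodes can pull the image.
docker login
docker push myuser/my-sinatra-app-container-image
```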
apiVersion: v1
kind: Pod
metadata:
  name: sinatra-app
spec:
  containers:
  - name: sinatra-app
    image: my-sinatra-app-container-image
    ports:
    - containerPort: 4567
Now to run this container on Kubernetes, we need a pod definition file. Here's ours.
A Pod consists of one or more containers that share the same port space.
Pods run on Nodes. Those represent the actual machines that run your code and are normally set up by the cluster administrators.
This pod describes what image should be run…
…and what ports should be exposed from that container.
$ kubectl apply -f pod.yaml
pod/sinatra-app created
Kubernetes comes with a powerful CLI tool called kubectl (pronounced “kube control”). We'll use it to interface with the cluster.
Let's send our pod definition to the cluster with kubectl apply.
This will read the file and make a request to Kubernetes' API that will create or update our pod.
The cluster will detect the new configuration and start up the Pod on one of the available Nodes. It will also expose a port on the Node that will be forwarded to the configured port on your container.
$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
sinatra-app   1/1     Running   0          2s
You can see your pod by running kubectl get pods. Within a few seconds its status should change to Running.
To test that it's working, we can use a very handy command from kubectl called port-forward.
$ kubectl port-forward pod/sinatra-app 4567
Forwarding from [::1]:4567 -> 4567
Forwarding from 127.0.0.1:4567 -> 4567
In another terminal, you can run the port-forward command to forward ports from the pod directly to your local machine.
Now we can make requests to our localhost and they will get routed to the pod!
$ curl localhost:4567
Hello from Kubernetes
With a simple curl request, we get our first hello world from Kubernetes.
Kubernetes takes care of all the complexity around running the container: networking, resource management, logging, and much more.
This is a very small example. Kubernetes has other resources that allow you to run any type of workload on a cluster, like Services, Deployments, Ingresses, and CronJobs, among others.
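For instance, in a real application you would rarely create a bare Pod; a Deployment manages a set of identical pod replicas for you, recreating them when they die. A minimal sketch for our app (using the same placeholder image name as before) might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sinatra-app
spec:
  replicas: 3                 # run three identical pods
  selector:
    matchLabels:
      app: sinatra-app
  template:                   # pod template, much like our pod.yaml
    metadata:
      labels:
        app: sinatra-app
    spec:
      containers:
      - name: sinatra-app
        image: my-sinatra-app-container-image
        ports:
        - containerPort: 4567
```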
Note that from the developer's perspective, you only care about the resources you use on the cluster. All of the implementation details are abstracted away.
Now comes the big question: when should I use Kubernetes?
To start off, in my opinion, it should never be used as a starting point.
I would resist the urge to use a cool new technology unless it actually makes sense to do it. For a small application, Heroku is hands down the best solution.
You should start considering it if your application/organization fits some of these criteria:
If you use microservices, Kubernetes probably fits like a glove.
It has built in service discovery, standard deployments for any language, independent scaling and many other advantages.
If your workload varies widely, Kubernetes allows you to autoscale both containers (horizontally and vertically) and nodes.
For organizations with multiple teams and/or locations, Kubernetes offers namespaces, which partition a cluster while sharing the same pool of nodes.
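As a sketch of the horizontal case, a HorizontalPodAutoscaler resource can scale a Deployment (here, a hypothetical `sinatra-app` Deployment) between replica bounds based on CPU usage:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: sinatra-app
spec:
  scaleTargetRef:                    # what to scale
    apiVersion: apps/v1
    kind: Deployment
    name: sinatra-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70 # add/remove pods around this target
```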
It also offers a standard way for managing the infrastructure in multiple clouds.
However even if you do fit some of these, perhaps the biggest change to consider is the culture shift.
Kubernetes forces you to completely change the way you think about your infrastructure.
You now declare the state you want for your cluster and trust the tools to make it happen.
Traditional sysadmins need to understand all of these concepts, and it will completely change how they manage the infrastructure.
It's the opposite of traditional infrastructure management, which is imperative: you tell the tools how to do things, step by step.
Let's take an example of setting up an external load balancer for our Sinatra app with NGINX.
Traditionally, you would SSH into a machine you selected, install NGINX, and then configure load balancing between the IPs of the other machines running your Sinatra application.
With Kubernetes, you would create a Service resource that serves as the load balancer.
Then you would create an Ingress resource that matches paths to the Service.
Note that we do not even mention IPs here. You declare to Kubernetes what you want and it takes care of the implementation for you.
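As a sketch, the two resources might look like this: a Service selecting the app's pods by label, and an Ingress routing HTTP traffic to it (the hostname is a placeholder, and the pods are assumed to carry an `app: sinatra-app` label):

```yaml
# Service: gives the app's pods a stable virtual IP and DNS name.
apiVersion: v1
kind: Service
metadata:
  name: sinatra-app
spec:
  selector:
    app: sinatra-app        # matches pods labeled app=sinatra-app
  ports:
  - port: 80                # port exposed by the Service
    targetPort: 4567        # port the container listens on
---
# Ingress: routes external HTTP requests to the Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sinatra-app
spec:
  rules:
  - host: sinatra.example.com   # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sinatra-app
            port:
              number: 80
```

Notice that neither resource mentions a machine or an IP; which node terminates the traffic is an implementation detail the cluster handles.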
The same shift will also happen for developers. They now have to think about the infrastructure as a resource, and that will change your day-to-day workflow.
Suddenly you no longer use Capistrano and SSH into machines.
How you debug, read logs, restart applications, deploy, and roll back: it will all change.
Despite all of these caveats, Kubernetes is an extremely powerful platform and will continue to grow in the coming years.
If it looks like Kubernetes is a good fit, or even if you just want to learn more about it, I suggest you go to Google Cloud Platform, sign up, and use their free introductory credits to experiment with a managed cluster.
The Kubernetes documentation is very good and full of examples for you to learn the ropes and get a feel for it.