Container Orchestration with Kubernetes: A Comprehensive Guide for Beginners

Posted on April 5th, 2024


When we talk about “containers” in a technical context, we usually mean Docker or rkt (pronounced “rocket”). Both container engines are open source and automate and abstract virtualization at the operating-system level on Windows or Linux. You can think of containers as lightweight, scalable, isolated virtual machines (VMs) in which applications run. You can link containers together, set security policies, limit resource consumption, and more.

Docker was introduced first, and by the time CoreOS introduced rkt in late 2014, Docker had already gained considerable traction. Although we lean towards rkt for its more lightweight and security-focused implementation, this article will focus solely on Docker, the more widely used platform with more mature, battle-tested tooling.

Container Orchestration

The automated arrangement, coordination, and administration of software containers is referred to as container orchestration. If your existing software infrastructure is, say, a Node.js application running on a handful of containers that communicate with a replicated database, built on Nginx, Apache, PHP, and Ruby, then container orchestration may not be necessary, and you can likely manage everything on your own.

What happens if your application keeps growing? Suppose you keep adding functionality until the system becomes an enormous monolith that consumes an excessive amount of CPU and RAM and is nearly impossible to maintain. You ultimately decide to split your application into smaller components, each with a distinct responsibility and managed by a team — in other words, microservices.

In addition to a queuing system, a caching layer is now required to improve performance, enable asynchronous task processing, and facilitate rapid data sharing between services. In a production environment, you may also run multiple instances of each microservice across multiple servers for increased availability.

You are now required to consider challenges such as:

  • Service discovery

  • Load balancing

  • Management of secrets, configuration, and storage

  • Health checks

  • Auto-restarting, scaling, and healing of nodes and containers

  • Low-downtime deployments

This is where container orchestration platforms prove extraordinarily effective and practical, as they provide solutions for most of the challenges above.

So what are the options? Presently, Kubernetes, AWS ECS, and Docker Swarm are the leading contenders, in that order of prominence. Kubernetes is the most popular and has by far the largest community (adoption was projected to grow three- to four-fold by 2017). I also have a great deal of faith in Kontena, primarily because it is considerably simpler to configure than Kubernetes, albeit less configurable and less mature.


Kubernetes (k8s)

Kubernetes is an open-source platform that provides container-centric infrastructure by automating the deployment, scaling, and administration of application containers across clusters of hosts.

It effectively solves the challenges mentioned earlier; it is highly portable (running on bare metal, hybrid environments, and most cloud providers), configurable, modular, and proficient at auto-placement, auto-restart, auto-replication, and auto-healing of containers. The most remarkable aspect of Kubernetes is, without a doubt, its extraordinary community, which features online and offline meetups in every major city, KubeCon (yes, a Kubernetes conference does exist), tutorials, blog posts, and an abundance of support from Google, the official Slack group, and major cloud providers (Google Cloud Platform, AWS, Azure, DigitalOcean, etc.).


Kubernetes Basic Concepts

Master node:

The master node functions as the execution environment for numerous controllers that handle tasks such as monitoring the cluster’s health, coordinating endpoints (which connect services and pods), interacting with the underlying cloud providers, and managing the Kubernetes API. It generally ensures that everything is functioning correctly and monitors worker nodes.

Worker node (minion):

The worker node (minion) runs the Kubernetes agent, which executes Pod containers via Docker or rkt, requests configurations and secrets, mounts required Pod volumes, performs health checks, and reports the status of Pods and the node to the rest of the system.


Pod:

The smallest and most basic unit that can be created or deployed in the Kubernetes object model. It represents a running process in the cluster and may contain one or more containers.


Deployment:

Provides declarative updates for Pods (via the Pods’ template), including environment variables, labels, node selectors, volumes, and the number of Pod replicas to run.
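As a hedged sketch (the app name, image, and replica count are placeholder assumptions), a minimal Deployment manifest covering these fields might look like:

```shell
# Write a minimal, hypothetical Deployment manifest; the app name,
# image, and replica count are placeholder assumptions.
cat <<'EOF' > example-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  labels:
    app: example
spec:
  replicas: 3                # number of Pod replicas to run
  selector:
    matchLabels:
      app: example
  template:                  # the Pods' template
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: example:1.0
        env:                 # environment variables
        - name: LOG_LEVEL
          value: info
EOF
```

Applying it with `kubectl apply -f example-deployment.yml` would create the Deployment or declaratively update it in place.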

DaemonSet:

Comparable to a Deployment, it runs a replica of a Pod (or several Pods) on all nodes or a subset of them. Useful for cluster storage daemons (glusterd), log collection daemons (sumologic, fluentd), and node monitoring daemons (datadog).
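A hedged sketch of a log-collection DaemonSet of this kind (the fluentd image tag and mount path are assumptions):

```shell
# Hypothetical fluentd log-collector DaemonSet: one Pod per node,
# mounting the host's log directory so the collector can read it.
cat <<'EOF' > fluentd-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:stable   # placeholder tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log               # the node's log directory
EOF
```

Because it is a DaemonSet, the scheduler places exactly one such Pod on every (matching) node, which is what makes it suitable for per-node agents.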



ReplicaSet:

The controller that ensures a specified number of Pod replicas is running at all times (as defined in the Deployment).



Service:

A service is an abstraction that defines a logical set of Pods and a policy for accessing them, determined by a label selector. It is generally used to expose Pods to other services within the cluster or to external clients (via ClusterIP, NodePort, or LoadBalancer types, respectively).

Google Cloud Platform currently offers the simplest way to run Kubernetes, primarily due to its excellent k8s support and tools, free master nodes (you only pay for worker node instances), simple upgrades, and lower overall operating costs compared to other cloud providers. Although Azure recently announced native support for Kubernetes clusters, some horror stories have circulated (a subject worthy of a separate blog post, perhaps).

Everything at Onfido was already running on AWS, so we decided to stay with it rather than make a transition that would have required a significant investment of time and resources. Additionally, AWS offers superior overall support and more mature tooling compared to other public cloud providers.

This does require us to perform a number of tasks manually in K8s that Google would otherwise handle automatically. Still, it ultimately grants us greater control over cluster provisioning, maintenance, and the software we deploy.

Provisioning K8s on AWS

There are four commonly recommended ways to provision Kubernetes clusters on AWS: Kraken, kube-aws, Kops, and CoreOS Tectonic.

At Onfido, we wanted a tool that could easily deploy a production-grade, highly available (HA) K8s cluster (bonus points if it supported Terraform and was free). We initially tried the first iteration of Kraken: it was free and supported Terraform, but it lacked user adoption, was overly complex, was not production-ready, and could not generate HA clusters. kube-aws was in a comparable state: not production-ready and without HA clusters. In the end, we selected Kops because it fulfilled all of our criteria: support for Terraform, HA, production-grade clusters, cost-effectiveness, and maintenance by the core Kubernetes team.

Our production cluster was already configured with Kops when CoreOS Tectonic was released, so we have not yet had the opportunity to test it. Tectonic appears to be a viable option if you need a managed solution (up to 10 nodes are free) to provision and administer production-ready K8s clusters on AWS or bare metal.

Setting up K8s on AWS with Kops + Terraform

Install Kops and Terraform


macOS: brew install kops && brew install terraform

Linux: Download the latest Kops and Terraform binaries to proceed.


Create a new bucket on S3 to store configuration and state files for kops, Terraform, and k8s. Add the line below to your .bash_profile or .bashrc:


export KOPS_STATE_STORE=s3://your-k8s-bucket/kops

Create a new folder and execute Kops to initiate the cluster’s initial configuration.

Running the create cluster command with the desired parameters generates the cluster’s blueprint (configurations, state files, and templates in S3). Note that no AWS resources (EC2, EBS, LB, DNS, etc.) are created at this point.

The edit cluster command opens the cluster’s Kops template. This step is where you select the k8s version the cluster will run and verify that everything else is in order.

The update cluster command updates the kops state and creates a Terraform file in the current directory containing the role policies, launch configurations, and cluster infrastructure setup for the masters and nodes. Still, nothing has been created in AWS yet.
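The three kops steps above can be sketched as a script (the cluster name, zones, and node count are placeholder assumptions):

```shell
# Hypothetical kops workflow: blueprint, review, then emit Terraform.
# Nothing is created in AWS until terraform apply is run later.
cat <<'EOF' > kops-workflow.sh
#!/bin/sh
set -e
export KOPS_STATE_STORE=s3://your-k8s-bucket/kops

# 1. Generate the cluster blueprint in the state store (no AWS resources yet)
kops create cluster \
  --name=k8s.example.com \
  --zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --master-zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --node-count=3 \
  --target=terraform --out=.

# 2. Review and adjust the template (e.g. pick the k8s version)
kops edit cluster k8s.example.com

# 3. Update the kops state and write the Terraform file to the current dir
kops update cluster k8s.example.com --target=terraform --out=.
EOF
chmod +x kops-workflow.sh
```

The `--target=terraform` flag is what makes kops emit a Terraform file instead of calling AWS directly.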

Next, dive into the Terraform file and modify it to match your desired configuration:

  • Set up a Terraform S3 backend state

  • Update the base AMI (Debian is the default, but stable CoreOS is the one we recommend).

  • AWS key pair name

  • VPC and subnets

  • Security groups

  • Instance types

  • EBS volume sizes

  • Additional AWS resources

Try to parameterize the Terraform file (with variables) to make installing new clusters and future revisions faster, and double-check the role policies. It will be harder to modify the launch configurations once the cluster is running, so add anything you need at this step.

You can also upload K8s templates for cluster add-ons to s3://your-k8s-bucket/kops//addons. The k8s dashboard, kube-dns, kube-state-metrics, log collectors (fluentd, sumologic), monitoring (datadog, prometheus), and other services can be added to the cluster initialization if needed. Kops generates a bootstrap-channel.yaml file containing all the services that will be applied during cluster startup; all that needs to be done is to add the required services to the bootstrap file, along with their name and location.

Lastly, to bootstrap the cluster and configure all AWS resources, execute the following in the same directory as your Terraform file:

Run terraform plan, and if everything looks good, terraform apply.
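A sketch of that final step, saved as a script (it assumes the kops-generated Terraform file sits in the current directory):

```shell
# Hypothetical bootstrap script: review the plan, then create the
# AWS resources for the cluster.
cat <<'EOF' > terraform-bootstrap.sh
#!/bin/sh
set -e
terraform init     # set up providers and the S3 backend state
terraform plan     # dry run: show what would be created
terraform apply    # create the EC2, EBS, LB, DNS, etc. resources
EOF
chmod +x terraform-bootstrap.sh
```

Keeping `plan` and `apply` as separate steps gives you a chance to catch misconfigurations before any AWS resources are created.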

You should now have an operational HA K8s cluster with masters and nodes spanning multiple availability zones, ready to host your microservices.

Should you need to tear down your cluster:

terraform destroy && kops delete cluster <cluster-name>

replacing <cluster-name> with your actual cluster name.

K8s microservice deployments

At Onfido, Jenkins with customized infrastructure is used for CI. On top of Jenkins Pipeline, we have developed a DSL that makes it simple for a team to configure a standardised deployment for a new service. The preceding diagram depicts the overall deployment process.

Let’s examine the phases of our custom pipeline in greater detail:

Before deployment, a microservice must fulfill the following two conditions:

  1. A deploy folder must be present in the project folder, containing the k8s Deployment templates for development and production, respectively. The templates are populated with the Docker image build tag and the number of replicas per Pod using ktmpl. The final deployment files are generated during the deployment phase of the Jenkins build. We named the templates with the convention <deploymentName>-template.<development/production>.yml.
  2. A Kubernetes context must be declared in the Jenkinsfile.

When code is pushed to the project’s development branch:

  1. BitBucket triggers Jenkins to build, test, and push a new Docker image to the Docker registry (AWS ECR in our case).
  2. The Deployment on the development K8s cluster is created or updated.
  3. The cluster then pulls the new Docker image from the registry and performs a RollingUpdate on the Deployment, a strategy that guarantees zero-downtime deployments.

Manual approval is required before the master branch is deployed to the production K8s cluster.

Secrets

To obtain environment variables for the microservice from k8s secrets, a secrets(‘example.yml’) entry can be defined in the Kubernetes context of the Jenkinsfile above. The example.yml file is stored in an AWS S3 bucket, preferably encrypted.

The secrets(..) configuration allows multiple k8s secret files to be specified; they are applied to the k8s cluster (development/production) during the deployment phase of the Jenkins build.
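A hedged sketch of what such a k8s secret file might contain (the name, keys, and values are placeholder assumptions; values in a Secret's data field are base64-encoded):

```shell
# Hypothetical k8s Secret manifest, of the kind a secrets('example.yml')
# entry could point at; keys and values are placeholders.
cat <<'EOF' > example-secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: example
type: Opaque
data:
  DATABASE_URL: cG9zdGdyZXM6Ly9sb2NhbGhvc3Q=   # base64("postgres://localhost")
  API_KEY: c2VjcmV0                            # base64("secret")
EOF
```

Once applied to the cluster, each key can be injected into a container as an environment variable via `secretKeyRef`.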

The Ingress

To enable external access to our microservice from outside the Kubernetes cluster, it is necessary to establish a service for it.

Such a service listens on nodePort 30101 and round-robins TCP (or UDP, if preferred) traffic to Pods labelled app: example and env: development on port 5000.
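A minimal Service manifest matching that description might look like (the service and namespace names are assumptions):

```shell
# Hypothetical NodePort Service: forwards traffic arriving on node
# port 30101 to port 5000 of Pods labelled app: example, env: development.
cat <<'EOF' > example-service.yml
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: development
spec:
  type: NodePort
  selector:
    app: example
    env: development
  ports:
  - protocol: TCP
    port: 5000          # the service's cluster-internal port
    targetPort: 5000    # the container port on the Pods
    nodePort: 30101     # exposed on every node in the cluster
EOF
```

The selector is what ties the Service to the Pods: any Pod carrying both labels becomes a round-robin target.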

If type: NodePort and nodePort: 30101 are removed, the example microservice will only receive traffic from within the Kubernetes cluster, at example.development.svc.cluster.local:5000 (via internal DNS, if kube-dns is used).

The Service nodePort is exposed via the Container Network Interface (CNI), which in our case is Flannel running as a DaemonSet. This means any traffic arriving at a node in our cluster on port 30101 will be routed to the example microservice Pods.

To make the microservice publicly accessible when the cluster runs inside an AWS VPC, we need a public AWS ALB (Application Load Balancer, also known as ELBv2) pointed at our cluster’s worker nodes. Onfido uses Terraform to set up and configure the DNS + ALB for each microservice; a k8s ALB ingress controller is an excellent alternative, however.

A schematic representation of the traffic flow:

Autoscaling, monitoring, and logging

We use a Fluentd collector deployed as a DaemonSet to gather logs from containers, Docker, and Kubernetes. These logs are shipped to Sumologic, where they can be filtered by time, cluster, Pod, or sophisticated regexes. If you need to host logs privately, I recommend running ElasticSearch + Kibana with a Logstash collector as a DaemonSet (an ELK stack) on your cluster.

We run a Datadog agent as a DaemonSet to send monitoring and metric data to our Datadog account. Alternatively, you can use Prometheus, which works admirably but is harder to install and maintain. You can also use Heapster + InfluxDB + Grafana for basic metrics.

We tried autoscaling the cluster nodes (as part of an AWS ASG) with the OpenAI k8s autoscaler and the official node autoscaler recommended by Kops. However, both proved unsuitable for our requirements and overly complex (the OpenAI autoscaler, as they state, only scales up when Pods are pending, and is batch-optimized). So far, our homegrown basic node autoscaler has performed admirably.

We use the Kubernetes HorizontalPodAutoscaler for generic Pod autoscaling; it scales Pods according to their CPU consumption and the target you specify. Additionally, we have Pods that consume from RabbitMQ, for which we have developed a simple custom Pod autoscaler.
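A hedged sketch of such a CPU-based HorizontalPodAutoscaler (the target Deployment name and the numbers are illustrative assumptions):

```shell
# Hypothetical HorizontalPodAutoscaler: keeps average CPU around 70%
# by scaling the example Deployment between 2 and 10 replicas.
cat <<'EOF' > example-hpa.yml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
EOF
```

Note that CPU-based scaling only works if the Pods declare CPU resource requests, since utilization is computed relative to the request.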


There are several things to keep in mind when operating a Kubernetes cluster outside Google Cloud Platform:

  • Every container running in your Pods should make explicit resource requests (CPU and RAM). This way, Kubernetes can schedule your Pods predictably, and you retain full control over the provisioning percentage of your cluster.


  • Ensure that the rotation of container logs on your nodes is performed appropriately; CoreOS should handle this automatically.


  • Run a Docker cleanup Pod on each node to remove unexpectedly terminated containers and lingering images or volumes.


  • Set .spec.revisionHistoryLimit on Deployments to purge obsolete ReplicaSets, clean up old completed Jobs, or run a CronJob to prune ReplicaSets and finished Jobs.
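The first and last points above can be sketched in a Deployment spec fragment (the numbers are illustrative assumptions, not recommendations):

```shell
# Hypothetical Deployment fragment showing explicit resource
# requests/limits and a revision history cap; values are illustrative.
cat <<'EOF' > deployment-fragment.yml
spec:
  revisionHistoryLimit: 5        # keep only the last 5 old ReplicaSets
  template:
    spec:
      containers:
      - name: example
        image: example:1.0
        resources:
          requests:              # what the scheduler reserves for the Pod
            cpu: 100m
            memory: 128Mi
          limits:                # hard ceiling for the container
            cpu: 500m
            memory: 256Mi
EOF
```

With requests set on every container, the scheduler can bin-pack nodes deterministically, and the cluster's provisioning percentage becomes something you control rather than discover.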

In addition, if you are utilizing CoreOS as your distribution, ensure that these ports are accessible on your masters and nodes.


This blog post offered a practical guide to Kubernetes, covering its role in container orchestration, deployment on AWS with Kops and Terraform, and essential practices for microservice deployments, logging, monitoring, and maintenance. By understanding these concepts, you’ll be well-equipped to leverage Kubernetes for managing containerized applications in production.
