The Kubernetes community has grown enormously in recent years. According to the latest CNCF report, about 5.6 million developers use Kubernetes today, up from 3.6 million a year earlier. Edge computing, quantum computing, haptic feedback, 5G, blockchain, and computer vision are among the most popular, but by no means the only, applications for Kubernetes. By significantly enhancing developer productivity and adding flexibility to the deployment process, the technology is increasingly being adopted by companies across many business domains.

The report also showed that Kubernetes is heavily used in large organizations with more than 500 employees. This demonstrates how effectively the technology meets the needs of large businesses and how quickly its adoption is growing, driven by major market players.

The rise in the use of Kubernetes is largely connected with growing awareness of the technology. According to the report, 21% of back-end developers say they've heard of Kubernetes but don't know what it does, while 11% say they've never heard of it. Interestingly, many software developers may not realize that Kubernetes is behind many of the most popular services: giants such as Google, Spotify, Pinterest, and Tinder use it in their projects.

In this article, we will explain what Kubernetes is and describe its main components. We will also briefly look at when to use Kubernetes, and when it's better to choose another option.

What is Kubernetes?

Kubernetes (also known as K8s) is an open-source container orchestration platform. It adds another layer of management and automation for container engines such as Docker and CRI-O. Kubernetes can help you manage containerized applications and microservices architectures across complex environments, including public, private, and hybrid clouds. 

Kubernetes offers a rich set of capabilities that can help reduce manual work with automation. You can use Kubernetes to configure various automated processes, including deployments, maintenance, and scaling of containers across clusters of nodes.

Kubernetes runs containers on a shared operating system on host machines. However, containers remain isolated from each other unless you decide to connect them. This level of granularity enables you to achieve highly flexible and agile pipelines. 

If you are a DevOps professional, you should be familiar with Kubernetes basics. And if you're not already working with Kubernetes, you likely will be soon, because organizations everywhere are moving their production applications and development processes to it. Let's take a look at the basic concepts used in the Kubernetes ecosystem.

10 Kubernetes Concepts

Container

A container packages the application code and everything required to run it, including libraries and runtimes, and is executed by a container engine such as containerd, CRI-O, or Docker. This architecture makes containers highly portable: once you deploy a container, it runs on its own regardless of the machine it runs on.

Containers are immutable: once deployed, you cannot make changes to a running container. You deploy a container from a container image, so be sure to use or build immutable images. Immutability ensures that containers remain self-sufficient and that image behavior stays predictable.

Pod

A pod is the basic execution unit for containers. A pod hosts a collection of containers, allowing them to share resources such as storage and networking while still running independently of each other.

Sharing storage resources enables you to add persistent storage that outlives a pod. Containers within a pod can communicate with each other over localhost, while each pod has its own unique IP address that other pods can use to reach it.
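As a minimal sketch, a Pod manifest for a single nginx container might look like the following (the pod name and image tag are illustrative, not prescribed by the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod          # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25  # hypothetical image tag
      ports:
        - containerPort: 80
```

In practice you rarely create bare pods like this; higher-level objects such as ReplicaSets and Deployments, covered below, create and manage pods for you.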

Node

Containers run in pods, and pods run on nodes. A node is a physical or virtual worker machine that runs one or more pods. Kubernetes uses the control plane to manage nodes: the control plane can automatically schedule pods across the nodes in a cluster while accounting for the available resources on each node.

Cluster

A Kubernetes cluster includes a collection of worker nodes running containerized applications and the control plane services that manage those nodes. The cluster contains node components, control plane components, and addons.

Control plane components include the Kubernetes API server, a key-value store called etcd, the kube-scheduler that assigns new pods to nodes, the kube-controller-manager that runs controller processes, and a cloud-controller-manager that integrates with cloud providers.

Replica Set

A ReplicaSet ensures that Kubernetes maintains a stable set of replica pods at all times. You define the desired number of pods, and whenever a pod crashes, Kubernetes quickly creates a replacement to maintain the desired state. ReplicaSets are highly useful for achieving high availability.
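A sketch of a ReplicaSet that keeps three replicas of a hypothetical web pod running might look like this (all names and the image are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3               # desired number of pods to keep running
  selector:
    matchLabels:
      app: web              # the ReplicaSet manages pods carrying this label
  template:                 # pod template used to create replacements
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # hypothetical image tag
```

If one of the three pods crashes or is deleted, the ReplicaSet controller notices the mismatch between desired and actual state and creates a new pod from the template.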

Deployment

A Kubernetes Deployment is a resource that lets you specify the desired state of ReplicaSets or pods. You describe updates, and the Deployment controller changes the actual state to match the specified desired state. You can use Deployments to create new ReplicaSets, remove existing Deployments, roll back to earlier revisions, pause a rollout, and more.
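A Deployment looks much like a ReplicaSet but adds rollout management on top. A minimal sketch, with illustrative names and image, might be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3               # the Deployment creates a ReplicaSet with this count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # hypothetical image tag; changing it triggers a rollout
```

Updating the image in this manifest and re-applying it causes the Deployment controller to create a new ReplicaSet and gradually shift pods over to it, which is what makes rollbacks and paused rollouts possible.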

Kubernetes Autoscaling

Kubernetes enables you to automate various management tasks, including scaling and provisioning. Kubernetes autoscaling allows you to create automated processes to save time and facilitate rapid response to changes in demand.

You can use any of the three autoscaling features: the Horizontal Pod Autoscaler, the Vertical Pod Autoscaler, and the Cluster Autoscaler. You can combine the pod autoscalers with the Cluster Autoscaler to ensure that only the required resources are provisioned.
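As an illustration, a HorizontalPodAutoscaler that scales a hypothetical Deployment named `web` between 2 and 10 replicas based on CPU utilization might be sketched like this (using the `autoscaling/v2` API; the target name and thresholds are assumptions for the example):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:            # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds ~70%
```

Note that CPU-based autoscaling requires the metrics pipeline (e.g. the metrics-server addon) to be installed in the cluster.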

Namespaces, Labels, and Annotations

Namespaces, labels, and annotations facilitate effective collaboration. Here are the main characteristics of each feature:

  • Namespaces – enable you to isolate specific cluster resources. You can use namespaces to create a separate virtual environment and assign it to individual users, projects, or teams. The main advantage of namespaces is that they limit users' and groups' access to specific Kubernetes objects.
  • Labels – key/value pairs describing attributes that can help you distinguish between resources within a specific namespace. You can use labels to organize subsets of objects and facilitate efficient queries and watches. For example, labels can help distinguish between release state, application tier, and customer identification.
  • Annotations – enable you to add arbitrary non-identifying metadata to objects. You can use annotations for declarative configuration tooling, such as build, release, or image information. Annotations also enable you to add contact information for relevant stakeholders.
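The three features above all live in an object's `metadata` block. A sketch of a pod combining them might look like this (the namespace, labels, and annotation values are all illustrative and assume a `team-a` namespace already exists):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-pod
  namespace: team-a             # isolates the pod in the team-a virtual environment
  labels:                       # identifying key/value pairs, usable in selectors
    app: api
    tier: backend
    release: stable
  annotations:                  # non-identifying metadata, ignored by selectors
    example.com/build: "abc1234"        # hypothetical build info
    example.com/contact: "team-a@example.com"
spec:
  containers:
    - name: api
      image: example/api:1.0    # hypothetical image
```

The practical distinction: labels can be queried and matched by selectors (e.g. by Services and ReplicaSets), while annotations are only read by tools and humans.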

Persistent Storage

Kubernetes uses volumes to provide different types of storage. A volume is a directory made accessible to a particular pod. The volume type determines how that directory is backed, what medium it uses, and what content is available to the pod. You can use several types of storage within a single pod.

Here are the main storage components that offer persistent storage in Kubernetes:

  • Kubernetes PersistentVolumes (PVs) – each PV is a cluster-wide object connected to a specific storage provider that offers storage resources. Kubernetes allows administrators to provision PVs.
  • PersistentVolumeClaims (PVCs) – each pod can use a PVC to request storage within a namespace. Kubernetes attempts to satisfy the request with a matching PV, which passes through different states in the process: for example, an available PV can be bound to a PVC, and once bound it is no longer available to other claims.
  • StorageClasses – Kubernetes uses StorageClasses as an abstraction layer over different types of storage, including many cloud storage services. This provides a wide range of options for storing data, including large volumes of information.
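Tying the three components together, a sketch of a PVC requesting 5 GiB of storage might look like this (the claim name and the `standard` StorageClass are assumptions; many clusters ship a differently named default class):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce            # volume can be mounted read-write by one node
  storageClassName: standard   # assumes a StorageClass named "standard" exists
  resources:
    requests:
      storage: 5Gi             # requested capacity
```

A pod then references the claim by name in its `volumes` section, and Kubernetes binds the claim to a matching PV (or dynamically provisions one via the StorageClass).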

Service

A Kubernetes Service enables you to define a logical collection of pods and an access policy for them. Typically, a Service groups together pods that perform the same function and assigns them a single service name and IP address.

Services enable discovery and routing between pods, using labels and selectors to match pods with the relevant requests. This ensures frontend pods can reach the relevant backend pods even as individual pods are replaced dynamically.
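A sketch of a Service that routes traffic to pods labeled `app: web` might look like this (the name, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web          # traffic is routed to any pod carrying this label
  ports:
    - port: 80        # port the Service exposes inside the cluster
      targetPort: 8080 # port the matched pods listen on
```

Other pods in the cluster can now reach the backend at the stable name `web-svc` (via cluster DNS), regardless of which individual pods currently back it.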

When should you use Kubernetes?

This article has covered the ten basic concepts of Kubernetes. Of course, concepts alone are not enough: for business software development, it's crucial to understand when and why to use Kubernetes to get the most from the technology and achieve business goals.

Kubernetes can accelerate software development through solutions built on the cloud-native ecosystem. At the same time, it lets organizations keep using their applications and data while modernizing their business platforms.

Let’s take a brief look at three main cases of how Kubernetes can be used.

1. Kubernetes for multi-container orchestration

If you manage dozens or even thousands of containers manually, you will likely need a dedicated container management team to update, connect, manage, and deploy them.

To gain the significant benefits of a system built with containers, you need a container orchestrator like Kubernetes, which can perform the following tasks:

  • Scaling up and down on demand
  • Orchestrating and integrating various modular parts
  • Communicating across clusters
  • Ensuring containers’ fault tolerance

2. Kubernetes for microservices architecture

How does Kubernetes fit in with the microservices concept? Breaking a large-scale application down into smaller parts (microservices) gives us more freedom and independence of action. Kubernetes helps all these independent pieces run together by describing the infrastructure architecture, letting you inspect and resolve resource usage and sharing issues.

3. Kubernetes for cloud management

By design, Kubernetes can run on a private cloud (such as OpenStack), a public cloud (AWS, Google Cloud Platform, etc.), or on-premises. This lets you serve your users no matter where they're located, with increased security as a bonus, and helps you avoid vendor lock-in.

When should you not use Kubernetes?

Kubernetes was created to solve a certain set of problems, so it is not the best choice if your project can be described as:

  • A simple or small-scale project: one with a small user base, a low load, or a simple architecture, and no plans to grow any of these.
  • A project at the MVP stage. In this case, it's better to begin with Docker Swarm.

In these cases, Kubernetes is unreasonably expensive and too complicated. But if you feel that your project requires scaling and your deployment capabilities have reached their limits, orchestration with Kubernetes can become a reasonable choice.

Anton Logvinenko
PHP/DevOps Group Leader
