Kubernetes basic glossary


When I was starting to learn Kubernetes, I got overwhelmed by all of the elements it introduces. The deeper I dove into it, the more new concepts I had to familiarize myself with. After a while, though, I realized I wouldn’t leverage some elements at all, a couple of them I simply didn’t need yet, and only a few were actually useful for my current requirements.
I strongly believe this might be true in your case as well, and I see lots of people struggling with issues similar to those I described. That’s why I decided to present here the minimal stack required to deploy your own application onto Kubernetes and expose it externally.
Prerequisites
I assume you already have a Kubernetes cluster available. It can be installed locally or hosted, among others, on Google Cloud Platform.
Additionally, you may have a look at my recent article, which will help you set everything up:
How to configure Google Kubernetes Engine using Terraform
Image
An image is a lightweight, standalone, executable package of software that includes everything needed to run an application — its code, a runtime, system libraries and tools, environment variables, and configuration files.
Container
A container is launched by running an image. It’s a runtime instance of an image. It is what the image becomes in memory when executed.
Containers are an abstraction at the app layer. Multiple containers can run on the same machine and share the OS kernel, each running as an isolated process in userspace.
Node
A node is a representation of a VM or a physical machine in Kubernetes where containers are deployed. Each node is managed by master components.
A node is not inherently created by Kubernetes: it is created externally by cloud providers or it exists in your pool of physical or virtual machines. Hence when Kubernetes creates a node, it creates an object that represents the node.
The services on each worker node include:
a container runtime, the software responsible for running containers
the kubelet, a node agent which communicates with the Kubernetes master
kube-proxy, a network proxy which reflects Kubernetes networking services
Cluster
A cluster consists of at least one master node and a number of worker nodes. These machines run the Kubernetes cluster orchestration system. Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. The actual application is deployed dynamically across the cluster.
The master is the unified endpoint for your cluster. All interactions with the cluster are done via Kubernetes API calls, and the master runs the Kubernetes API Server process to handle those requests. The cluster master is responsible for deciding what runs on all of the cluster’s nodes. Thus, each node is managed from the master.
Kubernetes
Kubernetes coordinates a cluster of nodes that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them to specific machines. Kubernetes automates the distribution and scheduling of application containers across the cluster.
End users can use the Kubernetes API directly to interact with the cluster. Kubernetes helps you make sure those containerized applications run where and when you want and helps them find the resources and tools they need to work.
Here is the hierarchy of components we described so far: Cluster → Node → Container → Image (a Cluster consists of Nodes, each Node runs Containers, and each Container is launched from an Image).
Once you are familiar with the basic concepts, it’s time to dive into more complex ones. We will use them to represent our Kubernetes config as code.
Pod
You are already familiar with containers. The thing is, they have to run somewhere, because Kubernetes doesn’t run them directly. The higher-level structure that wraps a single container, multiple containers of the same image, or even multiple containers of different images is called a Pod.
You can think of a Pod as a group of one or more containers with shared storage (volumes) and networking. Containers within a Pod share an IP address and port space and can find each other via localhost. Pods, on the other hand, communicate with each other via their Pod IP addresses.
A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources. A Pod is basically a running process on your cluster.
Note that grouping multiple containers in a single Pod is a relatively advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled. For example, you might have a container that acts as a web server for files in a shared volume, and a separate “sidecar” container that updates those files from a remote source.
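As a sketch, a minimal Pod manifest wrapping a single container might look like this (the names and image below are illustrative, not from any particular project):

```yaml
# Hypothetical single-container Pod; names and image are examples only.
apiVersion: v1
kind: Pod
metadata:
  name: hello-web
  labels:
    app: hello-web         # label used later to select this Pod
spec:
  containers:
    - name: web
      image: nginx:1.25    # any container image you have built or pulled
      ports:
        - containerPort: 80  # port the container listens on
```

Applying it with kubectl apply -f pod.yaml creates a single running instance; in practice you rarely create bare Pods, as the sections below explain.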
ReplicaSet
Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run multiple instances), you should use multiple Pods, one for each instance. In Kubernetes, this is generally referred to as replication.
A ReplicaSet ensures a specified number of pods are running at any given time. It fulfills its purpose by creating and deleting Pods as needed to reach the desired count. When a ReplicaSet needs to create new Pods, it uses its Pod template.
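A ReplicaSet definition combines the desired replica count, a label selector, and the Pod template it stamps out. A minimal sketch, reusing the hypothetical names from above:

```yaml
# Hypothetical ReplicaSet keeping three identical Pods running.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-web-rs
spec:
  replicas: 3               # desired number of Pods at any given time
  selector:
    matchLabels:
      app: hello-web        # must match the labels in the Pod template
  template:                 # Pod template used when new Pods are needed
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

If a Pod dies, the ReplicaSet notices the count dropped below 3 and creates a replacement from the template.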
Deployment
A Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods, along with many other useful features. It is recommended to use Deployments instead of managing ReplicaSets directly.
A Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they are mainly used by Deployments as a mechanism to orchestrate Pod creation, deletion, and updates. When you use Deployments, you don’t have to worry about managing the ReplicaSets they create: Deployments own and manage their ReplicaSets for you.
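A Deployment manifest looks almost identical to a ReplicaSet, only the kind changes; the Deployment then creates and rotates ReplicaSets under the hood. A sketch with the same illustrative names:

```yaml
# Hypothetical Deployment; it will create a ReplicaSet matching this spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # changing this tag triggers a rolling update
          ports:
            - containerPort: 80
```

Editing the image tag and re-applying the manifest makes the Deployment roll out a new ReplicaSet gradually while scaling the old one down.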
Here is the hierarchy of components we described: Deployment → ReplicaSet → Pod → Container (a Deployment manages ReplicaSets, which in turn create and manage Pods, each running one or more Containers).
In the end, it’s time to explain the remaining concepts to finally understand everything required to deploy our application to production and expose it externally to the world.
Service
A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable loose coupling between dependent Pods. The set of Pods targeted by a Service is usually determined by a label (a key/value pair attached to objects for grouping purposes).
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. By default, pods are essentially isolated from the rest of the world. Services allow your applications to receive traffic.
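A Service selects Pods by label and gives them a single stable address inside the cluster. A minimal sketch targeting the hypothetical Pods from the earlier examples:

```yaml
# Hypothetical Service routing cluster-internal traffic to labeled Pods.
apiVersion: v1
kind: Service
metadata:
  name: hello-web-svc
spec:
  selector:
    app: hello-web    # targets every Pod carrying this label
  ports:
    - port: 80        # port the Service exposes inside the cluster
      targetPort: 80  # container port the traffic is forwarded to
```

Other Pods in the cluster can now reach the application at hello-web-svc:80 regardless of which Pods come and go behind it.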
NodePort
A NodePort opens a specific port on every node of your cluster. It exposes your Service on a static port on each node’s IP address. You can then access it by requesting <NodeIP>:<NodePort>. Kubernetes transparently routes incoming traffic on the NodePort to your container.
When configuring a NodePort, you have to specify which port it opens and the target port to which it forwards all requests. The target port is usually the port the container itself exposes.
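Turning the Service above into a NodePort is a matter of setting its type and, optionally, pinning the node port. A sketch (the port values are illustrative):

```yaml
# Hypothetical NodePort Service exposing the app on every node's IP.
apiVersion: v1
kind: Service
metadata:
  name: hello-web-nodeport
spec:
  type: NodePort
  selector:
    app: hello-web
  ports:
    - port: 80         # Service port inside the cluster
      targetPort: 80   # container port the traffic is forwarded to
      nodePort: 30080  # static port opened on every node (default range 30000-32767)
```

With this in place, http://<NodeIP>:30080 reaches the application from outside the cluster.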
Ingress
Ingress allows external traffic to your application. Because Kubernetes provides isolation between pods and the outside world, you have to open a channel for communication if you want to communicate with a running pod. Ingress is a collection of routing rules that govern how external users access services running in a Kubernetes cluster.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress. An Ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, and offer name-based virtual hosting.
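An Ingress resource expresses those routing rules declaratively; an ingress controller running in the cluster enforces them. A sketch with a hypothetical hostname, pointing at the Service from the earlier example:

```yaml
# Hypothetical Ingress routing a domain to a cluster-internal Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-web-ingress
spec:
  rules:
    - host: hello.example.com        # illustrative domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-web-svc  # Service defined earlier
                port:
                  number: 80
```

Note that an Ingress does nothing on its own: an ingress controller (e.g. ingress-nginx) must be installed in the cluster to act on it.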
ClusterIssuer
Issuers represent a certificate authority from which signed x509 certificates can be obtained, such as Let’s Encrypt. You will need at least one Issuer or ClusterIssuer in order to begin issuing certificates within your cluster. (Both are custom resources provided by cert-manager rather than by Kubernetes itself.)
ClusterIssuers are particularly useful when you want to obtain certificates from a central authority (e.g. Let’s Encrypt, or your internal CA) for the whole cluster, for instance when you run single-tenant clusters.
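Assuming cert-manager is installed in the cluster, a ClusterIssuer for Let’s Encrypt might be sketched like this (the name, email, and ingress class are illustrative):

```yaml
# Hypothetical cert-manager ClusterIssuer using the Let's Encrypt ACME API.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com     # illustrative contact address
    privateKeySecretRef:
      name: letsencrypt-prod     # Secret where the ACME account key is stored
    solvers:
      - http01:
          ingress:
            class: nginx         # assumes an nginx ingress controller
```

cert-manager can then solve ACME HTTP-01 challenges through the Ingress and attach the resulting certificates to it for TLS termination.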
Here is the hierarchy of components we described: Ingress → Service (e.g. NodePort) → Pod (an Ingress routes external traffic to Services, which forward it to the Pods they select; a ClusterIssuer supplies the certificates the Ingress can use for TLS).
Summary
By now, you should be pretty familiar with the basic concepts of Kubernetes. There are many more elements besides these; however, you already know the most important ones needed to actually deploy your application.
It’s time to use these components programmatically and implement your first Kubernetes definition. Since you know the theory quite well, you are ready to start learning how to do that from a code perspective.
Good luck with diving into the technical aspects of Kubernetes!
Source: https://blog.lelonek.me/kubernetes-basic-glossary-9ca0416e3948

