
Tuesday, December 21, 2021

Kubernetes Fundamentals

Containers: packages that bundle an application with its dependencies so the software runs reliably across environments

Docker is the most popular container runtime engine.


Why use Containers?

Containers allow standardisation, reduced resource utilisation, fault isolation and immutability.


Benefits:

Self-contained: the application and all its libraries are bundled together in a single container image.

Portable: we can ship this image to different environments and be sure it will work the same way everywhere.

Platform agnostic: the same image runs on any platform that provides a compatible container runtime.

Lightweight: containers share the host kernel instead of carrying a full-blown guest OS, so images stay small.

Fault isolation: if a single container has a security vulnerability, the compromise is kept within that container and doesn't affect other containers.

Immutable: if you have a problem with a container, you can destroy it and recreate it from the base image. The newly created container will work the same way as when it was first created.


What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerised workloads and services.


Kubernetes is abbreviated as K8s (the 8 stands for the eight letters between the K and the s).


Kubernetes is portable and cloud agnostic: a workload running in Google Cloud today can be moved to other providers such as Microsoft Azure or AWS, or even to an on-prem vSphere environment, relatively easily and with minimal changes.


This is because all major cloud providers support Kubernetes. Multi-cloud and hybrid-cloud setups have become really difficult to manage; from a VMware perspective, VMware Tanzu addresses this as a single management console that sits across all these different providers.


Instead of learning a different user interface for each provider, customers can use VMware Tanzu to interact with multiple providers.


Kubernetes also enables a microservices way of building applications, because containers naturally break software into small building blocks.


Key features:

Portability: faster speed to market

The ability to deploy anywhere lets teams focus on delivery, which increases speed to market.


Scalability - Autoscaling

Kubernetes automatically detects workload demand and scales up and down accordingly, which complements the microservices way of building applications.


High Availability - self healing

Kubernetes constantly runs health checks and self-heals to match the desired state. It also performs load balancing and routes traffic intelligently.


Kubernetes Architecture:




kubectl: command-line utility responsible for communicating with the Kubernetes API server.


The master node is the control plane of the entire cluster.


Master node Components:


API server:

Entry point of the cluster

Validates requests that are going through it

Orchestrates all operations within the cluster


Scheduler:

Selects the optimal node to run pods (workloads) based on defined configurations

Selects where the pods should go; it does not start the pod. The kubelet component is responsible for starting it.


Controller Manager:

Different controllers with different functions

Onboards new nodes; responsible for noticing and responding when a node goes down.

Ensures that the correct number of pods is running.


etcd:

Stores data in key-value format

Records what resources are available and the state of the cluster

Does not store application data

Backing up etcd is how cluster state information is backed up.


Worker node Components:


kubelet:

Agent that communicates with the kube-apiserver

Manages all activities of the worker node


kube-proxy:

Network proxy that allows network communication for your pods

Maintains network rules


Worker node: receives instructions from the master node to run workloads.


Container runtime (e.g. Docker): software to run containers.


Workloads run as Pods: the scheduler assigns them to worker nodes.


Clusters: a set of nodes grouped together.


YAML files are used to define declarative configurations.


Pods:

The smallest object that you can create in Kubernetes.
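As a sketch, a minimal webapp-pod.yaml for the commands below could look like this (the nginx image, label and container port are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod
  labels:
    app: webapp          # label used later by selectors
spec:
  containers:
    - name: webapp
      image: nginx:1.18.0   # illustrative image
      ports:
        - containerPort: 80
```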


Commands to create the pod from the yaml file and inspect it:

kubectl create -f webapp-pod.yaml

kubectl get pods

kubectl describe pod webapp-pod


ReplicaSet:

Construct that runs multiple instances of the same pod

Self-heals according to desired state

Desired state is defined by yaml file

A ReplicaSet can be scaled up and down by changing its replica count (automatic scaling with traffic requires a HorizontalPodAutoscaler on top)

A ReplicaSet helps you stay highly available and achieve zero downtime


Labels & Selectors

Help to identify and associate different objects
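A sketch of how a ReplicaSet ties these together: the selector finds pods by label, and the pod template stamps that label onto every replica (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-replicaset
spec:
  replicas: 3            # desired state: three identical pods
  selector:
    matchLabels:
      app: webapp        # selector: matches pods carrying this label
  template:              # pod template used to create replicas
    metadata:
      labels:
        app: webapp      # label: applied to every pod created
    spec:
      containers:
        - name: webapp
          image: nginx:1.18.0
```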


kubectl get replicaset

kubectl describe replicaset <replicaset-name>


Deployment:

A Deployment is a higher-level construct that helps to manage ReplicaSets.

When we create a Deployment it automatically creates a ReplicaSet (which in turn creates the Pods), and its role is to make sure that the Pods are highly available.

Some of additional features of Deployment are 

  • Rolling Update
  • Rollback changes
  • Pause and resume deployment
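A deployment-1.yaml along these lines would provide the features above (replica count, labels and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  strategy:
    type: RollingUpdate   # enables rolling updates (and rollbacks via revision history)
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx:1.18.0
```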

To create the deployment from yaml:

kubectl create -f deployment-1.yaml ### declaratively


Same thing can be achieved imperatively using

kubectl create deployment <deployment-name> --image=nginx:1.18.0


Services:

An abstract way to expose an application running on a set of pods via an endpoint

  • Stable IP address
  • Load balancing
  • Provides loose coupling between pods

3 types of services

  • ClusterIP: mainly used for internal communication; traffic stays within the cluster itself.
  • NodePort: exposes the service on a static port on every node, so an external system talks to the NodePort service instead of communicating with the pods directly.
  • LoadBalancer: exposes the service externally through the cloud provider's load balancer.
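A sketch of a my-service definition; the type field picks one of the three service types above (selector label and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP        # or NodePort / LoadBalancer
  selector:
    app: webapp          # load-balances across pods carrying this label
  ports:
    - port: 80           # stable port exposed by the service
      targetPort: 80     # container port the traffic is forwarded to
```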


Create the service from the yaml file:

kubectl create -f <yaml file>


To get list of services

kubectl get service <my-service>

kubectl get endpoints


Namespaces:

Are a way to divide cluster resources between multiple users


Characteristics of Namespaces:

  • Names of resources need to be unique within a namespace, but not across namespaces
  • Cannot be nested inside one another
  • Each resource can only be in one namespace
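A namespace itself is a simple object; a sketch (the name dev is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```

Resources are then created in it with kubectl create -f <yaml file> -n dev, or the namespace can be created imperatively with kubectl create namespace dev.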

Secrets:

Stores sensitive data, e.g. passwords

A 2-step process:

  • Create secret
  • Inject into pods
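The two steps above can be sketched in yaml (names, key and value are illustrative; note the data values are base64-encoded, not encrypted):

```yaml
# Step 1: create the secret
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  password: cGFzc3dvcmQ=     # "password" base64-encoded
---
# Step 2: inject it into a pod as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod
spec:
  containers:
    - name: webapp
      image: nginx:1.18.0
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
```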

Volumes:

Pods are ephemeral:

  • they can be stopped or destroyed
  • rebuilt or replaced
  • each pod has its own IP address


Persistent Volumes - Interface to the actual storage

Cluster level resource that is used to manage your storage centrally


Persistent Volume Claims

  1. A Pod requests a storage volume via a PVC
  2. The PVC tries to find the most suitable volume in the cluster
  3. The PV is bound to the PVC, and the Pod mounts it to access the storage backend
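A sketch of the PV/PVC pair behind those steps (names, capacity and hostPath are illustrative; hostPath is for demos only, real clusters use NFS, cloud disks, etc.):

```yaml
# Step 1: admin provisions a PersistentVolume (cluster-level resource)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
# Step 2: user requests storage via a PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # the claim binds to a matching PV
```

A pod then mounts the claim through spec.volumes with persistentVolumeClaim.claimName: pvc-1.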

K8s Admins and K8s Users

A K8s admin sets up and maintains the cluster:

  • Manage and allocate resources for users (e.g. developers)
  • Manage security of the cluster
  • Storage provisioning

A K8s user deploys apps in the cluster:

  • Deploys apps in the cluster directly or through a CI/CD pipeline
  • Configures the apps to use persistent volumes through persistent volume claims


