Kubernetes (k8s) is a container orchestration system for deploying and managing containers. Its design is influenced by Google's internal cluster management system Borg.
A k8s cluster consists of worker machines called nodes that run containerized applications. Every cluster has at least one worker node that hosts pods - the components of the application workload. The control plane manages the nodes and pods. In production, the control plane usually runs across multiple computers for fault tolerance and high availability.
𝗖𝗼𝗻𝘁𝗿𝗼𝗹 𝗣𝗹𝗮𝗻𝗲 𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀
- API Server - Exposes the Kubernetes API; all components communicate through it and all pod operations go through it
- Scheduler - Watches pod workloads and assigns them to nodes
- Controller Manager - Runs core control loops like the Node Controller and EndpointSlice Controller
- etcd - Key-value store that backs all cluster data
𝗪𝗼𝗿𝗸𝗲𝗿 𝗡𝗼𝗱𝗲 𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀
- Pods - The smallest unit deployed and managed by k8s. Pods group containers and give them a single IP address.
- kubelet - An agent on each node that ensures the containers described in pod specs are running and healthy
- kube-proxy - A network proxy on each node that handles routing and load balancing for services and pods
Monolith: one giant legacy program/application with all the code in it
This model is being replaced by cloud-native microservices applications
Declarative model: describe what you want (the desired state) in a manifest file
What is the declarative model in Kubernetes?
In Kubernetes, the declarative model refers to the approach of specifying the desired state of the system in configuration files, rather than imperatively instructing the system on how to achieve that state. Instead of issuing commands to Kubernetes to perform specific actions (such as creating, updating, or deleting resources), you define the desired state of your applications and infrastructure using YAML or JSON configuration files. Kubernetes then takes care of reconciling the actual state of the system with the desired state specified in these configuration files.
The key characteristics of the declarative model in Kubernetes include:
1. Desired State: You define the desired state of your Kubernetes resources (such as Pods, Deployments, Services, etc.) in configuration files. This includes specifications for things like the number of replicas, container images, resource limits, network policies, and more.
2. Declarative Configuration: Configuration files are used to declare the desired state of the system, rather than issuing imperative commands to make changes. These configuration files are typically stored in version control systems (such as Git) and treated as the source of truth for the desired state.
3. Reconciliation: Kubernetes continuously monitors the actual state of the system and compares it against the desired state specified in the configuration files. If there are any discrepancies, Kubernetes takes action to reconcile the actual state with the desired state, automatically creating, updating, or deleting resources as necessary.
4. Idempotent Operations: Declarative configuration enables idempotent operations, meaning that applying the same configuration multiple times will result in the same state, regardless of the current state of the system. This helps ensure consistency and reliability in managing infrastructure and applications.
Overall, the declarative model in Kubernetes provides a powerful and flexible way to manage complex distributed systems, allowing you to define and maintain the desired state of your applications and infrastructure efficiently, while Kubernetes handles the details of implementation and reconciliation.
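A minimal sketch of the two styles (the names and image are hypothetical):

# Imperative: issue a command
kubectl run web01 --image=nginx

# Declarative: describe the desired state in a manifest (e.g. pod01.yml), then apply it
apiVersion: v1
kind: Pod
metadata:
  name: web01
spec:
  containers:
  - name: nginx
    image: nginx

kubectl apply -f pod01.yml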
VMware: VM
Docker: container
Kubernetes: Pod
Pod: atomic unit of scheduling, which can have one or more containers
ReplicaSet: controls how many pods to run
Deployment: updates and rollbacks
SVC: stable network abstraction
What is an SVC in Kubernetes?
In Kubernetes, SVC stands for "Service." A Service is an abstraction that defines a logical set of Pods and a policy by which to access them. Services enable decoupling between the frontend and backend components of an application, allowing clients to access the application without needing to know the IP addresses or specific details about the Pods that implement it.
Here are some key points about Kubernetes Services (SVCs):
1. Networking: Services provide a stable IP address and DNS name that clients can use to access the Pods associated with the Service. They abstract away the details of individual Pod IP addresses, allowing Pods to be scaled up or down without affecting client connectivity.
2. Load Balancing: Services automatically distribute incoming traffic across the Pods that are part of the Service. This load balancing ensures that requests are evenly distributed and that individual Pods do not become overwhelmed with traffic.
3. Service Discovery: Services provide a consistent way for clients to discover and communicate with backend Pods. Clients can use the DNS name of the Service to access the Pods, regardless of their underlying IP addresses.
4. Types of Services: Kubernetes supports different types of Services, including ClusterIP, NodePort, and LoadBalancer, each with its own characteristics and use cases. For example, ClusterIP Services expose the Service on an internal IP within the Kubernetes cluster, while NodePort Services expose the Service on a port across all Nodes in the cluster.
5. Labels and Selectors: Services use labels and selectors to determine which Pods they should route traffic to. Pods that match the selector specified in the Service configuration are considered part of the Service, and traffic is load balanced across them.
Overall, Kubernetes Services play a critical role in enabling communication and connectivity between different components of an application, providing a layer of abstraction and reliability for networking in Kubernetes clusters.
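A minimal Service sketch (the name, label, and ports are hypothetical); it selects pods labeled app: web01 and load balances traffic across them:

apiVersion: v1
kind: Service
metadata:
  name: web01-svc
spec:
  type: ClusterIP         # the default type; reachable only inside the cluster
  selector:
    app: web01            # traffic is routed to pods carrying this label
  ports:
  - port: 80              # port the Service listens on
    targetPort: 8080      # port the container listens on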
TKG cluster
Tanzu Kubernetes Grid cluster (a.k.a. TKC). A fully upstream-conformant Kubernetes cluster.
Supervisor Cluster a.k.a. SV or SC.
This is the control plane running on vSphere that enables the deployment and management of TKG clusters.
Load Balancer or LB
A virtual machine used to load balance traffic between ingress networks and workloads.
HAProxy is used in this configuration but more load balancers will be introduced.
vDS - vSphere Distributed Switch
WCP - Workload Control Plane
kubectl The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters.
What is kube-node-lease?
This namespace holds Lease objects associated with each node. Node leases allow the kubelet to send heartbeats so that the control plane can detect node failures.
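To list these Lease objects on a running cluster:

kubectl get leases -n kube-node-lease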
Creating Objects using the Imperative Method
Available Commands:
clusterrole Create a cluster role
clusterrolebinding Create a cluster role binding for a particular cluster role
configmap Create a config map from a local file, directory or literal value
cronjob Create a cron job with the specified name
deployment Create a deployment with the specified name
ingress Create an ingress with the specified name
job Create a job with the specified name
namespace Create a namespace with the specified name
poddisruptionbudget Create a pod disruption budget with the specified name
priorityclass Create a priority class with the specified name
quota Create a quota with the specified name
role Create a role with single rule
rolebinding Create a role binding for a particular role or cluster role
secret Create a secret using specified subcommand
service Create a service using a specified subcommand
serviceaccount Create a service account with the specified name
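A few of these subcommands in action (the names and values are hypothetical):

kubectl create namespace dev01
kubectl create configmap app-config --from-literal=APP_ENV=dev
kubectl create deployment web01 --image=nginx -n dev01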
Creating Objects using the Declarative Method
kubectl apply -f app02.yml
Working with Services
- Load Balancer
- kube-proxy
• - Runs on every Node in the cluster
• - Uses iptables for routing and load balancing
- Types of Services
• clusterip Create a ClusterIP service
• nodeport Create a NodePort service
• externalname Create an ExternalName service
• loadbalancer Create a LoadBalancer service
externalip - https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
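The same Service types can be created imperatively (the names and ports are hypothetical):

kubectl create service clusterip web01-cip --tcp=80:8080
kubectl create service nodeport web01-np --tcp=80:8080 --node-port=30080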
Working with the Pod
1. Pod with single Container
2. Pod with Multi Container
3. Init Container Pod (see the sketch after this list)
• cnt01:
• cnt02:
• cnt03:
• initContainers:
• init01:
• init02:
• init03:
4. Sidecar Containers
5. Dynamic Scheduling
6. Manual Scheduling
• nodeName:
7. Scheduling based on nodeSelector
8. Static Pods
9. ReplicaSet
• - Scaleup/down
10. Deployment
• - Rollout/Rollback
11. Working with Services
• - ClusterIP - Default
• - NodePort
• - LoadBalancer
• - ExternalName
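A hedged sketch of the init-container pod from item 3, reusing the container names from the notes (the images and commands are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  initContainers:            # run sequentially; if one fails, the next never starts
  - name: init01
    image: busybox
    command: ["sh", "-c", "echo pre-check done"]
  containers:                # application containers start in parallel afterwards
  - name: cnt01
    image: nginx
  - name: cnt02              # helper (sidecar-style) container in the same pod
    image: busybox
    command: ["sh", "-c", "sleep 3600"]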
Various Types of Communications
- Between Containers in the same Pod - Loopback Interface or Localhost:Port or 127.0.0.1:Port
- Between Pod to Pod running on the same Node - Linux Bridge
- Between Pod to Pod running on the different Nodes - Overlay Network Plug-in (Weave-net)
- Between Service to a Pod - kube-proxy
- External to a Service
• - coreDNS
• - External Load Balancer
• - Ingress
• - ExternalIP
Pod Probes
- Readiness Probe - The kubelet uses readiness probes to know when a container is ready to start accepting traffic.
- Liveness Probe - The kubelet uses liveness probes to know when to restart a container.
- Startup Probe - The kubelet uses startup probes to know when a container application has started.
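A hedged sketch showing all three probes on one container (the image, path, and port are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: nginx
    startupProbe:            # gates the other probes until the app has started
      httpGet:
        path: /
        port: 80
      failureThreshold: 30
      periodSeconds: 10
    readinessProbe:          # decides when the container may receive traffic
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
    livenessProbe:           # restarts the container when it fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 10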
Cluster Upgrade
- Masters
• - Active
• - Passive
- Nodes
kubeadm - Tool
kubelet - Node Agent
kubectl - CLI
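A typical kubeadm upgrade flow on a control plane node looks roughly like this (the target version is a placeholder):

kubeadm upgrade plan                  # shows the versions you can upgrade to
kubeadm upgrade apply v1.29.0         # upgrades the control plane (placeholder version)
# then upgrade the kubelet and kubectl packages on each node and restart the kubelet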
Kubernetes Storage
Docker - /var/lib/docker/containers/
Kubelet - /var/lib/kubelet/pods/
Various Storage Options
Ephemeral
• - emptyDir
Persistent Storage
• - hostPath
• - nfs
• - pv
• - pvc
Static Storage Provisioning
- Storage Admin
• - Install and Configure Storage Solution - NFS
• - 192.168.100.22
• - /srv/nfs/storage
- Kubernetes Admin
• - Create a PV
• - Storage Device
- Developer
• - Create a PVC
• - Request for the Storage
- PV
• - A PersistentVolume (PV) is a piece of storage in the cluster
• - Static - Admin
• - Dynamic - StorageClass
- PVC
• - A PersistentVolumeClaim (PVC) is a request for storage by a user.
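A static-provisioning sketch that ties these together, using the NFS server from the notes (the capacity and names are hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs01
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.100.22       # the NFS server from the notes
    path: /srv/nfs/storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs01
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi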
volumeMode:
• Filesystem
• Block
accessModes
• RWO - ReadWriteOnce
• ROX - ReadOnlyMany
• RWX - ReadWriteMany
• RWOP - ReadWriteOncePod
Cluster -> Nodes -> Pods -> Containers -> App
Reclaim Policy:
• - Retain
• - Delete
• - Recycle - Deprecated
What is the reclaim policy in Kubernetes?
In Kubernetes, the reclaim policy refers to the action that should be taken when a PersistentVolumeClaim (PVC) is deleted or released. PersistentVolumes (PVs) in Kubernetes can be dynamically provisioned or statically provisioned, and when a PVC is no longer needed, the reclaim policy determines whether the associated PV should be retained, recycled, or deleted.
There are three main reclaim policies in Kubernetes:
1. Retain: With the "Retain" reclaim policy, the PV associated with the PVC is not deleted when the PVC is deleted or released. Instead, the PV and its data are retained indefinitely, and it is the responsibility of the cluster administrator to manually reclaim or delete the PV if it is no longer needed.
2. Delete: With the "Delete" reclaim policy, the PV associated with the PVC is automatically deleted when the PVC is deleted or released. This results in the underlying storage resources being released and potentially reclaimed by the storage provider.
3. Recycle: The "Recycle" reclaim policy is deprecated and will be removed in a future release. With the "Recycle" policy, the underlying volume is scrubbed (its data removed) when the PVC is released, and the PV is made available for a new claim. It has been deprecated due to security concerns and inconsistencies in implementation across storage providers.
The reclaim policy is a property of the PV, not the PVC: dynamically provisioned PVs inherit the reclaimPolicy of their StorageClass (Delete by default), while for statically provisioned PVs the administrator sets it on the PV itself.
Choosing the appropriate reclaim policy depends on factors such as data retention requirements, storage provider capabilities, and organizational policies regarding resource cleanup and reclamation.
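The policy can also be changed on an existing PV, for example (the PV name is hypothetical):

kubectl patch pv pv-nfs01 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'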
Dynamic Storage
#Storage Admin
- Backend Storage
• - NFS Service
• - 192.168.100.22
• - /srv/nfs/storage
#Kubernetes Admin
- Create a Dynamic PV
- Deploy Storage Provisioner
- Create a StorageClass and Reference a Storage Provisioner
#Developer
- Create a PVC and Reference a StorageClass
- Reference the PVC within a Pod (see the sketch below)
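A sketch of the StorageClass/PVC pairing, assuming an NFS provisioner such as nfs-subdir-external-provisioner is already deployed (the provisioner string and names are assumptions):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # assumed provisioner name
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dyn01
spec:
  storageClassName: nfs-dynamic       # references the StorageClass above
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi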
ConfigMaps
- A ConfigMap is an API object used to store non-confidential data in key-value pairs.
- Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
- A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.
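A minimal ConfigMap sketch (the keys and values are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENV: dev
  APP_PORT: "8080"

# consumed in a pod spec as environment variables, e.g.:
#   envFrom:
#   - configMapRef:
#       name: app-config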
Secrets
- A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
- Such information might otherwise be put in a Pod specification or in a container image.
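For example (the credentials are placeholders):

kubectl create secret generic db-creds --from-literal=username=admin --from-literal=password=changeme

# injected like a ConfigMap, e.g.:
#   envFrom:
#   - secretRef:
#       name: db-creds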
Vanilla Kubernetes scales to about 5,000 nodes in a single cluster.
On each node we can run a container runtime such as Docker Engine (through the CRI).
The kubelet is the node agent that manages containers.
Kubernetes manages pods.
A pod is a collection of containers.
We connect only to the master, i.e. the kube-apiserver, which passes the work on to one of the nodes.
Pod (and all cluster object) state is stored in etcd.
kube-scheduler: decides which node of the cluster a pod's application runs on.
A single application can have multiple pods; replicas are taken care of by the kube-controller-manager.
Pods run on worker nodes.
By default, up to 110 pods can run on each node.
Pod = one or more containers.
A single cluster can be divided into multiple virtual clusters called namespaces.
Nested namespaces are not supported.
We always run commands from the master node.
If you delete a namespace, all objects within it are deleted.
There is no protection against accidental deletion.
All containers of a pod run on the same node; a pod cannot be split across nodes.
Any object we create goes into the default namespace if none is specified.
There is also a constraint on how many pods can run on a single node (110 by default).
CRDs come into the picture when you want to add features to Kubernetes without impacting its core.
Logging in directly to a worker node and creating a pod there (via the kubelet's static pod manifest directory) is called a static pod.
All control plane pods run as static pods; that is the main use case for static pods.
What is a CRD in Kubernetes?
In Kubernetes, CRD stands for Custom Resource Definition. CRDs allow users to define custom resources and their schema in a Kubernetes cluster, extending the functionality of Kubernetes beyond its built-in resources like Pods, Services, Deployments, and others.
Here are some key points about CRDs:
1. Custom Resources: CRDs enable users to define their own custom resources, which can represent any kind of object or application-specific resource that is not provided by default in Kubernetes. Examples include databases, message queues, machine learning models, monitoring configurations, and more.
2. Schema Definition: When creating a CRD, users specify the schema for the custom resource, including its API fields, validation rules, default values, and other metadata. This allows Kubernetes to enforce consistency and validate the configuration of custom resources.
3. Controller Logic: After defining a custom resource using a CRD, users typically write controllers or operators to manage the lifecycle and behavior of the custom resource. Controllers watch for changes to custom resources and reconcile the actual state of the resource with its desired state, performing actions such as creating, updating, or deleting underlying Kubernetes objects.
4. Extensibility: CRDs provide a powerful mechanism for extending Kubernetes with domain-specific or application-specific functionality. By defining custom resources and controllers, users can encapsulate complex logic, automate operational tasks, and integrate third-party systems with Kubernetes seamlessly.
5. Community and Ecosystem: CRDs have become a popular tool for Kubernetes users and ecosystem developers to create and share custom extensions and integrations. Many open-source projects, vendors, and organizations provide CRDs and controllers for common use cases, enabling users to leverage pre-built solutions or develop their own customizations.
Overall, CRDs are a fundamental building block for extending Kubernetes with custom resources and domain-specific functionality, enabling users to adapt Kubernetes to their specific requirements and use cases.
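A minimal CRD sketch (the group and kind are hypothetical):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string

After applying this, kubectl get backups works like any built-in resource.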
Working with services
- Load balancer
- Kube-proxy
• Runs on every node in the cluster
• kube-proxy uses iptables for load balancing
There are 4 Service types.
ClusterIP is for access within the cluster.
To access a service from outside, change the type to NodePort in the YAML file. The service is still available internally as well.
The first port is used inside the cluster and the second port outside the cluster.
To access externally: nodeIP:nodePort
We can also put a public IP in front if we don't want to reveal the real node IPs.
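A NodePort sketch showing the two ports just described (the values are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: web01-np
spec:
  type: NodePort
  selector:
    app: web01
  ports:
  - port: 80            # first port: used inside the cluster
    targetPort: 8080    # port the container listens on
    nodePort: 30080     # second port: exposed on every node, reached as nodeIP:30080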
kubectl get rs
When scaling down, the pods with the least age get deleted first.
If we edit the live object (kubectl edit), we don't have to run apply separately.
But making the change in the YAML and running kubectl apply is recommended.
The older the pods, the more stable they are considered.
kube-proxy is used for both ClusterIP and external IPs.
kubectl explain pod
Along with the application container, we can have more containers in the pod, e.g. for metrics or logs; basically helper containers.
If an independent (bare) pod is deleted, it is gone for good, so it is better to have a ReplicaSet, which recreates the pod in case of deletion.
So for each tier label we can have a single ReplicaSet.
Have one ReplicaSet per application.
A Deployment makes use of ReplicaSets; when a new version is introduced, it handles the rollout.
YAML for ReplicaSet and Deployment (see the sketch below)
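A hedged sketch of such a Deployment (the image tag is hypothetical); the Deployment creates and manages the ReplicaSet underneath:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: nginx
        image: nginx:1.25     # changing this image triggers a rollout via a new ReplicaSet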
kubectl get all
kubectl get rs
kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
authserver 1/1 1 1 3d
cluster-api 1/1 1 1 3d
common-agent 1/1 1 1 3d
KIND: Pod
VERSION: v1
DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is
created by clients and scheduled onto hosts.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <Object>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object>
Specification of the desired behavior of the pod. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
status <Object>
Most recently observed status of the pod. This data may not be up to date.
Populated by the system. Read-only. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
Rollout means upgrade
Rollback means downgrade
kubectl explain deployment --recursive
We can generate YAML for any object.
kubectl create --help
kubectl expose deployment web-server --port=80
kubectl get svc
The default Service type is ClusterIP.
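For example, to preview or save the YAML without creating anything:

kubectl expose deployment web-server --port=80 --dry-run=client -o yaml
kubectl create deployment web02 --image=nginx --dry-run=client -o yaml > web02.yml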
ReplicaSets get their details from etcd (via the API server).
kube-proxy runs on all nodes.
To get the application IP, describe the pod.
Deletion is supported at the pod level, not the container level.
Init containers start in sequence. If one init container fails, the next init container will not start. They are used to satisfy application pre-checks. Application containers, in contrast, start in parallel.
A sidecar is another (helper) application container in the same pod; it is kind of plug and play.
If there are multiple replicas, they are distributed evenly across nodes; that is the default behaviour of the scheduler.
By default, scheduling is dynamic.
We can also do manual scheduling using nodeName; the kube-scheduler is not involved in that case.
We can attach labels to any object.
System pods run on the master; application pods run on the workers.
Tools like Terraform and Ansible can be used to create labels.
The kube-apiserver and the scheduler talk to each other (the scheduler watches the API server for unscheduled pods).
By default, detailed scheduler logs are not enabled in the control plane.
Pod priority: the default is 0, and the higher the value, the higher the priority. In case of resource pressure, the higher-priority pod will be scheduled and other pods get evicted and can be scheduled on other nodes.
Probes do not involve the scheduler; they are entirely the kubelet's job.
The scheduler comes into the picture for the pod as a whole.
Dynamic scheduling in Kubernetes refers to the process by which the Kubernetes scheduler automatically assigns Pods to nodes in the cluster based on available resources, constraints, and user-defined policies. The scheduler continuously monitors the state of the cluster and makes decisions about where to place Pods to ensure efficient resource utilization, high availability, and optimal performance.
Here are some key points about dynamic scheduling in Kubernetes:
1. Node Selection: When a new Pod is created or an existing Pod needs to be rescheduled (e.g., due to node failure or scaling), the Kubernetes scheduler evaluates the available nodes in the cluster to determine the best placement for the Pod. Factors considered during node selection may include available CPU and memory resources, node affinity and anti-affinity rules, pod affinity and anti-affinity rules, node taints and tolerations, and resource requests and limits specified in the Pod's configuration.
2. Scalability and Resource Utilization: Dynamic scheduling enables Kubernetes clusters to scale efficiently and automatically distribute workload across nodes to achieve optimal resource utilization. By dynamically placing Pods on nodes with available capacity, Kubernetes ensures that resources are utilized effectively and that Pods are not over-provisioned or under-provisioned.
3. High Availability: Dynamic scheduling helps improve the resilience and availability of applications by automatically redistributing workload in the event of node failures or disruptions. When a node becomes unavailable, the Kubernetes scheduler identifies alternative nodes where affected Pods can be rescheduled to maintain service availability.
4. Integration with Cluster Autoscaler: Dynamic scheduling works in conjunction with the Kubernetes Cluster Autoscaler, which automatically adjusts the size of the cluster by adding or removing nodes based on workload demands. The scheduler can schedule Pods on newly added nodes to accommodate increased workload or rebalance workload across existing nodes to optimize resource usage.
5. Custom Schedulers and Policies: Kubernetes allows users to customize the scheduling process by implementing custom schedulers or defining scheduling policies using features like node affinity, pod affinity, pod anti-affinity, node taints, and tolerations. These customizations enable users to align scheduling decisions with specific requirements, constraints, and business priorities.
Overall, dynamic scheduling plays a critical role in the efficient operation of Kubernetes clusters, enabling automated workload placement, resource optimization, and high availability of applications.
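Sketches of manual scheduling with nodeName and constrained scheduling with nodeSelector (the node name and label are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: manual-pod
spec:
  nodeName: worker01        # bypasses the kube-scheduler entirely
  containers:
  - name: app
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: selector-pod
spec:
  nodeSelector:
    disktype: ssd           # the scheduler only considers nodes with this label
  containers:
  - name: app
    image: nginx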
Static Pods:
Are scheduled on master nodes, created directly by the kubelet.
kube-apiserver, kube-controller-manager, kube-scheduler, and etcd all run as static pods.
kube-proxy is also called a load balancer.
A pod restart is really delete + recreate.
READY (1/1): the right-hand number is the total number of containers; the left-hand number is how many are ready.
kubectl get nodes -o wide
watch kubectl get all -n nsxi-platform
kubectl get svc(services)
kubectl api-resources
systemctl status docker
kubectl get pods -A
kubectl describe nodes | grep -i -e "Name: " -e "Taints"
kubectl get pv(persistent volume)
kubectl get sc (storageclass)
kubectl get cm(configmap)
Name: napp-cluster-default-sd54b-68xhv
Annotations: cluster.x-k8s.io/cluster-name: napp-cluster-default
cluster.x-k8s.io/owner-name: napp-cluster-default-sd54b
Taints: node-role.kubernetes.io/master:NoSchedule
Hostname:
Name: napp-cluster-default-sd54b-7m2s6
Annotations: cluster.x-k8s.io/cluster-name: napp-cluster-default
cluster.x-k8s.io/owner-name: napp-cluster-default-sd54b
Taints: node-role.kubernetes.io/master:NoSchedule
Hostname:
Name: napp-cluster-default-sd54b-gn595
Annotations: cluster.x-k8s.io/cluster-name: napp-cluster-default
cluster.x-k8s.io/owner-name: napp-cluster-default-sd54b
Taints: node-role.kubernetes.io/master:NoSchedule
Hostname:
Name: napp-cluster-default-workers-kgvqh-766459bb7f-2tsm5
Annotations: cluster.x-k8s.io/cluster-name: napp-cluster-default
cluster.x-k8s.io/owner-name: napp-cluster-default-workers-kgvqh-766459bb7f
Taints: <none>
Hostname:
Name: napp-cluster-default-workers-kgvqh-766459bb7f-djjkj
Annotations: cluster.x-k8s.io/cluster-name: napp-cluster-default
cluster.x-k8s.io/owner-name: napp-cluster-default-workers-kgvqh-766459bb7f
Taints: <none>
Hostname:
Name: napp-cluster-default-workers-kgvqh-766459bb7f-mj7sh
Annotations: cluster.x-k8s.io/cluster-name: napp-cluster-default
cluster.x-k8s.io/owner-name: napp-cluster-default-workers-kgvqh-766459bb7f
Taints: <none>
Hostname:
Name: napp-cluster-default-workers-kgvqh-766459bb7f-mtx9f
Annotations: cluster.x-k8s.io/cluster-name: napp-cluster-default
cluster.x-k8s.io/owner-name: napp-cluster-default-workers-kgvqh-766459bb7f
Taints: <none>
Hostname:
# k api-resources
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
# k get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
vsan-default-storage-policy csi.vsphere.vmware.com Delete Immediate true 119d
vsan-default-storage-policy-latebinding csi.vsphere.vmware.com Delete WaitForFirstConsumer true 119d
# k get cm
NAME DATA AGE
cluster-api-application-config 2 3d
clusterapi-feature-ctrl 1 3d
Kubernetes version: upgrading one minor version at a time (+1/-1 skew) is allowed.
During an upgrade, in case of failure it will automatically roll back to the previous version.
During an upgrade, kubeadm, kubelet, and kubectl all need to be upgraded.
Pods have 3 directories: kube-proxy, networking, pod.
In Kubernetes, every object gets a unique UID.
In Docker, every container has a unique ID.
By default, pods are ephemeral and their data is not persistent.
The kubelet checks mounted ConfigMaps roughly every minute, so it can take about a minute for a change to show up in the pod.
A Secret or ConfigMap should not exceed 1MB.
Secrets are encrypted with TLS in transit.
A ConfigMap can be used as the backend of a volume.
By default, all pods, in the same or different namespaces, can communicate with each other.
By default, service discovery by short DNS name works only within the namespace; use the full DNS name across namespaces.
For finer control, we can define network policies, as sketched below.
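A sketch of a NetworkPolicy that allows ingress to app: web01 pods only from pods labeled role: frontend (the labels are hypothetical; enforcement requires a CNI plugin that supports policies):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: web01
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend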
What is upstream Kubernetes?
In the context of Kubernetes, "upstream" typically refers to the official development repository or source code of Kubernetes maintained by the Kubernetes project itself. It is the primary location where new features, enhancements, bug fixes, and improvements to Kubernetes are developed and released.
Here are some key points about the upstream Kubernetes project:
1. Official Repository: The upstream Kubernetes project is hosted on GitHub under the organization "kubernetes" (https://github.com/kubernetes/kubernetes). This repository contains the source code, documentation, and other resources related to the Kubernetes project.
2. Development Process: The upstream Kubernetes project follows an open-source development model, where contributions are made by individual developers, companies, and organizations from around the world. Development activities, including code reviews, issue tracking, and release management, are coordinated through GitHub.
3. Release Cycle: Kubernetes follows a regular release cycle, with new versions (major, minor, and patch releases) being published approximately every three months. Each release undergoes a rigorous development, testing, and validation process before being made generally available to users.
4. Community Involvement: The upstream Kubernetes project has a large and active community of contributors and users who participate in various aspects of the project, including code contributions, testing, documentation, support, and advocacy. The Kubernetes community is inclusive and welcomes participation from anyone interested in contributing to the project.
5. Distribution: Many Kubernetes distributions, platforms, and vendors build their offerings based on the upstream Kubernetes project. These distributions may add additional features, integrations, and customizations on top of the core Kubernetes platform, but they often track the upstream project closely to incorporate new features and fixes.
Overall, the upstream Kubernetes project serves as the central hub for development and innovation in the Kubernetes ecosystem, driving the evolution of the platform and enabling its adoption by organizations worldwide.
A Visual Overview of Kubernetes
Containers revolutionized modern application development and deployment. Unlike bulky virtual machines, containers package up just the application code and dependencies, making them lightweight and portable. However, running containers at scale brings challenges. Enter Kubernetes!
Kubernetes helps deploy, scale, and manage containerized applications across clusters of machines.
𝗖𝗼𝗿𝗲 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀
Control Plane: The brains behind cluster management, handling scheduling, maintaining desired state, rolling updates etc. Runs on multiple machines for high availability.
Worker Nodes: The machines that run the containerized applications. Each node has components like kubelet and kube-proxy alongside the application containers.
The smallest deployable units in Kubernetes are Pods. A Pod encapsulates one or more tightly coupled containers that comprise an application. Kubernetes assigns Pods to worker nodes through its API server.
𝗞𝗲𝘆 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗖𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀
- Scalability: It's easy to scale applications up and down on demand. Just specify the desired instance count, Kubernetes handles the rest!
- Portability: Applications can run anywhere: on-premises, cloud, or hybrid environments. No vendor lock-in!
- Resiliency: Kubernetes restarts failed containers, replaces unhealthy nodes, and maintains desired state, reducing downtime.
- Automation: Manual tasks like rolling updates and rollbacks are automated, freeing teams to focus on development.
𝗧𝗿𝗮𝗱𝗲𝗼𝗳𝗳𝘀
The power of Kubernetes comes with complexity. Installing, configuring, and operating Kubernetes has a steep learning curve. For many teams, it's overkill.
Managed Kubernetes services help by handling control plane management, letting teams focus only on applications and pay for just the worker resources used.
𝗜𝘀 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗮 𝗚𝗼𝗼𝗱 𝗙𝗶𝘁?
Consider:
- Are you running containers already at meaningful scale?
- Will portability or resiliency resolve production issues?
- Is your team willing to invest in learning and operating Kubernetes?
If you answered yes, Kubernetes may suit your needs. Otherwise, containers without orchestration may still get the job done.