
Saturday, December 25, 2021

Containers, Docker and Kubernetes

Virtualization has these shortcomings:

Potential OS overheads: each OS has CapEx and OpEx costs, consumes resources from the physical server, and is a potential attack vector.

  • License costs
  • Admin
  • Patching
  • Updates
  • AV and more

This leads us to containers.


Containers:

Only one operating system. Take a physical server, install an operating system, and then essentially carve and slice that OS into secure containers. Inside each container we can run an app. This means we have more free space to spin up more containers and more apps for the business. Containers are fast and ideal for situations like tearing things down and bringing them up on demand, because there's no virtual machine and no extra operating system to boot before your app can start. In the container model there is only one base OS, and it's already running, so all of the containerized apps securely share a single OS. Most containerized apps spin up in less than a second. From an admin perspective, we only need to manage one OS.






Containers are the standardised way for you to package your application, its configuration, and its dependencies together into a single logical object.

What are applications?

Applications are software programs that are developed to perform specific tasks and execute on a computer.

To a computer, an application is a set of binary instructions that it executes as a process.


Containers leverage Linux kernel features:

  • Namespaces
  • Control Groups (cgroups)

Namespaces allow the operating system to limit what a process can see, such as other processes, the file system, and more. Process isolation and filesystem isolation are two of the key components.


Cgroups, on the other hand, limit what resources a process can use: how much CPU, how much memory, and so on.


Basically, what Docker or the container runtime does is take the root filesystem that you give it in the form of a Docker image and run it with a whole bunch of namespaces around it, optionally with some of these CPU and memory constraints around it as well.
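As a sketch, both mechanisms show up directly in the docker CLI (this assumes Docker is installed; the alpine image is just a small example image):

```bash
# Namespaces: inside the container, ps sees only the container's own processes,
# not the rest of the host -- that's the PID namespace at work.
docker run --rm alpine ps aux

# Cgroups: cap the container at half a CPU core and 256 MB of memory.
docker run --rm --cpus 0.5 --memory 256m alpine sleep 5
```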


Finally, we have the container image, which contains the binary that we want to run, as well as any associated files or dependencies needed.


A registry is simply a collection or a repository of images.


To pull an image from a public or private registry, we use the docker command with the pull subcommand.
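For example, pulling from Docker Hub (the default public registry) or from a private registry (the registry hostname below is hypothetical):

```bash
# Pulls from Docker Hub by default
docker pull nginx:latest

# Pulls from a private registry (example hostname, not a real registry)
docker pull myregistry.example.com/team/app:1.0
```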

Container Demo:

Containers can run on a VM, a server, bare metal, or a laptop. The only requirement is that the machine is running Docker.

Docker on Windows runs only Windows apps, and Docker on Linux runs only Linux apps. That said, it is possible to run Linux apps with Docker on Windows, since Docker Desktop runs them inside a Linux VM (or WSL 2).




Download a single image to the Docker host

A Docker image is a pre-packaged application, a bit like a VM template. It has everything you need to run an application wrapped up in a single bundle; for example, it can contain a web server that serves some static content. To fire up a container from this image, we use this command:


docker container run -d --name web -p 8000:8080 <image name>


This returns a unique ID that represents the container.


Doing this, we exposed the app on port 8000 of the Docker host's IP, which maps to port 8080 inside the container.
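We can check the mapping from the Docker host itself (assuming curl is available):

```bash
# Hits port 8000 on the host, which Docker forwards to 8080 in the container
curl http://localhost:8000

# Shows the mapping under the PORTS column, e.g. 0.0.0.0:8000->8080/tcp
docker container ls
```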


We can also stop the container by running 


docker container stop web

To start again 

docker container start web


Microservices and Cloud native:

Legacy apps, or monolithic apps as they are sometimes called, are those monstrous apps where everything the app does is pretty much baked into a single binary (program). So everything is lumped into a single program.


Maybe your app has web front-end, search, auth, stock, and checkout services. From a developer's viewpoint it's a nightmare: if you want to update or fix, let's say, just the search part of the app, it's a big exercise on the entire code base. You're hacking the entire app, and you're testing and recompiling the whole thing. And on the operations front, if you get an issue, say with that same search functionality again, the only way to roll out the fix is to take the entire app down, because everything is lumped into a single program.


Microservices and cloud-native apps, on the other hand, break out all of those different components and make each service its own little mini app, or mini service. They all still talk to each other to make up the full app experience, but rolling out that search fix now becomes way easier for both the developer and the operator. The developer only needs to touch the search code to update the search feature, and from an operations perspective, they only need to roll out a new version of the search service.


So the main intention here is to build, deploy and manage apps in a way that lends itself to modern business requirements, or cloud computing requirements as we often call them.


So it isn't really anything to do with deploying on the cloud. You can absolutely run a cloud-native app in your on-prem data center. Cloud native is all about how the app is built and managed, so we can do things like scale the front end independently of the back end, and iterate on each feature independently.


So containers improve on nearly everything offered by hypervisors, and they pave the way for more modern cloud-native and microservices applications.


Docker the Company:

Docker, Inc. is the main sponsor behind the container technology of the same name. Initially it was a company called dotCloud that provided a developer platform on top of Amazon Web Services. They had been using containers to build their platform on AWS, and they came up with an in-house tool to help them spin up and manage their containers. That in-house tech was Docker. The name is derived from dock + worker.


Docker the technology:

Containers are like fast, lightweight virtual machines, and Docker makes running our apps inside containers really easy. Docker is open source and lives on GitHub.

Docker Community Edition (CE)

  • Open source
  • Lots of contributors
  • Quick release cycle


Enterprise Edition (EE)

  • Slower release cycle
  • Additional features
  • Official support


Demo:

Build the Docker image on a host with Docker installed.


docker image build -t <imagename> .

All Docker is doing here is taking our source code and doing all the hard work to package it as a container, or as an image actually. An image is like a stopped container.


Check for image

docker image ls <imagename>

All our source code is packaged and ready to use as a container.


Now we will push this image to a registry, such as Docker Hub. You can also have your own on-prem or private registries.


docker image push <image_name>


Now run it as a container, give it a name, and make it available on the network:


docker container run -d --name web -p 8000:8080 <image_name>







The Docker workflow involves several key steps and concepts that enable developers to create, deploy, and manage containerized applications efficiently. Here's an explanation of the Docker workflow along with the main components and processes involved:


1. **Dockerfile**:

   - The Dockerfile is a text-based script that defines the instructions for building a Docker image. It specifies the base image, environment variables, dependencies, commands, and configurations needed to create a containerized application.
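As a concrete sketch, here is what a minimal Dockerfile for a static site might look like (the base image is real; the `./site/` path is an illustrative placeholder):

```dockerfile
# Start from an official Nginx base image
FROM nginx:alpine

# Copy the static content into Nginx's web root
COPY ./site/ /usr/share/nginx/html/

# Document the port the server listens on
EXPOSE 80
```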


2. **Docker Image**:

   - A Docker image is a lightweight, standalone, executable package that contains everything needed to run a containerized application, including the application code, runtime environment, libraries, and dependencies. Images are built from Dockerfiles using the `docker build` command.


3. **Docker Container**:

   - A Docker container is a running instance of a Docker image. Containers are isolated, portable, and can be deployed consistently across different environments. They encapsulate the application and its dependencies, ensuring consistency and reproducibility.


4. **Docker Registry**:

   - A Docker registry is a repository that stores Docker images. Public registries like Docker Hub provide a centralized location to share and discover Docker images, while private registries can be used for storing proprietary or sensitive images within an organization.


Now, let's walk through the Docker workflow:


1. **Write Dockerfile**:

   - Developers start by writing a Dockerfile that defines the build instructions for their application. This includes specifying the base image, copying files, setting environment variables, installing dependencies, and defining the container's entry point.


2. **Build Docker Image**:

   - Once the Dockerfile is ready, developers use the `docker build` command to build a Docker image based on the instructions in the Dockerfile. This command creates a new image layer by layer, caching intermediate layers for faster builds.


   ```bash

   docker build -t myapp-image:v1 .

   ```


3. **Run Docker Container**:

   - After building the Docker image, developers can run a Docker container using the `docker run` command. This command starts a new container based on the specified image, assigns resources (e.g., CPU, memory), exposes ports, mounts volumes, and sets runtime options.


   ```bash

   docker run -d -p 8080:80 --name myapp-container myapp-image:v1

   ```


4. **Manage Docker Containers**:

   - Developers can manage Docker containers using various Docker CLI commands. This includes starting, stopping, restarting, pausing, and removing containers as needed. Docker provides commands like `docker ps`, `docker start`, `docker stop`, `docker rm`, etc., for container management.


   ```bash

   docker ps -a                # List all containers

   docker start myapp-container   # Start a stopped container

   docker stop myapp-container    # Stop a running container

   docker rm myapp-container      # Remove a container

   ```


5. **Push/Pull Docker Images**:

   - Docker images can be shared and distributed using Docker registries. Developers can push their local images to a registry using the `docker push` command and pull images from a registry using the `docker pull` command.


   ```bash

   docker login                     # Log in to Docker Hub or private registry

   docker push myusername/myapp-image:v1   # Push image to registry

   docker pull myusername/myapp-image:v1   # Pull image from registry

   ```


6. **Continuous Integration/Continuous Deployment (CI/CD)**:

   - Docker is often integrated into CI/CD pipelines to automate the build, test, and deployment processes. Tools like Jenkins, GitLab CI/CD, and GitHub Actions can trigger Docker builds, run tests in containers, and deploy Dockerized applications to production environments.


This Docker workflow enables developers to create portable, scalable, and consistent environments for their applications, streamlining the development, testing, and deployment lifecycle.



Kubernetes:

Google has been running search and other services on containers since the '90s. Every Google search runs in its own container, which means spinning up billions of containers. To make this possible, they built a couple of in-house systems to help.

Initially they built something called Borg, then Omega, and finally Kubernetes. So Kubernetes came out of Google, it's open source, and these days it's the star project of the Cloud Native Computing Foundation.

All major cloud players (AWS, Google, and Azure) offer hosted Kubernetes services, and so do IBM and a bunch of others. We can also run Kubernetes on-prem.

It is one of the most extensive platforms: it does stateless, stateful, batch work, long-running services, security, storage, networking, serverless (or Functions as a Service), and machine learning.

All of this it can do anywhere: in the cloud, on-prem in your data center, and even on your laptop while you are developing. The name Kubernetes is Greek for helmsman or captain, the person who steers the ship.


Docker provides the mechanics for starting and stopping individual containers, pretty low-level stuff. Kubernetes, on the other hand, doesn't care about low-level details like that. Kubernetes cares about higher-level things, like how many containers to run, which nodes to run them on, when to scale them up or down, and even how to update your containers without downtime.





Like a conductor in an orchestra, Kubernetes issues commands to Docker instances, telling them when to start and stop containers and how to run them.


Comparing to VMware, we can think of Docker as ESXi, the low-level hypervisor, and Kubernetes as vCenter, which sits above a bunch of hypervisors.


We have a Kubernetes cluster to host applications, and it can be anywhere. Each of its nodes runs some Kubernetes software and a container runtime. Usually the container runtime is Docker or containerd, but others exist. Sitting above all of this is the brains of Kubernetes (the K8s control plane), which makes the decisions, like the conductor in the orchestra.


Consider a web app with a front end and a back end. The front end may be containerized Nginx, and let's say there is containerized MySQL on the back end. We can tell Kubernetes to run one container on the back end and two containers on the front end, and Kubernetes deploys it. One more thing it decides is which nodes to run things on. Let's say load on the front end increases and two containers are not enough: based on the situation, Kubernetes spins up two more, and it does it without human intervention. Literally, when load goes up on the front end, Kubernetes has enough intelligence to spin up more containers. The same holds when load decreases or a node goes down: it brings up replacements, which is also called self-healing.
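A sketch of how that front end might be described to Kubernetes — this is a hypothetical Deployment manifest, with illustrative names, not something from the demo above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 2            # Kubernetes keeps two Nginx containers running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
```

Bumping `replicas` (or letting an autoscaler do it) is exactly the kind of higher-level decision Kubernetes handles while Docker does the low-level starting and stopping.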


The point to remember is that Docker does all the low-level container spin-up and spin-down work, but it only does it when Kubernetes tells it to, meaning Kubernetes is managing a bunch of Docker nodes.


Kubernetes is the absolute business for decoupling your applications from the underlying infrastructure. 


Suitable Workloads:

Stateless:

- Doesn’t remember stuff


Stateful:

- Has to remember stuff  


Clouds are providing the infrastructure and Docker and Kubernetes are providing the tools for building the apps.


With containers it is possible to deploy some of our legacy apps directly inside containers. Containers are great for new, modern apps, both the stateless and the stateful bits.


Docker the company and Docker the technology have been around for a while. The company started as dotCloud in around 2010 but then it rebranded itself as Docker Inc around 2013.


Orchestration:

Apps Comprise:

  • Multiple parts (services)
  • Multiple requirements

Game Plan

  • Describe the app

Document Game plan in version control system


Key to automation

  • Ordered startups
  • Intelligent scheduling

Give Game plan to orchestrator (K8s)


Let orchestrator deploy and manage app.
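One common way to document that game plan in version control is a Compose file describing the app's parts (the service names and images below are illustrative):

```yaml
# docker-compose.yml: describes the app's services and how they connect
services:
  frontend:
    image: nginx:alpine
    ports:
      - "8000:80"
    depends_on:
      - backend        # ordered startup: the backend comes up first
  backend:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
```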


As these apps keep growing, scale makes things very complex. This cannot be done manually, so we need a solid system to deploy and manage these apps.


So the core of container orchestration is to define our app and how all its parts interact, provision the infrastructure, and then deploy and manage the app. That's orchestration.


This can all be done using an orchestrator, most often Kubernetes, letting the orchestrator deploy the app and manage it.







