Pages

Friday, June 3, 2022

VMware NSX Intelligence & Security Use Cases

Traditional approaches to security analytics rely on either a centralised architecture or an agent-based model. The first challenge we generally see is the size and footprint a centralised processing model requires: racks and racks of servers and storage, particularly at scale. If you're leveraging an agent-based solution instead, there's a huge operational overhead to manage agents across hundreds or thousands of endpoints.

This also adds network overhead and degradation, because traffic has to be duplicated from all the endpoints. A network-based solution brings its own trade-offs: it has limited context, looking only at ports and protocols, so guests can easily be compromised. Another trade-off you often see to get around this is sampling, which means you're not collecting all of the data and context, and a further challenge is that some solutions are tied to specific physical devices and don't work across all networks. In contrast, with NSX Intelligence we have built and delivered a native distributed analytics engine that leverages the rich and unique workload context in NSX to deliver security policy management and visibility through analytics across the data center.

NSX can provide services like routing and stateful security enforcement at scale using a distributed model with centralised management. Within NSX Intelligence we have built analytics capabilities on top of the NSX platform and its unique data using the same model, which allows us to deliver them far more efficiently and in a more lightweight form factor than our competitors. The reason is that the NSX architecture allows in-line processing for multiple functions: essentially we implement these as different steps in the packet-processing pipeline, with the distributed firewall followed by IDS/IPS. As a result, analytics can be provided with minimal overhead directly in kernel space, which benefits and differentiates us from competitors. Because NSX Intelligence builds on top of the proven NSX platform and DFW engine, we don't need to rely on agents or third-party solutions to go from an analytics-driven recommendation to an actual, effective policy. We're already in line for all of the traffic in every flow anyway, so storing and analysing the data simply allows us to do more, which is one of the core principles of NSX. We also leverage a number of patented and innovative AI and ML algorithms. Because of this architectural advantage there are no trade-offs in terms of sampling or being too far away from the workload: we see everything, store it, and can act on the analytics derived from this data.

Using a streaming-based architecture, we also heavily optimise the data platform through distribution: at the source we can perform deduplication, compression, and matching on unique flows, and it is these optimisations that allow us to deliver NSX Intelligence in a light footprint while still storing up to thirty days of historical data in our data platform. We've built features into the NSX Intelligence engine that leverage this time-series historical data and enable an extensible solution that can serve as a hub for analysed data shared with existing security tools and other VMware products.

These are the three primary security use cases and features for NSX Intelligence: visualisation, security policy recommendations, and network traffic analysis.
1. Visualisation provides visibility into workloads and security posture for East-West traffic within the data center. Intelligence learns about all traffic flows in an environment and gives customers the visibility they have been lacking or struggling to obtain.
2. Security policy recommendations simplify and automate NSX security recommendations for firewall rules, groups, and services based on the traffic flows observed by the distributed analytics engine.
3. Network traffic analysis complements protection against known threats and network segmentation with proactive, behavioural security controls that use the rich context available in NSX to report on potential issues and threats in an environment.

##
VMware NSX Intelligence provides advanced analytics and security capabilities for VMware NSX environments. Here are some common use cases for NSX Intelligence:

1. **Threat Detection and Mitigation**:
   - NSX Intelligence utilizes advanced analytics and machine learning to detect anomalous behavior and potential security threats within the NSX environment. It continuously monitors network traffic, identifies suspicious activities, and provides real-time alerts to security teams.
   - Security teams can leverage NSX Intelligence to detect and mitigate various types of threats, including malware infections, data exfiltration attempts, and lateral movement by attackers.

2. **Policy Enforcement and Compliance**:
   - NSX Intelligence helps organizations enforce security policies and compliance requirements by providing granular visibility into network traffic and application behavior. Security policies can be defined based on application context, user identity, and other factors, ensuring that only authorized traffic is allowed.
   - NSX Intelligence can automatically enforce security policies, quarantine compromised endpoints, and block malicious traffic in real-time, helping organizations maintain compliance with industry regulations and security standards.

3. **Micro-Segmentation and Zero Trust Security**:
   - NSX Intelligence enables organizations to implement micro-segmentation and zero trust security models by creating security policies based on application awareness and workload context. This approach allows organizations to segment their network into smaller, isolated zones and restrict lateral movement between workloads.
   - By leveraging NSX Intelligence's visibility and analytics capabilities, organizations can gain insights into application dependencies and communication patterns, making it easier to define and enforce micro-segmentation policies effectively.

4. **Advanced Threat Prevention**:
   - NSX Intelligence integrates with third-party security solutions, such as intrusion detection/prevention systems (IDPS), antivirus software, and security information and event management (SIEM) platforms, to enhance threat prevention capabilities. It can correlate security events from multiple sources, identify emerging threats, and take automated actions to mitigate risks.
   - NSX Intelligence enables organizations to implement a defense-in-depth strategy by combining network-based threat detection with endpoint security controls, reducing the likelihood of successful cyberattacks.

5. **Traffic Visibility and Forensics**:
   - NSX Intelligence provides detailed visibility into network traffic, including application-level insights, flow telemetry, and historical data. Security teams can use this information for forensic analysis, incident response, and troubleshooting purposes.
   - NSX Intelligence's traffic visibility capabilities enable security teams to identify security incidents quickly, investigate the root cause of incidents, and take appropriate remediation actions to contain and mitigate the impact of security breaches.

Overall, VMware NSX Intelligence empowers organizations to strengthen their security posture, improve threat detection and response capabilities, and enhance compliance with regulatory requirements in virtualized and cloud environments. By leveraging advanced analytics, automation, and integration with existing security tools, NSX Intelligence helps organizations address evolving cybersecurity challenges and protect critical assets from advanced threats.

Saturday, January 15, 2022

Docker CLIs


What is Docker?

Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization. It allows you to package an application with all its dependencies into a standardized unit for software development and deployment.

Key Concepts:

1. Containers
- Lightweight, standalone executable packages
- Include everything needed to run an application
- Ensure consistency across different environments

2. Images
- Read-only templates used to create containers
- Built from layers, each representing an instruction in the Dockerfile
- Can be shared via Docker Hub or private registries

3. Dockerfile
- Text file containing instructions to build a Docker image
- Defines the environment inside the container
- Automates the image creation process
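
For illustration only, a minimal Dockerfile sketch for a hypothetical Python web app (the file names, the 8080 port, and the python:3.6-alpine base are assumptions, not taken from a real project):

# hypothetical Python web app; file names and port are assumptions
FROM python:3.6-alpine
WORKDIR /app
# install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy the rest of the application code
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]

Building it with docker build -t my-app . (my-app is just an arbitrary tag) executes these instructions in order to produce the image.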

4. Docker Compose
- Tool for defining and running multi-container Docker applications
- Uses YAML files to configure application services
- Simplifies complex setups with a single command

5. Docker Swarm
- Native clustering and scheduling tool for Docker
- Turns a pool of Docker hosts into a single, virtual host
- Enables easy scaling and management of containerized applications
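
A rough CLI sketch of the Swarm workflow (the service name web and the replica counts are made up for illustration):

docker swarm init                                               # turn this host into a swarm manager
docker service create --name web --replicas 3 -p 80:80 nginx    # hypothetical service with 3 replicas
docker service ls                                               # check services and replica status
docker service scale web=5                                      # scale the service out to 5 replicas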

Benefits of Docker:

• Consistency: "It works on my machine" becomes a thing of the past
• Isolation: Applications and their dependencies are separated from the host system
• Efficiency: Lightweight containers share the host OS kernel, reducing overhead
• Portability: Containers can run anywhere Docker is installed
• Scalability: Easy to scale applications horizontally by spinning up new containers

Best Practices:

1. Keep images small and focused
2. Use multi-stage builds to optimize Dockerfiles (see the sketch after this list)
3. Leverage Docker Compose for local development
4. Implement proper logging and monitoring
5. Regularly update base images and dependencies
6. Use volume mounts for persistent data
7. Implement proper security measures (e.g., least privilege principle)
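
A minimal multi-stage build sketch (item 2 above), assuming a hypothetical Go program purely to show the pattern: the build stage carries the full toolchain, the final image keeps only the compiled artifact.

# stage 1: build with the full toolchain (hypothetical app, names are illustrative)
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app

# stage 2: ship only the compiled binary on a small base
FROM alpine:3.19
COPY --from=build /bin/app /bin/app
ENTRYPOINT ["/bin/app"]

Only the final stage ends up in the shipped image, which keeps it small.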

Getting Started:

1. Install Docker on your machine
2. Familiarize yourself with basic commands (docker run, build, pull, push)
3. Create your first Dockerfile and build an image
4. Experiment with Docker Compose for multi-container setups
5. Explore Docker Hub for pre-built images and inspiration

Docker has become an essential skill for developers and operations teams alike.

Its ability to streamline development workflows, improve deployment consistency, and enhance scalability makes it a crucial tool in modern software development.



Image Management
- docker build creates images from Dockerfiles, the blueprint for containers.
- docker pull downloads pre-built images from registries like Docker Hub.
- docker push uploads your images to remote registries.
- docker images lists locally stored images.
- docker rmi removes unwanted images.
- docker tag tags images for organizational purposes.

Container Lifecycle
- docker run launches a container from an image.
- docker stop and docker kill halt running containers gracefully or forcibly.
- docker restart restarts a stopped container.
- docker rename renames existing containers.
- docker logs prints the logs of a container.
- docker exec runs commands interactively in a running container.

Networking and Storage
- docker network manages custom networks containers connect to.
- docker volume creates sharable storage volumes containers can mount.
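
A rough sketch of how these fit together (the names app-net, app-data and db are hypothetical; the mysql image and password mirror the exercises later in this post):

docker network create app-net        # user-defined bridge network
docker volume create app-data        # named volume for persistent data
docker run -d --name db --network app-net -v app-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=db_pass123 mysql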

Maintenance
- docker system prune cleans up unused containers, images, volumes, etc.
- docker rm deletes stopped containers.
- docker inspect shows in-depth info on a container.
- docker stats provides real-time container resource usage stats.
- docker ps lists running containers.

Docker Compose
- docker-compose up starts a multi-container app from a compose file.
- docker-compose down stops and destroys the resources.
- docker-compose logs aggregates logs from the containers.

Additional Commands
- docker cp copies files between host and containers.
- docker diff shows filesystem changes in a container.
- docker top displays running processes in a container.
- docker search searches for images on Docker Hub.


Docker CLIs:

## to check the docker version

docker version 


## to check the number of running containers

docker ps  


## to check docker images

docker images 


##Run a container using the redis image: 

docker run redis


##Stop the container you just created: 

docker stop <container-id>


##How many containers are PRESENT on the host now? Including both Running and Not Running ones: 

docker ps -a


##What is the image used to run the nginx-1 container? 

Run the docker ps command and check under the IMAGE column.
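
A filter/format one-liner can also print it directly; a sketch, assuming the container is really named nginx-1 as in the question:

docker ps -a --filter name=nginx-1 --format '{{.Image}}'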


##What is the name of the container created using the ubuntu image?

Run the docker ps command and look at the NAMES column.


##What is the ID of the container that uses the alpine image and is not running?

Run the docker ps -a command and identify the ID of the container that uses the alpine image.
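
Filters can narrow this down as well; a sketch, assuming the stopped container has exited:

docker ps -a --filter ancestor=alpine --filter status=exited --format '{{.ID}}  {{.Names}}'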


##Delete all containers from the Docker Host. Both Running and Not Running ones. Remember you may have to stop containers before deleting them.

To stop all the containers at once, run the command: docker stop $(docker ps -aq)

To remove all the stopped containers at once, run the command: docker rm $(docker ps -aq)


##Delete the ubuntu Image: 

Run the command docker rmi ubuntu


##You are required to pull a docker image which will be used to run a container later. Pull the image nginx:1.14-alpine

Run the command docker pull nginx:1.14-alpine


##Run a container with the nginx:1.14-alpine image and name it webapp

Run the command docker run -d --name webapp nginx:1.14-alpine and check the status of created container by docker ps command.


##Cleanup: Delete all images on the host. Remove containers as necessary.

Stop and delete all the containers being used by images.

Then run the command to delete all the available images: docker rmi $(docker images -aq)

$ docker ps -a

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                     PORTS               NAMES

611162e0a78c        nginx:1.14-alpine   "nginx -g 'daemon of…"   4 minutes ago       Exited (0) 2 minutes ago                       webapp

$ 

$ docker stop 611162e0a78c

611162e0a78c

$ 

$ docker rm 611162e0a78c

611162e0a78c

$ 

$ docker ps -a

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

$ 

$ docker rmi $(docker images -aq)

Untagged: nginx:1.14-alpine

Untagged: nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7

Deleted: sha256:8a2fb25a19f5dc1528b7a3fabe8b3145ff57fe10e4f1edac6c718a3cf4aa4b73

Deleted: sha256:f68a8bcb9dbd06e0d2750eabf63c45f51734a72831ed650d2349775865d5fc20

Deleted: sha256:cbf2c7789332fe231e8defa490527a7b2c3ae8589997ceee00895f3263f0a8cf

Deleted: sha256:894f3fad7e6ecd7f24e88340a44b7b73663a85c0eb7740e7ade169e9d8491a4c

Deleted: sha256:a464c54f93a9e88fc1d33df1e0e39cca427d60145a360962e8f19a1dbf900da9

 

##Which of the ports are exposed on the CONTAINER?

Run the docker ps command and look under the PORTS column. Ports on the right (after ->) are exposed on the container.


##Which of the ports are published on the Host?

Run the docker ps command and look under the PORTS column. Ports on the left (before ->) are published on the host.


##Run an instance of kodekloud/simple-webapp with a tag blue and map port 8080 on the container to 38282 on the host.

$ docker run -p 38282:8080 kodekloud/simple-webapp:blue


##We just downloaded the code of an application. What is the base image used in the Dockerfile? Inspect the Dockerfile in the webapp-color directory.

You can either open the file using vi /root/webapp-color/Dockerfile (or using commands such as cat/more/less/vim etc.) and look for the FROM instruction, or search for it directly using grep -i FROM /root/webapp-color/Dockerfile


##To what location within the container is the application code copied to during a Docker build?

Inspect the Dockerfile in the webapp-color directory.

Open the Dockerfile and look for COPY command.


##When a container is created using the image built with this Dockerfile, what is the command used to RUN the application inside it?

Inspect the Dockerfile in the webapp-color directory.

Open the Dockerfile and look for ENTRYPOINT command
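
For both of the last two questions, a quick grep sketch pulls the relevant instructions straight out of the file:

grep -iE 'COPY|ENTRYPOINT' /root/webapp-color/Dockerfile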


##Build a docker image using the Dockerfile and name it webapp-color. No tag to be specified.

Move to the directory first by using the cd command and verify the path of the working directory from pwd command 

$ cd /root/webapp-color/

$ pwd

/root/webapp-color

Now, run the docker build command within that directory :-

$ docker build -t webapp-color . 


##Run an instance of the image webapp-color and publish port 8080 on the container to 8282 on the host.

Command: docker run -p 8282:8080 webapp-color


##What is the base Operating System used by the python:3.6 image? If required, run an instance of the image to figure it out.

Run docker run python:3.6 cat /etc/*release* command.


##Build a new smaller docker image by modifying the same Dockerfile and name it webapp-color and tag it lite. Hint: Find a smaller base image for python:3.6. Make sure the final image is less than 150MB.

Modify Dockerfile to use python:3.6-alpine image and then build using docker build -t webapp-color:lite .


##Inspect the environment variables set on the running container and identify the value set to the APP_COLOR variable.

To know the running containers, run the command docker ps and identify the running container name or container id.

Run this command to get the env fields from the inspect command: docker inspect <container-name> | grep -A 10 Env

Replace container-name with the correct one.
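
A format-based alternative that avoids the grep is the sketch below; .Config.Env holds the container's environment variables:

docker inspect <container-name> --format '{{json .Config.Env}}'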


##Run a container named blue-app using image kodekloud/simple-webapp and set the environment variable APP_COLOR to blue. Make the application available on port 38282 on the host. The application listens on port 8080

Run the command : docker run -p 38282:8080 --name blue-app -e APP_COLOR=blue -d kodekloud/simple-webapp

To know the env field from within a webapp container, run docker exec -it blue-app env


##Deploy a mysql database using the mysql image and name it mysql-db. Set the database password to use db_pass123. Look up the mysql image on Docker Hub and identify the correct environment variable to use for setting the root password.

Run the command: docker run -d -e MYSQL_ROOT_PASSWORD=db_pass123 --name mysql-db mysql

To know the env field from within a mysql-db container, run docker exec -it mysql-db env


##What is the ENTRYPOINT configured on the mysql image?

Run: cat Dockerfile-mysql | grep ENTRYPOINT

$ cat Dockerfile-mysql | grep ENTRYPOINT

ENTRYPOINT ["docker-entrypoint.sh"]


##What is the CMD configured on the wordpress image?

$ ls

Dockerfile-mysql  Dockerfile-python  Dockerfile-ubuntu  Dockerfile-wordpress  app

$ 

$ cat Dockerfile-wordpress | grep CMD

CMD ["apache2-foreground"] 


##What is the final command run at startup when the wordpress image is run? Consider both ENTRYPOINT and CMD instructions.

Open the file /root/Dockerfile-wordpress and inspect the both ENTRYPOINT + CMD instructions.
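
One way to see both instructions at once is the grep sketch below; the CMD value is appended as arguments to the ENTRYPOINT, and together they form the final startup command.

grep -E 'ENTRYPOINT|CMD' /root/Dockerfile-wordpress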


##What is the command run at startup when the ubuntu image is run?

Run: cat Dockerfile-ubuntu | grep CMD

$ cat Dockerfile-ubuntu | grep CMD

CMD ["bash"]


##Run an instance of the ubuntu image to run the sleep 1000 command at startup. Run it in detached mode.

$ docker run -d ubuntu sleep 1000

eeaf9457c45528e374f1ed607d944ce62a75a93249c5b8ce41fbf125771eb702 


First create a redis database container called redis, image redis:alpine.

$ docker run --name redis -d redis:alpine

Unable to find image 'redis:alpine' locally

alpine: Pulling from library/redis

59bf1c3509f3: Pull complete 

719adce26c52: Pull complete 

b8f35e378c31: Pull complete 

d034517f789c: Pull complete 

3772d4d76753: Pull complete 

211a7f52febb: Pull complete 

Digest: sha256:4bed291aa5efb9f0d77b76ff7d4ab71eee410962965d052552db1fb80576431d

Status: Downloaded newer image for redis:alpine

05cfbd9839847667f2652b2e88aa259c4287c9145ff972f55dd05fcaaf2c95f3


Next, create a simple container called clickcounter with the image kodekloud/click-counter, link it to the redis container that we created in the previous task and then expose it on the host port 8085

The clickcounter app runs on port 5000.

$ docker run -d --name=clickcounter --link redis:redis -p 8085:5000 kodekloud/click-counter

Unable to find image 'kodekloud/click-counter:latest' locally

latest: Pulling from kodekloud/click-counter

540db60ca938: Already exists 

a7ad1a75a999: Pull complete 

37ce6546d5dd: Extracting [=================================================> ]  10.49MB/10.57MB

ec9e91bed5a2: Download complete 

767433e10bb0: Download complete 

156f0b0493cb: Download complete 

3fe82d8a2401: Download complete 

4a41f7c94204: Download complete 

473063430a4f: Download complete 

452c68a16ccd: Download complete

Digest: sha256:530e4532a718e8f5cbda05844a6c0638ebe8898fa4c4307ee6afbdd5d1f213db

Status: Downloaded newer image for kodekloud/click-counter:latest

52ea3950550080d1596d1ef70e0930f91453ea865f33dc732af1fc648e9139b1


Let's clean up the actions carried out in previous steps. Delete the redis and the clickcounter containers.

To stop the containers: docker stop <CONTAINER-NAME>

To delete the containers: docker rm <CONTAINER-NAME>


$ docker stop clickcounter

clickcounter

$ docker rm clickcounter

clickcounter


$ docker rm redis      

Error response from daemon: You cannot remove a running container 05cfbd9839847667f2652b2e88aa259c4287c9145ff972f55dd05fcaaf2c95f3. Stop the container before attempting removal or force remove


docker stop 05cfbd9839847667f2652b2e88aa259c4287c9145ff972f55dd05fcaaf2c95f3 

05cfbd9839847667f2652b2e88aa259c4287c9145ff972f55dd05fcaaf2c95f3

 

$ docker rm 05cfbd9839847667f2652b2e88aa259c4287c9145ff972f55dd05fcaaf2c95f3

05cfbd9839847667f2652b2e88aa259c4287c9145ff972f55dd05fcaaf2c95f3


##Create a docker-compose.yml file under the directory /root/clickcounter. Once done, run docker-compose up.

The compose file should have the exact specification as follows -

  • redis service specification - Image name should be redis:alpine.
  • clickcounter service specification - Image name should be kodekloud/click-counter; the app runs on port 5000 and should be exposed on host port 8085 in the compose file.


Use the below compose file:

version: '3.0'
services:
  redis:
    image: redis:alpine
  clickcounter:
    image: kodekloud/click-counter
    ports:
      - 8085:5000

Then run docker-compose up -d. To run the containers in the background, add the -d flag.
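
To verify, a quick sketch (assuming the services came up cleanly):

docker-compose ps                  # both services should show an Up state
curl http://localhost:8085         # the click-counter app should answer on the published port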


##In what location are the files related to the docker containers and images stored?

$ ls /var/lib/docker

builder  buildkit  containerd  containers  image  network  overlay2  plugins  runtimes  swarm  tmp  trust  volumes 


##In which directory under /var/lib/docker are the files related to the alpine-3 container stored?

$ cd /var/lib/docker/containers/  

$ ls

478adb8e835e0e03d5d95abba19b44c1182825b211abbd948b048f74cd6ceaa5  81f7a8bb1a567de73ba537834dbe75a33d3a3eee30052b73e9a095fe9f6c1538

4ba10cfbae7aeec4f9bb76fc8a4ef926b24f35dee397698125f3a7b0bf1a23a0

$ 

$ docker ps -a

CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS                          PORTS               NAMES

478adb8e835e        alpine              "/bin/sh"           About a minute ago   Exited (0) About a minute ago                       alpine-3

81f7a8bb1a56        alpine              "/bin/sh"           About a minute ago   Exited (0) About a minute ago                       alpine-2

4ba10cfbae7a        alpine              "/bin/sh"           About a minute ago   Exited (0) About a minute ago                       alpine-1


##Run a mysql container named mysql-db using the mysql image. Set database password to db_pass123

Note: Remember to run it in the detached mode.

$ docker run -d --name mysql-db -e MYSQL_ROOT_PASSWORD=db_pass123 mysql

134daac876ad80eb806e5abde9a893da2d3a44db44758965fd88263e8c60f0af


##We have just written some data into the database. To view the information we wrote, run the get-data.sh script available in the /root directory. How many customers' records have been written to the database?

$ pwd

/var/lib/docker/containers

$ cd /root

$ sh get-data.sh

mysql: [Warning] Using a password on the command line interface can be insecure.

id      Name    Phone   Email

1       Kareem  130-5655        Duis.volutpat.nunc@quamCurabitur.org

2       Ruby    1-584-149-0770  Nulla.tempor@vitaeorciPhasellus.org

3       Rowan   199-8663        consectetuer.adipiscing.elit@Sedmalesuada.co.uk

4       Alisa   220-6017        elementum.sem.vitae@enimMauris.edu

5       Ella    731-0337        fermentum@nec.net

6       Tiger   658-4480        quis.diam@odiovelest.net

7       Felix   1-274-848-3378  Mauris.vel@arcu.com

8       Karina  1-390-796-3451  sagittis.semper@odioapurus.co.uk

9       Davis   605-8539        venenatis.vel@risusDonecnibh.com

10      Mohammad        1-590-174-1489  ornare.sagittis.felis@natoque.ca

11      Zane    362-1770        Aenean.euismod@condimentum.co.uk

12      Piper   1-231-386-6903  nunc.sed.pede@nascetur.ca

13      Marshall        1-383-729-4990  Cras.interdum.Nunc@neceuismod.ca

14      Zena    241-6641        Fusce.mollis.Duis@lobortis.org

15      Abdul   1-748-387-9935  eget.lacus.Mauris@Crasvehicula.com

16      Chase   1-401-241-9169  ante.dictum.mi@nascetur.org

17      Zahir   921-0663        non@nonummyutmolestie.edu

18      Brenda  1-691-909-5827  Quisque.ac@magnaCras.co.uk

19      Laura   1-562-983-9565  Quisque.ornare.tortor@sollicitudinadipiscing.ca

20      Madison 1-348-737-0587  Quisque.varius@Intinciduntcongue.org

21      Tanek   991-6278        dignissim.magna@Pellentesqueutipsum.net

22      Dakota  893-0792        Nullam.enim.Sed@nulla.net

23      Boris   1-297-302-5792  non.sollicitudin@eleifendegestasSed.co.uk

24      Celeste 723-6729        mauris.rhoncus@eunulla.edu

25      Connor  1-203-901-7531  et@loremipsumsodales.edu

26      Perry   1-756-607-9187  eros.turpis@tristiquepharetra.co.uk

27      Hayfa   1-609-407-3019  non.lobortis.quis@malesuadafringilla.net

28      Todd    343-0454        id.erat@arcu.org

29      Fuller  881-7273        non.feugiat.nec@adipiscingelit.net

30      Rama    1-927-605-0610  nonummy.ultricies.ornare@malesuada.co.uk


##Run a mysql container again, but this time map a volume to the container so that the data stored by the container is stored at /opt/data on the host.

Use the same name : mysql-db and same password: db_pass123 as before. Mysql stores data at /var/lib/mysql inside the container.


$ docker run -v /opt/data:/var/lib/mysql -d --name mysql-db -e MYSQL_ROOT_PASSWORD=db_pass123 mysql

9c58ef00b9efdf85b11c741b9c594c0ed80b7063ffd96c5507c4fc2ec6c15694


##Disaster strikes.. again! And the database crashed again. But this time we have the data stored at /opt/data directory. Re-deploy a new mysql instance using the same options as before.

Just run the same command as before. Here it is for your convenience: docker run -v /opt/data:/var/lib/mysql -d --name mysql-db -e MYSQL_ROOT_PASSWORD=db_pass123 mysql

$ docker run -v /opt/data:/var/lib/mysql -d --name mysql-db -e MYSQL_ROOT_PASSWORD=db_pass123 mysql

f053ae4c9257f6f077f31bd8984ccfec18311f14eaf835cd94bf9461e08b3f58


Explore the current setup and identify the number of networks that exist on this system.

$ docker network ls 

NETWORK ID          NAME                DRIVER              SCOPE

497c0830787a        bridge              bridge              local

5de628a52135        host                host                local

e32507f046e4        none                null                local


##We just ran a container named alpine-1. Identify the network it is attached to.

Run the command docker inspect alpine-1 and look under the Networks section.
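
A format-based sketch that prints just the attached networks:

docker inspect alpine-1 --format '{{json .NetworkSettings.Networks}}'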


##What is the subnet configured on bridge network?

Run the command docker network inspect bridge
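
Or, as a one-line sketch using the format flag:

docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'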


##Run a container named alpine-2 using the alpine image and attach it to the none network.

$ docker run --name alpine-2 --network=none alpine


##Create a new network named wp-mysql-network using the bridge driver. Allocate subnet 182.18.0.1/24. Configure Gateway 182.18.0.1

Run the command: docker network create --driver bridge --subnet 182.18.0.1/24 --gateway 182.18.0.1 wp-mysql-network

Inspect the created network by docker network inspect wp-mysql-network


##Deploy a mysql database using the mysql:5.6 image and name it mysql-db. Attach it to the newly created network wp-mysql-network

Set the database password to use db_pass123. The environment variable to set is MYSQL_ROOT_PASSWORD.

Run the command: docker run -d -e MYSQL_ROOT_PASSWORD=db_pass123 --name mysql-db --network wp-mysql-network mysql:5.6


##Deploy a web application named webapp using the kodekloud/simple-webapp-mysql image. Expose the port to 38080 on the host.
The application makes use of two environment variables:
1: DB_Host with the value mysql-db.
2: DB_Password with the value db_pass123.
Make sure to attach it to the newly created network called wp-mysql-network.

Also make sure to link the MySQL and the webapp container.

Run the command: docker run --network=wp-mysql-network -e DB_Host=mysql-db -e DB_Password=db_pass123 -p 38080:8080 --name webapp --link mysql-db:mysql-db -d kodekloud/simple-webapp-mysql