Introduction to Docker in DevOps
DevOps is a natural evolution of software development. It is not just a tool, a framework, or automation; it is a combination of all of these. DevOps aims to align the Dev and Ops teams around shared goals. A common problem: a developer builds an application and hands it to a tester, but because the development and testing environments differ, the code does not work. There are two common solutions to this problem: Docker and virtual machines.
Docker has been used widely in many DevOps toolchains. Docker's platform provides numerous features that make it popular among developers. Some features include:
Application isolation
Portability
Security management
Ease of software delivery
Scalability
What is Docker?
Docker is a platform that enables the creation, deployment, and running of applications with the help of containers. A container is a unit of software that packages the code and all its dependencies together so that the application becomes runnable irrespective of the environment.
The container isolates the application and its dependencies into a self-contained unit that can run anywhere. Containers remove the need for physical hardware, allowing for more efficient use of computing resources. Containers provide operating-system-level virtualization. Additionally, using Docker commands, developers can easily manage these containers, enhancing their productivity and workflow efficiency.
Docker vs. Virtual Machines
While both Docker and Virtual Machines (VMs) provide isolation, they do so in different ways. VMs run a full operating system along with the application, which can be resource-intensive. Docker, on the other hand, shares the host OS kernel and runs isolated applications, making it more lightweight and efficient.
Advantages of Docker:
No pre-allocation of RAM.
CI (Continuous Integration) efficiency: Docker enables you to build a container image once and use that same image across every step of the deployment process.
Lower cost and lightweight.
It can run on physical hardware, virtual hardware, or in the cloud.
Images can be reused.
Creating a container takes very little time.
Disadvantages of Docker:
Docker is not a good solution for applications that require a rich GUI.
It is difficult to manage a large number of containers.
Docker does not provide cross-platform compatibility: an application designed to run in a Docker container on Windows cannot run on Linux, and vice versa.
Docker is suitable when the development OS and testing OS are the same; if they differ, a VM should be used.
Docker provides no built-in solution for data recovery and backup.
Components of Docker:
- Docker Daemon:
The Docker daemon runs on the host OS.
It is responsible for running containers and managing Docker services.
A Docker daemon can communicate with other daemons.
- Docker Client:
Docker users interact with the Docker daemon through a client.
The Docker client uses commands and a REST API to communicate with the Docker daemon.
When a user runs a Docker command in the client terminal, the client sends it to the Docker daemon.
A Docker client can communicate with more than one daemon.
- Docker Host:
The Docker host provides an environment in which to execute and run applications. It contains the Docker daemon, images, containers, networks, and storage.
- Docker Hub / Registry:
A Docker registry stores and manages Docker images. There are two types: public and private.
- Docker Images:
Docker images are read-only binary templates used to create Docker containers.
An image is a single file with all the dependencies and configuration required to run a program.
- Docker Container:
An image is a template, and a container is a running copy of that template.
A container is similar to a virtual machine, but far more lightweight.
An image becomes a container when it is run on the Docker Engine.
- Dockerfile:
A Dockerfile is a text file containing a set of instructions for building a Docker image.
Basic Docker Commands:
Let's understand a few basic Docker commands along with their usage in detail. The following are the most used Docker commands for beginners and experienced professionals:
Update the package index
sudo apt-get update -y
Install the latest version of Docker
sudo apt-get install docker.io -y
Check docker version
docker --version
Check the status of the Docker service
sudo service docker status OR sudo systemctl status docker
Press "q" to exit.
To see all images present on your local machine
sudo docker images
Add your user to the docker group so Docker commands can run without "sudo".
sudo usermod -aG docker $USER
Now check whether the user was added to the group (see the last line).
cat /etc/group
Now restart your instance.
sudo reboot
It takes a few minutes to restart, or you can reconnect to your instance.
Again, list all images present on your local machine.
docker images OR docker image ls
To see running containers.
docker ps
To see all containers, active or inactive.
docker ps -a
To search for images on Docker Hub.
docker search <image name>
To download an image from Docker Hub to the local machine.
docker pull <image name>:tag
To create and run a named container from a Docker Hub image.
-it: interactive terminal
docker run -it --name <container name> <image name> /bin/bash
To exit from the container.
exit
To start a container.
docker start <container ID / Name>
To go inside a running container.
docker attach <container name>
To stop a container.
docker stop <container name>
To delete a stopped container.
rm: remove
docker rm <container name>
To remove a local image.
docker rmi <imagename>:tag
To tag an image.
docker tag <sourceimage>:tag <newimage>:tag
Build an image from a Dockerfile.
docker build -t <imagename> path_of_Dockerfile
Push an image to docker hub.
docker push <image_name>:tag
Inspect details of an image.
docker image inspect <image_name>:tag
Save an image to a tar archive.
docker save -o <image_name>.tar <image_name>:tag
Load an image from a tar archive.
docker load -i <image_name>.tar
Prune unused images.
docker image prune
To view container logs.
docker logs <container_name/ID>
What is an image?
A Docker image is like a snapshot of an app that contains everything needed to run it. This includes the code, libraries, tools, and settings.
Docker images are made from Dockerfiles, which are instructions to create the image step by step. Think of an image as a recipe, and when you run the image, it becomes a container. A container is like a mini-computer running your app in its own isolated space.
Dockerfile:
A Dockerfile is a simple text file with instructions on how to build a Docker image. It tells Docker what to include and how to set up the environment.
Writing a Dockerfile:
A Dockerfile is a text file that contains a series of instructions for building a Docker image. Each instruction in the Dockerfile adds a new layer to the image, allowing you to specify the environment, dependencies, and commands needed to run your application.
Here's an example of a simple Dockerfile for a Python application:
#Use the official Python base image
FROM python:3.9-slim
# Set the working directory in the container
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed dependencies specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
Understanding the Dockerfile:
FROM: starts with a base image, like the basic setup for your app.
WORKDIR: sets the working directory for subsequent instructions.
COPY: copies files from the host system into the container.
RUN: runs commands to set up your app in the container.
EXPOSE: tells Docker which port the container will use.
ENV: sets environment variables in the container.
CMD: specifies the command to run when the container starts.
DOCKERFILE SYNTAX:
FROM <base_image>
WORKDIR /app
COPY . .
RUN <command>
CMD ["executable", "argument"]
DOCKERFILE Example:
In this example, we create an image for a hypothetical Python app.
FROM ubuntu:latest
WORKDIR /app
COPY . .
RUN apt-get update -y && apt-get install -y python3
CMD ["python3", "app.py"]
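The Dockerfile above assumes an app.py exists in the build context. A minimal, purely hypothetical app.py could be:

```python
# app.py - hypothetical entry point run by the CMD instruction above.
# Nothing here is Docker-specific; the container simply executes this script.

def greeting() -> str:
    return "Hello from inside the container!"

if __name__ == "__main__":
    print(greeting())
```

Whatever the script does, the container's lifetime is tied to this process: when it exits, the container stops.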
Commands:
To create a Docker image, use the docker build command. Here's how:
Go to the folder with your Dockerfile and other files.
Build the image using this command:
docker build -t myapp:latest .
-t myapp:latest: names the image myapp with the tag latest.
. : means the current directory (the build context).
If your Dockerfile has a different name or is in another folder, specify the path like this:
docker build -t myapp:latest -f /path/to/Dockerfile .
Pushing Image to Docker Hub
- Log in to Docker Hub:
docker login
- Tag the image:
docker tag local-image:tag username/repository:tag
local-image:tag: the name and tag of your local image.
username/repository:tag: your Docker Hub username and repository name.
- Push the image to Docker Hub:
docker push username/repository:tag
This uploads your Docker image to Docker Hub, where others can access it.
What is Docker Compose?
Docker Compose is a handy tool that helps you run multiple containers for your applications effortlessly. Imagine you have different parts of your app (like a web server, database, etc.) running in separate containers. Docker Compose lets you manage all these containers easily with just one command.
Control your app stack: Manage services, networks, and storage all in one YAML file.
One command magic: Start all your services with a single command.
Works everywhere: Use it in development, testing, staging, and production.
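As a sketch, here is what a docker-compose.yml for a hypothetical two-service stack (a web server plus a database) might look like; the service names, image tags, port, and credentials are all illustrative:

```yaml
# docker-compose.yml - hypothetical two-service stack
services:
  web:
    image: nginx:latest        # web server container
    ports:
      - "8080:80"              # host port 8080 -> container port 80
    depends_on:
      - db                     # start the database first
  db:
    image: postgres:15         # database container
    environment:
      POSTGRES_PASSWORD: example    # illustrative credential only
    volumes:
      - db-data:/var/lib/postgresql/data   # persist database files
volumes:
  db-data:
```

With this file in the current directory, a single "docker-compose up -d" starts both containers, and "docker-compose down" stops and removes them.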
What is YAML?
YAML is a simple language for writing data that humans can easily read and write. It's often used for configuration files.
Easy to read: Unlike other formats, YAML is designed to be easy to understand.
Widely used: You'll find it in many programming and automation tools, like Ansible.
Example YAML File
# Comment: This is a supermarket list using YAML
---
food:
  - vegetables: tomatoes
  - fruits:
      citrics: oranges
      tropical: bananas
      nuts: peanuts
      sweets: raisins
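To see how this maps to data a program can use, here is a small sketch that loads the same document from Python, assuming the third-party PyYAML package is installed (pip install pyyaml):

```python
# Sketch: parsing the supermarket list with PyYAML (a third-party package).
import yaml

doc = """
food:
  - vegetables: tomatoes
  - fruits:
      citrics: oranges
      tropical: bananas
      nuts: peanuts
      sweets: raisins
"""

data = yaml.safe_load(doc)
print(data["food"][0])                        # {'vegetables': 'tomatoes'}
print(data["food"][1]["fruits"]["citrics"])   # oranges
```

Note that "food" is a mapping whose value is a list: the first item is a one-key mapping, and the second item nests a further mapping under "fruits".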
Docker Compose Commands for DevOps Engineers
Here are some essential Docker Compose commands you should know:
Start Containers:
docker-compose up
Stop and Remove Containers:
docker-compose down
Build Images:
docker-compose build
Start Existing Containers:
docker-compose start
Stop Running Containers:
docker-compose stop
Restart Containers:
docker-compose restart
List Running Containers:
docker-compose ps
Show Logs:
docker-compose logs
Run Command Inside a Container:
docker-compose exec [service_name] [command]
Pull Images:
docker-compose pull
Tasks
Task 1:
Learn to use the docker-compose.yml file.
Set up the environment, configure services, and link containers.
Use environment variables in the docker-compose.yml file.
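For the last point of Task 1, here is a minimal sketch of using environment variables in a docker-compose.yml; the variable names and defaults are illustrative:

```yaml
# Variables can come from the shell or from an .env file
# located in the same directory as docker-compose.yml.
services:
  web:
    image: nginx:latest
    ports:
      - "${HOST_PORT:-8080}:80"   # substituted from the shell; defaults to 8080
    environment:
      - APP_ENV=development       # set directly inside the container
```

Running "HOST_PORT=9090 docker-compose up" would publish the web service on port 9090 instead of 8080.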
Task 2:
Pull and run a pre-existing Docker image.
Run the container as a non-root user.
Reboot the instance.
Inspect running processes and exposed ports.
View container logs.
Stop, start, and remove the container.
Steps:
Pull Docker Image:
docker pull nginx
Give User Permission:
sudo usermod -aG docker $USER
sudo reboot
Run Container as Non-Root User:
docker run --name my_container -d -p 8080:80 --user 1000:1000 nginx
Inspect Container:
docker inspect my_container
View Logs:
docker logs my_container
Stop Container:
docker stop my_container
Start Container:
docker start my_container
Remove Container:
docker rm my_container
Conclusion:
Docker has revolutionized the way applications are developed, tested, and deployed in the DevOps landscape. By providing a lightweight, portable, and efficient solution for containerization, Docker bridges the gap between development and operations teams, ensuring consistency across various environments. Understanding Docker's components, commands, and the differences between Docker and traditional virtual machines is crucial for leveraging its full potential. As Docker continues to evolve, it remains an indispensable tool for modern DevOps practices, driving innovation and efficiency in software development and deployment processes.