Docker: A to Z

TECH WOLF
9 min read · Mar 11, 2024


In this post we will talk about Docker installation and the basic Docker commands we use for day-to-day deployments. We will start with the command line, then move on to the Dockerfile and the docker-compose file, which we use to build an image and run containers, respectively.

Docker installation:

Please refer to the official Docker installation guide to get Docker installed on your OS. The installation process varies depending on which OS you use. You may also refer to the links below for the same:

  1. https://www.youtube.com/watch?v=5nX8U8Fz5S0 (windows)
  2. https://www.youtube.com/watch?v=5_EA3rBCXmU (Ubuntu)
  3. https://www.youtube.com/watch?v=-EXlfSsP49A (MAC)

These are links from the YouTube channels @ProgrammingKnowledge and Simplilearn.

Once you are done installing Docker on your system, you can check it using:

docker -v
docker --version

This will show you the version of Docker you are using and is also an indication that Docker is installed correctly.

Next, before proceeding with the actual commands, let us clarify a few terms used in Docker:

Image:

A Docker image, or container image, is a standalone, executable package used to create a container. The image contains all the libraries, dependencies, and files that the container needs to run. A Docker image is shareable and portable, so you can deploy the same image in multiple locations at once, much like a software binary file.

Container:

A Docker container is a runtime environment with all the necessary components — like code, dependencies, and libraries — needed to run the application code without using host machine dependencies. This container runtime runs on the engine on a server, machine, or cloud instance. The engine runs multiple containers depending on the underlying resources available.

The major difference between an image and a container is that a Docker container is a self-contained, runnable software application or service, while a Docker image is the template used to create that container, like a set of instructions.

You store images for sharing and reuse, but you create and destroy containers over an application’s lifecycle.

To get the list of all images available locally, the command is:

docker images

If you are running it for the first time, the list will be empty.
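For reference, the listing is a table with the following columns (shown here as an illustrative empty output; the exact spacing depends on your Docker version):

REPOSITORY   TAG       IMAGE ID   CREATED   SIZE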

Now, to download a public Docker image, the command is:

docker pull ${image_name}

You can find public Docker images on Docker Hub. For example, to get the Postgres image, search for postgres on Docker Hub and then pull it:

docker pull postgres

This will pull the latest version of the Postgres Docker image to your local machine.
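If you need a specific version rather than the latest, you can append a tag to the image name. The tag below (16) is only an illustrative example; check the postgres page on Docker Hub for the tags that actually exist:

docker pull postgres:16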

Now, let's say our image is not public, because we don't want it to be accessible to just anyone. In that case we need to log in, either to Docker Hub or to some other cloud vendor like AWS or GCP. The AWS service for container images is ECR (Elastic Container Registry); on GCP it is Artifact Registry.

If you are using Docker Hub then the step is simple. Simply run:

docker login

Enter your username and password, and then you can pull your image using:

docker pull image-name

If you have a private registry, you need to log in using the command suggested by the respective cloud provider.

Refer to https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-cli.html for AWS login.
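As a rough sketch of what the ECR login looks like (the region and account ID below are placeholders, not real values; take the exact command from the AWS guide above):

aws ecr get-login-password --region eu-central-1 | \
docker login --username AWS --password-stdin 1234.dkr.ecr.eu-central-1.amazonaws.com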

Now, once the image is pulled, to run it as a container the command is:

docker run --name some-postgres \
-e POSTGRES_PASSWORD=mysecretpassword \
-d postgres

Above is a sample command to run the Postgres image as a container for the first time.

To view the running containers, the command is:

docker ps  # only running containers
docker ps -a # all containers (including stopped)

Now to stop a running container the command is:

docker stop container_id
or
docker stop container_name

To restart a container the command is:

docker start container_id
docker start container_name

You can get the container ID or container name from the output of docker ps -a.
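Since the screenshot is not reproduced here: the docker ps -a output is a table with the columns below, and the values you need are under CONTAINER ID and NAMES (illustrative header only):

CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES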

To delete a container from your local machine, the command is:

docker rm container_id
docker rm container_name

To delete an image from Docker, the command is:

# ensure there is no running or stopped container created from the image before deleting it
docker rmi image_name

Port mapping:

In Docker, the application inside a container runs on a particular port when you run the container.

If you want to access that application from outside using a port number, you need to map the port number of the container to a port number of the host.

docker run -p <HostPort>:<ContainerPort> postgres
e.g: docker run -p 6000:5432 --name postgres-image -e POSTGRES_PASSWORD=mysecretpassword -d postgres

Now the host device (laptop/PC/server) can access the running container on port 6000.
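As a quick check, assuming you have the psql client installed on the host (an assumption on my side, not part of the original setup), you can reach the containerized Postgres through the mapped port:

# connect to the container's Postgres (5432) via the host port 6000
psql -h localhost -p 6000 -U postgres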

Container Logs:

While the container is running, tracking its logs becomes an important part of understanding how the container is behaving.

The command is:

docker logs container-id
docker logs container-name

To tail the logs the command is:

docker logs --tail 10 container-id  # replace 10 with the number of lines required

To follow the logs as they are written (stream new lines to the terminal), we can use:

docker logs -f container-id
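The two options can also be combined, which is handy when a container has been running for a long time and you only care about recent output:

# follow the logs, starting from the last 100 lines
docker logs -f --tail 100 container-id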

Docker network:

Docker networking lets you place containers on a shared network so that they can talk to each other by exchanging packets, while staying isolated from containers outside that network. Below are some basic commands that will help you get started with Docker networking.

Refer to https://www.geeksforgeeks.org/basics-of-docker-networking/ for more details.

To create a new Docker network, the command is:

docker network create network-name
e.g: docker network create mongo-network

This will create a new network.

Using the Docker network in a container:

docker run -d --network mongo-network --name my-mongo \
-p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=mongoadmin \
-e MONGO_INITDB_ROOT_PASSWORD=secret \
mongo

docker run -d \
-p 8081:8081 \
-e ME_CONFIG_MONGODB_ADMINUSERNAME=mongoadmin \
-e ME_CONFIG_MONGODB_ADMINPASSWORD=secret \
-e ME_CONFIG_MONGODB_SERVER=my-mongo \
-e ME_CONFIG_BASICAUTH_USERNAME=user \
-e ME_CONFIG_BASICAUTH_PASSWORD=user \
--name mongo-express \
--net mongo-network \
mongo-express

Here, --net (the short form of --network) attaches the container to the mongo-network we created, so mongo-express can reach the my-mongo container by name.

To view all available networks, the command is:

docker network ls
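To see which containers are attached to a given network, you can also inspect it (mongo-network is the network we created above):

docker network inspect mongo-network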

With this we have covered almost all of the Docker terminal commands.

Next we will see how to create a Dockerfile to build an image, use docker-compose to run multiple containers together, and then push the image to a Docker registry.

For that, I have picked a small Node.js app with a simple interface to demonstrate. I am referring to Nana's demo code (from YouTube) for the same.

Clone the repository, and then we will go through each Docker file one at a time.

Dockerfile

FROM node:13-alpine

ENV MONGO_DB_USERNAME=admin \
MONGO_DB_PWD=password

RUN mkdir -p /home/app

COPY ./app /home/app

CMD ["node","/home/app/server.js"]

If you are a little bit familiar with Docker, then you must have seen this file in your code. We will understand what it means line by line.

First things first: any new Docker image we build needs to be based on some other image. In this case we are basing it on node:13-alpine. If you work with Java, you must have seen openjdk:${version}.

FROM node:13-alpine

specifies the base image.

ENV MONGO_DB_USERNAME=admin \
MONGO_DB_PWD=password

sets the environment variables.

RUN and CMD both run commands, but at different times: RUN executes at build time and its result is baked into the image, while CMD defines the default command executed when a container is started from the image. A Dockerfile can have multiple RUN instructions, but only one CMD takes effect.

RUN mkdir -p /home/app

COPY ./app /home/app

The above will first create a /home/app folder inside the image, and then COPY will copy the contents of the ./app folder from the build context on the host machine into /home/app in the image.

Now, to build the image from this Dockerfile, the command is:

docker build -t my-app:1.0 .
# you can replace my-app with any suitable name and 1.0 with any tag string or number

If everything is right, you will see your image in the image list (docker images).
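As a quick sanity check you can already run the freshly built image on its own (port 3000 is the port the app is mapped to in the compose file below; note that the app will still need a reachable MongoDB to be fully functional):

docker run -d -p 3000:3000 --name my-app my-app:1.0
docker logs -f my-app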

Now we have built our image. The only thing left is to run it using docker-compose.

docker-compose.yaml (mongo.yaml in our git code)

version: '3'
services:
  my-app:
    image: my-app:1.0
    ports:
      - 3000:3000
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - mongo-data:/data/db
  mongo-express:
    image: mongo-express
    ports:
      - 8080:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=techworld-js-docker-demo-app_mongodb_1
      - ME_CONFIG_BASICAUTH_USERNAME=user
      - ME_CONFIG_BASICAUTH_PASSWORD=user
volumes:
  mongo-data:
    driver: local

The above is the complete file. Let's break it into small chunks and understand each one.

version: '3'
services:
  my-app:
    image: my-app:1.0
    ports:
      - 3000:3000
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password

While writing a docker-compose file the above is the minimum requirement.

version: the version of the Compose file format being used

services: the list of services to run; each service describes a container, and can be scaled to several container instances running the same image.

my-app: the name of the service whose container will be run

image: the image to run, either built locally or pulled from an online registry.

ports: port mapping — <host>:<container>

environment: environment variables to declare.

Now, from the definition of services it is clear that we can have multiple containers in one compose file. That is what we have done by defining the mongodb and mongo-express containers to run at the same time.

The point to remember here is that on the command line we created a network, but here we have not created any. Why?

→ All the containers defined in the same docker-compose file are placed on the same isolated default network, so we do not need to define the network explicitly.
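Because of that default network, each service is reachable by its service name. So, under that assumption, the mongo-express configuration could equally point at the service name instead of the generated container name used above:

environment:
  - ME_CONFIG_MONGODB_SERVER=mongodb  # the service name resolves on the default compose network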

Volume mounting:

If you have observed the above code, there is a volumes keyword which we have not explained anywhere. So what is it?

Once you start using Docker you will notice that when a container is destroyed, all the data inside it is erased as well. If the hosted application is stateless there is no issue, but consider a hosted database: if that container failed for some reason, it would be a disaster. To avoid this, a volume is mounted for such databases, so that every database operation stays synced to storage outside the container, and even if the container dies, the data can be recovered and the database restarted from the same state.

For that, we first need to declare the volumes we are going to need at the top level of the compose file.

volumes:
  mongo-data:
    driver: local

Next, use this volume inside the service definition for mounting.

mongodb:
  image: mongo
  ports:
    - 27017:27017
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=password
  volumes:
    - mongo-data:/data/db

The above mounting is called a named volume mount: we don't give a host path where the data should be saved; instead we give the volume a name (mongo-data) and let Docker manage where it is stored. The path after the colon (/data/db) is where it is mounted inside the container.
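For comparison, a host path (bind) mount would look like the line below, where the left side is a directory on the host instead of a volume name (the ./mongo-data path is just an illustrative example, not part of the original project):

volumes:
  - ./mongo-data:/data/db  # bind mount: <host directory>:<container path>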

Running the docker-compose:

The command to run all the services using docker-compose is:

docker-compose -f mongo.yaml up -d
# You can replace mongo.yaml with your own compose file

This will run all the containers in the service.
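When you are done, the matching command tears everything down again; it stops and removes the containers and the default network, while named volumes are kept unless you add the -v flag:

docker-compose -f mongo.yaml down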

Pushing the container image to a private registry like ECR:

  1. Log in to the ECR registry (you can get the AWS command from its cloud guide)
  2. Tag the image
    Naming convention: registryDomain/imageName:tag
  3. Push it: docker push registryDomain/imageName:tag

e.g:

docker tag my-app:1.0 1234.dkr.ecr.eu-central-1.amazonaws.com/my-app:1.0
docker push 1234.dkr.ecr.eu-central-1.amazonaws.com/my-app:1.0
