Updated: Jan 25
In my previous article, I reviewed the central concepts of containers (you can find it here: Understanding Containers). Today's article focuses on implementing them using Docker, which is by far the most popular container environment in the world and the de facto standard for containers.
Here is an image describing the Docker architecture; let's go through its components together:
First, let's start with the Docker server, also known as the "Docker daemon." This is a service that runs on the host and is responsible for managing Docker containers. Its basic operations include starting and shutting down containers, creating containers from predefined images, keeping track of their activities, and exposing an API for management tools.
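If Docker is installed on your machine, you can verify that the daemon is up and query it from the command line. A minimal sketch; the exact output depends on your installation:

```shell
# Show client and server (daemon) versions; the "Server" section
# only appears when the daemon is running and reachable.
docker version

# Print a summary of the daemon's state: number of containers,
# images, storage driver, and so on.
docker info
```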
Next are the images, which are the building blocks of containers. An image contains the set of software and configuration a container needs to run (similar to what exists in virtual machines). Images are static files and do NOT run by themselves; they sit on the server's disk waiting to be used when a new container should be created. We can get these files from a "container registry," a collection of images from which you can pull the image you want and either use it as is or modify it to better suit your purpose.
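To see the registry workflow in practice, here is a short sketch of pulling an image from Docker Hub (the default registry) and listing what is stored locally:

```shell
# Download the official nginx image from the default registry (Docker Hub).
docker pull nginx

# List the images stored on local disk; at this point they are
# static files only, nothing is running yet.
docker images
```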
And then we have the containers themselves. A container is an image that has been instantiated and run on demand. A container runs in a sandbox that is created and managed by the Docker daemon, and it is an instance of its image.
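Creating a container from an image is a single command. A sketch of the basic lifecycle, using the nginx image as an example:

```shell
# Create and start a container from the nginx image, in the background,
# mapping port 8080 on the host to port 80 inside the container's sandbox.
docker run -d --name web -p 8080:80 nginx

# List the running containers managed by the daemon.
docker ps

# Stop and remove the container; the image itself stays on disk.
docker stop web
docker rm web
```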
The last part of the puzzle is the client: a CLI that sends instructions to the Docker daemon. This is the gateway to all of Docker's functionality; using it, you can perform different tasks such as building images, running containers, and more.
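Note that the client and the daemon do not have to live on the same machine; the client can be pointed at a remote daemon. A sketch (the hostname below is a placeholder):

```shell
# By default, the client talks to the local daemon over a Unix socket.
docker ps

# Point the client at a remote daemon instead (placeholder hostname;
# port 2375 is Docker's conventional unencrypted API port).
export DOCKER_HOST=tcp://docker-host.example.com:2375
docker ps   # now lists containers on the remote host
```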
So that is the basic architecture of Docker, but I still want to focus more on customizing images. The ability to pull an image from a registry and modify it is one of Docker's most significant advantages. The modification is done using a "Dockerfile." This type of file contains instructions for building custom images; it can instruct Docker to copy files, run commands, and more. Dockerfiles are usually tiny because of the variety of base images to start from, so the customization required is not that big.
Here is an example of a Dockerfile:
# This is a sample image
FROM ubuntu
MAINTAINER email@example.com
RUN apt-get update
RUN apt-get install -y nginx
CMD ["echo", "Image created"]
Let's review each line:
# This is a sample image - a comment; you can add one anywhere using the # character.
FROM ubuntu - tells Docker which base image to build your image from (in this example, the ubuntu image).
MAINTAINER email@example.com - the person who will maintain this image.
RUN apt-get update - updates the package lists on our Ubuntu system.
RUN apt-get install -y nginx - installs the nginx server on our Ubuntu image.
CMD ["echo", "Image created"] - displays a message to the user when the container starts.
This is a classic example of a Dockerfile, and most files will look quite similar to the one above.
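Once the instructions are saved in a file named Dockerfile in an otherwise empty directory, building and running the custom image takes two commands. A sketch; the image name my-nginx is just an example:

```shell
# Build an image from the Dockerfile in the current directory
# and tag it with a name of our choosing.
docker build -t my-nginx .

# Run a container from the freshly built image; because of the CMD
# instruction, it prints "Image created" and then exits.
docker run --rm my-nginx
```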
At the beginning of this article, I said that Docker is the current industry standard for containers, and that shows in its incredible support across platforms. First, it is supported by all major operating systems (Windows, Linux, and macOS), and most importantly, it is also supported by all major cloud providers, including Azure. What does that mean? It means that you can deploy Docker to the cloud and run your containers there. Furthermore, cloud providers such as Azure and Amazon have their own container registries that can be used to manage your images.
Amazon has Elastic Container Registry (ECR), and Azure has Azure Container Registry (ACR). This makes Docker even more attractive to organizations and their DevOps/IT departments, because developers can work on their local machines, package the code in Docker, push it to the cloud, and it will just run.
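Pushing a local image to a cloud registry is mostly a matter of tagging it with the registry's address first. A sketch using Azure Container Registry, assuming a local image called my-nginx and a registry called myregistry (both placeholder names):

```shell
# Log in to the registry (requires the Azure CLI to be installed).
az acr login --name myregistry

# Tag the local image with the registry's address, then push it.
docker tag my-nginx myregistry.azurecr.io/my-nginx:v1
docker push myregistry.azurecr.io/my-nginx:v1
```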