Docker 101: Docker Registry & Docker Engine

A beginner’s introduction to Docker and why it’s awesome

Published Feb 9, 2020 · 7 min read

Docker 101 is a new series I am beginning to give you guys a closer look at one of the biggest buzzwords of the DevSecOps world. This technology has been revolutionizing the way deployment takes place while easing the work of DevOps teams.

I have been working very closely with the DevOps team at the organization I currently work at. For a guy who comes from a pen-testing background, hearing the word “Docker” day in and day out became frustrating to the point where I decided to leave security altogether for some time and get involved in the DevOps side of things.

For the past few days, I have been binge-watching Docker tutorials, understanding why it is required, what its benefits are, and how it saves time and resources. After putting in the work, I have realised that Docker is here to stay for a long time, and how beneficial it is for everyone from development to deployment and management.

Last week I published my fifth article, Docker 101: Docker Compose. You can go back to those articles and have a look at them to be better able to understand this one.

Let’s recap!

Docker Compose files are extremely useful as they help us set up an entire cluster of containers effortlessly. A Compose file is a .yaml file with all the configuration set in place, so that we can simply run the “docker-compose up” command to carry out the entire task. Docker Compose has evolved over three different versions, each with its own pros and cons, and has also enhanced its networking capabilities drastically, now being able to support Docker Swarm.

In my previous articles I have covered most of the basics of Docker: how to create containers, how to optimise the process using a Dockerfile, and how to automatically set up an entire network of containers using Docker Compose.

What we didn’t discuss is how to host a Docker image so that everyone can use it, or how simply entering the docker run command executes everything perfectly. Above all, we need to understand the Docker Engine itself and how it manages all this complex work.

Docker Registry

A Docker registry can be easily understood as the cloud platform where various public and private Docker images are stored, so that they can be accessed anywhere and anytime they are needed.

$ docker run nginx

When we execute a command like the one mentioned above, the nginx Docker image is downloaded; once that’s done, Docker sets up the image, runs it and spawns a container.

When we type in ‘docker run nginx’, what we really mean is the following:

$ docker run docker.io/library/nginx

docker.io - Name of the Docker registry
library - Name of the user / account
nginx - Name of the image / repository

So whenever we don’t type in the name of the user/account, Docker Hub assumes the default ‘library’ account, which hosts the official images. All of this that we just discussed is in regards to public registries. Google has its own registry, gcr.io, and so do various other organisations.
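As a quick sanity check (a sketch, assuming access to Docker Hub), the short name and the fully qualified name resolve to the same image:

```shell
# The short name "nginx" expands to the fully qualified
# "docker.io/library/nginx:latest" on Docker Hub, so both
# commands pull the exact same image.
docker pull nginx
docker pull docker.io/library/nginx:latest

# Listing the image shows a single entry - both names
# point to the same image ID.
docker images nginx
```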

There might be a lot of organisations jumping on the cloud wagon, but trust me when I say that the more secure way to go about it is definitely to set things up on your own premises. So let’s discuss the private Docker registry as well.

Private registries are the ones you can set up on your own premises or on your private cloud. Once that is done, you can push and pull your Docker images to and from your private registry.

$ docker run -d -p 5000:5000 --name registry registry:2

This command runs the official registry image as a container and exposes it on port 5000, so that we can start pushing our Docker images to our private registry. Before we push a Docker image, we need to tag it first.
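To verify the registry container is up, we can query its HTTP API (the v2 endpoints are part of the standard registry image; a freshly started registry has no repositories yet):

```shell
# The registry exposes a small HTTP API on the port we published.
# An empty registry answers with an empty repository list.
curl http://localhost:5000/v2/_catalog
# {"repositories":[]}
```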

$ docker image tag my-image localhost:5000/my-image

Run the above command with the name of the Docker image that you want to upload to the private registry.

$ docker push localhost:5000/my-image

Now that we have the registry set up and the Docker image tagged and ready to be pushed, we just enter the above command to send it to our registry.

$ docker pull localhost:5000/my-image

or

$ docker pull 192.168.1.10:5000/my-image

If we want to pull our Docker image from our private registry, we can run one of the above commands, replacing ‘localhost’ with the IP address of the machine, depending on where we are pulling the image from.
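The registry API can also tell us which tags exist for an image we pushed (assuming the my-image repository from the steps above):

```shell
# List the tags available for a repository in the private registry.
# For the image pushed above, this returns something like:
# {"name":"my-image","tags":["latest"]}
curl http://localhost:5000/v2/my-image/tags/list
```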

Docker Engine

Docker Engine is the brain of the entire Docker architecture; every host machine where Docker is installed has the Docker Engine. The Docker Engine mainly consists of three components:

  • Docker Daemon

The background process that manages Docker objects such as images, containers, volumes and networks

  • REST API Server

The API interface that programs use to talk to the Docker daemon and give it instructions

  • Docker CLI

The command line interface that we have been using to input these commands

We can have the Docker CLI on another machine and operate the Docker Engine remotely, without having to SSH into the machine.

$ docker -H=remote-docker-engine:2375 <command> <image-name>

e.g.

$ docker -H=192.168.1.10:2375 run nginx
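Instead of passing -H on every command, the DOCKER_HOST environment variable achieves the same thing for the whole shell session (192.168.1.10 is just an example address):

```shell
# Point the CLI at a remote engine for the whole session.
export DOCKER_HOST=tcp://192.168.1.10:2375

# Every docker command now talks to the remote daemon.
docker ps

# Unset the variable to go back to the local engine.
unset DOCKER_HOST
```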

Let’s have a look at containerization and how Docker uses namespaces to isolate the workspace, starting with how it isolates process IDs.

The container acts as a child system that spawns process IDs of its own, while the Linux host maintains its own set of process IDs. The one thing you need to understand about process IDs is that under no circumstance can two processes share the same process ID.

To take care of this, the container and the operating system work together: the container assigns its own process IDs inside its child PID namespace, and the operating system assigns each of those processes a corresponding process ID on the host.

For example, process IDs 1 and 2 inside the container might map one-to-one to process IDs 5 and 6 on the host. The process IDs are assigned in such a way that both the host and the container keep their own consistent view, and everything still functions perfectly.
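A quick way to see this mapping in action (a sketch; the container name pid-demo is arbitrary):

```shell
# Start a container; inside its PID namespace, nginx runs as PID 1.
docker run -d --name pid-demo nginx

# Ask PID 1 for its name via /proc - from the container's point
# of view, PID 1 is the nginx master process.
docker exec pid-demo cat /proc/1/comm

# From the host's point of view, docker top shows the same
# process under a different, host-assigned PID.
docker top pid-demo

# Clean up.
docker rm -f pid-demo
```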

We know that Docker containers use the host system’s CPU and memory to carry out their tasks; what we need to understand is how much CPU and memory these containers use, and how to decide which containers need more and which don’t.

By default there is no restriction on how many resources a Docker container can use, so depending on the workload a container can always increase its CPU usage and RAM consumption, and as the workload falls it can go back to consuming the minimum required resources. The problem arises when containers end up taking nearly all of the CPU and memory for their own work, bringing other processes to a halt.

So we need a way to restrict the resources that containers can use.

$ docker run --cpus=.5 ubuntu

This caps the container at half of one CPU, making sure that its CPU consumption never exceeds that limit.

$ docker run --memory=1000m ubuntu

This implies that this particular container can use only a maximum of 1000 megabytes of memory. These flags are very handy when we want to keep a tight check on the resource consumption of our containers, or when we are running low on hardware resources.
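Putting the two flags together (a sketch; the container name capped is arbitrary):

```shell
# Cap the container at half a CPU and 1000 MB of memory.
docker run -d --name capped --cpus=.5 --memory=1000m nginx

# docker stats reports the enforced memory limit in its
# MEM LIMIT column.
docker stats --no-stream capped

# Clean up.
docker rm -f capped
```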

Conclusion

A Docker registry is the cloud space where we store the Docker images we create, for ease of access, so that we can simply download an image and use it anywhere we need. Companies have both public and private registries, just as most of them have public and private code repositories.

Docker Engine is the brain behind the entire Docker architecture that runs the whole show. It mainly consists of three components: the Docker CLI, the REST API server and the Docker daemon. Working together, they set up entire Docker containers from a few commands.

This is my last article of this Docker 101 series. It’s time to get back to security and write more quality articles.

If you enjoyed it please do clap & let’s collaborate. Get, Set, Hack!

Website : aditya12anand.com | Donate : paypal.me/aditya12anand
Telegram : https://t.me/aditya12anand
Twitter : twitter.com/aditya12anand
LinkedIn : linkedin.com/in/aditya12anand/
E-mail : aditya12anand@protonmail.com

Credits

To present you with this content I had to go through a lot of video content and lab environments.

  1. Docker for Beginners — KodeKloud
  2. Docker for Beginners — Lab Environment
  3. Docker Tutorial for Beginners — Edureka

Follow us on Dev, Sec & Ops to read related articles.
