Welcome to Containerverse

"Docker: Because sometimes the best things come in small packages."

DevOps and its tools are the talk of the town in the technology industry, and one such tool that has gained immense popularity is Docker. But what made Docker suddenly so well-known, and what prompted its widespread use? Let's investigate...

To be precise, Docker is not a technology in itself but a tool, a platform built on container technology. The simplicity of that underlying technology is what drove Docker's adoption. But what is container technology exactly? What are containers, and why are they so often discussed and utilized? To answer that, we must first address virtual machines.

What are Virtual Machines?

Many of us might have used VirtualBox to run a virtual machine on our devices. Virtual machines are built on the concept of virtualization. They act as separate devices from the machine that hosts them, yet they use the same hardware. Wait a second, two devices on a single piece of hardware... How does that work?

Virtual machines have their own operating systems, distinct from the host machine. They also have their own memory, partitioned from the host's memory. A piece of software called a hypervisor manages these machines, allowing several of them to operate independently on a single physical machine.

Use of Virtual Machines

Virtual machines reduce the number of physical on-premise servers needed to host applications. Running multiple workloads on a single host cuts the cost of new hardware and saves physical space. Moreover, because they have no physical presence of their own and are independent of the host machine, they are easy to replicate elsewhere. However, there is a problem! Virtual machines require significant computational power, since each one runs a full operating system. They function slowly if the infrastructure is not up to par. What is the best approach to resolving this dilemma?

It's Container Time...

Containers can be considered lightweight alternatives to virtual machines. Both VMs and containers achieve virtualization, but the technique is different. Containers run on the host's single operating system and isolate applications at the process level. Because multiple containers share the same operating system, they are lightweight.

Use of Containers

There are many instances in the software world where an app operates flawlessly on one machine but breaks in a different environment. This might be because of missing dependencies, version mismatches, and many other variables. Containers are a boon in such cases. An application can simply be packaged along with its source code, libraries, dependencies, etc., and shipped around the world. This means that containers can be used to bundle an application together with its environment in an isolated, predictable, and repeatable way.

But we still haven't discussed the agenda of this article... Docker itself!

Dive into Docker

The problem of replicating applications is solved by containers. Docker is designed to make it easier to create, deploy, and run applications inside containers, and it manipulates all Docker-related objects with its own set of commands. Docker uses Docker images to create these containers. The images are built from a Dockerfile, and they can be stored in a public Docker registry. Well, that sounds like a lot of gibberish! Let's understand these pieces one by one.
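To make the command side of this concrete, here is a minimal sketch of the everyday Docker CLI workflow (the image and container names are placeholders for illustration):

```bash
# Build an image from the Dockerfile in the current directory
docker build -t my-app .

# Start a container from that image, detached in the background
docker run -d --name my-app-container my-app

# List the containers that are currently running
docker ps
```

We will unpack what each of these objects is in the sections below.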

Docker Images

Our containers need some template on which they can be built, and Docker images provide that template: an image defines a container. Every image is created using a layered approach, with some files bundled into each layer. Running the image assembles these layers into a final container. The layers are immutable, and once built they are cached on the local system. These cached layers cut down the build time the next time the same image is built, and a Docker container starts within seconds of running a Docker image. This speed advantage is a significant benefit of Docker.
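You can inspect this layering yourself. As a small example (assuming Docker is installed and using the official nginx image):

```bash
# Download an image from Docker Hub
docker pull nginx

# List every layer of the image, along with the instruction that created it
docker history nginx
```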

Dockerfile

A Dockerfile is used to create the Docker image. It essentially contains the commands for building that image: each instruction in the Dockerfile (such as RUN, COPY, or ADD) adds a layer to the final image. Once built, the image can be used to spin up containers in any environment.
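As a sketch, a minimal Dockerfile for a Node.js application (the kind we containerize in the workflow section below) might look like this; the file names and port are assumptions for illustration:

```dockerfile
# Start from the official Node.js base image
FROM node:18

# Set the working directory inside the image
WORKDIR /app

# Copy the dependency manifests and install dependencies
# (this lands in its own layer, so it is cached between builds)
COPY package*.json ./
RUN npm install

# Copy the application source code
COPY . .

# Document the port the app listens on and define the startup command
EXPOSE 3000
CMD ["node", "index.js"]
```

Because layers are cached, editing only the source code leaves the npm install layer untouched, which keeps rebuilds fast.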

Docker Compose

When certain commands are executed, images spin up containers. While working with many containers, it is not feasible to run the commands for each container by hand, and manual intervention invites errors. Docker Compose solves this problem. It reads a YAML file (conventionally docker-compose.yml) that contains all of the image details, along with ports, environment variables, and so on. This file lets you start and stop the whole set of containers with a single command, and keeping the details of every container in one place makes management far easier.
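A sketch of such a file for the Node.js-plus-MongoDB setup from the workflow section below might look like this (the image names, port, and environment variable are illustrative assumptions):

```yaml
services:
  app:
    image: my-app                 # our custom application image
    ports:
      - "3000:3000"               # host:container port mapping
    environment:
      - MONGO_URL=mongodb://mongo:27017/mydb
    depends_on:
      - mongo                     # start the database first
  mongo:
    image: mongo                  # official MongoDB image from Docker Hub
```

With this in place, docker compose up starts both containers and docker compose down stops them again.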

Docker Registry

The images built from a Dockerfile can be stored in public registries such as Docker Hub. This means that anyone who wants to run the application can simply pull the image from the registry and use it to spin up a container. This is the primary reason Docker images are uploaded to public registries.
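In practice, publishing and consuming an image looks roughly like this (the username and tag are placeholders, and pushing assumes you have already run docker login):

```bash
# Tag the local image with your registry username
docker tag my-app your-username/my-app:1.0

# Push it to Docker Hub
docker push your-username/my-app:1.0

# Anyone, anywhere, can now pull it and spin up a container
docker pull your-username/my-app:1.0
docker run -d your-username/my-app:1.0
```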

Docker Architecture

How does Docker manage all of this so quickly and easily? The answer lies in the architecture!

Docker is built on a client-server model. The client is essentially the command-line interface (CLI) used to execute commands. The Docker daemon receives requests from the client and manages Docker objects such as images, containers, networks, and volumes.
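You can see this split on your own machine: the following command prints separate Client and Server sections because the CLI and the daemon are distinct components.

```bash
# Shows version details for both the client (CLI) and the server (daemon)
docker version
```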

Typical Docker Workflow

Docker is useful throughout the software development and deployment process. Consider the example of building a Node.js application with MongoDB as its database, all managed with Docker.

The fully functional Node.js application should be containerized so it can be replicated elsewhere. For this, an image for the application must be created, bundling the source code and its dependencies. Our app's custom image is then uploaded to Docker Hub. A separate MongoDB container handles the database; it is built from the official MongoDB image found on the public registry, Docker Hub.

The production server then simply pulls our custom Node.js image and the MongoDB image, and containers are started from both. The two containers are connected inside a network so that the application runs end to end.
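On the production server, this whole workflow might boil down to a handful of commands (the names, tag, and port are again placeholders):

```bash
# Create a network so the two containers can talk to each other
docker network create app-net

# Pull and start the database container
docker pull mongo
docker run -d --name mongo --network app-net mongo

# Pull and start our application container, exposing it on port 3000
docker pull your-username/my-app:1.0
docker run -d --name my-app --network app-net -p 3000:3000 your-username/my-app:1.0
```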

Virtual Machine vs Container!

I'm sure you're wondering... Why do we still use Virtual Machines if containers are so cool and popular?

The reason is security! Containers share the host's kernel, so their isolation is weaker than that of a VM, which runs its own operating system. Although containers are becoming more secure, virtual machines remain the more trustworthy option where strong isolation matters. Another reason to use VMs is data persistence.

Even though almost everyone uses Docker, virtual machines are far from obsolete. The two should be thought of as complementary technologies, not opposing ones.

Docker resources

YouTube has always been my go-to platform for learning any new technology. This video by TechWorld with Nana is the best I have found for beginners. It covers all Docker commands as well as the logic behind them. FreeCodeCamp has a video on Docker and Node.js that goes into detail about containerizing Node.js applications.

Ivan Velichko, a container expert, has written an excellent article in which he explains the intricacies of containers and includes some amazing MythBusters as a bonus.

Conclusion

In conclusion, Docker is a powerful tool that can greatly simplify the process of packaging, deploying, and managing applications. It allows developers to create and run isolated environments, known as containers, that can run on any platform that supports Docker. Docker also enables teams to collaborate more effectively by providing a consistent and reproducible environment for all members. Overall, the benefits of using Docker in both development and production environments are numerous and can greatly increase the efficiency and scalability of any application.

That's all for this one. Let's dive deep into Containerverse. Until next time... Happy containerizing!