Updated: Jan 25, 2022
One of the essential features that Azure provides is creating and managing containers. You may have heard about containers before, but to understand where containers came from, let's start with how traditional development worked.
In traditional development, code is created by dev teams and then copied and built on the production server, which almost always led to problems that appeared on the servers but were never seen on the dev machines (which differ from the production environment in almost every respect). If you have even the slightest experience in the software development world, you have surely encountered this and seen it cause significant delays in releasing the product to the field.
And of course, how can we ignore the almighty sentence from developers: "Defect in production? It worked on my machine!" It is always treated as a joke, but it is very frustrating precisely because it is true. The functionality that failed in production worked fine on the developer's machine, but now it doesn't work in the production environment for some obscure reason. This problem of bugs that "suddenly" appear in other environments plagued the software industry for decades, and it was clear that something had to change to reduce the effort and time invested in it. And this is when containers took off (Docker, which popularized them, was unveiled by its founder Solomon Hykes in 2013).
So what are containers?
A container packages software, configuration files, and dependencies into a unit that can be copied to other machines and executed using whatever software and files are contained within it. The most important thing is that it is completely independent of the rest of the machine it is hosted on, with one notable exception: the container uses the underlying OS. So we have a separate unit that runs independently of the host machine, which raises the question: what (if any) is the difference between containers and other virtualization solutions (e.g., VMware, vSphere, etc.)?
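To make the idea of such a package concrete, here is a minimal sketch of a Dockerfile, the recipe Docker uses to build a container image. The base image, the file names (`app.py`, `requirements.txt`), and the application itself are assumptions for the sake of illustration, not part of any specific project:

```dockerfile
# Start from a slim Python base image; only user-space files are
# included in the image -- the kernel comes from the host at run time.
FROM python:3.10-slim

# Copy the application and its dependency list into the image
# and install the dependencies inside the container.
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# The command executed when the container starts.
CMD ["python", "app.py"]
```

Building this file (for example, with `docker build -t myapp .`) produces exactly the kind of self-contained unit described above: the same image runs identically on the developer's machine and in production.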
And there is a genuine reason why you may think that both solutions are the same, so let's explain the difference, starting with the virtual machine infrastructure:
As you can see from the image above, we have the infrastructure (the hardware) and the host OS on top of it, with virtual machines above that. The hypervisor is the component responsible for running the VMs and making the host's resources (e.g., networking, storage) accessible to them. On top of the hypervisor, we have the virtual machines themselves, each with its own OS and applications. The guest operating system here is vital because we can have one VM running Windows Server 2019, another running Windows Server 2016, and another running Linux.
Based on this architecture, each VM contains a guest OS, a virtual copy of the hardware that the OS requires to run, and an application with its associated dependencies. Containers are a lighter-weight, more agile way of handling virtualization without this dependency.
As with virtual machines, we have the infrastructure (the physical hardware) and, on top of it, the host operating system (typically Linux or Windows). Each container holds only the application and its libraries and dependencies; containers do not need to include a guest OS in every instance. This allows users to run multiple workloads on a single OS instance, and the containers, managed by the container runtime, are incredibly lightweight compared to virtual machines. Note that containers share the operating system of the host: if the host runs Windows Server 2019, that will effectively be the operating system of the containers, and if the host is upgraded to another version, all containers will use the newer version.
So the main difference between the two architectures is that containers do not need to include a guest OS in every instance and can, instead, leverage the features and resources of the Host OS.
Now, the question we need to ask, of course, is why we should prefer containers over a technology that has changed the way we work for more than a decade. Let's try to answer it with some of the main advantages:
Predictability - Remember the joke above, where bugs found in production were never seen in the dev environment? When using containers, the same package is used on both the developer machine and the production machine, so the room for errors is minimized (of course, other attributes of the production environment, such as the size of the data and external dependencies, can still differ, but you get the point).
Density - One server can run thousands of containers, while that very same server can run only a few dozen virtual machines (remember that containers require much less memory and CPU and can be as small as 10 MB).
Efficiency - Containers allow applications to be more rapidly deployed, patched, or scaled.
Reducing costs - Containers can help decrease your operating and development expenditure.
Performance - Containers are fast, and I mean very fast: a new container can be up and running with all the configuration you need within seconds, as opposed to the minutes required by a virtual machine.
Ok, so these are the main reasons to use containers, but there are some reasons not to use them. For example:
Single point of failure - Containers share the kernel of their host, meaning that if the kernel becomes corrupted for one reason or another, all of the containers running on that host become vulnerable as well.
Monitoring - You can manage tens of containers by hand; yes, it will demand some effort, but it can be done. However, when running hundreds or even thousands of containers, it becomes impossible without an orchestrator such as Kubernetes.
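To give a feel for how Kubernetes tames that scale, here is a minimal sketch of a Kubernetes Deployment manifest. The application name, image name (`myregistry/myapp:1.0`), replica count, and port are all hypothetical values for the example. Instead of managing each container by hand, you declare the desired state, and Kubernetes continuously monitors it, restarting or rescheduling containers to match:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                 # Kubernetes keeps exactly 3 copies running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry/myapp:1.0   # hypothetical image name
          ports:
            - containerPort: 8080
```

If a container crashes or a node goes down, Kubernetes notices the drift from the declared three replicas and starts replacements automatically, which is what makes running hundreds or thousands of containers tractable.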