The Pre-Container World
Before containerization became popular, applications were, and in many cases still are, hosted on physical application servers. Deciding what kind, how big, and how fast these servers should be was difficult. In many cases, organizations went big and fast, which translated to powerful but oversized servers running at a fraction of their capacity. This approach can be inefficient and wasteful of company resources. Additionally, the costs of purchasing and installing operating systems, web server tools, networking support, and so on are unavoidable.
The advent of virtualization technology helped organizations host and run applications more efficiently and at lower cost. Still, setting up multiple virtual machines on a single server had its drawbacks. For instance, four virtual machines on a server require four separate OS instances, and each OS consumes valuable server resources such as CPU, RAM, and physical storage.
Figure 1 illustrates how virtualization is used for isolating applications. This approach still proves wasteful of server resources since it requires an OS instance for each virtual machine.
Figure 1. Virtual Machine Hosting
Source: Mark Russinovich, CTO, Microsoft Azure
Introduced during the early days of Linux, containerization is not a new concept, but it has enjoyed increased popularity over the last several years. Container popularity surged in 2013 when Docker emerged. Docker defines a container as a “standardized unit of software.” While this definition is correct, Microsoft summarizes containerization more clearly as “an approach to software development in which an application or service, its dependencies, and its configuration (abstracted as deployment manifest files) are packaged together as a container image. Containerized applications can be tested as a unit and deployed as a container image instance to the host operating system.”
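To make the “packaged together” idea concrete, here is a minimal, hypothetical Dockerfile for a .NET service. The project layout and the `OrderService.dll` name are illustrative assumptions, not taken from the sources cited above; only the base image is a real, published Microsoft image.

```dockerfile
# Base image supplies the OS layer and the .NET runtime (a Windows
# Nano Server-based tag could be substituted for Windows containers).
FROM mcr.microsoft.com/dotnet/aspnet:6.0

# The application binaries and their dependencies are copied into
# the image, so the container carries everything it needs to run.
WORKDIR /app
COPY ./publish .

# Configuration travels with the image as well.
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80

ENTRYPOINT ["dotnet", "OrderService.dll"]
```

Building this file with `docker build` produces an image that can be tested as a unit and run anywhere a compatible Docker host exists.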
Figure 2 illustrates how applications are hosted in virtual, container environments.
Figure 2. Container Hosting
Source: Mark Russinovich, CTO, Microsoft Azure
With container isolation, server resources are efficiently leveraged. Since each container doesn’t require a separate OS, containerization reduces server space requirements. Additionally, starting and stopping application containers is much faster than having to boot an OS each time a user wants to start an application or service.
Another advantage to containers involves how nicely the technology works with modern application architecture in the cloud. Microsoft has invested a lot of time and energy in helping developers and IT Pros chart their journey to the cloud at different modernization maturity levels. The Cloud Optimized path involves DevOps refinement with containerization. More information on leveraging Microsoft’s maturity levels with Windows Containers for modernization can be found in Microsoft’s eBook: Modernize existing .NET applications with Azure cloud and Windows Containers.
Windows Containers with Docker
It’s important to note that an application is not limited to a single container. In fact, many containerized applications span multiple containers. Think about a simple online ordering application as an example: one container could be responsible for hosting a MongoDB instance, and another container could host the order processing logic. While this separation promotes modular, self-contained design, it does introduce functional dependencies to the application’s ecosystem. In microservices architectures, applications can be parceled into different services, each residing in a separate container. Luckily, there is tooling available to help manage limitations around resource isolation and application packaging.
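As a sketch of that separation, a Docker Compose file (a common way to define multi-container applications; the service names, ports, and build path here are illustrative assumptions) might wire the two containers of the ordering example together:

```yaml
version: "3.8"
services:
  # MongoDB runs in its own container with its own storage volume.
  mongo:
    image: mongo:6
    volumes:
      - mongo-data:/data/db

  # The hypothetical order-processing service runs in a second
  # container and reaches MongoDB by its service name.
  orders:
    build: ./orders
    environment:
      - MONGO_URL=mongodb://mongo:27017/orders
    depends_on:
      - mongo
    ports:
      - "8080:80"

volumes:
  mongo-data:
```

Each service can now be built, started, stopped, and scaled independently, which is exactly the modularity the paragraph above describes.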
The Docker Project is an open-source solution driven by Docker Inc. Nigel Poulton, a Docker Captain and leading name in the container community, suggests that “what Docker is to containers is what VMware is to virtual machines.” It provides container management tooling to build, ship, and run modern applications on-premises or in the cloud. Docker’s core functionality, referred to as the Docker Engine, is responsible for building images, which can be thought of as stopped or “powered off” containers.
Docker has extensive documentation on using Docker for Windows. Once you’re ready to start getting your hands dirty, begin by installing Docker Desktop for Windows. While Docker Desktop for Windows runs well on most Windows VMs (even on a Mac), know that it requires Microsoft Hyper-V to run. For this reason, you may experience issues starting containers under some VM tools; Oracle VirtualBox, for instance, is not compatible with Hyper-V. To work around this, you can use `docker-machine` to create local VMs with the Hyper-V driver. For an exhaustive list of potential limitations running the Docker Machine in a virtual environment, review the Get started with Docker Machine and a local VM article.
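As a sketch, creating a local VM with `docker-machine` and the Hyper-V driver looks like the following. The virtual switch and machine names are placeholders you would replace with your own, and an external virtual switch must already exist in Hyper-V Manager; these commands require Hyper-V and cannot run inside VirtualBox.

```
# Create a new Docker host VM using the Hyper-V driver.
docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1

# Show the environment variables that point the local
# Docker CLI at the new machine's daemon.
docker-machine env myvm1
```

After evaluating the `env` output in your shell, subsequent `docker` commands target the new VM instead of the local daemon.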
Once Docker Desktop is installed, test your installation and start getting acquainted with the Docker CLI. The Get started with Docker for Windows article is a great place to begin:
- Testing the Docker installation
- Pulling applications from Docker Hub
- Running containers
- Exploring the Docker help pages
- Pulling and running web servers
- Understanding the Docker settings dialog
- Switching between Windows and Linux containers
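The first few steps above can be sketched with the following commands, which assume a running Docker Desktop installation; nginx is used as an example web server image, and output will vary by version.

```
# Verify the installation and check client/server versions.
docker --version
docker version

# Pull and run a small test image from Docker Hub.
docker run hello-world

# Explore the built-in help pages.
docker --help
docker run --help

# Pull and run a web server, mapping container port 80 to host port 80.
docker run -d -p 80:80 --name webserver nginx

# List running containers, then stop and remove the web server.
docker container ls
docker container stop webserver
docker container rm webserver
```

With the `webserver` container running, browsing to http://localhost shows the nginx welcome page.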
Applications composed of dozens or even hundreds of containerized components must be organized to work together for the application to function as intended. This process is known as container orchestration, and it becomes essential as applications move away from monolithic architectures.
There are several tools available, and each has its own approach to container management. Docker Swarm and Apache Mesos are good tools, but the most widely used platform is Kubernetes, originally developed by Google. With any of these orchestration tools you can:
- Control when containers start and stop
- Set up container clusters
- Coordinate application processes
- Monitor and report application health
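As a sketch of what orchestration configuration looks like, a minimal Kubernetes Deployment (the image name, replica count, and health endpoint are illustrative assumptions) declares how many container instances should run; Kubernetes then starts, stops, and restarts containers to match, and probes them for health:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3            # Kubernetes keeps three containers running.
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example.azurecr.io/orders:1.0   # hypothetical image
        ports:
        - containerPort: 80
        livenessProbe:   # health monitoring: restart on failure
          httpGet:
            path: /healthz
            port: 80
```

Applying this manifest with `kubectl apply -f` hands the lifecycle of those containers over to the cluster, which covers the start/stop control, clustering, and health monitoring listed above.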