Applications are being built, shipped and updated at an increasingly fast pace. This trend has generated interest in solutions that will help facilitate this complex process. The result is a flood of new methodologies and tools into the DevOps space. In this article, I will focus on one of these tools: Docker. More specifically, Docker on Windows, along with a sample application in ASP.NET.
What is DevOps?
The AWS site describes DevOps as “the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity.”
In other words, DevOps is about merging the Development and Operations silos into one team, so engineers work across the entire application lifecycle.
What is Docker?
Docker performs operating-system-level virtualization, a process often referred to as containerization (hence the term “Docker containers”). It was initially developed for Linux, but it is now fully supported on macOS and Windows, as well as on all major cloud service providers (including AWS, Azure and Google Cloud). Docker lets you package an application, along with all of the software required to run it, into a single container, which you can then run anywhere from a developer’s local environment all the way to production.
Docker has become one of the darlings of the DevOps community because it enables true independence between applications, environments, infrastructure, and developers.
Containers vs Virtual Machines?
You may ask yourself: “If containers are just another virtualization strategy, why should I consider them if I am already using some kind of virtual machine?”
There are a few outstanding benefits to using containers instead of virtual machines. But before we talk about this, I think it’s important for us to understand the main differences between these virtualization types.
Virtual machines (VMs) are an abstraction of physical hardware, turning one server (hardware) into many servers (virtualized). The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, and the necessary binaries and libraries. VMs are usually slower to boot compared to the same OS installed on a bare-metal server.
Containers are an abstraction at the application layer that packages code and dependencies together. Multiple containers can run on the same machine and operating system, sharing the kernel with other containers, each running as isolated processes in the user space.
Key Benefits of Containers
- Containers are faster and lighter to ship as they house the minimum requirements to run your application
- Containers can be versioned, shared, and archived
- Containers start almost instantly as they carry a much smaller footprint than virtual machines
- Build configurations are managed with declarative code
- Containers can be built and extended on top of pre-existing containers
Composing Containers
You might have applications that are composed of different technology stacks, such as in a microservices architecture. You might also have cases in which you need to use a combination of Linux and Windows containers.
This is where Docker Compose comes in. It helps you create multiple isolated environments on a single host, which is very handy for development environments where you can have all application dependencies running together as different containers on the same host. When you bring a composed environment up again, only the containers that have changed are recreated, which helps speed up development time.
You can also configure the order of containers, and their respective dependencies, so they are bootstrapped in the correct order.
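To give you an idea of the workflow, these are the standard commands you would run against a docker-compose.yml file (we will build one later in this article):

docker-compose up -d (builds any missing images, then creates and starts the composed containers in the background)
docker-compose up -d --build (rebuilds the images and recreates only the containers that have changed)
docker-compose down (stops and removes the composed containers)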
Hands On
Now that we understand the concepts and ideas behind containers, it’s time to get our hands dirty. I’ll guide you, step by step, through the process of setting up a container environment.
Not a Windows developer? Don’t worry!
I will be demoing all of this in a Windows environment, using an ASP.NET Core application and a Redis container. Don’t worry though: many of the concepts I will go through in this article apply equally to non-Windows developers, such as “must know” Docker commands, running your first container, composing containers and orchestration tools. The examples serve to illustrate concepts that are valid across platforms.
Requirements
Before we can continue any further, you will need to install Docker. The installation is simple. If you’re on Windows 10, like me, you will need the Professional edition (or higher), as Docker relies on Hyper-V to run on Windows.
Let’s Go!
Open a terminal and execute docker to verify that everything was installed correctly. If it was, you should see a list of the available commands.
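A couple of extra checks that can be handy at this point:

docker --version (prints the version of the Docker client you just installed)
docker info (prints details about the Docker daemon and confirms the client can reach it)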
Docker Pillars
OK, we’re on our way. Before we get too deep, it’s important to go over the big picture: what I refer to as the Docker Pillars. Knowing these will help you understand how everything is interconnected:
Pillar 1: Daemon
The Daemon can be considered the brain of the whole operation. It is responsible for managing the lifecycle of containers and for interacting with the operating system, and it does all of the heavy lifting every time a command is executed.
Pillar 2: Client
The Client is a wrapper around the Daemon’s HTTP API: the commands you type are translated into API requests that the Daemon interprets and executes.
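To make that relationship concrete: on a Linux or macOS host you can skip the client and talk to the Daemon’s HTTP API directly over its Unix socket (on Windows the Daemon listens on a named pipe instead), for example:

curl --unix-socket /var/run/docker.sock http://localhost/containers/json (returns the same information as ‘docker ps’, as JSON)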
Pillar 3: Registries
Registries are responsible for storing images. They can be public or private and are available from different providers (Azure has its own container registry, for example). Docker is configured to look for images on Docker Hub by default.
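For example, an image name without a registry prefix resolves to Docker Hub, while a fully qualified name points at a specific registry (the Azure registry address below is just a placeholder):

docker pull redis (pulled from Docker Hub, since no registry is specified)
docker pull myregistry.azurecr.io/redis (pulled from a private Azure Container Registry instance)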
Must know commands
Any interaction with Docker is done via the command line, and there is a whole set of commands and subcommands for managing the daemon, containers, images and registries.
Below is a list of some of the key commands you need to know to get started; a few example invocations follow the list:
Help: Displays a full listing of commands at any point. Can also be used to get help on a particular command.
Command: docker help
Search: Searches for container images that were published by other people and are publicly available on Docker Hub (or whatever registry you are using).
Command: docker search
Run: Runs a Docker container image.
Command: docker run
Ps: Lists the containers that are running on your local daemon.
Command: docker ps
Pull: Pulls a docker image from the hub to your local repository.
Command: docker pull
Commit: Creates a new image from a container’s changes (e.g. changing an environment variable or the exposed port(s) of a container).
Command: docker commit
Push: Pushes a docker image to a registry.
Command: docker push
Inspect: Return low-level information on Docker objects. Very handy if you need to know things such as network information of a container.
Command: docker inspect
Kill: Kills a running container. Requires the running container ID that can be found by running docker ps.
Command: docker kill
Restart: Restarts a running container. Requires the running container ID that can be found by running docker ps.
Command: docker restart
Stop: Stops a running container. Requires the running container ID that can be found by running docker ps.
Command: docker stop
Start: Starts a stopped container. Requires the container ID, which can be found by running docker ps -a.
Command: docker start
Stats: Displays a live stream of the container(s) resource usage statistics.
Command: docker stats
Tag: Tags a container image. Useful for binding an image to a registry name and version before pushing.
Command: docker tag
Remove: Removes one or more containers.
Command: docker rm
Remove Images: Removes one or more images.
Command: docker rmi
Login: Used to authenticate credentials with a registry.
Command: docker login
Build: Builds a container image from a dockerfile.
Command: docker build
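Here are a few typical invocations of these commands, using the public Redis image as an example (the container name ‘my-redis’ is just illustrative):

docker search redis (searches Docker Hub for Redis images)
docker pull redis (downloads the latest Redis image to your local machine)
docker run -d -p 6379:6379 --name my-redis redis (starts Redis in the background, mapping container port 6379 to host port 6379)
docker ps (shows the running container and its ID)
docker stats my-redis (streams the container’s resource usage)
docker stop my-redis (stops the container)
docker rm my-redis (removes the stopped container)
docker rmi redis (removes the image from your local machine)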
Your first container
Let’s start by running a simple container. Open a new command prompt/terminal window and execute the following commands in order:
docker pull hello-world
docker images
docker run hello-world
You should get a “Hello from Docker!” message after running this.
docker ps -a
docker rm [container-id] (Use the container ID that was shown in the output of the previous command)
docker rmi hello-world
Let’s see what we’ve accomplished with the above:
- First, we’ve pulled a pre-built image from the Docker Hub registry (remember, I mentioned before that this comes pre-configured as the default registry when you install Docker).
- After that, we’ve simply listed the images that are available on our local machine.
- Then we’ve run the container, which exited by itself right after the Hello World message was printed.
- Then we’ve listed all Docker container processes so we could get the unique container ID that is generated every time you run a container.
- Then we’ve removed the container by using its unique ID.
- And last but not least, we’ve removed the image that we pulled from the registry to our local machine.
Let’s make things a bit more interesting by setting up an ASP.NET Core project with Visual Studio, Redis Cache, Docker and Docker Compose.
Again, if you are not a Microsoft developer it doesn’t matter as we will be going through concepts that are broadly applicable to all software development.
Visual Studio integration
I want to take you through a simple ASP.NET Core project running on Docker, so we can have a look at the concept of a ‘dockerfile’.
Start by opening a new instance of Visual Studio and creating a new ASP.NET Core Web Application project.
After that, you will be prompted to choose the type of template you want to use to create your new application. Choose ‘Web Application (Model-View-Controller)’ and make sure you have the ‘Enable Docker Support’ option checked.
When you choose ‘Enable Docker Support’, Visual Studio will create your new Web Application project with a dockerfile. The dockerfile is a declarative file that tells Docker how to build and run customized images. You will also notice that a command prompt/terminal will open up and a ‘docker pull’ will be executed to pull the microsoft/aspnetcore image. This is the base image that will be used in our new application and in our ‘dockerfile’ which we will inspect in detail later on in this article.
A docker-compose yaml file will also be generated as part of the bootstrapping of our new solution. While this is not strictly necessary at first, please don’t remove it: Visual Studio debugs your application using this file. We will be running the project as a single container instance for now, but later in this article we will configure Docker Compose to use a Redis container image as part of our solution.
If you look at the top bar of your Visual Studio instance, you will notice that the Debug button is displaying a ‘Docker’ label. Hit that button and your project will run in debug mode on top of a new container.
After a few seconds, a new browser tab should open up showing the default ASP.NET Core welcome page.
Open a new command prompt/terminal and execute ‘docker ps’. You should see your new container in the list.
At this point, we have Visual Studio debugging an ASP.NET Core application inside a container. Debugging works just as you would expect when running a project with IIS Express. This shows why Docker is gaining so much traction, and why big companies like Microsoft are investing heavily in supporting it.
If you execute ‘docker images’ in your command prompt/terminal, you will see that our new image was built and is ready to run on any docker daemon. At this point, you can even push this image to a container registry.
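If you decide to do that, the flow looks roughly like this (the registry address is a placeholder for your own, and the local image name will match your project):

docker login myregistry.azurecr.io
docker tag dockerdemo myregistry.azurecr.io/dockerdemo:1.0
docker push myregistry.azurecr.io/dockerdemo:1.0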
The ‘dockerfile’ in detail
If you inspect our new ASP.NET Core application project in detail, you will see a file named ‘dockerfile’, without any extension, under the root of its directory. This is a declarative file that instructs the docker daemon how to build a custom container image: it starts from a base image and is followed by a set of commands that we can configure.
Let’s inspect what a few of these commands are doing (a sketch of a complete dockerfile follows the list):
- The FROM command tells Docker what base image should be used. Docker images can be created on top of existing images so we don’t have to set things up over and over again. We also benefit from this by inheriting all updates and patches applied to the base image.
- The WORKDIR command sets the working directory inside the container.
- The EXPOSE command opens port 80 of the container. This is the container’s internal port, which can be mapped to a different port on the Docker host (port mapping).
- The RUN command executes a command inside the container while the image is being built.
- The COPY command copies files from the host machine into the container (if you need to copy a file from a URL, the ADD command should be used instead).
- The ENTRYPOINT command sets the default command that is executed when the container starts up.
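Putting it all together, here is a sketch of what a dockerfile for a project like ours can look like (simplified on purpose; the publish folder and the assembly name below are illustrative, and the exact file Visual Studio generates will differ):

FROM microsoft/aspnetcore:2.0
WORKDIR /app
EXPOSE 80
COPY ./publish .
ENTRYPOINT ["dotnet", "DockerDemo.dll"]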
Composing a container dependency to our application
Our application might be composed of several containers, and these containers might depend on other containers. To demonstrate how Docker Compose can help in such situations, let’s add a Redis container to our Docker Compose file and indicate that our ASP.NET Core application depends on it.
Here is my updated docker-compose.yml file:
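Something along these lines (a sketch; the service name, build context and dockerfile path should match your own project):

version: '3'

services:
  dockerdemo:
    image: dockerdemo
    build:
      context: .
      dockerfile: DockerDemo/Dockerfile
    depends_on:
      - redis
  redis:
    image: redis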
By adding a new service, which I’ve named ‘redis’, we have now instructed the docker-compose command to run both containers. The ‘depends_on’ property configures the dockerdemo container to be started only after the redis container has been started.
If you run the project again from within Visual Studio, you will notice that there are now two running containers when you execute ‘docker ps’. This might take a while, as the new Redis image will be pulled from the registry.
At this point, the containers can send network traffic among themselves, and a Redis connection pointing to the ‘redis’ host would work just fine.
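For example, with the StackExchange.Redis client (shown here as a minimal sketch, not the project’s actual caching code), the connection string is simply the Compose service name:

using System;
using StackExchange.Redis;

// "redis" resolves to the Redis container declared in docker-compose.yml
var connection = ConnectionMultiplexer.Connect("redis");
var db = connection.GetDatabase();

// Round-trip a value to prove the two containers can talk to each other
db.StringSet("greeting", "Hello from a composed container!");
Console.WriteLine(db.StringGet("greeting"));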
You can still use Redis from a cloud provider for your production environments, and tweak which dependencies are composed for each environment you run your ‘docker-compose’ command in by using the ‘docker-compose.override.yml’ file. In addition to that, you could also run a SQL Express container for your development environment to speed up the onboarding of new developers to your team.
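The override mechanism is simple: whenever you run ‘docker-compose up’, Compose automatically layers docker-compose.override.yml on top of docker-compose.yml, so development-only dependencies can live in the override file. As an illustration, here is a sketch that adds a development database (I’m using the Linux SQL Server image as an example, and the password is a placeholder):

version: '3'

services:
  sql:
    image: microsoft/mssql-server-linux
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Your_password123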
So, is it necessary to use Docker Compose whenever we use Docker? Nope. But I believe it is important for you to know what it is and what you can achieve with it. Whether you are putting together the parts that compose your application development environment, such as cache and database dependencies, or architecting a microservices solution composed of several smaller applications that work in conjunction, Docker Compose is on your side whenever you need to build independent containers that interact with each other.
Orchestration Tools
In the next article in this series, we will be looking at Docker Orchestrators, which allow us to automate the deployment, scaling and management of containerized applications. There are a few different options out there.
Want to get a head start? Here is a list of the most popular Docker Orchestrators:
- Docker Swarm
- Kubernetes
- Amazon ECS
- Azure Container Service
- Google Container Engine
Conclusion
So there you go. Docker is a fast and consistent way to accelerate and automate the shipping of software. It saves developers from having to set up and configure multiple development environments each time they test or deploy code. That time can then be spent developing quality software instead. Like all great solutions, it is simple and intuitive.
Assuming I did a good job, this article should have gotten you interested in Docker while guiding you through those first steps. I would love to hear your thoughts on both the article and Docker itself. For example: how do you feel it is evolving and being used by the community?