Background
Recently I had the opportunity to present at the Microsoft Azure Singapore meetup group on the topic of scaling multi-container apps using Docker Swarm and Azure Container Service (ACS). The demo used a small application built with .NET Core: an ASP.NET Core MVC frontend and an ASP.NET Core Web API which the frontend calls to display some key-value data. The MVC app and the Web API are containerized using Docker and combined into a multi-container application using Docker Compose. The images are published to Docker Hub. To showcase the scaling features, I used Docker Swarm as the orchestrator and Azure Container Service to provision the infrastructure in the Azure cloud.
All the source code is available in the GitHub repo at https://github.com/yuvarajac/dotnet-2017. All the steps and the different commands used during the demo are documented at https://github.com/yuvarajac/dotnet-2017/blob/master/DotNet2017/AzureDockerSwarm.md
The slide deck and the videos used during the talk are available at http://bit.ly/AzureMeetupSwarmDemo
Visualization problem with Azure Container Service and Docker Swarm
While demonstrating the scaling features as well as the distribution of containers across the different nodes in the Swarm cluster, I had used the Docker CLI. Although it gives all the detailed information, it can be difficult for people new to Docker and Azure to follow. This is where an easy way to visualize the state of the cluster resources can be quite helpful. Unfortunately there is no default visualizer for a Docker Swarm cluster. Unlike Kubernetes, which provides a very detailed and user-friendly UI, with Docker Swarm we need to depend on external tools for visualization.
I am aware of two such options: Visualizer and Portainer. Both visualizers have a prerequisite that they need to run on a manager node in the Swarm cluster. This is where the first problem comes in. As of this writing, the Docker Swarm orchestrator in Azure Container Service uses the legacy standalone Swarm. As highlighted in the Azure documentation on container management, we need to use an alternative approach such as the open source ACS Engine to get a cluster running in Swarm mode. I was able to get the Swarm cluster up and running quickly using the deployment template generated by ACS Engine.
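Roughly, the ACS Engine workflow is to generate ARM templates from a cluster definition file and then deploy them with the Azure CLI. The sketch below is only an outline, not the exact commands from the demo: swarmmode.json, the resource group name swarm-demo, the location and the <dnsPrefix> output folder are placeholder values, and the generate subcommand assumes a reasonably recent ACS Engine build.
# Generate ARM templates from a cluster definition that uses "orchestratorType": "SwarmMode"
acs-engine generate swarmmode.json
# Deploy the generated templates with the Azure CLI
az group create --name swarm-demo --location southeastasia
az group deployment create \
  --resource-group swarm-demo \
  --template-file _output/<dnsPrefix>/azuredeploy.json \
  --parameters @_output/<dnsPrefix>/azuredeploy.parameters.json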
Visualizer service waiting for scheduling
I verified that the Swarm cluster was indeed running with the integrated Swarm mode by running the following commands
docker version
docker node ls
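As an additional quick check (not part of the original demo), docker info can also report the Swarm state directly; the format string below assumes the standard docker info output fields.
# Prints "active" when the engine is part of a Swarm-mode cluster
docker info --format '{{ .Swarm.LocalNodeState }}'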
In standalone mode, the docker node ls command does not show any output, whereas in the integrated mode the status of all the nodes in the cluster was listed successfully. With the confirmation that I was running the Swarm cluster in integrated mode, I set about starting the Visualizer service. The following command creates a service which runs a container on the master node
docker service create \
  --name=viz \
  --publish=8080:8080/tcp \
  --constraint=node.role==manager \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer
As can be seen, we specify a constraint that the node where the container is placed should have the manager role, and we use the image named dockersamples/visualizer. The service was created successfully, but when I tried to browse the URL on port 8080 nothing happened. The first step of debugging is to check the status of all services on the Swarm cluster using the docker service ls command. This resulted in the following output
We can see that the replicas are shown as 0/1. This means that the desired state is 1 instance but 0 instances are running. I tried the other commands usually used for debugging, like docker logs, docker inspect, etc. All the commands seemed to be running fine. While googling I came across an issue reported for ACS Engine which linked this behaviour to the availability of the master node being set to Pause.
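As a side note (this was not part of my original debugging session), docker service ps is another handy command here: it lists the tasks of a service along with their current state, and with --no-trunc it also shows the scheduler's full error message for a pending task.
# Show the tasks of the viz service, including the full error for tasks that cannot be scheduled
docker service ps --no-trunc viz
In this case, though, the underlying cause shows up in the node status itself.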
Let's go back and look at this by running the docker node ls command
Notice the highlighted part. The master node is in the Ready state, but its Availability is shown as Pause. This is the default availability set by ACS Engine. What this means is that the master node is used only for administrative activities; no containers will be scheduled to run on it. This is the root cause of our problem. It is a bit like a chicken-and-egg problem: we are requesting the visualizer to be scheduled on the master node, while the ACS Engine default setting prevents any container from being scheduled on the master.
Change Availability setting for Master node
Luckily, the availability of the master node can be overridden using the command
docker node update --availability=active swarmm-master-coredemo-0
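To confirm that the change took effect, we can inspect the node; this is just a quick check I find useful (the format string assumes the standard node inspect schema) and should now print active.
# Print the availability of the master node; expected value after the update is "active"
docker node inspect swarmm-master-coredemo-0 --format '{{ .Spec.Availability }}'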
After this, the master node can be used to schedule containers. However, if we check the status of the visualizer service again, it is still waiting to be scheduled. The reason is that the availability setting does not impact existing containers; it only comes into the picture for newly created ones. So we need to recreate the service: remove it first using the command docker service rm viz, then run the same docker service create command as above, and we should have a running visualizer container instance on the master node. We can verify this by running the docker service ls command
As expected, we now have 1/1 replicas running. Let's also verify the output in the browser by accessing the agent URL on port 8080.
The visualizer shows all the nodes and the containers running on them. We can see the visualizer itself running on the manager node on the rightmost side, while the workers are running two containers each. Along with the containers, Visualizer also shows the RAM and operating system details for each node. It is very basic and good for quick checks.
Portainer
Compared to Visualizer, Portainer offers a much more feature-rich UI. We can get details about the cluster, nodes, services, containers, images, etc. from different dimensions. The Portainer container can be started using the following command
docker service create \
  --name portainer \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer \
  -H unix:///var/run/docker.sock
Note that we are publishing port 9000, and that the -H unix:///var/run/docker.sock argument points Portainer at the local Docker engine via the bind-mounted socket. Once the container is up and running it can be accessed by hitting the agent URL. If everything went fine, we should see output similar to the screenshot
The Portainer dashboard is quite feature rich, with links to all the underlying Swarm-related options like Swarm, Stacks, Services, Containers, Images, Volumes and Networks. Let's look at a few of the options in detail.
The Swarm section displays the details of all the nodes that form the cluster. This is similar to the output of the docker node ls command, but apart from the node names it enhances the details by showing the CPU, RAM and Engine version.
Services shows the list of services and their details like the images, replicas, published ports, etc. This is basically the output of the docker service ls command that we had seen earlier using the CLI.
Stacks shows the list of stacks deployed using the docker stack deploy command.
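For context, deploying a stack to the Swarm is a single command that points at a Compose file. The sketch below is illustrative: docker-compose.yml and the stack name coredemo are assumed names, not necessarily the exact ones used in the demo.
# Deploy (or update) a stack named coredemo from a Compose file
docker stack deploy --compose-file docker-compose.yml coredemo
# List the stacks currently deployed on the cluster
docker stack ls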
Conclusion
As the saying goes, “A picture is worth a thousand words”, and I think having a visualizer for a distributed system like Docker Swarm helps a lot in understanding the features much faster. I am a big fan of using command line tools, as they make you aware of all the options and flags compared to using drag and drop tools. Looking at both Visualizer and Portainer, I would prefer to go ahead with Portainer for now. I hope this post was helpful in improving your knowledge of Docker Swarm and ACS. I did not show all the commands that were executed to get all the services up and running; the objective of this post was to focus on visualization of Swarm and not so much on the other aspects. If you are interested in the exact commands, refer to the GitHub page
https://github.com/yuvarajac/dotnet-2017/blob/master/DotNet2017/AzureDockerSwarmMode.md
Keep in mind that to use these visualizers we need to build the Swarm cluster using ACS Engine (Swarm mode) rather than the default Azure Container Service Swarm orchestrator.
Until next time Happy Programming.