Background
In an earlier post on Provisioning Docker Swarm Cluster in Azure, I demonstrated how to use the ACS Engine to create the Swarm cluster. In that post I manually entered all the parameters based on which the cluster was provisioned. In this post I will demonstrate how we can eliminate the manual steps and automate the process using an Azure Resource Manager template. We will also use the Azure CLI to perform the provisioning. In the end we will have provisioned exactly the same resources, but with much less effort.
The following steps will help us achieve our objectives:
- Login to Azure CLI
- Create resource group
- Use ARM template to add resources to the resource group
- Verify the provisioned resources
Login to Azure CLI
There are different ways in which we can provision resources in Azure. The simplest one is the Azure portal with all its visual elements. For those who prefer to work with the command line, there is the Azure CLI, a cross-platform command line interface. As of this writing, version 2.0 is the latest. If you do not have the Azure CLI installed, follow the steps to get it installed.
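For reference, these are the commonly documented install commands for macOS and for Debian/Ubuntu; check the official installation guide for other platforms:

# macOS, via Homebrew
brew update && brew install azure-cli

# Debian / Ubuntu, via Microsoft's install script
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash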
Log in to the Azure subscription using the following command from your preferred terminal
az login
This will give you a link and a code to authenticate. Once you are authenticated, the list of subscriptions associated with the login will be displayed on the screen. In my case I had multiple subscriptions.
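If you need to see the subscriptions again later, the same list is available at any time with

az account list --output table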
I need to select one of these subscriptions. In this case I will choose the third option, Azure Pass. We do that by setting the subscription parameter of the account as
az account set --subscription "Azure Pass"
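To confirm which subscription is now active, we can run

az account show --output table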
Create resource group
A resource group in Azure is a logical grouping of related resources. We need to assign the resource group to one of the locations, which maps to an Azure region. In my case Southeast Asia is the nearest region. I will create a resource group named swarmresourcegroup in the Southeast Asia region using the command
az group create \
  --name swarmresourcegroup \
  --location "Southeast Asia"
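We can confirm that the resource group exists before moving on:

az group show --name swarmresourcegroup --output table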
Use ARM template to add resources
Azure Resource Manager (ARM) templates provide us an easy way to describe different resources using a JSON formatted file. The structure of a template consists of the following (a bare skeleton is shown after the list):
- schema for versioning
- content version
- list of parameters required by the template
- list of variables used within the template
- Resources created by the template
- outputs, which are used to access the provisioned resources
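Put together, that skeleton looks roughly like this, with the sections left empty purely for illustration:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}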
This fits in very nicely with DevOps practices. The file describes our infrastructure as code in a declarative manner and can be stored in source control.
It lists all the resources which will form part of the stack, their types and the related properties.
We describe each resource that we wish to create as part of this template. As we can see on line 18, we can specify a default value of 2 for the agentCount parameter. There are multiple parameters specified in the template. The template also describes the relationships between different resources.
In the earlier post, we had specified the values for the parameters in the Azure portal. In fact the portal uses this very file to populate the different drop-down values. The template file is a mix of static and dynamic content. We can see examples of dynamic content from line 276 onwards, where functions like concat, variables and parameters are used to derive dynamic values for the resources.
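To illustrate the idea, a parameter definition and a dynamically derived resource name look something like the fragments below. The agentCount parameter and its default of 2 are from the template as mentioned above; the variable name in the second fragment is hypothetical.

"agentCount": {
  "type": "int",
  "defaultValue": 2,
  "metadata": { "description": "Number of Swarm agent nodes" }
}

"name": "[concat(variables('clusterNamePrefix'), '-agent-', copyIndex())]"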
Coming back to parameters, we will store all the parameters that can be overridden in a dedicated file named parameters.json.
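As a rough sketch, the file has the shape below. Apart from agentCount and sshRSAPublicKey, the parameter names are illustrative placeholders; use the names defined in azuredeploy.json and paste your own public key in place of the placeholder value.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "agentCount": { "value": 2 },
    "adminUsername": { "value": "azureuser" },
    "sshRSAPublicKey": { "value": "ssh-rsa AAAA... <your public key>" }
  }
}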
This way we have the complete list of resources as well as the runtime parameter values required to instantiate them. The parameters file is just a set of key-value pairs, with the parameter name as the key and its value as the value part. Please note that you will need to specify the correct value for sshRSAPublicKey on line 39. All that is needed now is to use these two files to trigger the deployment process.
az group deployment create \
  --name "coredemo" \
  --resource-group "swarmresourcegroup" \
  --template-file azuredeploy.json \
  --parameters parameters.json
The command is self-explanatory. We use coredemo as the name of the deployment and associate the deployment with the swarmresourcegroup using the resource-group flag. The template-file parameter specifies the name of the resource template, and finally the parameters are provided using parameters.json. As always, depending on the number of resources requested, this process can take about 5-10 minutes. On successful completion of the deployment, we will have the 15 resources provisioned which are shown in the screenshot at the beginning of this post.
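To verify the provisioned resources from the command line instead of the portal, we can list everything in the resource group:

az resource list --resource-group swarmresourcegroup --output table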
Conclusion
As you can see, we can automate the whole process of provisioning using two simple commands. Every time I need to create the Docker Swarm cluster from now on, I just use these commands. I can avoid a lot of manual mistakes which can happen when I copy & paste the values in the portal. Storing the resource template and the parameters makes my provisioning step a repeatable process. In this post I demonstrated the provisioning using ARM templates and Azure CLI. You can also use the same template and parameters file with PowerShell instead of Azure CLI. I will demonstrate that separately in a later post. Any guesses what command I need to delete the resource group once I am done with my testing?
az group delete --name swarmresourcegroup
This one command is enough to delete all the resources under the swarmresourcegroup. ARM resource templates can save you a lot of time if you need to create the same set of resources across multiple environments. You can easily create copies of environments like Dev / Integration / QA Test / Preproduction / Production. Parameters help you customize the resources. For example you can have 2 agent nodes running in the Dev & QA environments while Preproduction & Production can have 10 agent nodes. In such cases all that is required is an environment specific parameters file.
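As a sketch of that idea (the file and resource group names here are hypothetical), the same template could be deployed into a production environment with its own parameters file:

az group deployment create \
  --name "coredemo-prod" \
  --resource-group "swarmresourcegroup-prod" \
  --template-file azuredeploy.json \
  --parameters parameters.prod.json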
Another scenario where the parameters could be useful is to have different processing power for the machines in the Dev & QA environments. These could be less performant machines, while preproduction and production can have hardware with more firepower.
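Assuming the template exposes a VM size parameter such as agentVMSize (the parameter name and the sizes below are illustrative), the environment specific parameter files would differ only in entries like these:

In parameters.dev.json:  "agentVMSize": { "value": "Standard_D2_v2" }
In parameters.prod.json: "agentVMSize": { "value": "Standard_D4_v2" }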
Hope you found this information useful. Until next time code with passion and strive for excellence.