Deploy a Dockerized Application to Azure Kubernetes Service using Azure YAML Pipelines 3 – Terraform Deployment Pipeline
This is the third post in a series where I'm taking a fresh look at how to deploy a dockerized application to Azure Kubernetes Service (AKS) using Azure Pipelines after having previously blogged about this in 2018. The list of posts in this series is as follows:
- Getting Started
- Terraform Development Experience
- Terraform Deployment Pipeline (this post)
- Running a Dockerized Application Locally
- Application Deployment Pipelines
- Telemetry and Diagnostics
In this post I take a look at how to create infrastructure in Azure using Terraform in a deployment pipeline with Azure Pipelines. If you want to follow along you can clone or fork my repo here, and if you haven't already done so please take a look at the first post to understand the background, what this series hopes to cover and the tools mentioned in this post. I'm not covering Azure Pipelines basics here; if that's of interest take a look at this video and/or this series of videos. I'm also assuming familiarity with Azure DevOps.
There are quite a few moving parts to configure when moving from command-line Terraform to running it in Azure Pipelines, so here's the high-level list of activities:
- Create a Variable Group in Azure Pipelines as a central place to store variables and secrets that can be used across multiple pipelines.
- Configure a self-hosted build agent to run on a local Windows machine to aid troubleshooting.
- Create storage in Azure to act as a backend for Terraform state.
- Generate credentials for deployment to Azure.
- Create variables in the variable group to support the Terraform resources that need variable values.
- Configure and run an Azure Pipeline from the megastore-iac.yml file in the repo.
Create a Variable Group in Azure Pipelines
In your Azure DevOps project (mine is called megastore-az) navigate to Pipelines > Library > Variable Groups and create a new variable group called megastore. Ensure that Allow access to all pipelines is set to on. Add a variable named project_name, give it a meaningful value that is likely to be globally unique and contains no punctuation, and click Save.
Configure a Self-Hosted Agent to Run Locally
While a Microsoft-hosted windows-latest agent is certainly satisfactory for running Terraform pipeline jobs, it can be a little slow and there is no way to peek in and see what's happening in the file system, which can be a nuisance if you are trying to troubleshoot a problem. Additionally, because a brand new instance of an agent is created for each new request, Microsoft-hosted agents mask the issue of files hanging around from previous jobs. This can catch you out if you move from a Microsoft-hosted agent to a self-hosted agent, but is something that you will certainly catch and fix if you start with a self-hosted agent. The instructions for configuring a self-hosted agent can be found here. The usual scenario is that you install the agent on a server, but the agent works perfectly well on a local Windows 10 machine as long as all the required dependencies are installed. The high-level installation steps are as follows (a sketch of how a pipeline targets this agent follows the list):
- Create a new Pool in Azure DevOps called Local at Organization Settings > Pipelines > Agent Pools > Add pool.
- On your Windows machine create a folder such as C:\agents\windows.
- Download the agent and unzip the contents.
- Copy the contents of the unzipped folder to C:\agents\windows, i.e. this folder will contain two folders and two *.cmd files.
- From a command prompt run .\config.cmd.
- You will need to supply your Azure DevOps server URL and a previously created personal access token (PAT).
- Use windows-10 as the agent name; for this local instance I recommend not running the agent as a service or at startup.
- The agent can be started by running .\run.cmd at a command prompt, after which you should see it listening for jobs.
- After the agent has finished running a pipeline job you can examine the files in C:\agents\windows\_work to understand what happened and assist with troubleshooting any issues.
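For reference, here's a minimal sketch of how a pipeline targets this specific agent rather than just any agent in the pool (the demands syntax below assumes the standard Agent.Name capability):

```yaml
# Minimal sketch: run on the 'windows-10' agent in the 'Local' pool.
pool:
  name: Local                        # the self-hosted pool created above
  demands:
    - Agent.Name -equals windows-10  # pin jobs to the named agent
```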
Create Backend Storage in Azure
The Azure backend storage can be created by applying the Terraform configuration in the backend folder that is part of the repo. The configuration outputs three key/value pairs which are required by Terraform and which should be added as variables to the megastore variable group. The backend_storage_access_key variable should be set as a secret using the padlock.
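To see how these values get consumed, here's a sketch of a terraform init step that passes them in as backend configuration. The names backend_storage_account_name and backend_container_name, and the state file key, are assumptions for illustration; match them to the actual outputs of the backend configuration:

```yaml
# Sketch: initialise Terraform against the Azure backend using values
# from the megastore variable group. Only backend_storage_access_key is
# a name confirmed above; the other names are assumptions.
- script: >
    terraform init
    -backend-config="storage_account_name=$(backend_storage_account_name)"
    -backend-config="container_name=$(backend_container_name)"
    -backend-config="access_key=$(backend_storage_access_key)"
    -backend-config="key=terraform.tfstate"
  displayName: 'terraform init'
```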
Generate Credentials for Deployment to Azure
There are several pieces of information required by Terraform which can be obtained as follows (this assumes you are logged in to Azure via the Azure CLI; run az login if not), with a sample of the step 2 output shown after the list:
- Run az account list --output table which will return a list of Azure accounts and corresponding subscription IDs.
- Run az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SubscriptionId", replacing SubscriptionId with the appropriate ID from step 1.
- From the resulting output create four new variables in the megastore variable group as follows:
- azure_subscription_id = SubscriptionId from step 1
- azure_client_id = appId value from the result of step 2
- azure_tenant_id = tenant value from the result of step 2
- azure_client_secret = password value from the result of step 2, which should be set as a secret with the padlock
- Remember to save the variable group after entering the new values.
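For reference, the output of step 2 looks something like this (the values here are placeholders and the exact fields can vary slightly between Azure CLI versions):

```json
{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "azure-cli-2020-06-01-00-00-00",
  "password": "<generated-secret>",
  "tenant": "00000000-0000-0000-0000-000000000000"
}
```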
Create Terraform Variable Values in the megastore Variable Group
In the previous post where we ran Terraform from the command line we supplied variable values via dev.tfvars, a file that isn't committed to version control and is only available for local use. These variable values need to be created in the megastore variable group as follows, obviously substituting in the appropriate values (the sketch after the list shows how they reach Terraform):
- aks_client_id = "service principal id for the AKS cluster"
- aks_client_secret = "service principal secret for the AKS cluster"
- asql_administrator_login_name = "Azure SQL admin name"
- asql_administrator_login_password = "Azure SQL admin password"
- asql_local_client_ip_address = "local IP address for your client workstation"
Remember to save the variable group after entering the new values.
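Here's a minimal sketch of how these values might then be passed to Terraform on the command line in a pipeline step; it assumes the Terraform variable names mirror the variable group names, which you should verify against the configuration in the repo:

```yaml
# Sketch: feed variable group values to terraform plan as -var arguments.
# Assumes Terraform variable names match the variable group names.
- script: >
    terraform plan
    -input=false
    -out=tfplan
    -var="project_name=$(project_name)"
    -var="aks_client_id=$(aks_client_id)"
    -var="aks_client_secret=$(aks_client_secret)"
    -var="asql_administrator_login_name=$(asql_administrator_login_name)"
    -var="asql_administrator_login_password=$(asql_administrator_login_password)"
    -var="asql_local_client_ip_address=$(asql_local_client_ip_address)"
  displayName: 'terraform plan'
```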
Configure an Azure Pipeline
The pipeline folder in the repo contains megastore-iac.yml which contains all the instructions needed to automate the deployment of the Terraform resources in an Azure Pipeline. The pipeline is configured in Azure DevOps as follows:
- From Pipelines > Pipelines click New pipeline.
- In Connect choose GitHub and authenticate if required.
- In Select, find your repo, possibly by selecting to show All repositories.
- In Configure choose Existing Azure Pipelines YAML file and in Path select /pipeline/megastore-iac.yml and click Continue.
- From the Run dropdown select Save.
- At the Run Pipeline screen use the vertical ellipsis to show its menu and then select Rename/move.
- Rename the pipeline to megastore-iac and click Save.
- Now click Run pipeline > Run.
- If the self-hosted agent isn't running then from a command prompt navigate to the agent folder and run .\run.cmd.
- Hopefully watch with joy as the megastore Azure infrastructure is created through the pipeline.
Analysis of the YAML File
So what exactly is the YAML file doing? Here's an explanation of some of the schema syntax with reference to a specific pipeline run and the actual folders on disk for that run (the number shown will vary between runs but everything else should be the same), followed by an abridged sketch of the file itself:
- name: applies a custom build number
- variables: specifies a reference to the megastore variable group
- pool: specifies a reference to the Local agent pool and specifically to the agent we created called windows-10
- jobs/job/workspace: ensures that the agent working folders are cleared down before a new job starts
- script/'output environment variables': dumps all the environment variables to the log for diagnostic purposes
- publish/'publish iac artefact': takes the contents of the git checkout at C:\agents\windows\_work\3\s\iac and packages them into an artifact called iac.
- download/'download iac artefact': downloads the iac artifact to C:\agents\windows\_work\3\iac.
- powershell/'create file with azurerm backend configuration': we need to tell Terraform to use Azure for the backend through a configuration file. This file can't be present when working locally, so instead it's created dynamically through PowerShell, with some formatting commands to keep the YAML structurally correct.
- script/'terraform init': initialises Terraform in C:\agents\windows\_work\3\iac using Azure as the backend through credentials supplied on the command line from the megastore variable group.
- script/'terraform plan and apply': performs a plan and then an apply on the configurations in C:\agents\windows\_work\3\iac using the credentials and variables passed in on the command line from the megastore variable group.
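Putting the pieces together, here's an abridged sketch of the overall shape of megastore-iac.yml. Step bodies are simplified, and the build number format, job name and PowerShell details are assumptions for illustration; the file in the repo is the authoritative version:

```yaml
# Abridged sketch of megastore-iac.yml; names and step bodies simplified.
name: $(Build.DefinitionName)_$(Date:yyyyMMdd)$(Rev:.r)  # custom build number (format assumed)

variables:
  - group: megastore                   # reference the megastore variable group

pool:
  name: Local                          # self-hosted pool
  demands:
    - Agent.Name -equals windows-10    # target the specific agent

jobs:
  - job: iac
    workspace:
      clean: all                       # clear down working folders before the job starts
    steps:
      - script: set
        displayName: 'output environment variables'  # dump env vars for diagnostics

      - publish: $(Build.SourcesDirectory)/iac       # package the iac folder from the checkout...
        artifact: iac                                # ...as an artifact called iac

      - download: current                            # download the iac artifact;
        artifact: iac                                # lands in $(Pipeline.Workspace)/iac

      # The azurerm backend block can't live in version control (it would break
      # local workflows), so write it out dynamically; one way of doing this:
      - powershell: |
          Set-Content -Path backend.tf -Value @'
          terraform {
            backend "azurerm" {
            }
          }
          '@
        displayName: 'create file with azurerm backend configuration'
        workingDirectory: $(Pipeline.Workspace)/iac

      - script: terraform init ...                   # -backend-config flags as sketched earlier
        displayName: 'terraform init'
        workingDirectory: $(Pipeline.Workspace)/iac

      - script: |
          terraform plan -input=false -out=tfplan -var="project_name=$(project_name)" ...
          terraform apply -input=false tfplan
        displayName: 'terraform plan and apply'
        workingDirectory: $(Pipeline.Workspace)/iac
```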
Final Thoughts
Although this seems like a lot of configuration (and it probably is), the ability to use pipelines as code feels like a significant step forward compared with GUI tasks. Although the YAML can seem confusing at first, you soon get used to it once you start working with it, and I now much prefer it to GUI tasks.
One question I'm still undecided about is where some of the variables needed by the pipeline should live. I've used a variable group exclusively as it feels better for all variables to be in one place, and for variables used across different pipelines this is definitely where they should be. However, variables that are only used by one pipeline could live with the pipeline itself, as this is a fully supported feature (editing the pipeline in the browser lights up the Variables button, where variables for that pipeline can be added). On the other hand, having variables scattered everywhere could be confusing, hence my uncertainty. Let me know in the comments if you have a view!
That's it for now. Next time we look at running the sample application locally using Visual Studio and Docker Desktop.
Cheers -- Graham