Deploy a Dockerized Application to Azure Kubernetes Service using Azure YAML Pipelines 3 – Terraform Deployment Pipeline

Posted by Graham Smith on April 7, 2020

This is the third post in a series where I'm taking a fresh look at how to deploy a dockerized application to Azure Kubernetes Service (AKS) using Azure Pipelines after having previously blogged about this in 2018. The list of posts in this series is as follows:

  1. Getting Started
  2. Terraform Development Experience
  3. Terraform Deployment Pipeline (this post)
  4. Running a Dockerized Application Locally
  5. Application Deployment Pipelines
  6. Telemetry and Diagnostics

In this post I take a look at how to create infrastructure in Azure using Terraform in a deployment pipeline with Azure Pipelines. If you want to follow along you can clone / fork my repo here, and if you haven't already done so please take a look at the first post to understand the background, what this series hopes to cover and the tools mentioned in this post. I'm not covering Azure Pipelines basics here; if they are of interest take a look at this video and/or this series of videos. I'm also assuming familiarity with Azure DevOps.

There are quite a few moving parts to configure when moving from command-line Terraform to running it in Azure Pipelines, so here's the high-level list of activities:

  • Create a Variable Group in Azure Pipelines as a central place to store variables and secrets that can be used across multiple pipelines.
  • Configure a self-hosted build agent to run on a local Windows machine to aid troubleshooting.
  • Create storage in Azure to act as a backend for Terraform state.
  • Generate credentials for deployment to Azure.
  • Create variables in the variable group to support the Terraform resources that need variable values.
  • Configure and run an Azure Pipeline from the megastore-iac.yml file in the repo.

Create a Variable Group in Azure Pipelines

In your Azure DevOps project (mine is called megastore-az) navigate to Pipelines > Library > Variable Groups and create a new variable group called megastore. Ensure that Allow access to all pipelines is set to on. Add a variable named project_name, give it a meaningful value that is likely to be globally unique and contains no punctuation, and click Save.
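For orientation, this is roughly how the group is then consumed from a pipeline. This is a minimal sketch rather than the repo's actual file (which is walked through later in this post); the echo step is purely illustrative:

```yaml
# Reference the megastore variable group so its values (and secrets) become
# available to the pipeline via $(variable_name) macro syntax.
variables:
- group: megastore

steps:
# Illustrative step only: prints the project_name value from the group.
- script: echo Project name is $(project_name)
  displayName: 'echo project name'
```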

Configure a Self-Hosted Agent to Run Locally

While a Microsoft-hosted windows-latest agent will certainly be satisfactory for running Terraform pipeline jobs, such agents can be a little slow and there is no way to peek in and see what's happening in the file system, which can be a nuisance if you are trying to troubleshoot a problem. Additionally, because a brand new instance of an agent is created for each request, Microsoft-hosted agents mask the issue of files hanging around from previous jobs. This can catch you out if you move from a Microsoft-hosted agent to a self-hosted agent, but it's something you will certainly catch and fix if you start with a self-hosted agent. The instructions for configuring a self-hosted agent can be found here. The usual scenario is installing the agent on a server, but the agent works perfectly well on a local Windows 10 machine as long as all the required dependencies are installed. The high-level installation steps are as follows:

  1. Create a new Pool in Azure DevOps called Local at Organization Settings > Pipelines > Agent Pools > Add pool.
  2. On your Windows machine create a folder such as C:\agents\windows.
  3. Download the agent and unzip the contents.
  4. Copy the extracted contents to C:\agents\windows, i.e. this folder will contain two folders and two *.cmd files.
  5. From a command prompt run .\config.cmd.
  6. You will need to supply your Azure DevOps server URL and a previously created PAT (personal access token).
  7. Use windows-10 as the agent name; for this local instance I recommend not running as a service or at startup (the pool snippet after this list shows how a pipeline targets this agent).
  8. The agent can be started by running .\run.cmd at a command prompt, after which you should see something like this:
  9. After the agent has finished running a pipeline job you can examine the files in C:\agents\windows\_work to understand what happened and assist with troubleshooting any issues.
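To run the pipeline on this agent rather than on a Microsoft-hosted one, the pipeline's pool section points at the Local pool. A minimal sketch is below; the demand on the agent name is optional and pins jobs to the windows-10 agent specifically (the repo's megastore-iac.yml may reference the pool slightly differently):

```yaml
# Use the self-hosted pool created in step 1; the demand is optional and
# pins jobs to the agent named windows-10.
pool:
  name: Local
  demands:
  - Agent.Name -equals windows-10
```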

Create Backend Storage in Azure

The Azure backend storage can be created by applying the Terraform configuration in the backend folder that is part of the repo. The configuration outputs three key/value pairs which are required by Terraform and which should be added as variables to the megastore variable group. The backend_storage_access_key should be set as a secret with the padlock.
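For reference, these three values are consumed when Terraform is initialised in the pipeline. Here's a sketch of what that step can look like; backend_storage_access_key is the variable named above, while the other two variable names and the state file key are assumptions for illustration, so check the outputs in the backend folder for the exact names:

```yaml
# Initialise Terraform against the Azure storage backend, passing the backend
# values in from the megastore variable group. Only backend_storage_access_key
# is named in the post; the other names and the state key are illustrative.
- script: >
    terraform init
    -backend-config="storage_account_name=$(backend_storage_account_name)"
    -backend-config="container_name=$(backend_container_name)"
    -backend-config="key=terraform.tfstate"
    -backend-config="access_key=$(backend_storage_access_key)"
  displayName: 'terraform init'
  workingDirectory: $(Pipeline.Workspace)/iac
```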

Generate Credentials for Deployment to Azure

There are several pieces of information required by Terraform which can be obtained as follows (assumes you are logged in to Azure via the Azure CLI—run az login if not):

  1. Run az account list --output table which will return a list of Azure accounts and corresponding subscription Ids.
  2. Run az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SubscriptionId", replacing SubscriptionId with the appropriate Id from step 1.
  3. From the resulting output create four new variables in the megastore variable group as follows:
    1. azure_subscription_id = SubscriptionId from step 1
    2. azure_client_id = appId value from the result of step 2
    3. azure_tenant_id = tenant value from the result of step 2
    4. azure_client_secret = password value from the result of step 2, which should be set as a secret with the padlock (see the note on secret variables after this list)
  4. Remember to save the variable group after entering the new values.
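A point worth knowing about the padlocked values: secret variables are substituted wherever $(macro) syntax is used in a step, but unlike ordinary variables they are not exposed to scripts as environment variables unless mapped explicitly. A hypothetical illustration (the env mapping is only needed if a script reads the secret from its environment):

```yaml
# Ordinary variables such as azure_client_id are available to scripts as
# environment variables automatically (name upper-cased); padlocked secrets
# such as azure_client_secret must be mapped explicitly via env.
- script: echo client id is %AZURE_CLIENT_ID%
  displayName: 'illustrate secret variable mapping'
  env:
    AZURE_CLIENT_SECRET: $(azure_client_secret)
```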

Create Terraform Variable Values in the megastore Variable Group

In the previous post where we ran Terraform from the command-line we supplied variable values via dev.tfvars, a file that isn't committed to version control and is only available for local use. These variable values need creating in the megastore variable group as follows, obviously substituting in the appropriate values:

  • aks_client_id = "service principal id for the AKS cluster"
  • aks_client_secret = "service principal secret for the AKS cluster"
  • asql_administrator_login_name = "Azure SQL admin name"
  • asql_administrator_login_password = "Azure SQL admin password"
  • asql_local_client_ip_address = "local ip address for your client workstation"

Remember to save the variable group after entering the new values.
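Together with the azure_* credentials from the previous section, these values are what the pipeline passes to Terraform on the command line. The sketch below assumes the Terraform configuration declares input variables with matching names and wires the credential values into the azurerm provider block; the repo's megastore-iac.yml and iac configuration are the source of truth for the exact names:

```yaml
# Plan and apply the configuration, passing credentials and variable values
# in from the megastore variable group. Variable names are illustrative.
- script: >
    terraform plan
    -var "azure_subscription_id=$(azure_subscription_id)"
    -var "azure_client_id=$(azure_client_id)"
    -var "azure_client_secret=$(azure_client_secret)"
    -var "azure_tenant_id=$(azure_tenant_id)"
    -var "project_name=$(project_name)"
    -var "aks_client_id=$(aks_client_id)"
    -var "aks_client_secret=$(aks_client_secret)"
    -var "asql_administrator_login_name=$(asql_administrator_login_name)"
    -var "asql_administrator_login_password=$(asql_administrator_login_password)"
    -var "asql_local_client_ip_address=$(asql_local_client_ip_address)"
    -out=tfplan && terraform apply tfplan
  displayName: 'terraform plan and apply'
  workingDirectory: $(Pipeline.Workspace)/iac
```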

Configure an Azure Pipeline

The pipeline folder in the repo contains megastore-iac.yml, which holds all the instructions needed to automate the deployment of the Terraform resources in an Azure Pipeline. The pipeline is configured in Azure DevOps as follows:

  1. From Pipelines > Pipelines click New pipeline.
  2. In Connect choose GitHub and authenticate if required.
  3. In Select, find your repo, possibly by selecting to show All repositories.
  4. In Configure choose Existing Azure Pipelines YAML file and in Path select /pipeline/megastore-iac.yml and click Continue.
  5. From the Run dropdown select Save.
  6. At the Run Pipeline screen use the vertical ellipsis to show its menu and then select Rename/move:
  7. Rename the pipeline to megastore-iac and click Save.
  8. Now click Run pipeline > Run.
  9. If the self-hosted agent isn't running then from a command prompt navigate to the agent folder and run .\run.cmd.
  10. Hopefully watch with joy as the megastore Azure infrastructure is created through the pipeline.

Analysis of the YAML File

So what exactly is the YAML file doing? Here's an explanation for some of the schema syntax with reference to a specific pipeline run and the actual folders on disk for that run (the working folder number, 3 in the paths below, will vary between runs but otherwise everything else should be the same):

  • name: applies a custom build number
  • variables: specifies a reference to the megastore variable group
  • pool: specifies a reference to the Local agent pool and specifically to the agent we created called windows-10
  • jobs/job/workspace: ensures that the agent working folders are cleared down before a new job starts
  • script/'output environment variables': dumps all the environment variables to the log for diagnostic purposes
  • publish/'publish iac artefact': takes the contents of the git checkout at C:\agents\windows\_work\3\s\iac and packages them into an artifact called iac.
  • download/'download iac artefact': downloads the iac artifact to C:\agents\windows\_work\3\iac.
  • powershell/'create file with azurerm backend configuration': we need to tell Terraform to use Azure for the backend by providing a backend configuration. This configuration can't be present when working locally, so instead it's created dynamically through PowerShell, with some formatting commands to keep the inline script structurally valid YAML (a sketch follows this list).
  • script/'terraform init': initialises Terraform in C:\agents\windows\_work\3\iac using Azure as the backend through credentials supplied on the command line from the megastore variable group.
  • script/'terraform plan and apply': performs a plan and then an apply on the configuration in C:\agents\windows\_work\3\iac using the credentials and variables passed in on the command line from the megastore variable group.
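As for the dynamically created backend file itself, it only needs to declare that the azurerm backend is in use, since the actual values arrive via -backend-config arguments at init time. A hypothetical version of that step (the repo's PowerShell will differ in its formatting details):

```yaml
# Write a minimal backend.tf into the downloaded iac folder so that terraform
# init knows to use the azurerm backend; the backend values themselves are
# supplied separately via -backend-config arguments.
- powershell: |
    $backend = @'
    terraform {
      backend "azurerm" {
      }
    }
    '@
    Set-Content -Path backend.tf -Value $backend
  displayName: 'create file with azurerm backend configuration'
  workingDirectory: $(Pipeline.Workspace)/iac
```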

Final Thoughts

Although this seems like a lot of configuration—and it probably is—the ability to use pipelines as code feels like a significant step forward compared with GUI tasks. Although the YAML can seem confusing at first, once you start working with it you soon get used to it, and I now much prefer it to GUI tasks.

One question which I'm still undecided about is where to place some of the variables needed by the pipeline. I've used a variable group exclusively as it feels better for all variables to be in one place, and for variables used across different pipelines this is definitely where they should be. However, variables that are only used by one pipeline could live with the pipeline itself, as this is a fully supported feature (editing the pipeline in the browser lights up the Variables button where variables for that pipeline can be added). On the other hand, having variables scattered in different places could be confusing, hence my uncertainty. Let me know in the comments if you have a view!

That's it for now. Next time we look at running the sample application locally using Visual Studio and Docker Desktop.

Cheers -- Graham