Deploy a Dockerized Application to Azure Kubernetes Service using Azure YAML Pipelines 1 – Getting Started

Posted by Graham Smith on April 7, 2020

In 2018 I wrote a series of blog posts about deploying a dockerized ASP.NET Core application to Azure Kubernetes Service (AKS), finishing up with this post in which, for various reasons, I abandoned the Deploy to Kubernetes GUI tasks used by what was then VSTS in favour of refactored Bash scripts to deploy Kubernetes resources.

The 2018 series didn't start out with infrastructure as code (IaC), and a lot has changed with the tooling and technology since then, so in my next few posts I'm going to revisit this topic to see how things look in 2020. At the moment the blog series is looking like this:

  1. Getting Started (this post)
  2. Terraform Development Experience
  3. Terraform Deployment Pipeline
  4. Running a Dockerized Application Locally
  5. Application Deployment Pipelines
  6. Telemetry and Diagnostics

As with my 2018 series of posts, I'm not suggesting that the ideas I'm presenting are the best or only way to do things. Rather, the intention is that the concepts offer a potential learning opportunity and a stepping stone to figuring out how you might approach this in a real-world scenario. Even if you don't need to use any of this in production, I think there's a great deal of fun and satisfaction to be had from gluing all of the bits together.

The Big Picture

The dockerized application that I'll be deploying to AKS consists of the following components:

  • An ASP.NET Core web application, which sends messages to a
  • NATS message queue service, which stores messages to be retrieved by a
  • .NET Core message queue handler application, which saves messages to an
  • Azure SQL Database
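As a rough sketch of how these components might wire together locally, a docker-compose file could look something like the following. The image names, ports and environment variable names here are illustrative assumptions rather than the project's actual values:

```yaml
version: '3.7'
services:
  message-queue:
    image: nats:latest                      # official NATS image
    ports:
      - "4222:4222"                         # default NATS client port
  web-app:
    image: sample/web-app:dev               # hypothetical image name
    ports:
      - "5000:80"
    environment:
      - MessageQueueUrl=nats://message-queue:4222
    depends_on:
      - message-queue
  message-handler:
    image: sample/message-handler:dev       # hypothetical image name
    environment:
      - MessageQueueUrl=nats://message-queue:4222
      # connection string for the dev Azure SQL Database, supplied via a
      # local .env file or shell variable rather than baked into the file
      - ConnectionStrings__Default=${DEV_DB_CONNECTION_STRING}
    depends_on:
      - message-queue
```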

The lifecycle of this application and the infrastructure it runs on is as follows:

  • All Azure resources are managed by Terraform using Azure Pipelines. These include a Container Registry, an AKS cluster, an Azure SQL Database server and its databases, and Application Insights instances. (A sketch of running Terraform from a pipeline follows this list.)
  • The AKS cluster is configured with two namespaces called qa and prd, which form a basic CI/CD pipeline. (A namespace manifest sketch also follows this list.)
  • The Azure SQL Database server is configured with three databases called dev, qa and prd.
  • Application components (except the Azure SQL Database) run locally in a dev environment using docker-compose. Messages are saved to the dev Azure SQL Database.
  • Deployments of application components (except the Azure SQL Database) are managed separately using dedicated Azure Pipelines. Tagged images are stored in the Container Registry, and new versions are deployed first to the qa namespace and then to the prd namespace on the AKS cluster.
  • Telemetry and diagnostics are collected by three separate Application Insights instances, one for each of the three environments (dev, qa and prd).
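To make the first bullet a little more concrete, here is a minimal sketch of running Terraform from an Azure Pipeline using plain script steps. The working directory and variable names are assumptions for illustration; the real pipeline is the subject of a later post:

```yaml
steps:
- script: terraform init
  workingDirectory: terraform
  displayName: Terraform init
- script: terraform plan -out=tfplan
  workingDirectory: terraform
  displayName: Terraform plan
  env:
    # service principal credentials supplied as secret pipeline variables;
    # the init and apply steps need the same variables (omitted for brevity)
    ARM_CLIENT_ID: $(ArmClientId)
    ARM_CLIENT_SECRET: $(ArmClientSecret)
    ARM_SUBSCRIPTION_ID: $(ArmSubscriptionId)
    ARM_TENANT_ID: $(ArmTenantId)
- script: terraform apply -auto-approve tfplan
  workingDirectory: terraform
  displayName: Terraform apply
```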
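As for the qa and prd namespaces in the second bullet, one possible way to define them is as a pair of Kubernetes manifests, although they could equally be created with kubectl create namespace or by Terraform:

```yaml
# namespaces.yaml -- creates the two environments on the AKS cluster
apiVersion: v1
kind: Namespace
metadata:
  name: qa
---
apiVersion: v1
kind: Namespace
metadata:
  name: prd
```

Applying this file with kubectl apply is idempotent, so it is safe to run on every deployment.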

The overall aim of this series is to show how the big pieces of the jigsaw fit together, and I'm intentionally not covering any of the lower-level details commonly associated with CI/CD pipelines, such as testing. Maybe some other time!

What You Can Learn by Following This Blog Series

Some of the technologies I'm using in this blog series are vast in scope and I can only hope to scratch the surface. However, this is a list of some of the things that you can learn about if you follow along with the series:

  • The great range of tools we now have that support running Linux on Windows via WSL 2.
  • An example of the Terraform developer inner loop experience and how to extend that to running Terraform in a deployment pipeline using Azure Pipelines.
  • Assistance with debugging Azure Pipelines by running self-hosted agents (both Windows and Linux flavours) on a Windows 10 machine.
  • Creating Azure Pipelines as code using YAML files, including the use of templates to aid reusability and deployment jobs to target an environment (a sketch follows this list).
  • How to avoid using Swiss Army Knife-style Azure Pipelines tasks and instead use native commands tuned exactly to a situation's requirements.
  • How to segment telemetry and diagnostics for each stage of the CI/CD pipeline using separate Application Insights resources.
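To give a flavour of the last couple of bullets, here is a minimal sketch of a YAML deployment job that targets an environment and runs a native kubectl command from a plain script step instead of a GUI-style task. The stage, environment and manifest path names are assumptions for illustration:

```yaml
stages:
- stage: DeployToQA
  jobs:
  - deployment: DeployWebApp
    displayName: Deploy web app to qa
    pool:
      vmImage: ubuntu-latest
    environment: qa                         # hypothetical Azure Pipelines environment
    strategy:
      runOnce:
        deploy:
          steps:
          # a native command tuned to the situation rather than a
          # Swiss Army Knife-style task
          - script: kubectl apply -f k8s/web-app.yaml --namespace qa
            displayName: Apply Kubernetes manifest
```

The same job shape can be repeated for a prd stage, with the steps factored out to a template so that only the namespace parameter changes.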

Tools You Will Need / Want

There is a long list of tools needed for this series and getting everything installed and configured is quite an exercise. However, you may have some of these already, and it can also be great fun getting the newer stuff working. Some of the tools can be installed with Chocolatey, and it's definitely worth checking this out if you haven't already. Generally, I've listed the tools in the order you will need them, so you don't need to install everything before working through the next couple of posts in the series. Everything in the list should be installed in Windows 10; there are some tools that need installing in the Ubuntu distro, but I cover that in the relevant post.

That's it for this post. Next time we start working with Terraform at the command line.

Cheers -- Graham