Versioning .NET Core Assemblies in Azure DevOps isn’t Straightforward (and Probably Won’t be in Other CI/CD Tools Either)

Posted by Graham Smith on June 26, 2019

As part of ongoing work to enhance an existing Azure DevOps CI/CD pipeline that builds and deploys an ASP.NET Core application I thought I'd spend a pleasant 5 minutes versioning the .NET Core assemblies with the pipeline's build number. A couple of hours and 20+ test builds later...

Out of the box, creating a new build in Azure Pipelines using the ASP.NET Core template in the classic editor results in five tasks of which four are concerned with dotnet commands:

A quick look at the documentation for dotnet build, followed by this awesome blog post that explains the dizzying array of versioning options, made it pretty clear that adding /p:Version=$(Build.BuildNumber) as a command line parameter to dotnet build should suffice as a good starting point. Except it didn't, with File version and Product version stubbornly remaining at their default values:

I established that /p:Version= works fine from a command line, so what's going on? After a bit of research and testing I discovered that unless you tell it otherwise dotnet publish (and dotnet test for that matter) compiles the application before doing its thing of publishing files to a folder. The way the Azure Pipelines tasks are configured means that dotnet publish is effectively cancelling out the effect of dotnet build. (And since dotnet test also cancels out the effect of dotnet build, it leaves me wondering what the point of including dotnet build is in the first place.) As part of this research I also discovered that build, test and publish all perform a restore unless told otherwise, again making me wonder what the point of the Restore task is. So out of the box the four .NET Core tasks result in a lot of duplication and, for someone like me, head-scratching as to why assembly versioning doesn't work.

So based on a few hours of testing here is what I think the arguments of the different tasks need to be (for visual tasks, or as YAML in the sketch that follows the lists) to avoid duplication and implement assembly versioning.

Firstly, if you want to include an explicit Restore task:

  • build = --configuration $(BuildConfiguration) --no-restore /p:Version=$(Build.BuildNumber)
  • test = --configuration $(BuildConfiguration) --no-build
  • publish = --configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory) --no-build

Secondly, if you want to omit the explicit Restore and Build tasks:

  • test = --configuration $(BuildConfiguration)
  • publish = --configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory) /p:Version=$(Build.BuildNumber)
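As a rough guide, the first option translates to YAML along these lines. This is a minimal sketch: the project globs, display names and publish settings are assumptions rather than anything from my actual pipeline.

```yaml
steps:
- task: DotNetCoreCLI@2
  displayName: Restore
  inputs:
    command: restore
    projects: '**/*.csproj'
- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    command: build
    projects: '**/*.csproj'
    arguments: '--configuration $(BuildConfiguration) --no-restore /p:Version=$(Build.BuildNumber)'
- task: DotNetCoreCLI@2
  displayName: Test
  inputs:
    command: test
    projects: '**/*Tests/*.csproj'
    arguments: '--configuration $(BuildConfiguration) --no-build'
- task: DotNetCoreCLI@2
  displayName: Publish
  inputs:
    command: publish
    publishWebProjects: true
    arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory) --no-build'
# For the second option, drop the Restore and Build tasks and move /p:Version=$(Build.BuildNumber) to the publish arguments.
```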

In the first version build creates the binaries which are then used by test and publish, with the --no-build switch implicitly setting the --no-restore flag. I haven't tested it but that presumably means that --configuration $(BuildConfiguration) for test and publish is redundant. In the second version test and publish both create their own sets of binaries. (Is that the right thing to do from a purist CI/CD perspective? Maybe, maybe not.)

I did my testing on a Microsoft-hosted build agent and whilst it felt like both options above were quicker than the default settings I can't be certain without rigorous testing on a self-hosted agent with no other load. Either way though, it feels good to have optimised the tasks and I finally got assembly versioning working. Are there other optimisations? Have I missed something? Please leave a comment!

Cheers -- Graham

Azure DevOps Hidden Gems #2 – Run Build or Release Tasks According to Custom Conditions

Posted by Graham Smith on June 24, 2019

I've been working with Azure DevOps quite a lot recently (having used its predecessors for many years) and I'm constantly amazed by features I never knew existed or which I vaguely knew existed but hadn't fully appreciated. In this blog post series I'm attempting to shine a light on some of these hidden gems for the benefit of others. The full list of posts is here and if you have any suggestions for other posts please leave a comment!

Imagine this scenario: you have a code branch on which you want to run an all-singing, all-dancing build packed full of tasks, and another branch where you only want to run a subset of those tasks. Cloning the build and stripping out unwanted tasks to create a second build is the answer, right? Not necessarily! It turns out that most tasks can be set to run conditionally, according to criteria that you specify.

To configure this feature (I'm illustrating using visual tasks but there is a YAML equivalent) open the task and head over to Control Options. For Run this task select Custom conditions and then enter your conditions in Custom condition:

In the build task example above the task will only run if the build is succeeding and the build is running against the master branch. For any other branches it will be skipped.
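For reference, the condition described above looks something like the following. It's shown here as it would appear on a YAML task; the task itself is just an illustration and isn't from the build in question.

```yaml
- task: DotNetCoreCLI@2
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
```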

To understand the full capabilities of this fantastic feature you should take a look at the Conditions overview page and then the Expressions page, which has a full guide to the conditions syntax. I'll blog soon about a specific scenario where this feature is exactly what is needed to avoid creating a second build and the potential maintenance issue that causes.

Hope this helps!

Cheers -- Graham

Azure DevOps Hidden Gems #1 – Use Secure Files in a Build or Release Pipeline

Posted by Graham Smith on June 19, 2019

I've been working with Azure DevOps quite a lot recently (having used its predecessors for many years) and I'm constantly amazed by features I never knew existed or which I vaguely knew existed but hadn't fully appreciated. In this blog post series I'm attempting to shine a light on some of these hidden gems for the benefit of others. The full list of posts is here and if you have any suggestions for other posts please leave a comment!

If you've created a Build or Release pipeline in Azure DevOps you've probably used the Variables feature to store either plain text or secret variables that can be passed in to the build or release:

This works well for plain text, but what if you have more complicated requirements, such as secrets contained in a file that can't simply be copied as plain text in to a standard variable? Sure, there are solutions external to Azure DevOps that you could use (Azure Key Vault for example) but you could end up using a sledgehammer to crack a nut. No matter though, as Azure DevOps provides a solution through Secure Files. You can find this by navigating to Pipelines > Library and then clicking the Secure Files tab:

In the screenshot above I've used + Secure file to upload a file called config (which in this particular case is a file that contains credentials for connecting to an Azure Kubernetes Service cluster). Secure files are made available in the build or pipeline through the use of the Download Secure File task, which places the file in the $(Agent.TempDirectory) directory of the Azure Pipelines Agent. The file can then be used on a command line where a parameter is expecting a file, for example:
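For illustration, usage in a Bash task might look something like the sketch below; the secret name, key and namespace are placeholders.

```bash
# The secure file lands in $(Agent.TempDirectory), which a Bash script can reference as $AGENT_TEMPDIRECTORY
kubectl --kubeconfig $AGENT_TEMPDIRECTORY/config \
  create secret generic some-secret \
  --from-literal=some-key=some-value \
  --namespace=prd
```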

This is obviously a very specific example (an incomplete extract of a Bash script that is using kubectl to create secrets on a Kubernetes cluster) but hopefully you get the idea of how secret files can be used. Once the build or release has completed the file gets deleted—a good thing on a self-hosted agent although Microsoft-hosted agents are destroyed anyway after use.

Hope this helps!

Cheers -- Graham

A Better Way of Deploying a Dockerized Application to Azure Kubernetes Service Using Azure Pipelines

Posted by Graham Smith on January 21, 2019

Throughout 2018 I wrote a mini blog post series aimed at providing specific and detailed guidance on how to create a CI/CD pipeline using VSTS/Azure DevOps to deploy a dockerized ASP.NET Core application to Azure Kubernetes Service (AKS):

Whilst the resulting solution works I wasn't entirely happy with several aspects and I've spent a great deal of time thinking and tinkering to come up with something better. In this blog post I explain what I wasn't happy with and how my new solution addresses most of my concerns. You don't necessarily need to read the posts above as I'm going to provide some context, but it will probably make things much clearer if you are planning to implement any of my suggestions.

The sample application I've been using to deploy to Kubernetes consists of the following components:

  • ASP.NET Core web application, that sends messages to a
  • NATS message queue service, which pushes messages to a
  • .NET Core message queue handler application, which saves messages to an
  • Azure SQL database

Apart from the database all the components run as docker containers. The container images are built in an Azure Pipelines build pipeline and the images pushed to an Azure Container Registry (ACR). An Azure Pipelines release pipeline then deploys the necessary services and deployments to AKS which causes the images to be pulled from ACR and instantiated as containers inside pods. My release pipeline consists of two environments: dat (developer automated test where automated acceptance tests might take place) and prd (production). That's just arbitrary of course and in a live scenario the pipeline can have whatever environments are needed.

My sample application is called MegaStore and you can find the code on GitHub here. In the rest of this post I explain my areas of concern and how I addressed them.

Azure Pipelines Tasks

Whilst there is no doubt that Azure Pipelines Tasks are great for quickly building a pipeline and definitely make it easier for those less familiar with the technology behind a task to get started, I now see some tasks as more of a curse than a blessing. I've particularly taken issue with tasks that wrap a command line application (such as docker or kubectl), which turns the task into something of a Swiss Army Knife. Why have I taken issue? There are several reasons, some specific to the Swiss Army Knife variety and some applicable to tasks in general:

  • There is often a need to set mandatory fields in ‘Swiss Army Knife' tasks even though those parameters will not be used by the chosen sub-command. Where there are multiple instances of the same task in use this becomes very tedious and is a potential maintenance problem when something changes. (Yes, I know tasks can be cloned but this doesn't make me any happier.)
  • Tasks by their nature only allow you to do what they have been coded to do and you can sometimes find yourself in a blind alley. For example, at the time of writing the only way I know of updating an existing Kubernetes ConfigMap without deleting it first and re-creating it is with a piped command, for example:
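    A hedged illustration of the pattern, with placeholder names:

```bash
# Generate the ConfigMap manifest with --dry-run and pipe it to kubectl apply,
# so an existing ConfigMap is updated rather than having to be deleted and re-created
kubectl create configmap some-config --from-literal=SOME_KEY=some-value \
  --dry-run -o yaml | kubectl apply -f -
```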

    Running a command such as this isn't possible with the current Deploy to Kubernetes Azure DevOps task, which is very limiting.
  • Speaking of command lines, my next issue is that tasks abstract you from what is actually going on behind the scenes. For simple tasks such as copying files this might be fine, however I've become frustrated at the way tasks such as Docker or Deploy to Kubernetes ‘hide' what they are doing, and the way that makes fine-tuning that little bit harder. Additionally, for me it's also a lost learning opportunity—a missed chance to learn the full syntax of a command because the task is constructing it on your behalf.
  • Another big issue is that tasks such as Docker or Deploy to Kubernetes offer nothing in the way of code reusability, and break the DRY principle in multiple dimensions (ie there is scope for repetition within an environment and also across environments). To illustrate, the release pipeline in my 2018 mini blog series consisted of no fewer than 30 Deploy to Kubernetes tasks across two environments, resulting in a great deal of repetition.
  • Finally, the use of tasks in the current version of Azure Pipelines releases means that you don't have your ‘code' under proper version control. I know there are changes coming that will help to address this, and whilst they will be welcome I think there is an opportunity to do better.

So what's my solution to all this? Very simply, get rid of multiple Swiss Army Knife tasks and implement Bash scripts running from a single Bash task. I started off by using the Inline script feature of Bash tasks but this didn't help with getting code in to version control and I also quickly realised that there were big code reusability opportunities to be had across environments by using File Path scripts. By using Bash scripts stored in the repo I solved all the issues mentioned above and in the case of the release portion of the pipeline I reduced the number of tasks from 15 in each environment to two! What follows are the techniques I used to achieve this for the Docker builds and Kubernetes deployments.

Converting Docker builds to use a Bash script was reasonably straightforward so I'll start by discussing the first problem I encountered when converting Deploy to Kubernetes tasks to Bash scripts, which was how to authenticate to Kubernetes. Tasks rely on the creation of a Kubernetes service connection (Project Settings > Service connections) and I'd been using the Kubeconfig version which involves pasting in the contents of the Kubeconfig file that gets created (if you run the appropriate command) when you set up an AKS cluster:

By tracing the logging output of the Deploy to Kubernetes tasks I could see what was happening: a Kubeconfig file was being saved to disk and referenced in a kubectl command using the --kubeconfig parameter that points to the file on disk. I could successfully pass the file in from an Artifact as a proof of concept but how to store the Kubeconfig contents securely and create the file dynamically? The obvious choice was a secret variable however that didn't work because it destroyed the Kubeconfig formatting which is important in the re-hydrated file on disk. After a lot of fiddling I finally turned to LoECDA, who are super-responsive via Twitter, and very quickly the suggestion came back to try using Secure files (Pipelines > Library > Secure files). This worked perfectly: a file is first uploaded to the Secure files area and is then available for use via the Download Secure File task. The file is downloaded in to a temporary folder which can be referenced as the $AGENT_TEMPDIRECTORY variable in a Bash script. Great!

Next up was sorting out the practicalities of using Bash scripts in Bash tasks. I created a deployment (dep) folder in the repo to hold the scripts and then arranged for this folder to be available as an Artifact created directly from the GitHub repo:

I used VS Code to create the Bash files, however in order for a file to be executed as a Bash script it needs its permissions set to make it executable (chmod +x). This needs to be done from a Linux environment and there are several possibilities for achieving this, including Windows Subsystem for Linux if you are on Windows 10. I chose to go with Azure Cloud Shell, which can be configured to run either a Bash or a PowerShell command line in the cloud! Once that was configured it was a case of cloning my repo, navigating to the dep folder and running chmod +x some-filename.sh. There's no GUI in Azure Cloud Shell so it does involve using git commands to push the changes back to GitHub. If this is new to you then git add *, git commit -m "Commit message" and git push origin master are what you need. To authenticate you'll likely need to use a personal access token unless you go to the bother of setting up SSH. It gets to be a bit of a pain having to enter credentials every time you want to push to GitHub, however the git config credential.helper store command will save credentials across Azure Cloud Shell sessions to make life easier.

Finding out what commands needed to be executed in the Bash scripts required a bit of detective work, and involved a combination of understanding what the task was attempting to accomplish and then looking at the build or release logs to see the actual output. With the basic command figured out this exercise offered the opportunity to do a bit of fine tuning. For example, I'd been tagging my docker images with the latest tag but it turns out that this isn't a great idea for release pipelines. By writing the actual command myself I was able to get exactly what I wanted.
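As an illustration, a build script along these lines tags the image with the build number rather than latest. The image name, registry variables and Dockerfile path below are all placeholders, not the actual script from my repo.

```bash
# Hedged sketch of a docker build-and-push script; names and paths are placeholders
docker build -t $ACR_NAME.azurecr.io/megastore-web:$BUILD_BUILDNUMBER \
  -f MegaStore.Web/Dockerfile .
docker login $ACR_NAME.azurecr.io -u $ACR_USER -p $1   # registry password passed in as a secret argument
docker push $ACR_NAME.azurecr.io/megastore-web:$BUILD_BUILDNUMBER
```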

I describe how I organised the Bash scripts to move away from a monolithic pipeline below. In this section I want to describe the tips and tricks I used to actually write the Bash scripts. Generally, the scripts make heavy use of variables to make them applicable to all release environments, however there are some essential things to know:

  • Variables created as part of Azure DevOps pipelines can be used as variables (ie passed in to a script), however with the exception of secrets they are also created as environment variables which are available directly in scripts. This means that a variable created as MyVariable is available as $MYVARIABLE directly in a Bash script (the environment variable name is upper-cased and any periods are converted to underscores to ensure valid syntax). See the sketch after this list for an example.
  • Variables created as part of Azure DevOps pipelines can have the same name as long as they are scoped to a different environment. So you can have two variables called MyVariable with different values for each environment and simply refer to $MYVARIABLE in the Bash script, ie no need to pass $MYVARIABLE in as a parameter to the script for different environments.
  • As mentioned above, secrets are not created as environment variables and must be passed in to a script via the Arguments field, and in the script a variable is declared to accept the incoming parameter. Important: as of the time of writing a secret needs to be passed in to the Arguments field as $(MYSECRET), ie with parentheses around the actual parameter name. If you omit the parentheses the secret is not passed in. A non-secret parameter doesn't require parentheses and I have queried whether this is a bug here.
  • Later in this post I explain how I break up a monolithic pipeline in to multiple pipelines, which results in the same variables being needed in different pipelines. By using Variable Groups I was able to avoid repeated variable declarations and manage many variables from just one location.
  • In addition to variables that are created manually, built-in variables are also available as environment variables in the script. The ones I've used are $AGENT_TEMPDIRECTORY to define the download location of the Kubeconfig file from the Secure files area, $RELEASE_ENVIRONMENTNAME to refer to the environment (ie dat or prd) and also $BUILD_BUILDNUMBER used to tag docker images with a unique build number in the build process and then to refer to them by their unique name in the release. However, there are many built-in variables available to use—see here for details but remember that for use in Bash scripts you should change text to uppercase and must replace periods with an underscore.
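To make that concrete, here is a minimal hedged sketch of a deployment script showing how the different kinds of variables arrive; everything other than the built-in variables is illustrative.

```bash
#!/bin/bash
# Minimal sketch only; names other than the built-in variables are illustrative

# A secret passed in via the Arguments field as $(MySecret) arrives as a positional parameter
DB_PASSWORD=$1

# Non-secret pipeline variables and built-in variables are available as environment variables
echo "Deploying build $BUILD_BUILDNUMBER to environment $RELEASE_ENVIRONMENTNAME"

# Use the kubeconfig downloaded to the agent's temp directory by the Download Secure File task
kubectl --kubeconfig $AGENT_TEMPDIRECTORY/config \
  create secret generic db-password --from-literal=DB_PASSWORD=$DB_PASSWORD \
  --namespace=$RELEASE_ENVIRONMENTNAME \
  --dry-run -o yaml | kubectl --kubeconfig $AGENT_TEMPDIRECTORY/config apply -f -
```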

I'm not a Bash scripting expert and I'm sure my scripts would be considered very rudimentary. The great thing though is that you can do whatever you like now the code is a script. Possibilities might include adding error handling or refactoring further using functions. There's potential to really go to town here.

Monolithic Pipeline

At the time of writing this article in early 2019 there aren't that many blog post examples of implementing a CI/CD pipeline to deploy an application to Kubernetes. Furthermore, the posts that do exist tend, not unreasonably, to use a simplistic application scenario to illustrate the concepts. Typically, this involves deploying the whole application as part of a single pipeline, and indeed this is the route I took with my 2018 blog post mini series. However, it quickly became apparent to me that this is an unsatisfactory arrangement for two main reasons:

  • Just one change to one of the application components would cause all the components of the application to be redeployed (or more correctly the parts of the application that have their docker images built by the pipeline).
  • A change to the Kubernetes configuration would also trigger a redeployment of all of the application components. Sometimes this is necessary but often it's not.

These issues arise because the trigger for the build component of the pipeline is set as the root of the GitHub repo, so if anything changes in the repo a build is triggered. Clearly not an optimal situation.

My solution to this problem is to divide the monolithic pipeline in to multiple pipelines that correspond to the individual components of the overall application. Then with a bit of refactoring of the codebase it's possible to use a very nifty feature of Azure Pipelines that allows a build to be triggered from one or more specific folders (or files for that matter) in the repo, ie a much more granular solution.
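For reference, the YAML equivalent of this folder-based triggering looks something like the following. I'm using the classic editor in this post, and the branch and folder names below are hypothetical.

```yaml
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - src/MegaStore.Web/*
```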

One complication that I had to cater for is that the pipeline isn't just building docker images and marshalling them in to the Kubernetes cluster: additionally, the pipeline is configuring Kubernetes elements such as Namespaces, Secrets and ConfigMaps.

Through the use of Bash scripts as described above the number of tasks needed is drastically reduced: just one Bash task for the builds and two tasks for releases (a Download Secure File task to copy the kubeconfig file to disk and a Bash task to host the bash script). All scripts are Namespace/environment aware.

In terms of Azure Pipelines build and release pipelines my current CI/CD solution is as follows:

megastore.init.release

This is a release that is not associated with a build and its sole purpose is to configure a Kubernetes Namespace in preparation for the deployment of the application. As such, this component is only intended to be run either to initialise a new Kubernetes cluster or (rarely) when one of the configuration items needs to change (in which case elements of the application will likely have to be redeployed for the configuration to be built in to the appropriate pods).

The configuration handled by megastore.init.release is as follows:

  • Creation of a Namespace for a corresponding Azure Pipelines environment.
  • Creation (or update) of ACR credentials (as a specialised Secret) that allow Deployments to pull docker images from ACR.
  • Creation (or update) of the message queue URL as a ConfigMap.
  • Creation (or update) of the Application Insights instrumentation key as a ConfigMap.

This configuration is handled by init.sh.
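As a rough indication only, a script along these lines might look like the sketch below. The names, keys and secret handling are assumptions rather than the actual init.sh, and the --kubeconfig flag shown in earlier examples is omitted for brevity.

```bash
#!/bin/bash
# Hedged sketch, not the actual init.sh; the ACR password is assumed to arrive as a secret argument
ACRPASSWORD=$1

# Namespace for the corresponding Azure Pipelines environment
kubectl create namespace $RELEASE_ENVIRONMENTNAME --dry-run -o yaml | kubectl apply -f -

# ACR credentials as a specialised (docker-registry) Secret
kubectl create secret docker-registry acr-credentials \
  --namespace=$RELEASE_ENVIRONMENTNAME \
  --docker-server=$ACR_NAME.azurecr.io \
  --docker-username=$ACR_USER \
  --docker-password=$ACRPASSWORD \
  --dry-run -o yaml | kubectl apply -f -

# Message queue URL as a ConfigMap (the Application Insights key would follow the same pattern)
kubectl create configmap message-queue-url \
  --namespace=$RELEASE_ENVIRONMENTNAME \
  --from-literal=MESSAGE_QUEUE_URL=$MESSAGE_QUEUE_URL \
  --dry-run -o yaml | kubectl apply -f -
```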

megastore.message-queue.release

This is another release that is not associated with a build, and in this case the requirement is to deploy the NATS message queue service. The absence of a build is due to the docker image being pulled from Docker Hub. The downside of not having a build associated with the release is that if any of the NATS configuration changes the release needs to be triggered manually. I see this as an infrequent requirement though. The message queue service doesn't have any dependencies on any other part of the application and so is the first component to be deployed following the initial Kubernetes configuration.

The configuration handled by megastore.message-queue.release is as follows:

  • Deployment of the Kubernetes Service for the message queue.
  • Deployment of the Kubernetes Deployment for the message queue.

This configuration is handled by message-queue.sh.

megastore.savesalehandler.build and megastore.savesalehandler.release

This build and linked release are responsible for deploying a new version of the .NET Core message queue handler application which receives messages from the message queue and saves them to an Azure SQL database. The docker image is built and uploaded to ACR using this generic Bash script. This in turn triggers the megastore.savesalehandler.release which deals with the following configuration:

  • Creation (or update) of the database connection string as a Secret.
  • Deployment of the Kubernetes Deployment for the message queue handler component.
  • Update the image for the Deployment to the latest version using the unique tag for the build that triggered the release (see the sketch below).
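Updating the image as described in the last bullet might look roughly like this; the deployment, container and registry names are placeholders.

```bash
# Point the Deployment at the image tagged with the build number that triggered the release
kubectl set image deployment/megastore-savesalehandler \
  megastore-savesalehandler=$ACR_NAME.azurecr.io/megastore-savesalehandler:$BUILD_BUILDNUMBER \
  --namespace=$RELEASE_ENVIRONMENTNAME
```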

This configuration is handled by megastore-savesalehandler.sh. The build is triggered through the Azure Pipelines Path filters feature:

Using the Path filters feature ensures that the build will only be triggered for continuous integration if a file in the specified folder is changed.

megastore.web.build and megastore.web.release

This build and linked release are responsible for deploying a new version of the ASP.NET Core web application which sends messages to the message queue service. As with the message queue handler, the docker image is built and uploaded to ACR using this generic Bash script. The build triggers the megastore.web.release which deals with the following configuration:

  • Creation (or update) of the ASPNETCORE_ENVIRONMENT environment variable as a ConfigMap.
  • Deployment of the Kubernetes Deployment for the web component.
  • Deployment of the Kubernetes Service for the web component.
  • Update the image for the Deployment to the latest version using the unique tag for the build that triggered the release.

This configuration is handled by megastore-web.sh and once again the build is triggered through the Azure Pipelines Path filters feature:

As before, using the Path filters feature ensures that the build will only be triggered for continuous integration if a file in the specified folder is changed.

And Finally...

In breaking down a monolithic pipeline in to multiple pipelines I exposed the problem of what to do with the shared helper library of functions that is used by both the megastore.web and megastore.savesalehandler components, because if this code changes one or sometimes both components will need redeploying. I think the answer is that helper libraries like these do not belong in the Visual Studio solution and instead should be developed separately and distributed and referenced as NuGet packages.

One of my aspirations is to get as much pipeline configuration in the GitHub repo as possible and you might well ask why I'm not using YAML files. Apart from the fact that I just haven't had time to look at this in detail yet, at the time of writing it's only a partial solution as it's only available for the build portion of the pipeline. This will hopefully change later this year when the release portion of the pipeline is supported, and at that point I'll make the switch.

That's it for now! Whether you are deploying to AKS or somewhere else I hope this post has provided you with ideas to supercharge your Azure DevOps pipelines.

Cheers -- Graham

Create an Azure DevOps Services Self-Hosted Agent in Azure Using Terraform, Cloud-init—and Azure DevOps Pipelines!

Posted by Graham Smith on November 14, 2018

I recently started a new job with the awesome DevOpsGroup. We're hiring and there's options for remote workers (like me), so if anything from our vacancies page looks interesting feel free to make contact on LinkedIn.

One of the reasons for mentioning this is that the new job brings a new Azure subscription and the need to recreate all of the infrastructure for my blog series on what I will now have to rename to Deploy a Dockerized ASP.NET Core Application to Azure Kubernetes Service Using an Azure DevOps CI/CD Pipeline. For this new Azure subscription I've promised myself that anything that is reasonably long-lived will be created through infrastructure as code. I've done a bit of this in the past but I haven't been consistent and I certainly haven't developed any muscle memory. As someone who is passionate about automation it's something I need to rectify.

I've used ARM templates in the past but found the JSON a bit fiddly, however in the DevOpsGroup office and Slack channels there are a lot of smart people talking about Terraform. Combine that with Brendan Burns' post on Expanding the HashiCorp partnership and HashiCorp's post about being at Microsoft Ignite and, well, there's clearly something going on between Microsoft and HashiCorp, and learning Terraform feels like a safe bet.

For my first outing with Terraform I decided to create an Azure DevOps Services (née VSTS, hereafter ADOS) self-hosted agent capable of running Docker multi-stage builds. Why a self-hosted agent rather than a Microsoft-hosted agent? The main reason is speed: a self-hosted agent is there when you need it and doesn't need the spin-up time of a Microsoft-hosted agent, plus a self-hosted agent can cache Docker images for re-use as opposed to a Microsoft-hosted agent which needs to download images every time they are used. It's not a perfect solution for anyone using Azure credits as you don't want the VM hosting the agent running all the time (and burning credit) but there are ways to help with that, albeit with a twist at the moment.

List of Requirements

In the spirit of learning Terraform as well as creating a self-hosted agent I came up with the following requirements:

  • Use Terraform to create a Linux VM that would run Docker, that in turn would run a VSTS [sic] agent as a Docker container rather than having to install the agent and associated dependencies.
  • Create as many self-hosted agent VMs as required and tear them down again afterwards. (I'm never going to need this but it's a great infrastructure as code learning exercise.)
  • Consistent names for all the Azure infrastructure items that get created.
  • Terraform code refactored to avoid DRY problems as far as possible and practicable.
  • All bespoke settings supplied as variables to make the code easily reusable by anyone else.
  • Terraform code built and deployed from source control (GitHub) using an ADOS Pipeline.
  • Mechanism to deploy ‘development' self-hosted agents when working locally and only allow deploying ‘production' self-hosted agents from the ADOS Pipeline by checking in code.

This blog post aims to explain how I achieved all that but first I want to describe my Terraform journey since going from zero to my final solution in one step probably won't make much sense.

My Terraform Journey

I used these resources to get going with Terraform and I recommend you do the same with whatever is available to you:

It turns out that there were quite a few decisions to be made in implementing this solution and lots of parts that didn't work as expected so I've saved the discussion of all that to the end to keep the configuration steps cleaner.

Configuring Pre-requisites

There are a few things to configure before you can start working with my Terraform solution:

  • Install Azure CLI and Terraform on your local machine.
  • If you are using Visual Studio Code as your editor ensure you have the Terraform extension by Mikael Olenfalk installed.
  • If required, create an SSH key pair following these instructions. The goal is to have a public key file at ~\.ssh\id_rsa.pub.
  • To allow Terraform to access Azure create a password-based Azure AD service principal and make a note of the appId and password that are returned.
  • Create a new Agent Pool in your ADOS subscription—I called mine Self-Hosted Ubuntu.
  • Create a PAT in your ADOS subscription limiting it to Agent Pools (read, manage) scope.
  • Fork my repository on GitHub and clone to your local machine.

Examining the Terraform Solution

Let's take a look at the key contents of the GitHub repo, which have separate folders for the backend storage on Azure and the actual self-hosted agent.

backend-storage folder
  • main.tf—This is a simple configuration which creates an Azure storage account and container to hold the Terraform state. The names of the storage account and container are used in the configuration of the self-hosted agent.
  • variables.tf—I've pulled the key variables out in to a separate file to make it easier for anyone to know what to change for their own implementation.
ados-self-hosted-agent-ubuntu folder
  • main.tf—I've chosen to keep the configuration in one file for simplicity of writing this blog post, however you'll frequently see the different resources split out in to separate files. If you've followed this tutorial then most of what's in the configuration should be straightforward, however you will see that I've made extensive use of variables. I like to see all of my Azure infrastructure with consistent names so I pass in a base_name value on which the names of all resources are based. What is different from what you will have seen in the tutorial is that some resources have a count = "${var.node_count}" property, which tells Terraform to create as many copies of that particular resource as the value of the node_count variable. For those resources the name must of course be unique and you'll see a slightly altered syntax for the name property, for example "${var.base_name}-vm-${count.index}" for the VM name. This creates those types of resources with an incrementing numerical suffix.
  • variables.tf—I've made heavy use of variables both to make it easier for others to use the repo and also to separate out what I think should and shouldn't be tracked via version control. You'll see that variables are grouped in to those that I think should be tracked by version control for traceability purposes and those which are very specific to an Azure subscription and which shouldn't (or in the case of secrets mustn't) be tracked by version control. In the tracked group the variables are set in variables.tf however the non-tracked group have their variables set in a separate (ignored by git) file locally and via Pipeline Variables in ADOS.
  • .gitignore—note that I've added cloud-init.txt to the standard file for Terraform as cloud-init.txt contains secrets.

Configuring Backend Storage

In order to use Azure as a central location to save Terraform state you will need to complete the following:

  1. In your favourite text editor edit \backend-storage\variables.tf supplying suitable values for the variables. Note that storage_account needs to be globally unique.
  2. Open a command prompt, log in to the Azure CLI and navigate to the backend-storage folder.
  3. Run terraform init to initialise the directory.
  4. Run terraform apply to have Terraform generate an execution plan. Type yes to have the plan executed.

Configuring and Running the Terraform Solution Locally

To get the solution working from your local machine to create development self-hosted agents running in Azure you will need to complete the following:

  1. In your favourite text editor edit \ados-self-hosted-agent-ubuntu\variables.tf supplying suitable values for the variables that are designated to be tracked and changed via version control.
  2. Add a terraform.tfvars file to the root of \ados-self-hosted-agent-ubuntu and copy the following code in, replacing the placeholder text with your actual values where required (double quotes are required):
  3. Add a cloud-init.txt file to the root of \ados-self-hosted-agent-ubuntu and copy the following code in, replacing the placeholder text with your actual values where required (angle brackets must be omitted and the readability back slashes in the docker run command must also be removed):
  4. In a secure text file somewhere construct a terraform init command as follows, replacing the placeholder text with your actual values where required (angle brackets must be omitted):
  5. Open a PowerShell console window and navigate to the root of \ados-self-hosted-agent-ubuntu. Copy, paste and then run the terraform init command above to initialise the directory.
  6. Run terraform apply to have Terraform generate an execution plan. Type yes to have the plan executed.

It takes some time for everything to run but after a while you should see ADOS self-hosted agents appear in your Agent Pool. Yay!

Configuring and Running the Terraform Solution in an Azure DevOps Pipeline

To get the solution working in an Azure DevOps Pipeline to create production self-hosted agents running in Azure you will need to complete the following:

  1. Optionally create a new team project. I'm planning on doing more of this so I created a project called terraform-azure.
  2. Create a new Build Pipeline called ados-self-hosted-agent-ubuntu-build and in the Where is your code? window click on Use the visual designer.
  3. Connect up to your forked GitHub repository and then elect to start with an empty job.
  4. For the Agent pool select Hosted Ubuntu 1604.
  5. Create a Bash task called terraform upgrade and paste in the following as an inline script:
  6. Create a Copy Files task called copy files to staging directory and configure Source Directory to ados-self-hosted-agent-ubuntu and Contents to show *.tf and Target Folder to be $(Build.ArtifactStagingDirectory).
  7. Create a PowerShell task called create cloud-init.txt, set the Working Directory to $(Build.ArtifactStagingDirectory) and paste in the following as an inline script:
  8. Create a Bash task called terraform init, set the Working Directory to $(Build.ArtifactStagingDirectory) and paste in the following as an inline script:
  9. Create a Bash task called terraform plan, set the Working Directory to $(Build.ArtifactStagingDirectory) and paste in the following as an inline script:
  10. Create a Bash task called terraform apply, set the Working Directory to $(Build.ArtifactStagingDirectory) and paste in the following as an inline script:
  11. Create a Publish Build Artifacts task called publish artifact and publish the $(Build.ArtifactStagingDirectory) to an artifact called terraform.
  12.  Create the following variables, supplying your own values as required (variables with spaces should be surrounded by single quotes) and ensuring secrets are added as such:
  13. Queue a new build and after a while you should see ADOS self-hosted agents appear in your Agent Pool.
  14. Finally, from the Triggers tab select Enable continuous integration.

Discussion

One of the great things I found about Terraform is that it's reasonably easy to get a basic VM running in Azure by following tutorials. However the tutorials are, understandably, lacking when it comes to addressing issues such as repeated code and I spent a long time working through my configurations addressing repeated code and extracting variables. One fun aspect was amending the Terraform resources so that a node_count variable can be set with the desired number of VMs to create. Pleasingly, Terraform seems really smart about this: you can start with two VMs (labelled 0 and 1), scale to four VMs (0, 1, 2 and 3) and then scale back to one VM which will remove 1, 2 and 3 to leave the VM labelled 0.

Do be aware that some of the tutorials don't cover all of the properties of resources and it's worth looking at the documentation for those fine details you want to implement. For example I wanted to be able to SSH in to VMs via a FQDN rather than IP address which meant setting the domain_name_label property of the azurerm_public_ip resource. In another example the tutorial didn't specify that the azurerm_virtual_machine should delete_os_disk_on_termination, the absence of which was a problem when implementing the scaling VMs up and down capability. It was all in the documentation though.

An issue that I had to tackle early on was the mechanism for configuring the internals of VMs once they were up-and-running. Terraform has a remote-exec provisioner which looks like it should do the job but I spent a long time trying to make it work only to have the various commands consistently terminate with errors. An alternative is cloud-init. This worked flawlessly however the cloud-init file has to contain an ADOS secret and I couldn't find a way to pass the secret in as a parameter. My solution was to exclude a local cloud-init.txt file from version control so the secret didn't get committed and then to dynamically build the file as part of the build pipeline.
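For illustration, a cloud-init.txt along the lines described might look roughly like the sketch below. The image tag, the exact docker run switches and the placeholder values (angle brackets) are all assumptions, not the actual file.

```yaml
#cloud-config
# Hedged sketch only; replace the placeholder values before use
package_update: true
runcmd:
  - curl -fsSL https://get.docker.com | sh
  - >
    docker run -d --restart unless-stopped
    -e VSTS_ACCOUNT=<ados-organisation-name>
    -e VSTS_TOKEN=<personal-access-token>
    -e VSTS_POOL="Self-Hosted Ubuntu"
    -v /var/run/docker.sock:/var/run/docker.sock
    microsoft/vsts-agent
```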

Another decision I had to make was around workflow, ie how the solution is used in practice. I've kept it simple but have tried to put mechanisms in place for those that need something more sophisticated. For experimenting and getting things working I'm just working locally, although for consistency I'm storing Terraform state in Azure. Where required all resources are labelled with a ‘dev' element somewhere in their name. For ‘production' self-hosted agents I'm using an ADOS build pipeline, with all required resources having a ‘prd' element in their name. Note that there is a Terraform workspaces feature which might work better in a more sophisticated scenario. I've chosen to keep things simple by deploying straight from a build however I do output a Terraform plan as an artifact so if you wanted you could pick this up and apply it in a release pipeline. I've also not implemented any linting or testing of the Terraform configuration as part of the build but that's all possible of course. You'll also notice that I haven't implemented the build pipeline with the YAML code feature. It was part of my plans but I ran in to some issues which I couldn't fix in the time available.

It's worth noting a few features of the build pipeline. I did look at tasks in the ADOS marketplace and while there are a few I'm increasingly becoming dissatisfied with how visual tasks abstract away what's really happening and I'm more in favour of constructing commands by hand.

  • terraform upgrade—In order to run Terraform commands in an ADOS pipeline you need to have access to an agent that has Terraform installed. As luck would have it the Microsoft-hosted agent Hosted Ubuntu 1604 fits the bill. Well almost! I ran in to an issue early on before I started using separate blobs in Azure to hold state for dev and prd where Terraform wasn't happy that I was developing on a newer version of Terraform than was running on the Hosted Ubuntu 1604 agent. It was a case of downgrading to an older version of Terraform locally or upgrading the agent. I chose the latter which is perfectly possible and very quick. There's obviously a maintenance issue with keeping versions in sync but I created a variable in the ADOS pipeline to help. As it turns out with the final solution using separate state blobs in Azure this is less of a problem however I've chosen to keep the upgrade task in, if for no other reason than to illustrate what's possible.
  • terraform init—Although I've seen the init command have its variables passed to it as environment variables at the end of this demo, I couldn't get the command to work this way and had to pass the variables through as parameters (see the sketch after this list).
  • terraform plan—The environment variables trick does work with the plan command. Note that the command outputs a file called tfplan which gets saved as part of the artifact. I use the file immediately in the next task (terraform apply) but a release pipeline could equally use the file after (for example) someone had inspected what the plan was going to actually do using terraform show.
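To pull those points together, the commands end up looking something like this. The backend values in angle brackets are placeholders and the variable names are illustrative only.

```bash
# Backend settings passed to init as parameters (placeholders shown in angle brackets)
terraform init \
  -backend-config="storage_account_name=<storage-account>" \
  -backend-config="container_name=<container>" \
  -backend-config="key=prd.terraform.tfstate" \
  -backend-config="access_key=<storage-access-key>"

# Non-secret values can be supplied to plan as TF_VAR_* environment variables
export TF_VAR_base_name=ados-agent-prd
terraform plan -out=tfplan

# Apply the saved plan (or hand tfplan to a release pipeline instead)
terraform apply tfplan
```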

Finally, there's one killer feature that seems not to have been implemented yet and that's Auto-shutdown of VMs. I find this essential to ensure that Azure credits don't get burned, so if you are like me and need to preserve credits do take appropriate precautions until the feature is implemented. Happy Terraforming!

Cheers -- Graham

Deploy a Dockerized ASP.NET Core Application to Azure Kubernetes Service Using a VSTS CI/CD Pipeline: Part 4

Posted by Graham Smith on September 11, 2018

In this blog post series I'm working my way through the process of deploying and running an ASP.NET Core application on Microsoft's hosted Kubernetes environment. These are the links to the full series of posts to date:

In this post I take a look at application monitoring and health. There are several options for this however since I'm pretty much all-in with the Microsoft stack in this blog series I'll stick with the Microsoft offering, which is Azure Application Insights. This post builds on previous posts, particularly Part 3, so please do consider working through at least Part 3 before this one.

In this post, I continue to use my MegaStore sample application which has been upgraded to .NET Core 2.1, in particular with reference to the csproj file. This is important because it affects the way Application Insights is configured in the ASP.NET Core web component. See here and here for more details. All my code is in my GitHub repo and you can find the starting code here and the finished code here.

Understanding the Application Insights Big Picture

Whilst it's very easy to get started with Application Insights, configuring it for an application with multiple components which gets deployed to a continuous delivery pipeline consisting of multiple environments running under Kubernetes requires a little planning and a little effort to get everything configured in a satisfactory way. As of the time of writing this isn't helped by the presence of Application Insights documentation on both docs.microsoft.com and github.com (ASP.NET Core | Kubernetes) which sometimes feels like it's conflicting, although it's nothing that good old fashioned detective work can't sort out.

The high-level requirements to get everything working are as follows:

  1. A mechanism is needed to separate out telemetry data from the different environments of the continuous delivery pipeline. Application Insights sends telemetry to a ‘bucket' termed an Application Insights Resource which is identified by a unique instrumentation key. Separation of telemetry data is therefore achieved by creating an individual Application Insights Resource for the development environment and for each environment of the delivery pipeline.
  2. Each component of the application that will send telemetry to an Application Insights Resource needs configuring so that it can be supplied with the instrumentation key for the Application Insights Resource for the environment the application is running in. This is a coding issue and there are lots of ways to solve it, however in the MegaStore sample application this is achieved through a helper class in the MegaStore.Helper library that receives the instrumentation key as an environment variable.
  3. The MegaStore.Web and MegaStore.SaveSaleHandler components need configuring for both the core and Kubernetes elements of Application Insights and a mechanism to send the telemetry back to Azure with the actual name of the component rather than a name that Application Insights has chosen.
  4. Each environment needs configuring to create an instrumentation key environment variable for the Application Insights Resource that has been created for that environment. In development this is achieved through hard-coding the instrumentation key in docker-compose.override.yaml. In the deployment pipeline it's achieved through a VSTS task that creates a Kubernetes config map that is picked up by the Kubernetes deployment configuration.

That's the big picture—let's get stuck in to the details.

Creating Application Insights Resources for Different Environments

In the Azure portal follow these slightly outdated instructions (Application Insights is currently found in Developer Tools) to create three Application Insights Resources for the three environments: DEV, DAT and PRD. I chose to put them in one resource group and ended up with this:

For reference there is a dedicated Separating telemetry from Development, Test, and Production page in the official Application Insights documentation set.

Configure MegaStore to Receive an Instrumentation Key from an Environment Variable

As explained above this is a specific implementation detail of the MegaStore sample application, which contains an Env class in MegaStore.Helper to capture environment variables. The amended class is as follows:

Obviously this class relies on an external mechanism creating an environment variable named APP_INSIGHTS_INSTRUMENTATION_KEY. Consumers of this class can reference MegaStore.Helper and call Env.AppInsightsInstrumentationKey to return the key.

Configure MegaStore.Web for Application Insights

If you've upgraded an ASP.NET Core web application to 2.1 or later as detailed earlier then the core of Application Insights is already ‘installed' via the inclusion of the Microsoft.AspNetCore.All meta package so there is nothing to do. You will need to add Microsoft.ApplicationInsights.Kubernetes via NuGet—at the time of writing it was in beta (1.0.0-beta9) so you'll need to make sure you have told NuGet to include prereleases.

In order to enable Application Insights amend BuildWebHost in Program.cs as follows:

Note the way that the instrumentation key is passed in via Env.AppInsightsInstrumentationKey from MegaStore.Helper as mentioned above.

Telemetry relating to Kubernetes is enabled in ConfgureServices in Startup.cs as follows:

Note also that a CloudRoleTelemetryInitializer class is being initialised. This facilitates the setting of a custom RoleName for the component, and requires a class to be added as follows:

Note here that we are setting the RoleName to MegaStore.Web. Finally, we need to ensure that all web pages return telemetry. This is achieved by adding the following code to the end of _ViewImports.cshtml:

and then by adding the following code to the end of the <head> element in _Layout.cshtml:

Configure MegaStore.SaveSaleHandler for Application Insights

I'll start this section with a warning because at the time of writing the latest versions of Microsoft.ApplicationInsights and Microsoft.ApplicationInsights.Kubernetes didn't play nicely together and resulted in dependency errors. Additionally the latest version of Microsoft.ApplicationInsights.Kubernetes was missing the KubernetesModule.EnableKubernetes class described in the documentation for making Kubernetes work with Application Insights. The Kubernetes bits are still in beta though so it's only fair to expect turbulence. The good news is that with a bit of experimentation I got everything working by installing NuGet packages Microsoft.ApplicationInsights (2.4.0) and Microsoft.ApplicationInsights.Kubernetes (1.0.0-beta3). If you try this long after publication date things will have moved on but this combination works with this initialisation code in Program.cs:

Please do note that this is a completely stripped down Program class to just show how Application Insights and the Kubernetes extension are configured. Note again that this component uses the CloudRoleTelemetryInitializer class shown above, this time with the RoleName set to MegaStore.SaveSaleHandler. What I don't show here in any detail is that you can add lots of client.Track* calls to generate rich telemetry to help you understand what your application is doing. The code on my GitHub repo has details.

Configure the Development Environment to Create an Instrumentation Key Environment Variable

This is a simple matter of editing docker-compose.override.yaml with the new APP_INSIGHTS_INSTRUMENTATION_KEY environment variable and the instrumentation key for the corresponding Application Insights Resource:
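For illustration, the addition is along these lines; the service name is a placeholder and the key shown is a dummy.

```yaml
version: '3'
services:
  megastore-web:
    environment:
      - APP_INSIGHTS_INSTRUMENTATION_KEY=00000000-0000-0000-0000-000000000000
```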

Make sure you don't just copy the code above as the actual key needs to come from the Application Insights Resource you created for the DEV environment, which you can find as follows:

Configure the VSTS Deployment Pipeline to Create Instrumentation Key Environment Variables

The first step is to amend the two Kubernetes deployment files (megastore-web-deployment.yaml and megastore-savesalehandler-deployment.yaml) with details of the new environment variable in the respective env sections:
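The addition to each env section is along these lines; the container, image and config map names are assumptions rather than the actual manifests.

```yaml
    spec:
      containers:
      - name: megastore-web
        image: <acr-name>.azurecr.io/megastore-web
        env:
        - name: APP_INSIGHTS_INSTRUMENTATION_KEY
          valueFrom:
            configMapKeyRef:
              name: app-insights-instrumentation-key
              key: APP_INSIGHTS_INSTRUMENTATION_KEY
```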

Now in VSTS:

  1. Create variables called DatAppInsightsInstrumentationKey and PrdAppInsightsInstrumentationKey scoped to their respective environments and populate the variables with the respective instrumentation keys.
  2. In the task lists for the DAT and PRD environments clone the Delete ASPNETCORE_ENVIRONMENT config map and Create ASPNETCORE_ENVIRONMENT config map tasks and amend them to work with the new APP_INSIGHTS_INSTRUMENTATION_KEY environment variable configured in the *.deployment.yaml files.

Generate Traffic to the MegaStore Web Frontend

Now the real fun can begin! Commit all the code changes to trigger a build and deploy. The clumsy way I'm having to delete an environment variable and then recreate it (to cater for a changed variable name) will mean that the release will be amber in each environment for a couple of releases but will hopefully eventually go green. In order to generate some interesting telemetry we need to put one of the environments under load as follows:

  1. Find the public IP address of MegaStore.Web in the PRD environment by running kubectl get services --namespace=prd:
  2. Create a PowerShell (ps1) file with the following code (using your own IP address of course):
  3. Run the script (in Windows PowerShell ISE for example) and as long as the output is white you know that traffic is getting to the website.

Now head over to the Azure portal and navigate to the Application Insights Resource that was created for the PRD environment earlier and under Investigate click on Search and then Click here (to see all data in the last 24 hours):

You should see something like this:

Hurrah! We have telemetry! However the icing on the cake comes when you click on an individual entry (a trace for example) and see the Kubernetes details that are being returned with the basic trace properties:

Until Next Time

It's taken me quite a lot of research and experimentation to get to this point so that's it for now! In this post I showed you how to get started monitoring your Dockerized .NET Core 2.1 applications running in AKS using Application Insights. The emphasis has been very much on getting started though as Application Insights is a big beast and I've only scratched the surface in this post. Do bear in mind that some of the NuGets used in this post are in beta and some pain is to be expected.

As I publish this blog post VSTS has had a name change to Azure DevOps so that's the title of this series having to change again!

Cheers—Graham

Upgrade a Dockerized ASP.NET Core Application to the Latest Version of .NET Core

Posted by Graham Smith on August 15, 2018

In the combined worlds of .NET Core and Docker things are changing pretty quickly and at some point you may well find yourself wanting to upgrade your Dockerized ASP.NET Core application. If you are upgrading a production application then you'll certainly want to follow the official guidance. In my case, and for the purposes of this blog post, I'm more concerned with the upgrade from a Docker perspective. It's not difficult, however there are a few steps which can leave you scratching your head if you miss them out, so I'm documenting my process for upgrading as it will certainly help me in the future and hopefully someone else as well.

Upgrading ASP.NET Core

  1. Download and install the latest version of .NET Core from here. From a command prompt run dotnet --list-runtimes to show what you have installed. In my case the latest version was 2.1.2.
  2. Ensure you are running the latest version of Visual Studio 2017. At the time of writing version 15.8.0 had just been released.
  3. Open your VS solution and from the Application tab of the Properties page of each project you want to upgrade change the Target framework to the required version:
  4. Using your technique of choice now upgrade all of the NuGet packages for the solution.

Upgrading Docker files

This is the bit which will have you scratching your head if your Docker files are targeting an earlier version of .NET Core than the version you have just upgraded to, as your solution will build but not run under Docker. The error message (something like "It was not possible to find any compatible framework version. The specified framework ‘Microsoft.NETCore.App', version ‘2.1.0' was not found.") makes complete sense when you remember it is being generated from a container running an earlier version of .NET Core.

The answer of course is to change the Docker files in your solution to refer to an image running a later version of .NET Core. However, this is also a great opportunity to upgrade your Docker files to the latest specification used in new Visual Studio projects, as it does seem to change on every release. I do this by simply creating a new ASP.NET Core project in Visual Studio and then working out what needs to change in the Docker file I'm upgrading. In my case this saw my Docker file change from

to
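As a rough guide only, the multi-stage Docker file that the .NET Core 2.1-era Visual Studio tooling generates looks something like the following; YourApp.Web is a placeholder project name and your own file will differ in its paths and project names.

    # Runtime image used for the final stage
    FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
    WORKDIR /app
    EXPOSE 80

    # SDK image used to restore, build and publish the application
    FROM microsoft/dotnet:2.1-sdk AS build
    WORKDIR /src
    COPY YourApp.Web/YourApp.Web.csproj YourApp.Web/
    RUN dotnet restore YourApp.Web/YourApp.Web.csproj
    COPY . .
    WORKDIR /src/YourApp.Web
    RUN dotnet build YourApp.Web.csproj -c Release -o /app

    FROM build AS publish
    RUN dotnet publish YourApp.Web.csproj -c Release -o /app

    # Final stage copies the published output in to the runtime image
    FROM base AS final
    WORKDIR /app
    COPY --from=publish /app .
    ENTRYPOINT ["dotnet", "YourApp.Web.dll"]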

The obvious changes to the specification are the removal of -nowarn:msb3202,nu1503 and changes to the Docker syntax. I'm not sure what improvements the changes to the syntax bring, however it makes sense to me to keep up with the latest thinking from the folks writing the Docker files for Visual Studio projects.

On the face of it your project should now run as it did before the upgrade. However in my case I was still getting error messages as per this GitHub issue. The problem for me was an outdated microsoft/dotnet:2.1-aspnetcore-runtime image, and running docker pull microsoft/dotnet:2.1-aspnetcore-runtime got things running again. This is probably just something peculiar to my machine due to all the testing I do, but if you run in to this then hopefully this will do the trick.

Cheers -- Graham

Deploy a Dockerized ASP.NET Core Application to Azure Kubernetes Service Using a VSTS CI/CD Pipeline: Part 3

Posted by Graham Smith on May 24, 2018No Comments (click here to comment)

In this blog post series I'm working my way through the process of deploying and running an ASP.NET Core application on Microsoft's hosted Kubernetes environment. Formerly known as Azure Container Service (AKS), it has recently been renamed Azure Kubernetes Service, which is why the title of my blog series has changed slightly. In previous posts in this series I covered the key configuration elements both on a developer workstation and in Azure and VSTS, and then how to actually deploy a simple ASP.NET Core application to AKS using VSTS.

In this post I introduce MegaStore (just a fictional name), a more complicated ASP.NET Core application (in the sense that it has more moving parts), and I show how to deploy MegaStore to an AKS cluster using VSTS. Future posts will use MegaStore as I work through more advanced Kubernetes concepts. To follow along with this post you will need to have completed the setup covered variously in parts 1 and 2.

Introducing MegaStore

MegaStore was inspired by Elton Stoneman's evolution of NerdDinner for his excellent book Docker on Windows, which I have read and can thoroughly recommend. The concept is a sales application that, rather than saving a 'sale' directly to a database, instead adds it to a message queue. A handler monitors the queue and pulls new messages for saving to an Azure SQL Database. The main components are as follows:

  • MegaStore.Web—an ASP.NET Core MVC application with a CreateSale method in the HomeController that gets called every time there is a hit on the home page.
  • NATS message queue—to which a new sale is published.
  • MegaStore.SaveSalehandler—a .NET Core console application that monitors the NATS message queue and saves new messages to the database.
  • Azure SQL Database—I recently heard Brendan Burns comment in a podcast that hardly anybody designing a new cloud application should be managing storage themselves. I agree and for simplicity I have chosen to use Azure SQL Database for all my environments including development.

You can clone MegaStore from my GitHub repository here.

In order to run the complete application you will first need to create an Azure SQL Database. The easiest way is probably to create a new database via the portal (which creates a server at the same time) and then manage it with SQL Server Management Studio. The high-level procedure is as follows:

  1. In the portal create a new database called MegaStoreDev and at the same time create a new server (name needs to be unique). To keep costs low I start with the Basic configuration knowing I can scale up and down as required.
  2. Still in the portal add a client IP to the firewall so you can connect from your development machine.
  3. Connect to the server/database in SSMS and create a new table called dbo.Sale; a minimal sketch of the table appears after this list.
  4. In Security > Logins create a New Login called sales_user_dev, noting the password.
  5. In Databases > MegaStoreDev > Security > Users create a New User called sales_user mapped to the sales_user_dev login and with the db_owner role.
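A minimal dbo.Sale table sufficient for recording sales might look something like the following sketch; the column names and types are illustrative rather than the definitive schema, so check the MegaStore repo if you want an exact match.

    -- Illustrative schema only; see the MegaStore repo for the definitive version
    CREATE TABLE dbo.Sale
    (
        SaleId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        SaleDateTime DATETIME2 NOT NULL DEFAULT (SYSUTCDATETIME()),
        Amount DECIMAL(18, 2) NOT NULL
    );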

In order to avoid exposing secrets via GitHub the credentials to access the database are stored in a file called db-credentials.env which I've not committed to the repo. You'll need to create this file in the docker-compose project in your VS solution and add the following, modified for your server name and database credentials:
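The file itself is just a set of environment variable assignments, one per line. As a sketch, and assuming a single DB_CONNECTION_STRING variable (adjust the variable name to whatever docker-compose-override.yml expects, and the server, database and credentials to your own):

    # db-credentials.env - keep this file out of source control
    DB_CONNECTION_STRING=Server=tcp:yourserver.database.windows.net,1433;Initial Catalog=MegaStoreDev;User ID=sales_user_dev;Password=yourstrongpassword;Encrypt=True;Connection Timeout=30;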

If you are using version control make sure you exclude db-credentials.env from being committed.

With docker-compose set as the startup project and Docker for Windows running and set to Linux containers, you should now be able to run the application. If everything is working you should be able to see sales being created in the database.

To understand how the components are configured you need to look at docker-compose.yml and docker-compose-override.yml. Image building is handled by docker-compose.yml, which can't have anything else in it, otherwise VSTS complains when you use the compose file to build the images. The configuration of the components is specified in docker-compose-override.yml, which gets merged with docker-compose.yml at run time. Notice the k8s folder. This contains the configuration files needed to deploy the application to AKS.

By now you may be wondering if MegaStore should be running locally under Kubernetes rather than in Docker via docker-compose. It's a good question and the answer is probably yes. However at the time of writing there isn't a great story to tell about how Visual Studio integrates with Kubernetes on a developer workstation (ie to allow debugging as is possible with Docker) so I'm purposely ignoring this for the time being. This will change over time though, and I will cover this when I think there is more to tell.

Create Azure SQL Databases for Different Release Pipeline Environments

I'll be creating a release pipeline consisting of DAT and PRD environments. I explain more about these below but to support these environments you'll need to create two new databases—MegaStoreDat and MegaStorePrd. You can do this either through the Azure portal or through SQL Server Management Studio, however be aware that if you use SSMS you'll end up on the standard pricing tier rather than the cheaper basic tier. Either way, you then use SQL Server Management Studio to create dbo.Sale and set up security as described above, ensuring that you create different logins for the different environments.

Create a Build in VSTS

Once everything is working locally the next step is to switch over to VSTS and create a build. I'm assuming that you've cloned my GitHub repo to your own GitHub account however if you are doing it another way (your repo is in VSTS for example) you'll need to amend accordingly.

  1. Create a new Build definition in VSTS. The first thing you get asked is to select a repository—link to your GitHub account and select the MegaStore repo:
  2. When you get asked to Choose a template go for the Empty process option.
  3. Rename the build to something like MegaStore and under Agent queue select your private build agent.
  4. In the Triggers tab check Enable continuous integration.
  5. In the Options tab set Build number format to $(Date:yyyyMMdd)$(Rev:.rr), or something meaningful to you based on the available options described here.
  6. In the Tasks tab use the + icon to add two Docker Compose tasks and a Publish Build Artifacts task. Note that when configuring the tasks below only the required entries and changes to defaults are listed.
  7. Configure the first Docker Compose task as follows:
    1. Display name = Build service images
    2. Action = Build service images
    3. Azure subscription = [name of existing Azure Resource Manager endpoint]
    4. Azure Container Registry = [name of existing Azure Container Registry]
    5. Additional Image Tags = $(Build.BuildNumber)
  8. Configure the second Docker Compose task as follows:
    1. Display name = Push service images
    2. Azure subscription = [name of existing Azure Resource Manager endpoint]
    3. Azure Container Registry = [name of existing Azure Container Registry]
    4. Action = Push service images
    5. Additional Image Tags = $(Build.BuildNumber)
  9. Configure the Publish Build Artifacts task as follows:
    1. Display name = Publish k8s config
    2. Path to publish = k8s
    3. Artifact name = k8s-config
    4. Artifact publish location = Visual Studio Team Services/TFS

You should now be able to test the build by committing a minor change to the source code. The build should pass and if you look in the Repositories section of your Container Registry you should see megastoreweb and megastoresavesalehandler repositories with newly created images.

Create a DAT Release Environment in VSTS

With the build working it's now time to create the release pipeline, starting with an environment I call DAT which is where automated acceptance testing might take place. At this point there is a style choice to be made for creating Kubernetes Secrets and ConfigMaps. They can be configured from files or from literal values. I've gone down the literal values route since the files route needs to specify the namespace and this would require either a separate file for each namespace creating a DRY problem or editing the config files as part of the release pipeline. To me the literal values technique seems cleaner. Either way, as far as I can tell there is no way to update a Secret or ConfigMap via a VSTS Deploy to Kubernetes task as it's a two step process and the task can't handle this. The workaround is a task to delete the Secret or ConfigMap and then a task to create it. You'll see that I've also chosen to explicitly create the image pull secret. This is partly because of a bug in the Deploy to Kubernetes task however it also avoids having to repeat a lot of the Secrets configuration in Deploy to Kubernetes tasks that deploy service or deployment configurations.

  1. Create a new release definition in VSTS, electing to start with an empty process and rename it MegaStore.
  2. In the Pipeline tab click on Add artifact and link the build that was just created which in turn makes the k8s-config artifact from step 9 above available in the release.
  3. Click on the lightning bolt to enable the Continuous deployment trigger.
  4. Still in the Pipeline tab rename Environment 1 to DAT, with the overall changes resulting in something like this:
  5. In the Tasks tab click on Agent phase and under Agent queue select your private build agent.
  6. In the Variables tab create the following variables with Release Scope:
    1. AcrAuthenticationSecretName = prmcrauth (or the name you are using for imagePullSecrets in the Kubernetes config files)
    2. AcrName = [unique name of your Azure Container Registry, eg mine is prmcr]
    3. AcrPassword = [password of your Azure Container Registry from Settings > Access keys], use the padlock to make it a secret
  7.  In the Variables tab create the following variables with DAT Scope:
    1. DatDbConn = Server=tcp:megastore.database.windows.net,1433;Initial Catalog=MegaStoreDat;Persist Security Info=False;User ID=sales_user;Password=mystrongpwd;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30; (you will need to alter this connection string for your own Azure SQL server and database)
    2. DatEnvironment = dat (ie in lower case)
  8. In the Tasks tab add 15 Deploy to Kubernetes tasks and disable all but the first one so the release can be tested after each task is configured. Note that when configuring the tasks below only the required entries and changes to defaults are listed.
  9. Configure the first Deploy to Kubernetes task as follows:
    1. Display name = Delete image pull secret
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = delete
    5. Arguments = secret $(AcrAuthenticationSecretName)
    6. Control Options > Continue on error = checked
  10. Configure the second Deploy to Kubernetes task as follows:
    1. Display name = Create image pull secret
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = create
    5. Arguments = secret docker-registry $(AcrAuthenticationSecretName) --namespace=$(DatEnvironment) --docker-server=$(AcrName).azurecr.io --docker-username=$(AcrName) --docker-password=$(AcrPassword) --docker-email=fred@bloggs.com (note that the email address can be anything you like)
  11. Configure the third Deploy to Kubernetes task as follows:
    1. Display name = Delete ASPNETCORE_ENVIRONMENT config map
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = delete
    5. Arguments = configmap aspnetcore.env
    6. Control Options > Continue on error = checked
  12. Configure the fourth Deploy to Kubernetes task as follows:
    1. Display name = Create ASPNETCORE_ENVIRONMENT config map
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = create
    5. Arguments = configmap aspnetcore.env --from-literal=ASPNETCORE_ENVIRONMENT=$(DatEnvironment)
  13. Configure the fifth Deploy to Kubernetes task as follows:
    1. Display name = Delete DB_CONNECTION_STRING secret
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = delete
    5. Arguments = secret db.connection
    6. Control Options > Continue on error = checked
  14. Configure the sixth Deploy to Kubernetes task as follows:
    1. Display name = Create DB_CONNECTION_STRING secret
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = create
    5. Arguments = secret generic db.connection --from-literal=DB_CONNECTION_STRING="$(DatDbConn)"
  15. Configure the seventh Deploy to Kubernetes task as follows:
    1. Display name = Delete MESSAGE_QUEUE_URL config map
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = delete
    5. Arguments = configmap message.queue
    6. Control Options > Continue on error = checked
  16. Configure the eighth Deploy to Kubernetes task as follows:
    1. Display name = Create MESSAGE_QUEUE_URL config map
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = create
    5. Arguments = configmap message.queue --from-literal=MESSAGE_QUEUE_URL=nats://message-queue-service.$(DatEnvironment):4222
  17. Configure the ninth Deploy to Kubernetes task as follows:
    1. Display name = Create message-queue service
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = apply
    5. Use Configuration files = checked
    6. Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/message-queue-service.yaml
  18. Configure the tenth Deploy to Kubernetes task as follows:
    1. Display name = Create megastore-web service
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = apply
    5. Use Configuration files = checked
    6. Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/megastore-web-service.yaml
  19. Configure the eleventh Deploy to Kubernetes task as follows:
    1. Display name = Create message-queue deployment
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = apply
    5. Use Configuration files = checked
    6. Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/message-queue-deployment.yaml
  20. Configure the twelfth Deploy to Kubernetes task as follows:
    1. Display name = Create megastore-web deployment
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = apply
    5. Use Configuration files = checked
    6. Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/megastore-web-deployment.yaml
  21. Configure the thirteenth Deploy to Kubernetes task as follows:
    1. Display name = Update megastore-web with latest image
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = set
    5. Arguments = image deployment/megastore-web-deployment megastoreweb=$(AcrName).azurecr.io/megastoreweb:$(Build.BuildNumber)
  22. Configure the fourteenth Deploy to Kubernetes task as follows:
    1. Display name = Create megastore-savesalehandler deployment
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = apply
    5. Use Configuration files = checked
    6. Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/megastore-savesalehandler-deployment.yaml
  23. Configure the fifteenth Deploy to Kubernetes task as follows:
    1. Display name = Update megastore-savesalehandler with latest image
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = set
    5. Arguments = image deployment/megastore-savesalehandler-deployment megastoresavesalehandler=$(AcrName).azurecr.io/megastoresavesalehandler:$(Build.BuildNumber)

That's a heck of a lot of configuration, so what exactly have we built?

The first eight tasks deal with the configuration that supports the services and deployments:

  • The image pull secret stores the credentials to the Azure Container Registry so that deployments that need to pull images from the ACR can authenticate.
  • The ASPNETCORE_ENVIRONMENT config map sets the environment for ASP.NET Core. I don't do anything with this but it could be handy for troubleshooting purposes.
  • The DB_CONNECTION_STRING secret stores the connection string to the Azure SQL database and is used by the megastore-savesalehandler-deployment.yaml configuration.
  • The MESSAGE_QUEUE_URL config map stores the URL to the NATS message queue and is used by the megastore-web-deployment.yaml and megastore-savesalehandler-deployment.yaml configurations.

As mentioned above, a limitation of the VSTS Deploy to Kubernetes task means that in order to be able to update Secrets and ConfigMaps they need to be deleted first and then created again. This does mean that an exception is thrown the first time a delete task is run however the Continue on error option ensures that the release doesn't fail.
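To make the pattern concrete, this is roughly what each pair of tasks is doing under the covers for the ASPNETCORE_ENVIRONMENT config map in the dat namespace; the same delete-then-create pattern applies to the other config map and to the secrets.

    # Delete the existing config map; the first time round this fails, which is why Continue on error is checked
    kubectl delete configmap aspnetcore.env --namespace=dat

    # Recreate the config map with the current value
    kubectl create configmap aspnetcore.env --namespace=dat --from-literal=ASPNETCORE_ENVIRONMENT=dat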

The remaining seven tasks deal with the deployment and configuration of the components (other than the Azure SQL database) that make up the MegaStore application:

  • The NATS message queue requires a service so that other components can talk to it, plus a deployment that specifies how its image should be run.
  • The MegaStore.Web front end requires a service so that it is exposed to the outside world, plus a deployment that specifies how its image should be run.
  • The MegaStore.SaveSalehandler monitoring component only needs a deployment that specifies how its image should be run, as nothing connects to it directly.

If everything has been configured correctly then triggering a release should result in a megastore-web-service being created. You can check the deployment was successful by executing kubectl get services --namespace=dat to get the external IP address of the LoadBalancer which you can paste in to a browser to confirm that the ASP.NET Core website is running. On the backend, you can use SQL Server Management Studio to connect to the database and confirm that records are being created in dbo.Sale.

If you are running in to problems you can run the Kubernetes Dashboard to find out what is failing. Typically it's deployments that fail, and navigating to Workloads > Deployments can highlight the failing deployment. You can find out what the error is from the New Replica Set panel by clicking on the Logs icon, which brings up a new browser tab with a command line style output of the error. If there is no error it displays any Console.WriteLine output. Very neat:

Create a PRD Release Environment in VSTS

With a DAT environment created we can now create other environments on the route to production. This could be whatever else is needed to test the application, however here I'm just going to create a production environment I'll call PRD. I described this procedure in my previous post so here I'll just list the high-level steps:

  1. Clone the DAT environment and rename it PRD.
  2. In the Variables tab rename the cloned DatDbConn and DatEnvironment variables (the ones with PRD scope) to PrdDbConn and PrdEnvironment and change their values accordingly.
  3. In the Tasks tab visit each task and change all references of $(DatDbConn) and $(DatEnvironment) to $(PrdDbConn) and $(PrdEnvironment). All Namespace fields will need changing and many of the tasks that use the Arguments field will need attention.
  4. Trigger a build and check the deployment was successful by executing kubectl get services --namespace=prd to get the external IP address of the LoadBalancer which you can paste in to a browser to confirm that the ASP.NET Core website is running.

Wrapping Up

Although the final result is a CI/CD pipeline that certainly works there are more tasks than I'm happy with due to the need to delete and then recreate Secrets and ConfigMaps and this also adds quite a bit of overhead to the time it takes to deploy to an environment. There's bound to be a more elegant way of doing this that either exists now and I just don't know about it or that will exist in the future. Do post in the comments if you have thoughts.

Although I'm three posts in I've barely scratched the surface of the different topics that I could cover, so plenty more to come in this series. Next time it will probably be around health and / or monitoring.

Cheers—Graham

Deploy a Dockerized ASP.NET Core Application to Kubernetes on Azure Using a VSTS CI/CD Pipeline: Part 2

Posted by Graham Smith on March 21, 2018No Comments (click here to comment)

If you need to provision a new environment for your deployment pipeline, what's your process and how long does it take? For many of us the process probably starts with a request to an infrastructure team for new virtual machines. If the new VMs are in Azure the request might be completed quite quickly; if they are on premises it might take much longer. In both scenarios you might have to justify your request: there will be actual cost in Azure and on premises it's another chunk of the datacentre 'gone'.

With the help of containers and container orchestrators I predict (and sincerely hope) that this sort of pain will become a distant memory for much of the software development community for whom it is currently an issue. The reason is that container orchestration technologies abstract away the virtual (or physical) server layer and allow you to focus on configuring services and how they communicate with each other—all through configuration files. The only time you'd need to think of virtual (or physical) servers is if the cluster running your orchestrator needed more capacity, in which case someone will need to add more nodes. A whole new environment for your pipeline just by doing some work with a configuration file? What's not to like?

In this blog post I hope to make my prediction come alive by showing you how new environments can be quickly created using Kubernetes running in Microsoft's Azure Container Service (AKS), crucially using declarative configuration files that get deployed as part of a VSTS release pipeline. This post follows directly on from a previous post, both in terms of understanding and also the components that were built in that first post, so if you haven't already done so I recommend working your way through that post before going further.

Housekeeping

In the previous post we deployed to the default namespace so it probably makes sense to clean all this up. This can all be done by the command line of course but to mix it up a bit I'll illustrate using the Kubernetes Dashboard. You can start the dashboard using the following command, substituting in the name of your resource group and the name of the cluster:
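Assuming you're using the Azure CLI as elsewhere in this series, the command is az aks browse:

    # Substitute your own resource group and cluster names
    az aks browse --resource-group <resource-group-name> --name <cluster-name>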

This should open the dashboard in a browser displaying the default namespace. Navigate to Workloads > Deployments and using the hamburger menu delete the deployment:

Navigate to Discovery and Load Balancing > Services and delete the service:

Navigate to Config and Storage > Secret and delete the secret:

Environments and Namespaces

The Kubernetes feature that we'll use to create environments that together form part of our pipeline is Namespaces. You can think of namespaces as a way to divide the Kubernetes cluster in to virtual clusters. Within a namespace resource names need to be unique, but they don't have to be across namespaces. This is great because each environment is effectively isolated, so resource names can stay the same in every environment. Say goodbye to having to append the environment name to all the resources in your environment to make them unique.

In this post I'll make a pipeline consisting of two environments. I'm sticking with a convention I established several years ago so I'll be creating DAT (developer automated test) and PRD (production) environments. In a complete pipeline I might also create a DQC (developer quality control) environment to sit between DAT and PRD but that won't really add anything extra to this exercise.

First up is to create the namespaces. There is an argument for saying that namespace creation should be part of the release pipeline however in this post I'm going to create everything manually as I think it helps to understand what's going on. Create a file called namespaces.yaml and add the following contents:
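A minimal namespaces.yaml that creates the dat and prd namespaces used in this post looks like this:

    # namespaces.yaml - one Namespace object per environment
    apiVersion: v1
    kind: Namespace
    metadata:
      name: dat
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: prd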

Note that namespace names need to be in lower case as they need to be DNS compatible. Open a command prompt at the same location as namespaces.yaml and execute the following command: kubectl create -f namespaces.yaml. You should get a message back advising the namespaces have been created, and at one level that's all there is to it. However there are a couple of extra bits worth knowing.

When you first start working with kubectl at the command line you are working in the default namespace. Working with other namespaces needs some configuration.

To return details of the configuration stored in C:\Users\<username>\.kube\config use kubectl config view.

From this output you need to determine your cluster name (which you probably already know) as well as the name of the user. These details are fed in to kubectl config set-context to create a new context for an environment (in this case the DAT environment).

To switch to working in this context (and hence the dat namespace) use kubectl config use-context, and to confirm (or check) the current context use kubectl config current-context. To get back to the default namespace switch back to the original context, again with kubectl config use-context. The sketch below pulls these commands together.
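As a sketch, and assuming a cluster called myAksCluster, a user called clusterUser_myResourceGroup_myAksCluster and a context for the DAT environment simply named dat (all placeholder names rather than values from my actual cluster), the sequence looks like this:

    # Show the current kubeconfig, including cluster, user and context names
    kubectl config view

    # Create a context for the DAT environment (cluster and user names come from the output above)
    kubectl config set-context dat --cluster=myAksCluster --user=clusterUser_myResourceGroup_myAksCluster --namespace=dat

    # Switch to the DAT context (and hence the dat namespace)
    kubectl config use-context dat

    # Check which context is currently active
    kubectl config current-context

    # Switch back to the original context to get back to the default namespace
    kubectl config use-context myAksCluster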

Normally that would be most of what you need to know to work with namespaces, however as of the time of writing there is a bug in the VSTS Deploy to Kubernetes task which requires some extra work. The bug may be fixed by the time you read this however it's handy to examine the issue to further understand what is going on behind the scenes.

Each namespace needs to access the Azure Container Registry (ACR) we created in the previous post to pull down images. This is a private registry so we don't want open access and so some form of authentication is required. This is provided by the creation of a Kubernetes secret that holds the authentication details to the ACR. The VSTS Deploy to Kubernetes task can create this secret for us however the bug is that it only creates the secret for the default namespace and fails to create the secret when a different namespace is specified. The workaround is to create the secret manually in each namespace using the following command:
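Based on the parameter descriptions that follow (and the same technique used elsewhere in this series), the command is a kubectl create secret docker-registry command along these lines:

    # Run once per namespace, substituting your own values for each placeholder
    kubectl create secret docker-registry <secret-name> --namespace=<namespace> --docker-server=<acr-name>.azurecr.io --docker-username=<acr-name> --docker-password=<acr-admin-password> --docker-email=<any-valid-email-address>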

In the above command secret-name is any arbitrary name you choose for the secret, namespace is the namespace in which to create the secret, acr-name is the name of your ACR, acr-admin-password is the password from the Access keys panel of your ACR and any-valid-email-address is just that. You'll need to run this command for each namespace of course. One final thing: you'll need to make sure that in the codebase the imagePullSecrets name in deployment.yaml matches the name of the secret you just created.

Amend the VSTS Pipeline to Support Multiple Environments

In this section we amend the release pipeline that was built in the previous post to support multiple environments.

  1. In the Pipeline tab rename Environment 1 to DAT:
  2. In the Variables tab create a variable to hold the name of the secret created above to authenticate with ACR. Create a second variable for the DAT environment namespace and change its scope to DAT. Remember that the value needs to be lower case:
  3. In the Tasks tab amend all three Deploy to Kubernetes tasks so that the Namespace field contains the $(DatEnvironment) variable. At the same time ensure that the Secret name field matches the name of the secret variable created above:
  4. In order to test that deploying to DAT works, either trigger a build or, if you updated deployment.yaml on your workstation, commit your code. If the deployment was successful find the external IP address of the LoadBalancer by executing kubectl get services --namespace=dat and paste it in to a browser to confirm that the ASP.NET Core website is running.

Amend the VSTS Pipeline to Support a New Environment

Now for the fun bit where we see just how easy it is to configure a new, network-isolated environment.

  1. In the Pipeline tab use the arrow next to Environments > Add to show and then select Clone environment:
  2. Rename the cloned environment to PRD. Create a new variable (ie PrdEnvironment) scoped to PRD to hold the prd namespace and amend each of the three Deploy to Kubernetes tasks so that the Namespace field contains the $(PrdEnvironment) variable.
  3. Trigger a build and check the deployment was successful by executing kubectl get services --namespace=prd to get the external IP address of the LoadBalancer which you can paste in to a browser to confirm that the ASP.NET Core website is running.

And That's It!

Yep—that really is all there is to it! Okay, this is just a trivial example, however even with more services the procedure would be the same. Granted, in a more complex application there might be some environment variables or secrets that might change but even so, it's just configuration.

I'm thrilled by the power that Kubernetes gives to developers—no more thinking about VMs or tin, no more having to append resources with environment names, and the ability to create a new environment in the blink of an eye—wow!

There's lots more I'm planning to cover in the deployment pipeline space however next time I'll be looking at the development inner loop and the options for running Kubernetes whilst developing code.

Cheers—Graham

Deploy a Dockerized ASP.NET Core Application to Kubernetes on Azure Using a VSTS CI/CD Pipeline: Part 1

Posted by Graham Smith on February 20, 2018No Comments (click here to comment)

Over the past 18 months or so I've written a handful of blog posts about deploying Docker containers using Visual Studio Team Services (VSTS). The first post covered deploying a container to a Linux VM running Docker and other posts covered deploying containers to a cluster running DC/OS—all running in Microsoft Azure. Fast forward to today and everything looks completely different from when I wrote that first post: Docker is much more mature with features such as multi-stage builds dramatically streamlining the process of building source code and packaging it in to containers, and Kubernetes has emerged as a clear leader in the container orchestration battle and looks set to be a game-changing technology. (If you are new to Kubernetes I have a Getting Started blog post here with plenty of useful learning resources and tips for getting started.)

One of the key questions that's been on my mind recently is how to use Kubernetes as part of a CI/CD pipeline, specifically using VSTS to deploy to Microsoft's Azure Container Service (AKS), which is now specifically targeted at managing hosted Kubernetes environments. So in a new series of posts I'm going to be examining that very question, with each post building on previous posts as I drill deeper in to the details. In this post I'm starting as simply as I possibly can whilst still answering the key question of how to use VSTS to deploy to Kubernetes. Consequently I'm ignoring the Kubernetes experience on the development workstation, I only deploy a very simple application to one environment and I'm not looking at scaling or rolling updates. All this will come later, but meantime I hope you'll find that this walkthrough will whet your appetite for learning more about CI/CD and Kubernetes.

Development Workstation Configuration

These are the main tools you'll need on a Windows 10 Pro development workstation (I've documented the versions of certain tools at the time of writing but in general I'm always on the latest version):

  • Visual Studio 2017—version 15.5.6 with the ASP.NET and web development workload.
  • Docker for Windows—stable channel 17.12.0-ce.
  • Windows Subsystem for Linux (WSL)—see here for installation details. I'm still using Bash on Ubuntu on Windows that I installed before WSL moved to the Microsoft Store and in this post I assume you are using Ubuntu. The aim of installing WSL is to run Azure CLI, although technically you don't need WSL as Azure CLI will run happily under a Windows command prompt. However using WSL facilitates running Azure CLI commands from a Bash script.
  • Azure CLI on Windows Subsystem for Linux—see here for installation (and subsequent upgrade) instructions. There are several ways to login to Azure from the CLI however I've found that the interactive log-in works well since once you're logged-in you remain so for quite a long time (many days for me so far). Use az -v to check which version you are on (2.0.27 was latest at time of writing).
  • kubectl on Azure CLI—the kubectl CLI is used to interact with a Kubernetes cluster. Install using sudo az aks install-cli.

Create Services in Microsoft Azure

There are several services you will need to set up in Microsoft Azure:

  • Azure Container Registry—see here for an overview and links to the various methods for creating an ACR. I use the Standard SKU for the better performance and increased storage.
  • Azure Container Service (AKS) cluster—see here for more details about AKS and how to create a cluster, however you may find it easier to use my script below. I started off by creating a cluster and then destroying it after each use, until I did some tests and found that a one-node cluster was costing pennies per day rather than the pounds per day I had assumed it would cost, so now I just keep the cluster running.
    • From a WSL Bash prompt run nano create_k8s_cluster.sh to bring up the nano editor with a new empty file. Copy and paste (by pressing the right mouse key) the cluster creation script (a sketch of one appears after this list).
    • Change the variables to suit your requirements. If you only have one Azure subscription you can delete the lines that set a particular subscription as the default, otherwise use az account list to list your subscriptions to find the ID.
    • Exit out of nano making sure you save the changes (Ctrl +X, Y) and then apply permissions to make it executable by running chmod 700 create_k8s_cluster.sh.
    • Next run the script using ./create_k8s_cluster.sh.
    • Once the cluster is fully up-and-running you can show the Kubernetes dashboard using az aks browse --resource-group $resourceGroup --name $clusterName.
    • You can also start to use the kubectl CLI to explore the cluster. Start with kubectl get nodes and then have a look at this cheat sheet for more commands to run.
    • The cluster will probably be running an older version of Kubernetes—you can check and find the procedure for upgrading here.
  • Private VSTS Agent on Linux—you can use the hosted agent (called Hosted Linux Preview at time of writing) but I find it runs very slowly and additionally because a new agent is used every time you perform a build it has to pull docker images down each time which adds to the slowness. In a future post I'll cover running a VSTS agent from a Docker image running on the Kubernetes cluster but for now you can create a private Linux agent running on a VM using these instructions. Although they date back to October 2016 they still work fine (I've checked them and tweaked them slightly).
    • Since we will only need this agent to build using Docker you can skip steps 5b, 5c and 5d.
    • Install a newer version of Git—I used these instructions.
    • Install docker-compose using these instructions and choosing the Linux tab.
    • Make the docker-user a member of the docker group by executing usermod -aG docker ${USER}.
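Going back to the create_k8s_cluster.sh script mentioned in the AKS cluster step above, the following is only a sketch of what it needs to do (set the default subscription, create a resource group, create a one-node cluster, fetch the credentials and copy the kubeconfig to C:\Users\Public for the VSTS endpoint step later on); every variable value is a placeholder that you should change to suit your own setup.

    #!/bin/bash
    # Placeholder values - change these to suit your requirements
    subscriptionId="00000000-0000-0000-0000-000000000000"
    resourceGroup="myAksResourceGroup"
    clusterName="myAksCluster"
    location="westeurope"

    # Set the default subscription (delete if you only have one subscription)
    az account set --subscription $subscriptionId

    # Create the resource group and a one-node AKS cluster
    az group create --name $resourceGroup --location $location
    az aks create --resource-group $resourceGroup --name $clusterName --node-count 1 --generate-ssh-keys

    # Merge the cluster credentials in to ~/.kube/config and copy the file somewhere Windows can see it
    az aks get-credentials --resource-group $resourceGroup --name $clusterName
    cp ~/.kube/config /mnt/c/Users/Public/config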

Create VSTS Endpoints

In order to talk to the various Azure services you will need to create the following endpoints in VSTS (from the cog icon on the toolbar choose Services > New Service Endpoint):

  • Azure Resource Manager—to point to your MSDN subscription. You'll need to authenticate as part of the process.
  • Kubernetes Service Connection—to point to your Kubernetes cluster. You'll need the FQDN to the cluster (prepended with https://) which you can get from the Azure CLI by executing az aks show --resource-group $resourceGroup --name $clusterName, passing in your own resource group and cluster names. You'll also need the contents of the kubeconfig file. If you used the script above to create the cluster then the script copied the config file to C:\Users\Public and you can use Notepad to copy the contents.

Configure a CI Build

The first step to deploying containers to a Kubernetes cluster is to configure a CI build that creates a container and then pushes the container to a Docker registry—Azure Container Registry in this case.

Create a Sample App
  • Within an existing Team Project create a new Git repository (Code > $current repository$ > New repository) called k8s-aspnetcore. Feel free to select the options to add a README and a VisualStudio .gitignore.
  • Clone this repo on your development workstation:
    • Open PowerShell at the desired root folder.
    • Copy the URL from the VSTS code view of the new repository.
    • At the PowerShell prompt execute git clone along with the pasted URL.
  • Make sure Docker for Windows is running.
  • In Visual Studio create an ASP.NET Core Web Application in the folder the git clone command created.
  • Choose an MVC app and enable Docker support for Linux.
  • You should now be able to run your application using the green Docker run button on the Standard toolbar. What is interesting here is that the build process is using a multi-stage Dockerfile, ie the tooling to build the application is running from a Docker container. See Steve Lasker's post here for more details.
  • In the root of the repository folder create a folder named k8s-config, which we'll use later to store Kubernetes configuration files. In Visual Studio create a New Solution Folder with the same name and back in the file system folder create empty files named service.yaml and deployment.yaml. In Visual Studio add these files as existing items to the newly created solution folder.
  • The final step here is to commit the code and sync it with VSTS.
Create a VSTS Build
  • In VSTS create a new build based on the repository created above and start with an empty process.
  • After the wizard stage of the setup supply an appropriate name for the build and select the Agent queue created above if you are using the recommended private agent or Hosted Linux Preview if not.
  • Go ahead and perform a Save & queue to make sure this initial configuration succeeds.
  • In the Phase 1 panel use + to add two Docker Compose tasks and one Publish Build Artifacts task.
  • If you want to be able to perform a Save & queue after configuring each task (recommended) then right-click the second and third tasks and disable them.
  • Configure the first Docker Compose task as follows:
    • Display name = Build service images
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Docker Compose File = **/docker-compose.yml
    • Project Name = $(Build.Repository.Name)
    • Qualify Image Names = checked
    • Action = Build service images
    • Additional Image Tags = $(Build.BuildId)
    • Include Latest Tag = checked
  • Configure the second Docker Compose task as follows:
    • Display name = Push service images
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Docker Compose File = **/docker-compose.yml
    • Project Name = $(Build.Repository.Name)
    • Qualify Image Names = checked
    • Action = Push service images
    • Additional Image Tags = $(Build.BuildId)
    • Include Latest Tag = checked
  • Configure the Publish Build Artifacts task as follows:
    • Display name = Publish k8s config
    • Path to publish = k8s-config (this is the folder we created earlier in the repository root folder)
    • Artifact name = k8s-config
    • Artifact publish location = Visual Studio Team Services/TFS
  • Finally, in the Triggers section of the build editor check Enable continuous integration so that the build will trigger on a commit from Visual Studio.

So what does this build do? The first Docker Compose task uses the docker-compose.yml file to work out what images need building as specified by Dockerfile file(s) for different services. We only have one service (k8s-aspnetcore) but there could (and usually would) be more. With the image built on the VSTS agent the second Docker Compose task pushes the image to the Azure Container Registry. If you navigate to this ACR in the Azure portal and drill in to the Repositories section you should see your image. The build also publishes the yaml configuration files needed to deploy to the cluster.
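For context, the docker-compose.yml that Visual Studio generates for a single-service solution like this one is roughly as follows; this is a sketch only and your file will reflect your own project name and paths.

    # docker-compose.yml - one entry per service image to build
    version: '3'

    services:
      k8s-aspnetcore:
        image: k8s-aspnetcore
        build:
          context: .
          dockerfile: k8s-aspnetcore/Dockerfile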

Configure a Release Pipeline

We are now ready to configure a release to deploy the image that's hosted in ACR to our Kubernetes cluster. Note that you'll need to complete all of this section before you can perform a release.

Create a VSTS Release Definition
  • In VSTS create a new release definition, starting with an empty process and changing the name to k8s-aspnetcore.
  • In the Artifacts panel click on Add artifact and wire-up the build we created above.
  • With the build now added as an artifact click on the lightning bolt to enable the Continuous deployment trigger.
  • In the default Environment 1 click on 1 phase, 0 task and in the Agent phase click on + to create three Deploy to Kubernetes tasks.
  • Configure the first Deploy to Kubernetes task as follows:
    • Display name = Create Service
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = apply
    • Use Configuration files = checked
    • Configuration File = $(System.DefaultWorkingDirectory)/k8s-aspnetcore/k8s-config/service.yaml
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any secret word of your choosing, to be used consistently across all tasks]
  • Configure the second Deploy to Kubernetes task as follows:
    • Display name = Create Deployment
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = apply
    • Use Configuration files = checked
    • Configuration File = $(System.DefaultWorkingDirectory)/k8s-aspnetcore/k8s-config/deployment.yaml
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any secret word of your choosing, to be used consistently across all tasks]
  • Configure the third Deploy to Kubernetes task as follows:
    • Display name = Update with Latest Image
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = set
    • Arguments = image deployment/k8s-aspnetcore-deployment k8s-aspnetcore=$yourAcrNameHere$.azurecr.io/k8s-aspnetcore:$(Build.BuildId)
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any secret word of your choosing, to be used consistently across all tasks]
  • Make sure you save the release but don't bother testing it out just yet as it won't work.
Create the Kubernetes configuration
  • In Visual Studio add the Kubernetes Service configuration to the service.yaml file created above; a minimal sketch of both files appears after this list.
  • Add the Kubernetes Deployment configuration to the deployment.yaml file created above. The image name in the deployment points at my ACR so you will need to amend it accordingly.
  • You can now commit these changes and then head over to VSTS to check that the release was successful.
  • If the release was successful you should be able to see the ASP.NET Core website in your browser. You can find the IP address by executing kubectl get services from wherever you installed kubectl.
  • Another command you might try running is kubectl describe deployment $nameOfYourDeployment, where $nameOfYourDeployment is the metadata > name in deployment.yaml. A quick tip here is that if you only have one deployment you only need to type the first letter of it.
  • It's worth noting that splitting the service and deployment configurations in to separate files isn't necessarily a best practice however I'm doing it here to try and help clarify what's going on.
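What follows is a minimal sketch of the sort of thing the two files need to contain, assuming a deployment named k8s-aspnetcore-deployment and a container named k8s-aspnetcore (to match the set image task above), a placeholder ACR name and a placeholder image pull secret; the apiVersion values reflect current Kubernetes conventions and may differ from what was current when this post was written.

    # service.yaml - exposes the pods via a load balancer with an external IP address
    apiVersion: v1
    kind: Service
    metadata:
      name: k8s-aspnetcore-service
    spec:
      type: LoadBalancer
      ports:
      - port: 80
      selector:
        app: k8s-aspnetcore

    # deployment.yaml - describes how many pod replicas to run and which image to use
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: k8s-aspnetcore-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: k8s-aspnetcore
      template:
        metadata:
          labels:
            app: k8s-aspnetcore
        spec:
          containers:
          - name: k8s-aspnetcore
            image: myacr.azurecr.io/k8s-aspnetcore:latest
            ports:
            - containerPort: 80
          imagePullSecrets:
          - name: mysecretword   # must match the Secret name field used in the Deploy to Kubernetes tasks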

In terms of a very high level explanation of what we've just configured in the release pipeline, for a simple application such as an ASP.NET Core website we need to deploy two key objects:

  1. A Kubernetes Service which (in our case) is configured with an external IP address and acts as an abstraction layer for Pods which are killed off and recreated every time a new release is triggered. This is handled by the first Deploy to Kubernetes task.
  2. A Kubernetes Deployment which describes the nature of the deployment—number of Pods (via Replica Sets), how they will be upgraded and so on. This is handled by the second Deploy to Kubernetes task.

On first deployment these two objects are all that is needed to perform a release. However, because of the declarative nature of these objects they do nothing on subsequent release if they haven't changed. This is where the third Deploy to Kubernetes task comes in to play—ensuring that after the first release subsequent releases do cause the container to be updated.

Wrapping Up

That concludes our initial look at CI/CD with VSTS and Azure Container Service (AKS)! As I mentioned at the beginning of the post I've purposely tried to keep this walkthrough as simple as possible, so watch out for the next installment where I'll build on what I've covered here.

Cheers—Graham