Deploy a Dockerized Application to Azure Kubernetes Service using Azure YAML Pipelines 6 – Telemetry and Diagnostics
This is the sixth post in a series where I'm taking a fresh look at how to deploy a dockerized application to Azure Kubernetes Service (AKS) using Azure Pipelines after having previously blogged about this in 2018. The list of posts in this series is as follows:
- Getting Started
- Terraform Development Experience
- Terraform Deployment Pipeline
- Running a Dockerized Application Locally
- Application Deployment Pipelines
- Telemetry and Diagnostics (this post)
One of the problems with running applications in containers in an orchestration system such as Kubernetes is that it can be harder to understand what is happening when things go wrong. So while instrumenting your application for telemetry and diagnostic information should be fairly high on your to do list anyway, this is even more the case when running applications in containers. Whilst there are lots of third party offerings in the telemetry and diagnostics space, in this post I take a look at what's available for those wanting to stick with the Microsoft experience. If you want to follow along you can clone / fork my repo here, and if you haven't already done so please take a look at the first post to understand the background, what this series hopes to cover and the tools mentioned in this post.
Azure DevOps Environments
If you are following along with this series you may recall that in the last post we configured an Azure DevOps Pipeline Environment for the Kubernetes cluster. It turns out that these are great for quickly taking a peek at the health of the components deployed to a cluster. For example, this is what's displayed for the MegaStore.SaveSaleHandler deployment and pods:
It gets better though because you can drill in to the pods and view the log for each pod. This is the log from the message-queue-deployment pod:
Of course, pipeline environments only really tell you what's going on at that moment in time (or maybe for the previous few minutes depending on how busy the logs are). In order to capture sufficient retrospective data to be useful requires the services of a dedicated tool.
Application Insights
From the docs: Application Insights, a feature of Azure Monitor, is an extensible Application Performance Management (APM) service for developers and DevOps professionals. When using it in conjunction with an application, as we are here, there are several configuration options to address. I give an overview below, but everything is implemented in the sample application here.
Instrumentation keys
When using Application Insights with an application that is deployed to different environments it's important to take steps to ensure that telemetry from different environments is not mixed up together. The principal technique to avoid this is to have separate Application Insights resource instances which each have their own instrumentation key. Each stage of the deployment pipeline is then configured to make the appropriate instrumentation key available and the application running in that stage of the pipeline sends telemetry back using that key. The Terraform configuration developed in a previous post created three Application Insights resource instances, one for each of the environments the MegaStore application runs in:
When working with containers probably the easiest way to make an instrumentation key available to applications is via an environment variable named APPINSIGHTS_INSTRUMENTATIONKEY. An ASP.NET Core application component will automatically recognise APPINSIGHTS_INSTRUMENTATIONKEY; in other components the key may need to be read and passed to the SDK manually. The MegaStore application contains a helper class (MegaStore.Helper.Env) to pass environment variables to calling code.
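To make that concrete, here is a minimal, illustrative sketch of how the key could be surfaced to a container in a Kubernetes Deployment manifest. The deployment name, image and the idea of sourcing the key from a Kubernetes secret are assumptions for illustration rather than a copy of the repo's manifests.

```yaml
# Minimal, illustrative Deployment showing how the key could be injected
# (names, image and secret are hypothetical, not taken from the repo)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: megastore-web-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: megastore-web
  template:
    metadata:
      labels:
        app: megastore-web
    spec:
      containers:
        - name: megastore-web
          image: yourAcrName.azurecr.io/megastore-web:latest
          env:
            - name: APPINSIGHTS_INSTRUMENTATIONKEY
              valueFrom:
                secretKeyRef:
                  name: appinsights            # hypothetical secret holding the key
                  key: instrumentationkey
```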
Server-Side Telemetry
Each component of an application that is required to generate server-side telemetry needs, at the very least, to consume one of the Application Insights SDKs as a NuGet package. The MegaStore.Web ASP.NET Core component is configured with Microsoft.ApplicationInsights.AspNetCore and the MegaStore.SaveSaleHandler .NET Core console application component with Microsoft.ApplicationInsights.WorkerService.
These components are configured via the IServiceCollection interface. For a .NET Core console application the code is as follows:
```csharp
IServiceCollection services = new ServiceCollection();
services.AddApplicationInsightsTelemetryWorkerService(Env.AppInsightsInstrumentationKey);
// more config here
IServiceProvider serviceProvider = services.BuildServiceProvider();
```
For an ASP.NET Core application the code is similar, however IServiceCollection is supplied via the ConfigureServices method of the Startup class.
Client Side Telemetry
For web applications you may want to generate client-side usage telemetry and in ASP.NET Core applications this is achieved through two configuration steps:
- In _ViewImports.cshtml add @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet
- In _Layout.cshtml add @Html.Raw(JavaScriptSnippet.FullScript) at the end of the <head> section but before any other script.
You can read more about this here.
Kubernetes Enhancements
Since the deployed version of MegaStore runs under Kubernetes we can take advantage of the Microsoft.ApplicationInsights.Kubernetes NuGet package to enhance the standard Application Insights telemetry with Kubernetes-related information. With the NuGet package installed you simply call the AddApplicationInsightsKubernetesEnricher() extension method on IServiceCollection.
Visualising Application Components with Cloud Role
Application Insights uses an Application Map to visualise the components of a system. It will automatically name a component but it's a good idea to set this explicitly. This is achieved through the use of a CloudRoleTelemetryInitializer class, which you will need to add to each component that needs tracking:
```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

namespace MegaStore.Web
{
    public class CloudRoleTelemetryInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            telemetry.Context.Cloud.RoleName = "MegaStore.Web";
        }
    }
}
```
The important part of the code above is setting the RoleName. A CloudRoleTelemetryInitializer class is configured via IServiceCollection with this line of code:
```csharp
services.AddSingleton<ITelemetryInitializer, CloudRoleTelemetryInitializer>();
```
Custom Telemetry
Adding your own custom telemetry is achieved with the TelemetryClient class. In an ASP.NET Core application TelemetryClient can be configured in a controller through dependency injection as described here. In a .NET Core Console app it's configured from IServiceProvider. The complete implementation in MegaStore.SaveSaleHandler is as follows:
```csharp
// Class-level declaration
private static TelemetryClient _telemetryClient;

IServiceCollection services = new ServiceCollection();
services.AddApplicationInsightsTelemetryWorkerService(Env.AppInsightsInstrumentationKey);
services.AddApplicationInsightsKubernetesEnricher();
services.AddSingleton<ITelemetryInitializer, CloudRoleTelemetryInitializer>();
IServiceProvider serviceProvider = services.BuildServiceProvider();
_telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>();
```
There are then several options for generating telemetry including TelemetryClient.TrackEvent, TelemetryClient.TrackTrace and TelemetryClient.TrackException.
Generating Data
With all this configuration out of the way we can now start generating data. The first step is to find the IP addresses of the MegaStore.Web home pages for the qa and prd environments. One way is to bring up the Kubernetes Dashboard by running the following command: az aks browse --resource-group yourResourceGroup --name yourAksCluster.
Now switch to the desired namespace and navigate to Discovery and Load Balancing > Services to see the IP address of megastore-web-service:
My preferred way of generating traffic to these web pages is with a PowerShell snippet run from Azure Cloud Shell (which stops your own machine from being overloaded if you really crank things up by reducing the Start-Sleep value):
```powershell
while ($true) {
  (New-Object Net.WebClient).DownloadString("http://51.11.9.183/")
  Start-Sleep -Milliseconds 1000
}
```
As things stand you will probably mostly get ‘routine' telemetry being returned. If you want to simulate exceptions you can change the name of one of the columns in the database table dbo.Sale.
Visualising Data
Without any further configuration there are several areas of Application Insights that will now start displaying data. Here is just a small selection of what's available:
Overview Panel
Application Map Panel
Live Metrics Panel
Search Panel
Whilst all these overview representations of data are very useful, it is in the detail where things perhaps get most interesting. Drilling in to an individual Trace for example shows a useful set of standard Trace properties:
And also a set of custom Trace properties courtesy of Microsoft.ApplicationInsights.Kubernetes:
Drilling in to a synthetic exception (due to a changed column name in the database) provides details of the exception and also the stack trace:
These are just a few examples of what's available and a fuller list is available here. And this is just the application monitoring side of the wider Azure Monitor platform. A good starting point to see where Application Insights fits in to the bigger picture is the overview page here.
That's it folks!
That's it for this mini series! As I said in the first post, the ideas presented in this series are not meant to be the definitive, one and only, way of deploying a Dockerized ASP.NET Core application to Azure Kubernetes Service. Rather, they are intended to show my journey and hopefully give you ideas for doing things differently and better. To give you one example, I'm uneasy about having Kubernetes deployments described in static YAML files. Making modifications by hand somehow feels error prone and inefficient. There are other options though, and this post has a good explanation of the possibilities. From this post we see that there is a tool called Kustomize and then we see that there is an Azure DevOps Kubernetes manifest task that uses Kustomize. I've not explored this task yet but it looks like a good next step to understand how to evolve Kubernetes deployments.
If you have your own ideas for evolving the ideas in this series do leave a comment!
Cheers -- Graham
Deploy a Dockerized Application to Azure Kubernetes Service using Azure YAML Pipelines 5 – Application Deployment Pipelines
This is the fifth post in a series where I'm taking a fresh look at how to deploy a dockerized application to Azure Kubernetes Service (AKS) using Azure Pipelines after having previously blogged about this in 2018. The list of posts in this series is as follows:
- Getting Started
- Terraform Development Experience
- Terraform Deployment Pipeline
- Running a Dockerized Application Locally
- Application Deployment Pipelines (this post)
- Telemetry and Diagnostics
In this post I deploy the MegaStore sample application that was introduced in the previous post to AKS using YAML Azure Pipelines. If you want to follow along you can clone / fork my repo here, and if you haven't already done so please take a look at the first post to understand the background, what this series hopes to cover and the tools mentioned in this post. I'm not covering Azure Pipelines basics here and if this is of interest take a look at this video and or this series of videos. I'm also assuming general familiarity with Azure DevOps and the Azure Portal.
For me this is probably the most exciting post in the series. I've been developing Azure Pipelines using YAML for a little while now and I love working in this way and wouldn't want to go back to classic pipelines ie GUI tasks.
Even though we're dealing with pipelines as code there's still a lot to configure, so let's get started!
Azure SQL qa and prd Databases
First configure the Azure SQL qa and prd databases created in a previous post. Using SQL Server Management Studio (SSMS) login to Azure SQL where Server name will be something like yourservername-asql.database.windows.net and Login and Password are the values supplied to the asql_administrator_login_name and asql_administrator_login_password Terraform variables. Once logged in create the following objects using the files in the repo's sql folder (use Ctrl+Shift+M in SSMS to show the Template Parameters dialog to add the qa and prd suffixes):
- SQL logins called sales_user_qa and sales_user_prd based on create-login-template.sql. Make a note of the passwords.
- In both the qa and prd databases, a user called sales_user and a table called Sale based on configure-database-template.sql.
Note: if you are having problems logging in to Azure SQL from SSMS make sure you have correctly set a firewall rule to allow your local workstation to connect.
Self-hosted Linux Agent
The MegaStore sample application uses Linux containers so we need a Linux agent running Docker to build them. The Microsoft ubuntu-latest agent will work but as noted in a previous post the Microsoft agents can be slow and you can't directly see what they are doing at the file system level. However, due to the magic of the newer versions of Docker Desktop and WSL 2 we can easily run a self-hosted Linux agent on a Windows 10 machine. The instructions for configuring a self-hosted agent can be found here and I assume that you have the prerequisites installed and configured as per the first post in this series. The high-level procedure is as follows:
- If you didn't create a new Agent Pool in Azure DevOps as part of a previous post, you'll need to create a new pool called Local at Organization Settings > Pipelines > Agent Pools > Add pool.
- On your Windows machine create a folder such as C:\agents\linux.
- Download the agent which will have a filename like vsts-agent-linux-x64-2.165.2.tar.gz. Move this file to C:\agents\linux (it's okay to do this in Windows Explorer).
- The tar file needs to be unzipped from an Ubuntu Bash prompt (ie Ubuntu running under WSL 2). Make sure you are at /mnt/c/agents/linux and then run tar zxvf vsts-agent-linux-x64-2.165.2.tar.gz (obviously substitute the correct filename as the version may have moved on by the time you read this). It took a couple of minutes on my machine.
- Now run ./config.sh to start the configuration process.
- You will need to supply your Azure DevOps server URL and previously created PAT.
- Use ubuntu-18.04 as the agent name and for this local instance I recommend not running as a service or at startup.
- The agent can be started by running ./run.sh at an Ubuntu Bash prompt after which you should see something like this:
- After the agent has finished running a pipeline job you can examine the files in C:\agents\linux\_work (Windows Explorer works fine) to understand what happened and assist with troubleshooting any issues.
- The ubuntu-18.04 agent name will be used in a few pipelines so it's a good candidate for adding to the megastore variable group as local_linux_agent_name.
- Don't forget that you'll need Docker Desktop running to run any pipeline jobs that use Docker.
Create a Secure File to Authenticate to AKS
One of the techniques I'm demonstrating in this blog series and in this post in particular is how to take full control of the pipeline by working with command line tools rather than Azure Pipeline tasks. Whilst tasks undoubtedly have their place, for some command line tools I don't like the way that tasks abstract away what is going on and, because of the Swiss Army knife nature of some tasks, the way they sometimes force you to supply information that may not actually be used for a task sub-command.
The command line tool predominantly in use in this post is kubectl—used to issue commands to a Kubernetes cluster. When used locally kubectl works in conjunction with a kubeconfig file that specifies connection details to a cluster. On a Windows machine, by default kubectl is going to look in C:\Users\%USERNAME%\.kube for a kubeconfig file called config. That's not going to work in an Azure Pipeline (or any pipeline) so we need a different approach. It turns out that kubectl has a --kubeconfig parameter for specifying the path to a kubeconfig file. We can make use of this in Azure Pipelines by uploading the C:\Users\%USERNAME%\.kube\config file as a Secure files item. In the pipeline we can then call a task to download the file, which by default will be to $(Agent.TempDirectory). The procedure for configuring all this is as follows:
- Whilst logged in to the Azure CLI and with the correct Azure subscription set, run az aks get-credentials --resource-group yourResourceGroup --name yourAksCluster. This will create the config file at C:\Users\%USERNAME%\.kube.
- In Azure DevOps navigate to Pipelines > Library and click + Secure file.
- Use the Upload file dialog to Browse to and upload the config file. The new secure file item is named the same as the file.
- Use the ellipsis to the right of the new secure file item to edit it:
- Edit the secure file item so that Pipeline permissions is set to Authorize for use in all pipelines:
- Note that (at least at the time of writing) for some reason this change doesn't cause the Save link to light up but you can navigate away from the editor without losing changes.
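To show how the uploaded kubeconfig might then be consumed, here is a minimal sketch of the relevant pipeline steps; the secure file name, namespace and display names are assumptions rather than verbatim extracts from the repo's pipelines.

```yaml
steps:
  # Download the kubeconfig secure file; by default it lands in $(Agent.TempDirectory)
  - task: DownloadSecureFile@1
    name: kubeconfig
    inputs:
      secureFile: config

  # Point kubectl at the downloaded file rather than ~/.kube/config
  - script: kubectl --kubeconfig $(Agent.TempDirectory)/config get pods --namespace qa
    displayName: 'kubectl get pods in qa'
```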
Once you have the kubeconfig file installed on your local machine you can access the cluster's Dashboard by running az aks browse --resource-group yourResourceGroup --name yourAksCluster.
Create Kubernetes Namespaces
Two Kubernetes namespaces are needed that will be the deployment environments. The great thing about using namespaces is that exactly the same configuration can be applied to each namespace without any naming collisions. For example, the message queue URL is nats://message-queue-service:4222 and this same URL works in all environments without any clashes.
With the kubeconfig file installed as above namespaces can be created from the command line using kubectl create namespace qa and kubectl create namespace prd.
Configure a Pipeline Environment
From the docs: An environment is a collection of resources that can be targeted by deployments from a pipeline. At the time of writing only a couple of resource types are supported, one of them being Kubernetes. It's actually a very handy way of being able to see what's going on in the cluster, including the health of pods and being able to look at the logs for each pod. There's also some nice traceability. Configuration is mostly straightforward:
- In Azure DevOps navigate to Pipelines > Environments and click New Environment.
- In the dialog that appears set the Name to megastore, select Kubernetes then Next.
- In the next step select Azure Kubernetes Service as the Provider and follow through with the authentication procedure.
- For Namespace select Existing and select qa in the dropdown:
- Click Validate and create to complete the first part of the process.
- In the next screen that appears click Add resource and repeat the above process but this time for the prd namespace. The final result should be something like this:
- Create a variable called environment_name for the name of the environment in the megastore variable group.
- Note that I've never seen the Latest job column change from Never deployed despite doing many deployments. Something to investigate...
Generic Procedure for Creating a Pipeline from an Existing YAML File
There are four separate pipelines that need creating to deploy MegaStore to AKS and this is the generic procedure for creating them from existing YAML files assuming you have cloned / forked the repo on GitHub:
- In Azure DevOps navigate to Pipelines > Pipelines and click New pipeline.
- In the Connect tab choose GitHub as the location for your code.
- In the Select tab choose the appropriate repository, possibly using the dropdown to show All repositories rather than My repositories.
- In the Configure tab choose Existing Azure Pipelines YAML file and then in the window that pops up, for Path select the required YAML file and click Continue.
- In the Review tab click the dropdown next to Run and click Save.
- The next screen you are presented with invites you to run the pipeline but before doing that click the vertical ellipsis / slimline hamburger menu next to the rightmost Run pipeline and select Rename / move:
- Overwrite Name with the desired name and click Save.
- The final step is to define any variables that are not defined in the pipeline itself. There are two options here: in the UI of the pipeline and in a variable group. More on this below.
Working With YAML Pipelines
Whilst it's possible to edit pipelines in Azure DevOps I've never bothered, and instead I prefer to use VS Code with the Azure Pipelines extension. By using a yml extension for pipeline files and a yaml extension for Kubernetes files it's possible to tell VS Code to associate just yml files with the pipelines extension using this in settings.json:
```json
"files.associations": {
    "*.yml": "azure-pipelines"
}
```
If that convention doesn't work for you an alternative could be to add a prefix to your pipelines and use that to identify them to the extension.
For various reasons I spent a very long time refactoring and fine-tuning the pipelines used in this blog series (okay, I went down several rabbit holes) and I've tried to capture what I learned below.
Choose stage names to promote code reusability
I know it's not always possible but if you can match the stage names in the release part of the pipeline to the names of your actual environments then you can make use of predefined variables such as $(System.StageName) to write templates (see below) that can be reused in different stages possibly without any extra work. (If your stage and environment names can't match for whatever reason you can still pass in the environment name as a parameter to a template but it's extra work.) For MegaStore deployment I have two AKS environments (qa and prd) and these match the qa and prd stages of the pipelines.
Talking of stages, there is also a first stage in each pipeline that I call init, as I think this is a better name than build when nothing is actually being built, but that's just a personal preference.
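Putting those two ideas together, a sketch of the overall shape might look like this; the stage, job and template names are illustrative rather than copied from the repo.

```yaml
stages:
  - stage: init          # nothing is built as such, hence init rather than build
    jobs:
      - job: init
        steps:
          - script: echo "package and publish the deployment artifact here"

  # Stage names deliberately match the qa and prd environments, so a shared step
  # template can use $(System.StageName) wherever the environment name is needed
  - stage: qa
    dependsOn: init
    jobs:
      - job: deploy_qa
        steps:
          - template: deploy-steps.yml

  - stage: prd
    dependsOn: qa
    jobs:
      - job: deploy_prd
        steps:
          - template: deploy-steps.yml
```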
Consider how many jobs a pipeline needs and the type of job
A job in Azure Pipelines is the top level container for the work that actually happens. Jobs do a lot of stuff to get ready for this work which is all potential overhead for a pipeline. As a rule of thumb you probably want to use as few jobs as you can get away with, which at a minimum is one job per stage.
You should also appreciate the difference between standard and deployment jobs. In addition to the differences described in the documentation I've noticed that a deployment job doesn't perform a git checkout unlike a standard job, so it looks like Microsoft have optimised the deployment job for deployment as well as giving it some extra functionality. In the MegaStore pipelines I've used a standard job for the init stage and deployment jobs for the qa and prd stages.
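As a minimal sketch of the difference (the pool name, environment resource and step contents are assumptions, and the two jobs are shown side by side for brevity even though they would live in separate stages):

```yaml
jobs:
  # Standard job: performs a git checkout by default
  - job: init
    pool: Local
    steps:
      - script: echo "build images and publish the artifact here"

  # Deployment job: skips the automatic checkout and records its deployments
  # against the environment resource it targets (the qa namespace in this case)
  - deployment: deploy_qa
    pool: Local
    environment: megastore.qa
    strategy:
      runOnce:
        deploy:
          steps:
            - script: echo "kubectl commands against the qa namespace here"
```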
Where to declare variables
Variables in Azure Pipelines are a pretty large and complex topic but these resources go a long way to helping you understand how they work and the different options:
In terms of where to declare variables, if they are just needed for that pipeline and are not secrets they should be declared in the pipeline itself. Variables that are needed across multiple pipelines should be declared in a variable group, which also allows for the management of variables that are secrets. The remaining scenario is where to store secrets that are only used in one pipeline. The official documentation advises using the pipeline settings UI, but I'm not certain if storing related variables and secrets in multiple locations might cause confusion and whether it's better to store related items together in a variable group. I will be using the pipeline settings UI in this post to illustrate the technique and will leave it to you to make your own mind up about whether it's a good idea to split related variables.
Giving a pipeline a custom run name
The name keyword at the beginning of each pipeline allows you to provide a custom name for each run of the pipeline. I've specified a Semantic versioning type name but there's lots of configurability.
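For example (the exact format string is illustrative rather than lifted from the repo), a semantic versioning style run name can be produced like this:

```yaml
# Produces run names such as 1.0.1, 1.0.2 and so on; $(Rev:.r) appends an
# incrementing revision for runs that share the same prefix
name: 1.0$(Rev:.r)
```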
How and when to clean the workspace
Whilst it may not always be appropriate, my general preference is to start each new run of a pipeline with a completely clean workspace so there is no chance of contamination from a previous run. Looking back in time it seems that in late 2019 the procedure for cleaning the workspace changed from cleaning at the pool level to the job level. Typically you only want to clean the workspace once per run and I've dealt with this by performing a clean in the init job of the init stage of each pipeline.
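A minimal sketch of cleaning at the job level, assuming the clean only needs to happen once in the init job:

```yaml
jobs:
  - job: init
    workspace:
      clean: all        # delete and recreate the entire workspace before the job starts
    steps:
      - checkout: self
```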
Versioning files used in the pipeline
The MegaStore pipelines call Kubernetes manifest files from the kubectl command line. (These are the YAML files in the k8s folder.) Since this folder exists on disk after the git checkout these files can be referenced directly from the command line. However, this is probably not a great idea because in theory it's possible to write a pipeline against a frequently changing repo that could end up using one version of a file in one stage of the pipeline and a different version in another.
A much better practice in my view is to package files in to an artifact and then make those packaged files available to the stages of the pipeline. An additional benefit of this approach is that the artifact is associated with the pipeline run and can be examined at a later date if you need to understand what was actually deployed. (Note that in the MegaStore pipelines I'm being a bit lazy in packaging the whole k8s folder but that isn't strictly necessary as not every file is used in each pipeline.)
By default a deployment job will try and download an artifact created in a previous part of the pipeline. In my pipelines I'm explicitly downloading the artifact in the init stage so I suppress this in the qa and prd stages using the download: none keyword.
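As a hedged sketch of that pattern (the artifact and path names are assumptions based on the k8s folder mentioned above), the relevant steps might look like this:

```yaml
# init stage: snapshot the manifests into an artifact tied to this pipeline run
- publish: $(Build.SourcesDirectory)/k8s
  artifact: k8s

# in a deployment job that doesn't need the artifact: suppress the automatic
# download that deployment jobs otherwise perform into $(Pipeline.Workspace)
- download: none
```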
Refactor the pipeline with templates
You can and should refactor your pipelines with templates. From the docs: Templates let you define reusable content, logic, and parameters. Templates function in two ways. You can insert reusable content with a template or you can use a template to control what is allowed in a pipeline. I'm using the first version here, ie to package reusable content.
Templates work at different levels, and can be used to reuse steps, jobs and stages. I started by creating job templates as it made the main pipeline much cleaner. However, I realised that the job templates in a stage were executing in any order, which definitely was not what I wanted. Other than possibly passing in a parameter to the template to control dependency I couldn't see an obvious way to set the execution order of job templates. This, in conjunction with my realising that there is some overhead to each job (see above), meant that I ditched job templates for step templates.
As an aside, one great thing I learned whilst using (the now abandoned) job templates was how to dynamically set the job name, as I wanted the job name to include the stage name. You can't simply append $(System.StageName) to the job name in a template because the job name needs to be evaluated before the pipeline executes. However, you can pass a parameter in to the template and use template expression syntax inside the template, which gets resolved during pipeline initialization. I couldn't stop smiling when I came across this feature.
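Here is a minimal sketch of the idea; the file and parameter names are invented for illustration. The calling stage passes its own name in, and because ${{ }} template expressions are resolved during pipeline initialization they can legally form part of the job name.

```yaml
# deploy-job-template.yml (illustrative file name)
parameters:
  - name: stageName
    type: string

jobs:
  # ${{ }} is template expression syntax, resolved when the pipeline is compiled,
  # unlike $( ) macro syntax which is resolved at runtime
  - job: deploy_${{ parameters.stageName }}
    steps:
      - script: echo "running in the ${{ parameters.stageName }} stage"
```

A qa stage would then reference the template and pass stageName: qa as a parameter, and the prd stage likewise.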
A final thought about templates is that it's probably a good idea to make sure you don't take refactoring too far, as to me it feels like the single-responsibility principle ought to apply to templates. I fell foul of this by nesting a template in a template. There are valid reasons to do this but in my case the nested template had nothing to do with the parent template and I decided it was probably a bad idea.
Configuring and Running the MegaStore Pipelines
At long last we get to actually create the pipelines. You should follow the generic procedure above to create the following:
- megastore-config, with the following variables
- acr_authentication_secret_name = acrauth: in pipeline settings UI as plain text
- acr_name = ACR name from Azure Portal: in megastore variable group as plain text
- acr_password = ACR password from Azure Portal: in megastore variable group as secret
- appinsights_instrumentationkey_qa = App Insights qa key from Azure Portal: in pipeline settings UI as plain text
- appinsights_instrumentationkey_prd = App Insights prd key from Azure Portal: in pipeline settings UI as plain text
- db_password_qa = password generated above for the sales_user_qa login
- db_password_prd = password generated above for the sales_user_prd login
- db_server_name = the Azure SQL server name, ie the megastoreprm-asql part without the .database.windows.net element
- megastore-message-queue
- megastore-savesalehandler
- megastore-web
The first pipeline to run should be megastore-config as this sets up environment variables used by other pipelines. In a stable system (ie not in active development / test cycle) this pipeline wouldn't be needed again unless any of the environment variables change.
The next pipeline to run is megastore-message-queue as it doesn't have dependencies. The pipeline creates a Kubernetes Service to expose pod(s) running the NATS message queue, which are deployed using a Kubernetes Deployment. For this demo setup the NATS Docker image is pulled directly from Docker Hub so there is no interaction with Azure Container Registry. Again, once deployed this pipeline would only need to be run infrequently.
The final pipelines can be run in any order. The megastore-savesalehandler pipeline only consists of a deployment because nothing needs to connect to it; all it does is monitor the message queue. The megastore-web pipeline requires both a service and a deployment because we want to talk to the pod(s) from the outside world. In both cases the init stage of the pipeline runs a series of commands to build a new image and upload it to Azure Container Registry tagged with the build number. The kubectl set image command ensures that the image with the correct build number is deployed. With a changing application these pipelines would be deployed as required to release new features. These application components can be developed and deployed independently of each other but will rely on testing in Visual Studio to make sure nothing is broken.
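As a hedged sketch of the shape of those pipelines (the image, deployment, container and variable names are illustrative and not guaranteed to match the repo), the init stage builds and pushes a tagged image and the deployment stages point the Kubernetes Deployment at it:

```yaml
steps:
  # init stage: build the image, tag it with the run number and push it to ACR
  # (plain docker login with -p is shown only for brevity in this sketch)
  - script: |
      docker build -t $(acr_name).azurecr.io/megastore-web:$(Build.BuildNumber) ./src/MegaStore.Web
      docker login $(acr_name).azurecr.io -u $(acr_name) -p $(acr_password)
      docker push $(acr_name).azurecr.io/megastore-web:$(Build.BuildNumber)
    displayName: 'build and push image'

  # qa / prd stages: roll the deployment forward to the newly pushed tag
  - script: >
      kubectl --kubeconfig $(Agent.TempDirectory)/config
      set image deployment/megastore-web-deployment
      megastore-web=$(acr_name).azurecr.io/megastore-web:$(Build.BuildNumber)
      --namespace $(System.StageName)
    displayName: 'kubectl set image ($(System.StageName))'
```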
That's it Folks!
I'm aware that there are a lot of small moving parts here and lots of scope for things to be missed. If you are following along and getting errors please leave a comment and I'll try to help. Missing or misspelt variables are a common thing that trips me up.
For me, the big takeaway from this post is that I've found writing YAML Azure Pipelines to be a very enjoyable and extremely productive way to develop deployment pipelines. If you haven't tried them I urge you to give it a go. You might be pleasantly surprised.
Next time we change gears completely and look at how Application Insights fits in to all of this.
Cheers -- Graham
Deploy a Dockerized Application to Azure Kubernetes Service using Azure YAML Pipelines 4 – Running a Dockerized Application Locally
This is the fourth post in a series where I'm taking a fresh look at how to deploy a dockerized application to Azure Kubernetes Service (AKS) using Azure Pipelines after having previously blogged about this in 2018. The list of posts in this series is as follows:
- Getting Started
- Terraform Development Experience
- Terraform Deployment Pipeline
- Running a Dockerized Application Locally (this post)
- Application Deployment Pipelines
- Telemetry and Diagnostics
In this post I explain the components of the sample application I wrote to accompany this (and the previous) blog series and how to run the application locally. If you want to follow along you can clone / fork my repo here, and if you haven't already done so please take a look at the first post to understand the background, what this series hopes to cover and the tools mentioned in this post. Additionally, this post assumes you have created the infrastructure—or at least the Azure SQL dev database—described in the previous Terraform posts.
MegaStore Application
The sample application is called MegaStore and is about as simple as it gets in terms of a functional application. It's a .NET Core 3.1 application and the idea is that a sales record (beers from breweries local to me if you are interested) is created in the presentation tier which eventually gets persisted to a database via a message queue. The core components are:
- MegaStore.Web: a skeleton ASP.NET Core application that creates a ‘sales' record every time the home page is accessed and places it on a message queue.
- NATS message queue: this is an instance of the nats image on Docker Hub using the default configuration.
- MegaStore.SaveSaleHandler: a .NET Core console application that monitors the NATS message queue for new records and saves them to an Azure SQL database using EF Core.
When running locally in Visual Studio 2019 these application components work together using Docker Compose, which is a separate project in the Visual Studio solution. There are two configuration files in use which get merged together:
- docker-compose.yml: contains the configuration for megastore.web and megastore.savesalehandler which is common to running the application both locally and in the deployment pipeline.
- docker-compose.override.yml: contains additional configuration that is only needed locally.
There are a few steps you'll need to complete to run MegaStore locally.
Azure SQL dev Database
First configure the Azure SQL dev database created in the previous post. Using SQL Server Management Studio (SSMS) login to Azure SQL where Server name will be something like yourservername-asql.database.windows.net and Login and Password are the values supplied to the asql_administrator_login_name and asql_administrator_login_password Terraform variables. Once logged in create the following objects using the files in the repo's sql folder (use Ctrl+Shift+M in SSMS to show the Template Parameters dialog to add the dev suffix):
- A SQL login called sales_user_dev based on create-login-template.sql. Make a note of the password.
- In the dev database a user called sales_user and a table called Sale based on configure-database-template.sql.
Note: if you are having problems logging in to Azure SQL from SSMS make sure you have correctly set a firewall rule to allow your local workstation to connect.
Docker Environment File
Next create a Docker environment file to store the database connection string. In Visual Studio create a file called db-credentials.env in the docker-compose project. All on one line add the following connection string, substituting in your own values for the server name and sales_user_dev password:
```text
DB_CONNECTION_STRING=Server=tcp:yourservername.database.windows.net,1433;Initial Catalog=dev;Persist Security Info=False;User ID=sales_user_dev;Password=yourpassword;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
```
Note: since this file contains sensitive data it's important that you don't add it to version control. The .gitignore file that's part of the repo is configured to ignore db-credentials.env.
Application Insights Key
In order to collect Application Insights telemetry from a locally running MegaStore you'll need to edit docker-compose.override.yml to contain the instrumentation key for the dev instance of the Application Insights resource that was created in the previous post. You can find this in the Azure Portal in the Overview pane of the Application Insights resource:
I'll write more about Application Insights in a later post but in the meantime if you want to know more see this post from my previous 2018 series. It's largely the same with a few code changes for newer ways of doing things with updated NuGet packages.
Set docker-compose as Startup
The startup project in Visual Studio needs to be set to docker-compose by right-clicking docker-compose in the Solution Explorer and selecting Set as Startup Project:
Up and Running
You should now be able to run MegaStore using F5 which should result in a localhost+port number web page in your browser. Docker Desktop will need to be running however I've noticed that newer versions of Visual Studio offer to start it automatically if required. Notice in Visual Studio the handy Containers window that gives some insight into what's happening:
In order to establish everything is working open SSMS and run select-from-sales.sql (in the sql folder in the repo) against the dev database. You should see a new ‘beer' sales record. If you want to create more records you can keep reloading the web page in your browser or run the generate-web-traffic.ps1 PowerShell snippet that's in the repo's pipeline folder making sure that the URL is something like http://localhost:32768/ (your port number will likely be different).
To view Application Insights telemetry (from the Azure Portal) whilst running MegaStore locally you may need to be aware of services running on your network that could cause interference. For me I could run Live Metrics and see activity in most of the graphs, however I initially couldn't use the Search feature to see trace and request telemetry (the screenshot is what I was expecting to see):
I initially thought this might be a firewall issue but it wasn't, and instead it turned out to be the pi-hole ad blocking service I have running on my network. It's easy to disable pi-hole for a few minutes or you can figure out which URLs need whitelisting. The bigger picture though is that if you don't see telemetry—particularly in a corporate scenario—you may have to do some investigation.
That's it for now! Next time we look at deploying MegaStore to AKS using Azure Pipelines.
Cheers -- Graham
Deploy a Dockerized Application to Azure Kubernetes Service using Azure YAML Pipelines 3 – Terraform Deployment Pipeline
This is the third post in a series where I'm taking a fresh look at how to deploy a dockerized application to Azure Kubernetes Service (AKS) using Azure Pipelines after having previously blogged about this in 2018. The list of posts in this series is as follows:
- Getting Started
- Terraform Development Experience
- Terraform Deployment Pipeline (this post)
- Running a Dockerized Application Locally
- Application Deployment Pipelines
- Telemetry and Diagnostics
In this post I take a look at how to create infrastructure in Azure using Terraform in a deployment pipeline using Azure Pipelines. If you want to follow along you can clone / fork my repo here, and if you haven't already done so please take a look at the first post to understand the background, what this series hopes to cover and the tools mentioned in this post. I'm not covering Azure Pipelines basics here and if this is of interest take a look at this video and or this series of videos. I'm also assuming familiarity with Azure DevOps.
There are quite a few moving parts to configure to move from command-line Terraform to running it in Azure Pipelines, so here's the high-level list of activities:
- Create a Variable Group in Azure Pipelines as a central place to store variables and secrets that can be used across multiple pipelines.
- Configure a self-hosted build agent to run on a local Windows machine to aid troubleshooting.
- Create storage in Azure to act as a backend for Terraform state.
- Generate credentials for deployment to Azure.
- Create variables in the variable group to support the Terraform resources that need variable values.
- Configure and run an Azure Pipeline from the megastore-iac.yml file in the repo.
Create a Variable Group in Azure Pipelines
In your Azure DevOps project (mine is called megastore-az) navigate to Pipelines > Library > Variable Groups and create a new variable group called megastore. Ensure that Allow access to all pipelines is set to on. Add a variable named project_name and give it a meaningful value that is also likely to be globally unique and doesn't contain any punctuation and click Save:
Configure a Self-Hosted Agent to Run Locally
While a Microsoft-hosted windows-latest agent will certainly be quite satisfactory for running Terraform pipeline jobs they can be a little bit slow and there is no way to peek in and see what's happening in the file system, which can be a nuisance if you are trying to troubleshoot a problem. Additionally, because a brand new instance of an agent is created for each new request they mask the issue of files hanging around from previous jobs. This can catch you out if you move from a Microsoft-hosted agent to a self-hosted agent but is something that you will certainly catch and fix if you start with a self-hosted agent. The instructions for configuring a self-hosted agent can be found here. The usual scenario is that you are going to install the agent on a server but the agent works perfectly well on a local Windows 10 machine as long as all the required dependencies are installed. The high-level installation steps are as follows:
- Create a new Pool in Azure DevOps called Local at Organization Settings > Pipelines > Agent Pools > Add pool.
- On your Windows machine create a folder such as C:\agents\windows.
- Download the agent and unzip the contents.
- Copy the contents of the containing folder to C:\agents\windows, ie this folder will contain two folders and two *.cmd files.
- From a command prompt run .\config.cmd.
- You will need to supply your Azure DevOps server URL and previously created PAT.
- Use windows-10 as the agent name and for this local instance I recommend not running as a service or at startup.
- The agent can be started by running .\run.cmd at a command prompt after which you should see something like this:
- After the agent has finished running a pipeline job you can examine the files in C:\agents\windows\_work to understand what happened and assist with troubleshooting any issues.
Create Backend Storage in Azure
The Azure backend storage can be created by applying the Terraform configuration in the backend folder that is part of the repo. The configuration outputs three key/value pairs which are required by Terraform and which should be added as variables to the megastore variable group. The backend_storage_access_key should be set as a secret with the padlock:
Generate Credentials for Deployment to Azure
There are several pieces of information required by Terraform which can be obtained as follows (assumes you are logged in to Azure via the Azure CLI—run az login if not):
- Run az account list --output table which will return a list of Azure accounts and corresponding subscription Ids.
- Run az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SubscriptionId", substituting SubscriptionId for the appropriate Id from step 1.
- From the resulting output create four new variables in the megastore variables group as follows:
- azure_subscription_id = SubscriptionId from step 1
- azure_client_id = appId value from the result of step 2
- azure_tenant_id = tenant value from the result of step 2
- azure_client_secret = password value from the result of step 2, which should be set as a secret with the padlock
- Remember to save the variable group after entering the new values.
Create Terraform Variable Values in the megastore Variable Group
In the previous post where we ran Terraform from the command-line we supplied variable values via dev.tfvars, a file that isn't committed to version control and is only available for local use. These variable values need creating in the megastore variable group as follows, obviously substituting in the appropriate values:
- aks_client_id = "service principal id for the AKS cluster"
- aks_client_secret = "service principal secret for the AKS cluster"
- asql_administrator_login_name = "Azure SQL admin name"
- asql_administrator_login_password = "Azure SQL admin password"
- asql_local_client_ip_address = "local ip address for your client workstation"
Remember to save the variable group after entering the new values.
Configure an Azure Pipeline
The pipeline folder in the repo contains megastore-iac.yml which contains all the instructions needed to automate the deployment of the Terraform resources in an Azure Pipeline. The pipeline is configured in Azure DevOps as follows:
- From Pipelines > Pipelines click New pipeline.
- In Connect choose GitHub and authenticate if required.
- In Select, find your repo, possibly by selecting to show All repositories.
- In Configure choose Existing Azure Pipelines YAML file and in Path select /pipeline/megastore-iac.yml and click Continue.
- From the Run dropdown select Save.
- At the Run Pipeline screen use the vertical ellipsis to show its menu and then select Rename/move:
- Rename the pipeline to megastore-iac and click Save.
- Now click Run pipeline > Run.
- If the self-hosted agent isn't running then from a command prompt navigate to the agent folder and run .\run.cmd.
- Hopefully watch with joy as the megastore Azure infrastructure is created through the pipeline.
Analysis of the YAML File
So what exactly is the YAML file doing? Here's an explanation for some of the schema syntax with reference to a specific pipeline run and the actual folders on disk for that run (the number shown will vary between runs but otherwise everything else should be the same):
- name: applies a custom build number
- variables: specifies a reference to the megastore variable group
- pool: specifies a reference to the local agent pool and specifically to the agent we created called windows-10
- jobs/job/workspace: ensures that the agent working folders are cleared down before a new job starts
- script/'output environment variables': dumps all the environment variables to the log for diagnostic purposes
- publish/'publish iac artefact': takes the contents of the git checkout at C:\agents\windows\_work\3\s\iac and packages them in to an artifact called iac.
- download/'download iac artefact': downloads the iac artifact to C:\agents\windows\_work\3\iac.
- powershell/'create file with azurerm backend configuration': we need to tell Terraform to use Azure for the backend through a configuration file. This configuration can't be present when working locally so instead it's created dynamically through PowerShell, with some formatting commands to make the YAML structurally correct (see the sketch after this list).
- script/'terraform init': initialises Terraform in C:\agents\windows\_work\3\iac using Azure as the backend through credentials supplied on the command line from the megastore variable group.
- script/'terraform plan and apply': performs a plan and then an apply on the configurations in C:\agents\windows\_work\3\iac using the credentials and variables passed in on the command line from the megastore variable group.
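To make the backend configuration step a little more concrete, here is a hedged sketch of what such a pipeline step could look like; the file path and the reliance on -backend-config arguments at init time are assumptions rather than a verbatim copy of megastore-iac.yml.

```yaml
- powershell: |
    # Write a backend.tf that only ever exists in the pipeline, never locally;
    # the actual storage account details are passed to terraform init later
    # via -backend-config arguments sourced from the megastore variable group
    $backend = @"
    terraform {
      backend "azurerm" {
      }
    }
    "@
    Set-Content -Path "$(Pipeline.Workspace)/iac/backend.tf" -Value $backend
  displayName: 'create file with azurerm backend configuration'
```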
Final Thoughts
Although this seems like a lot of configuration—and it probably is—the ability to use pipelines as code feels like a significant step forward compared with GUI tasks. Although at first the YAML can seem confusing, once you start working with it you soon get used to it and I now much prefer it to GUI tasks.
One question which I'm still undecided about is where to place some of the variables needed by the pipeline. I've used a variable group exclusively as it feels better for all variables to be in one place, and for variables used across different pipelines this is definitely where they should be. However, variables that are only used by one pipeline could live with the pipeline itself, as this is a fully supported feature (editing the pipeline in the browser lights up the Variables button where variables for that pipeline can be added). However having variables scattered everywhere could be confusing, hence my uncertainty. Let me know in the comments if you have a view!
That's it for now. Next time we look at running the sample application locally using Visual Studio and Docker Desktop.
Cheers -- Graham
Deploy a Dockerized Application to Azure Kubernetes Service using Azure YAML Pipelines 2 – Terraform Development Experience
This is the second post in a series where I'm taking a fresh look at how to deploy a dockerized application to Azure Kubernetes Service (AKS) using Azure Pipelines after having previously blogged about this in 2018. The list of posts in this series is as follows:
- Getting Started
- Terraform Development Experience (this post)
- Terraform Deployment Pipeline
- Running a Dockerized Application Locally
- Application Deployment Pipelines
- Telemetry and Diagnostics
In this post I take a look at how to create infrastructure in Azure using Terraform at the command line. If you want to follow along you can clone or fork my repo here, and if you haven't already done so please take a look at the first post to understand the background, what this series hopes to cover and the tools mentioned in this post. I'm not covering Terraform basics here and if you need this take a look at this tutorial.
Working With Terraform Files in VS Code
As with most code I write, I like to distinguish between what's sometimes called the developer inner loop and the deployment pipeline. The developer inner loop is where code is written and quickly tested for fast feedback, and the deployment pipeline is where code is committed to version control and then (usually) built and deployed and subjected to a variety of tests in different environments or stages to ensure appropriate quality.
Working with infrastructure as code (IaC) against a cloud platform is obviously different from developing an application that can run completely locally, but with Terraform it's reasonably straightforward to create a productive local development experience.
Assuming you've forked my repo and cloned the fork to a suitable location on your Windows machine, open the repo's root folder in VS Code. You will probably want to install the following extensions if you haven't already:
The .gitignore file in the root of the repo contains most of the recommended settings for Terraform plus one of my own:
```text
# Local .terraform directories
**/.terraform/*

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log

# developer tfvars file
dev.tfvars
```
The following files in the iac folder are of specific interest to my way of working locally with Terraform:
- variables.tf: Here I declare variables but don't provide default values.
- terraform.tfvars: Here I provide values for all variables that are common to working both locally and in the deployment pipeline, and which aren't secrets.
- dev.tfvars: Here I provide values for all variables that are specific to working locally or which are secrets. Crucially this file is omitted from being committed to version control, and the values supplied by dev.tfvars locally are supplied in a different way in the deployment pipeline. Obviously you won't have this file and instead I've added dev.txt as a proxy for what your copy of dev.tfvars should contain.
- versions.tf: Here I specify the minimum versions of Terraform itself and the Azure Provider.
The other files in the iac folder should be familiar to anyone who has used Terraform and consist of configurations for the following Azure resources:
With all of the configurations I've taken a minimalist approach, partly to keep things simple and partly to keep Azure costs down for anyone who is looking to eke out free credits.
Running Terraform Commands in VS Code
What's nice about using VS Code for Terraform development is the integrated terminal. For fairly recent installations of VS Code a new terminal (Ctrl+Shift+`) will create one of the PowerShell variety at the root of the repo. Navigate to the iac folder (ie cd iac) and create dev.tfvars based on dev.txt, obviously supplying your own values. Next run terraform init.
As expected a set of new files is created to support the local Terraform backend, however these are a distraction in the VS Code Explorer. We can fix this, and clean the Explorer up a bit more as well:
- Access the settings editor via File > Preferences > Settings.
- Ensuring you have the User tab selected, in Search settings search for files:exclude.
- Click Add Pattern to add a glob pattern.
- Suggested patterns include:
- **/.terraform
- **/*.tfstate*
- **/.vscode
- **/LICENSE
To be able to deploy the Terraform configurations to Azure we need to be logged in via the Azure CLI:
- At the command prompt run az login and follow the browser instructions to log in.
- If you have access to more than one Azure subscription examine the output that is returned to check that the required subscription is set as the default.
- If necessary run az account set --subscription "subscription_id" to set the appropriate subscription.
You should now be able to plan or apply the configurations however there is a twist because we are using a custom tfvars file in conjunction with terraform.tfvars (which is automatically included by convention). So the correct commands to run are terraform plan -var-file="dev.tfvars" or terraform apply -var-file="dev.tfvars", remembering that these are specifically for local use only as dev.tfvars will not be available in the deployment pipeline and we'll be supplying the variable values in a different way.
That's it for this post. Next time we look at deploying the Terraform configurations in an Azure Pipeline.
Cheers -- Graham
Deploy a Dockerized Application to Azure Kubernetes Service using Azure YAML Pipelines 1 – Getting Started
In 2018 I wrote a series of blog posts about deploying a dockerized ASP.NET Core application to Azure Kubernetes Service (AKS) and finished up with this post where for various reasons I abandoned the Deploy to Kubernetes GUI tasks used by what was then VSTS and instead made use of refactored Bash scripts to deploy Kubernetes resources.
In the 2018 series of posts I didn't start out with infrastructure as code (IaC) and also since then a lot has changed with the tooling and the technology so in my next few posts I'm going to revisit this topic to see how things look in 2020. The blog series at the moment is looking like this:
- Getting Started (this post)
- Terraform Development Experience
- Terraform Deployment Pipeline
- Running a Dockerized Application Locally
- Application Deployment Pipelines
- Telemetry and Diagnostics
As with my previous 2018 series of posts I'm not suggesting that the ideas I'm presenting are the best and only way to do things. Rather, the intention is that the concepts offer a potential learning opportunity and a stepping stone to figuring out how you might approach this in a real-world scenario. Even if you don't need to use any of this in production I think there's a great deal of fun and satisfaction to be had from gluing all of the bits together.
The Big Picture
The dockerized application that I'll be deploying to AKS consists of the following components:
- An ASP.NET Core web application, that sends messages to a
- NATS message queue service, which stores messages to be retrieved by a
- .NET Core message queue handler application, which saves messages to an
- Azure SQL Database
The lifecycle of this application and the infrastructure it runs on is as follows:
- All Azure resources are managed by Terraform using Azure Pipelines. These include a Container Registry, an AKS Cluster, an Azure SQL Database server and databases and Application Insights instances.
- An AKS cluster is configured with two namespaces called qa and prd which form a basic CI/CD pipeline.
- An Azure SQL Database server is configured with three databases called dev, qa and prd.
- Application components (except the Azure SQL Database) run locally in a dev environment using docker-compose. Messages are saved to the dev Azure SQL Database.
- Deployments of application components (except the Azure SQL Database) are managed separately using dedicated Azure Pipelines. The Container Registry is used to store tagged images and new images are first pushed to the qa and then to the prd namespaces on the AKS cluster.
- Telemetry and diagnostics are collected by three separate Application Insights instances, one each for the three (dev, qa and prd) environments.
The overall aim of this series is to show how the big pieces of the jigsaw fit together and I'm intentionally not covering any of the lower-level details commonly associated with CI/CD pipelines such as testing. Maybe some other time!
What You Can Learn by Following This Blog Series
Some of the technologies I'm using in this blog series are vast in scope and I can only hope to scratch the surface. However this is a list of some of the things that you can learn about if you follow along with the series:
- The great range of tools we now have that support running Linux on Windows via WSL 2.
- An example of the Terraform developer inner loop experience and how to extend that to running Terraform in a deployment pipeline using Azure Pipelines.
- Assistance with debugging Azure Pipelines by running self-hosted agents (both Windows and Linux flavours) on a Windows 10 machine.
- Creating Azure Pipelines as code using YAML files, including the use of templates to aid reusability and deployment jobs to target an environment.
- How to avoid using Swiss Army Knife-style Azure Pipelines tasks and instead use native commands tuned exactly to a situation's requirements.
- How to segment telemetry and diagnostics for each stage of the CI/CD pipeline using separate Application Insights resources.
Tools You Will Need / Want
There is a long list of tools needed for this series and getting everything installed and configured is quite an exercise. However, you may have some of this already, and it can also be great fun getting the newer stuff working. Some of the tools can be installed with Chocolatey and it's definitely worth checking this out if you haven't already. Generally, I've listed the tools in the order you will need them so you don't need to install everything before working through the next couple of posts in the series. Everything in the list should be installed on Windows 10. There are some tools that need installing in the Ubuntu distro but I cover that in the relevant post.
That's it for this post. Next time we start working with Terraform at the command line.
Cheers -- Graham
A Better Way of Deploying a Dockerized Application to Azure Kubernetes Service Using Azure Pipelines
Throughout 2018 I wrote a mini blog post series aimed at providing specific and detailed guidance on how to create a CI/CD pipeline using VSTS/Azure DevOps to deploy a dockerized ASP.NET Core application to Azure Kubernetes Service (AKS):
Whilst the resulting solution works I wasn't entirely happy with several aspects and I've spent a great deal of time thinking and tinkering to come up with something better. In this blog post I explain what I wasn't happy with and how my new solution addresses most of my concerns. You don't necessarily need to read the posts above as I'm going to provide some context, but it will probably make things much clearer if you are planning to implement any of my suggestions.
The sample application I've been using to deploy to Kubernetes consists of the following components:
- ASP.NET Core web application, that sends messages to a
- NATS message queue service, which pushes messages to a
- .NET Core message queue handler application, which saves messages to an
- Azure SQL database
Apart from the database all the components run as docker containers. The container images are built in an Azure Pipelines build pipeline and pushed to an Azure Container Registry (ACR). An Azure Pipelines release pipeline then deploys the necessary services and deployments to AKS, which causes the images to be pulled from ACR and instantiated as containers inside pods. My release pipeline consists of two environments: dat (developer automated test where automated acceptance tests might take place) and prd (production). That's just arbitrary of course and in a live scenario the pipeline can have whatever environments are needed.
My sample application is called MegaStore and you can find the code on GitHub here. In the rest of this post I explain my areas of concern and how I addressed them.
Azure Pipelines Tasks
Whilst there is no doubt that Azure Pipelines Tasks are great for quickly building a pipeline and definitely make it easier for those less familiar with the technology behind a task to get started, I now see some tasks as more of a curse than a blessing. I've particularly taken issue with tasks that wrap a command line application (such as docker or kubectl), which results in the task becoming something of a Swiss Army Knife. Why have I taken issue? There are several reasons, some specific to the Swiss Army Knife variety and some that apply to tasks in general:
- There is often a need to set mandatory fields in ‘Swiss Army Knife' tasks even though those parameters will not be used by the chosen sub-command. Where there are multiple instances of the same task in use this becomes very tedious and is a potential maintenance problem when something changes. (Yes, I know tasks can be cloned but this doesn't make me any happier.)
- Tasks by their nature only allow you to do what they have been coded to do and you can sometimes find yourself in a blind alley. For example, at the time of writing the only way I know of updating an existing Kubernetes ConfigMap without deleting it first and re-creating it is with a piped command such as:
kubectl create configmap message.queue --from-literal=URL=nats://mq-service:4222 --dry-run -o yaml | kubectl apply -f -
Running a command such as this isn't possible with the current Deploy to Kubernetes Azure DevOps task, which is very limiting.
- Speaking of command lines, my next issue is that tasks abstract you from what is actually going on behind the scenes. For simple tasks such as copying files this might be fine, however I've become frustrated at the way tasks such as Docker or Deploy to Kubernetes ‘hide' what they are doing, and the way that makes fine-tuning that little bit harder. Additionally, for me it's also a lost learning opportunity—a missed chance to learn the full syntax of a command because the task is constructing it on your behalf.
- Another big issue is that tasks such as Docker or Deploy to Kubernetes offer nothing in the way of code reusability, and break the DRY principle in multiple dimensions (ie there is scope for repetition within an environment and also across environments). To illustrate, the release pipeline in my 2018 mini blog series consisted of no fewer than 30 Deploy to Kubernetes tasks across two environments, resulting in a great deal of repetition.
- Finally, the use of tasks in the current version of Azure Pipelines releases means that you don't have your ‘code' under proper version control. I know there are changes coming that will help to address this, and whilst they will be welcome I think there is an opportunity to do better.
So what's my solution to all this? Very simply, get rid of multiple Swiss Army Knife tasks and implement Bash scripts running from a single Bash task. I started off by using the Inline script feature of Bash tasks but this didn't help with getting code into version control, and I also quickly realised that there were big code reusability opportunities to be had across environments by using File Path scripts. By using Bash scripts stored in the repo I solved all the issues mentioned above and in the case of the release portion of the pipeline I reduced the number of tasks from 15 in each environment to two! What follows are the techniques I used to achieve this for the Docker builds and Kubernetes deployments.
Converting Docker builds to use a Bash script was reasonably straightforward so I'll start by discussing the first problem I encountered when converting Deploy to Kubernetes tasks to Bash scripts, which was how to authenticate to Kubernetes. Tasks rely on the creation of a Kubernetes service connection (Project Settings > Service connections) and I'd been using the Kubeconfig version which involves pasting in the contents of the Kubeconfig file that gets created (if you run the appropriate command) when you set up an AKS cluster:
By tracing the logging output of the Deploy to Kubernetes tasks I could see what was happening: a Kubeconfig file was being saved to disk and referenced in a kubectl command using the --kubeconfig parameter that points to the file on disk. I could successfully pass the file in from an Artifact as a proof of concept, but how to store the Kubeconfig contents securely and create the file dynamically? The obvious choice was a secret variable, however that didn't work because it destroyed the Kubeconfig formatting, which is important in the re-hydrated file on disk. After a lot of fiddling I finally turned to LoECDA, who are super-responsive via Twitter, and very quickly the suggestion came back to try using Secure files (Pipelines > Library > Secure files). This worked perfectly: a file is first uploaded to the Secure files area and is then available for use via the Download Secure File task. The file is downloaded into a temporary folder which can be referenced as the $AGENT_TEMPDIRECTORY variable in a Bash script. Great!
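To illustrate, a release script can then reference the re-hydrated file along these lines (a minimal sketch, assuming the secure file was uploaded with the name kubeconfig):

#!/bin/bash
# The Download Secure File task places the file in the agent's temp folder
KUBECONFIG_PATH="$AGENT_TEMPDIRECTORY/kubeconfig"
# Every kubectl call then points explicitly at the downloaded file
kubectl get nodes --kubeconfig "$KUBECONFIG_PATH"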
Next up was sorting out the practicalities of using Bash scripts in Bash tasks. I created a deployment (dep) folder in the repo to hold the scripts and then arranged for this folder to be available as an Artifact created directly from the GitHub repo:
I used VS Code to create the Bash files, however in order for a file to be executed as a Bash script it needs its permissions set to make it executable (chmod +x). This needs to be done from a Linux environment and there are several possibilities for achieving this, including Windows Subsystem for Linux if you are on Windows 10. I chose to go with Azure Cloud Shell, which can be configured to run either a Bash or a PowerShell command line in the cloud! Once that was configured it was a case of cloning my repo, navigating to the dep folder and running chmod +x some-filename.sh. There's no GUI in Azure Cloud Shell so it does involve using git commands to push the changes back to GitHub. If this is new to you then git add *, git commit -m "Commit message" and git push origin master are what you need. To authenticate you'll likely need to use a personal access token unless you go to the bother of setting up SSH. It gets to be a bit of a pain having to enter credentials every time you want to push to GitHub, however the git config credential.helper store command will save credentials across Azure Cloud Shell sessions to make life easier.
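Pulled together, an Azure Cloud Shell session looks roughly like this (a sketch only; the repo URL and script name are placeholders):

# Clone the repo and make the script executable
git clone https://github.com/your-account/your-repo.git
cd your-repo/dep
chmod +x some-filename.sh
# Cache credentials so subsequent pushes don't prompt
git config credential.helper store
# Push the permission change back to GitHub
git add *
git commit -m "Make deployment script executable"
git push origin master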
Finding out what commands needed to be executed in the Bash scripts required a bit of detective work, and involved a combination of understanding what the task was attempting to accomplish and then looking at the build or release logs to see the actual output. With the basic command figured out this exercise offered the opportunity to do a bit of fine tuning. For example, I'd been tagging my docker images with the latest tag but it turns out that this isn't a great idea for release pipelines. By writing the actual command myself I was able to get exactly what I wanted.
I describe how I organised the Bash scripts to move away from a monolithic pipeline below. In this section I want to describe the tips and tricks I used to actually write the Bash scripts. Generally, the scripts make heavy use of variables to make them applicable to all release environments, however there are some essential things to know:
- Variables created as part of Azure DevOps pipelines can be used as variables (ie passed in to a script), however with the exception of secrets they are also created as environment variables which are available directly in scripts. This means that a variable created as MyVariable is available as $MYVARIABLE directly in a Bash script (the mapping to an environment variable converts the name to upper case and replaces any periods with underscores to ensure valid syntax).
- Variables created as part of Azure DevOps pipelines can have the same name as long as they are scoped to a different environment. So you can have two variables called MyVariable with different values for each environment and simply refer to $MYVARIABLE in the Bash script, ie no need to pass $MYVARIABLE in as a parameter to the script for different environments.
- As mentioned above, secrets are not created as environment variables and must be passed in to a script via the Arguments field, and in the script a variable is declared to accept the incoming parameter. Important: as of the time of writing a secret needs to be passed in to the Arguments field as $(MYSECRET), ie with parentheses around the actual parameter name. If you omit the parentheses the secret is not passed in. A non-secret parameter doesn't require parentheses and I have queried whether this is a bug here.
- Later in this post I explain how I break up a monolithic pipeline in to multiple pipelines, which results in the same variables being needed in different pipelines. By using Variable Groups I was able to avoid repeated variable declarations and manage many variables from just one location.
- In addition to variables that are created manually, built-in variables are also available as environment variables in the script. The ones I've used are $AGENT_TEMPDIRECTORY to define the download location of the Kubeconfig file from the Secure files area, $RELEASE_ENVIRONMENTNAME to refer to the environment (ie dat or prd) and $BUILD_BUILDNUMBER, used to tag docker images with a unique build number in the build process and then to refer to them by their unique name in the release. However, there are many more built-in variables available to use—see here for details, but remember that for use in Bash scripts you should change the text to upper case and must replace periods with underscores.
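Putting those rules together, the start of a release script might look something like this (a minimal sketch; MySecret is a hypothetical secret pipeline variable passed in as the first argument):

#!/bin/bash
# Secrets must be passed in via the task's Arguments field as $(MYSECRET)
MYSECRET=$1
# Non-secret pipeline variables arrive as upper-cased environment variables
echo "Deploying to environment: $RELEASE_ENVIRONMENTNAME"
echo "Using image tag: $BUILD_BUILDNUMBER"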
I'm not a Bash scripting expert and I'm sure my scripts would be considered very rudimentary. The great thing though is that you can do whatever you like now the code is a script. Possibilities might include adding error handling or refactoring further using functions. There's potential to really go to town here.
Monolithic Pipeline
At the time of writing this article in early 2019 there aren't that many blog post examples of implementing a CI/CD pipeline to deploy an application to Kubernetes. Furthermore, the posts that do exist tend, not unreasonably, to use a simplistic application scenario to illustrate the concepts. Typically, this involves deploying the whole application as part of a single pipeline, and indeed this is the route I took with my 2018 blog post mini series. However, it quickly became apparent to me that this is an unsatisfactory arrangement for two main reasons:
- Just one change to one of the application components would cause all the components of the application to be redeployed (or more correctly the parts of the application that have their docker images built by the pipeline).
- A change to the Kubernetes configuration would also trigger a redeployment of all of the application components. Sometimes this is necessary but often it's not.
These issues arise because the trigger for the build component of the pipeline is set as the root of the GitHub repo, so if anything changes in the repo a build is triggered. Clearly not an optimal situation.
My solution to this problem is to divide the monolithic pipeline into multiple pipelines that correspond to the individual components of the overall application. Then with a bit of refactoring of the codebase it's possible to use a very nifty feature of Azure Pipelines that allows a build to be triggered from one or more specific folders (or files for that matter) in the repo, ie a much more granular solution.
One complication that I had to cater for is that the pipeline isn't just building docker images and marshalling them in to the Kubernetes cluster: additionally, the pipeline is configuring Kubernetes elements such as Namespaces, Secrets and ConfigMaps.
Through the use of Bash scripts as described above the number of tasks needed is drastically reduced: just one Bash task for the builds and two tasks for releases (a Download Secure File task to copy the kubeconfig file to disk and a Bash task to host the bash script). All scripts are Namespace/environment aware.
In terms of Azure Pipelines build and release pipelines my current CI/CD solution is as follows:
megastore.init.release
This is a release that is not associated with a build and its sole purpose is to configure a Kubernetes Namespace in preparation for the deployment of the application. As such, this component is only intended to be run to either initialise a new Kubernetes cluster or (rarely) if one of the configuration items needs to change (in which case elements of the application will likely have to be redeployed for the configuration to be built in to the appropriate pods).
The configuration handled by megastore.init.release is as follows:
- Creation of a Namespace for a corresponding Azure Pipelines environment.
- Creation (or update) of ACR credentials (as a specialised Secret) that allow Deployments to pull docker images from ACR.
- Creation (or update) of the message queue URL as a ConfigMap.
- Creation (or update) of the Application Insights instrumentation key as a ConfigMap.
This configuration is handled by init.sh.
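Although the real script lives in the repo, the shape of init.sh is roughly as follows (a sketch only; the variable and resource names here are illustrative rather than the actual ones used):

#!/bin/bash
ACR_PASSWORD=$1   # secret, so passed in as a task argument
KCFG="--kubeconfig $AGENT_TEMPDIRECTORY/kubeconfig"
NS="--namespace=$RELEASE_ENVIRONMENTNAME"
# Create the namespace that corresponds to this environment
kubectl create namespace "$RELEASE_ENVIRONMENTNAME" $KCFG
# Create or update the ACR image pull secret using the dry-run/apply trick
kubectl create secret docker-registry acr-credentials $NS \
  --docker-server="$ACRNAME.azurecr.io" --docker-username="$ACRNAME" \
  --docker-password="$ACR_PASSWORD" --docker-email=fred@bloggs.com \
  --dry-run -o yaml | kubectl apply $KCFG $NS -f -
# Create or update the message queue URL ConfigMap in the same way
kubectl create configmap message.queue $NS \
  --from-literal=MESSAGE_QUEUE_URL="nats://message-queue-service.$RELEASE_ENVIRONMENTNAME:4222" \
  --dry-run -o yaml | kubectl apply $KCFG $NS -f -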
megastore.message-queue.release
This is another release that is not associated with a build, and in this case the requirement is to deploy the NATS message queue service. The absence of a build is due to the docker image being pulled from Docker Hub. The downside of not having a build associated with the release is that if any of the NATS configuration changes the release needs to be triggered manually. I see this as an infrequent requirement though. The message queue service doesn't have any dependencies on any other part of the application and so is the first component to be deployed following the initial Kubernetes configuration.
The configuration handled by megastore.message-queue.release is as follows:
- Deployment of the Kubernetes Service for the message queue.
- Deployment of the Kubernetes Deployment for the message queue.
This configuration is handled by message-queue.sh.
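A minimal sketch of what such a script might contain (the YAML file paths are assumptions based on the artifact layout described earlier):

#!/bin/bash
KCFG="--kubeconfig $AGENT_TEMPDIRECTORY/kubeconfig"
NS="--namespace=$RELEASE_ENVIRONMENTNAME"
# Apply the service and deployment definitions into the environment's namespace
kubectl apply $KCFG $NS -f message-queue-service.yaml
kubectl apply $KCFG $NS -f message-queue-deployment.yaml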
megastore.savesalehandler.build and megastore.savesalehandler.release
This build and linked release are responsible for deploying a new version of the .NET Core message queue handler application which receives messages from the message queue and saves them to an Azure SQL database. The docker image is built and uploaded to ACR using this generic Bash script. This in turn triggers the megastore.savesalehandler.release which deals with the following configuration:
- Creation (or update) of the database connection string as a Secret.
- Deployment of the Kubernetes Deployment for the message queue handler component.
- Update the image for the Deployment to the latest version using the unique tag for the build that triggered the release.
This configuration is handled by megastore-savesalehandler.sh. The build is triggered through the Azure Pipelines Path filters feature:
Using the Path filters feature ensures that the build will only be triggered for continuous integration if a file in the specified folder is changed.
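For context, the generic build script might look broadly like this (a sketch only; the parameter order and the AcrName/AcrPassword variables are assumptions rather than the actual implementation):

#!/bin/bash
# Usage: build-image.sh <acr-password> <image-name> <build-context>
ACR_PASSWORD=$1   # secret, so passed in as a task argument
IMAGE_NAME=$2
BUILD_CONTEXT=$3
# Log in to ACR, then build and push an image tagged with the unique build number
docker login "$ACRNAME.azurecr.io" --username "$ACRNAME" --password "$ACR_PASSWORD"
docker build -t "$ACRNAME.azurecr.io/$IMAGE_NAME:$BUILD_BUILDNUMBER" "$BUILD_CONTEXT"
docker push "$ACRNAME.azurecr.io/$IMAGE_NAME:$BUILD_BUILDNUMBER"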
megastore.web.build and megastore.web.release
This build and linked release are responsible for deploying a new version of the ASP.NET Core web application which sends messages to the message queue service. As with the message queue handler, the docker image is built and uploaded to ACR using this generic Bash script. The build triggers the megastore.web.release which deals with the following configuration:
- Creation (or update) of the ASPNETCORE_ENVIRONMENT environment variable as a ConfigMap.
- Deployment of the Kubernetes Deployment for the web component.
- Deployment of the Kubernetes Service for the web component.
- Update the image for the Deployment to the latest version using the unique tag for the build that triggered the release.
This configuration is handled by megastore-web.sh and once again the build is triggered through the Azure Pipelines Path filters feature:
As before, using the Path filters feature ensures that the build will only be triggered for continuous integration if a file in the specified folder is changed.
And Finally...
In breaking down a monolithic pipeline into multiple pipelines I exposed the problem of what to do with the shared helper library of functions that is used by both the megastore.web and megastore.savesalehandler components, because if this code changes one or sometimes both components will need redeploying. I think the answer is that helper libraries like these do not belong in the Visual Studio solution and instead should be developed separately, distributed and referenced as NuGet packages.
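If the helper library were split out, publishing it would conceptually be as simple as the following (a sketch; the project path, feed URL and API key variable are placeholders):

# Pack the helper library and push the package to a NuGet feed
dotnet pack MegaStore.Helper/MegaStore.Helper.csproj -c Release -o ./nupkgs
dotnet nuget push ./nupkgs/*.nupkg --source https://example-feed/nuget/v3/index.json --api-key "$NUGET_API_KEY"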
One of my aspirations is to get as much pipeline configuration in the GitHub repo as possible and you might well ask why I'm not using YAML files. Apart from the fact that I just haven't had time to look at this in detail yet, at the time of writing it's only a partial solution as it's only available for the build portion of the pipeline. This will hopefully change later this year when the release portion of the pipeline is supported, and at that point I'll make the switch.
That's it for now! Whether you are deploying to AKS or somewhere else I hope this post has provided you with ideas to supercharge your Azure DevOps pipelines.
Cheers -- Graham
Deploy a Dockerized ASP.NET Core Application to Azure Kubernetes Service Using a VSTS CI/CD Pipeline: Part 4
In this blog post series I'm working my way through the process of deploying and running an ASP.NET Core application on Microsoft's hosted Kubernetes environment. These are the links to the full series of posts to date:
In this post I take a look at application monitoring and health. There are several options for this, however since I'm pretty much all-in with the Microsoft stack in this blog series I'll stick with the Microsoft offering, which is Azure Application Insights. This post builds on previous posts, particularly Part 3, so please do consider working through at least Part 3 before this one.
In this post, I continue to use my MegaStore sample application which has been upgraded to .NET Core 2.1, in particular with reference to the csproj file. This is important because it affects the way Application Insights is configured in the ASP.NET Core web component. See here and here for more details. All my code is in my GitHub repo and you can find the starting code here and the finished code here.
Understanding the Application Insights Big Picture
Whilst it's very easy to get started with Application Insights, configuring it for an application with multiple components which gets deployed to a continuous delivery pipeline consisting of multiple environments running under Kubernetes requires a little planning and a little effort to get everything configured in a satisfactory way. As of the time of writing this isn't helped by the presence of Application Insights documentation on both docs.microsoft.com and github.com (ASP.NET Core | Kubernetes) which sometimes feels like it's conflicting, although it's nothing that good old fashioned detective work can't sort out.
The high-level requirements to get everything working are as follows:
- A mechanism is needed to separate out telemetry data from the different environments of the continuous delivery pipeline. Application Insights sends telemetry to a ‘bucket' termed an Application Insights Resource which is identified by a unique instrumentation key. Separation of telemetry data is therefore achieved by creating an individual Application Insights Resource for the development environment and for each of the different environments of the delivery pipeline.
- Each component of the application that will send telemetry to an Application Insights Resource needs configuring so that it can be supplied with the instrumentation key for the Application Insights Resource for the environment the application is running in. This is a coding issue and there are lots of ways to solve it, however in the MegaStore sample application this is achieved through a helper class in the MegaStore.Helper library that receives the instrumentation key as an environment variable.
- The MegaStore.Web and MegaStore.SaveSaleHandler components need configuring for both the core and Kubernetes elements of Application Insights and a mechanism to send the telemetry back to Azure with the actual name of the component rather than a name that Application Insights has chosen.
- Each environment needs configuring to create an instrumentation key environment variable for the Application Insights Resource that has been created for that environment. In development this is achieved through hard-coding the instrumentation key in docker-compose.override.yaml. In the deployment pipeline it's achieved through a VSTS task that creates a Kubernetes config map that is picked up by the Kubernetes deployment configuration.
That's the big picture—let's get stuck in to the details.
Creating Application Insights Resources for Different Environments
In the Azure portal follow these slightly outdated instructions (Application Insights is currently found in Developer Tools) to create three Application Insights Resources for the three environments: DEV, DAT and PRD. I chose to put them in one resource group and ended up with this:
For reference there is a dedicated Separating telemetry from Development, Test, and Production page in the official Application Insights documentation set.
Configure MegaStore to Receive an Instrumentation Key from an Environment Variable
As explained above this is a specific implementation detail of the MegaStore sample application, which contains an Env class in MegaStore.Helper to capture environment variables. The amended class is as follows:
using System;
using System.Collections.Generic;

namespace MegaStore.Helper
{
    // This code is modified from https://github.com/sixeyed/docker-on-windows
    public class Env
    {
        private static Dictionary<string, string> _Values = new Dictionary<string, string>();

        public static string MessageQueueUrl
        {
            get { return Get("MESSAGE_QUEUE_URL"); }
        }

        public static string DbConnectionString
        {
            get { return Get("DB_CONNECTION_STRING"); }
        }

        public static string AppInsightsInstrumentationKey
        {
            get { return Get("APP_INSIGHTS_INSTRUMENTATION_KEY"); }
        }

        private static string Get(string variable)
        {
            if (!_Values.ContainsKey(variable))
            {
                var value = Environment.GetEnvironmentVariable(variable);
                _Values[variable] = value;
            }
            return _Values[variable];
        }
    }
}
Obviously this class relies on an external mechanism creating an environment variable named APP_INSIGHTS_INSTRUMENTATION_KEY. Consumers of this class can reference MegaStore.Helper and call Env.AppInsightsInstrumentationKey to return the key.
Configure MegaStore.Web for Application Insights
If you've upgraded an ASP.NET Core web application to 2.1 or later as detailed earlier then the core of Application Insights is already ‘installed' via the inclusion of the Microsoft.AspNetCore.All meta package so there is nothing to do. You will need to add Microsoft.ApplicationInsights.Kubernetes via NuGet—at the time of writing it was in beta (1.0.0-beta9) so you'll need to make sure you have told NuGet to include prereleases.
In order to enable Application Insights amend BuildWebHost in Program.cs as follows:
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseApplicationInsights(Env.AppInsightsInstrumentationKey)
        .Build();
Note the way that the instrumentation key is passed in via Env.AppInsightsInstrumentationKey from MegaStore.Helper as mentioned above.
Telemetry relating to Kubernetes is enabled in ConfigureServices in Startup.cs as follows:
public void ConfigureServices(IServiceCollection services)
{
    services.EnableKubernetes();
    services.AddMvc();
    services.AddSingleton<ITelemetryInitializer, CloudRoleTelemetryInitializer>();
}
Note also that a CloudRoleTelemetryInitializer class is being registered. This facilitates the setting of a custom RoleName for the component, and requires a class to be added as follows:
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

namespace MegaStore.Web
{
    public class CloudRoleTelemetryInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            telemetry.Context.Cloud.RoleName = "MegaStore.Web";
        }
    }
}
Note here that we are setting the RoleName to MegaStore.Web. Finally, we need to ensure that all web pages return telemetry. This is achieved by adding the following code to the end of _ViewImports.cshtml:
@inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet
and then by adding the following code to the end of the <head> element in _Layout.cshtml:
@Html.Raw(JavaScriptSnippet.FullScript)
Configure MegaStore.SaveSaleHandler for Application Insights
I'll start this section with a warning because at the time of writing the latest versions of Microsoft.ApplicationInsights and Microsoft.ApplicationInsights.Kubernetes didn't play nicely together and resulted in dependency errors. Additionally the latest version of Microsoft.ApplicationInsights.Kubernetes was missing the KubernetesModule.EnableKubernetes method described in the documentation for making Kubernetes work with Application Insights. The Kubernetes bits are still in beta though so it's only fair to expect turbulence. The good news is that with a bit of experimentation I got everything working by installing NuGet packages Microsoft.ApplicationInsights (2.4.0) and Microsoft.ApplicationInsights.Kubernetes (1.0.0-beta3). If you try this long after publication date things will have moved on, but this combination works with this initialisation code in Program.cs:
/* Requires these using statements:
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.Kubernetes;
*/
class Program
{
    private static TelemetryConfiguration configuration =
        new TelemetryConfiguration(Env.AppInsightsInstrumentationKey);

    static void Main(string[] args)
    {
        configuration.TelemetryInitializers.Add(new CloudRoleTelemetryInitializer());
        KubernetesModule.EnableKubernetes(configuration);
        TelemetryClient client = new TelemetryClient(configuration);

        client.TrackTrace("Some message");
    }
}
Please do note that this is a completely stripped down Program class to show just how Application Insights and the Kubernetes extension are configured. Note again that this component uses the CloudRoleTelemetryInitializer class shown above, this time with the RoleName set to MegaStore.SaveSaleHandler. What I don't show here in any detail is that you can add lots of client.Track* calls to generate rich telemetry to help you understand what your application is doing. The code in my GitHub repo has the details.
Configure the Development Environment to Create an Instrumentation Key Environment Variable
This is a simple matter of editing docker-compose.override.yaml with the new APP_INSIGHTS_INSTRUMENTATION_KEY environment variable and the instrumentation key for the corresponding Application Insights Resource:
version: '3.5'

services:
  message-queue:
    image: nats:linux
    networks:
      - ms-net

  megastore.web:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - MESSAGE_QUEUE_URL=nats://message-queue:4222
      - APP_INSIGHTS_INSTRUMENTATION_KEY=fba89ed9-z023-48f0-a7bb-6279ba6b5c87
    ports:
      - 80
    depends_on:
      - message-queue
    networks:
      - ms-net

  megastore.savesalehandler:
    environment:
      - MESSAGE_QUEUE_URL=nats://message-queue:4222
      - APP_INSIGHTS_INSTRUMENTATION_KEY=fba89ed9-z023-48f0-a7bb-6279ba6b5c87
    env_file:
      - db-credentials.env
    depends_on:
      - message-queue
    networks:
      - ms-net

networks:
  ms-net:
Make sure you don't just copy the code above as the actual key needs to come from the Application Insights Resource you created for the DEV environment, which you can find as follows:
Configure the VSTS Deployment Pipeline to Create Instrumentation Key Environment Variables
The first step is to amend the two Kubernetes deployment files (megastore-web-deployment.yaml and megastore-savesalehandler-deployment.yaml) with details of the new environment variable in the respective env sections:
- name: APP_INSIGHTS_INSTRUMENTATION_KEY
  valueFrom:
    configMapKeyRef:
      name: appinsights.env
      key: APP_INSIGHTS_INSTRUMENTATION_KEY
Now in VSTS:
- Create variables called DatAppInsightsInstrumentationKey and PrdAppInsightsInstrumentationKey scoped to their respective environments and populate the variables with the respective instrumentation keys.
- In the task lists for the DAT and PRD environments clone the Delete ASPNETCORE_ENVIRONMENT config map and Create ASPNETCORE_ENVIRONMENT config map tasks and amend them to work with the new APP_INSIGHTS_INSTRUMENTATION_KEY environment variable configured in the *.deployment.yaml files.
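Behind the scenes those cloned tasks are effectively running kubectl commands equivalent to the following, shown here for the DAT environment (a sketch; substitute your own namespace and instrumentation key):

# Delete the existing config map (ignore the error the first time, when it doesn't exist yet)
kubectl delete configmap appinsights.env --namespace=dat
# Recreate it with the instrumentation key for this environment
kubectl create configmap appinsights.env --namespace=dat \
  --from-literal=APP_INSIGHTS_INSTRUMENTATION_KEY=<instrumentation-key-for-dat>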
Generate Traffic to the MegaStore Web Frontend
Now the real fun can begin! Commit all the code changes to trigger a build and deploy. The clumsy way I'm having to delete an environment variable and then recreate it (to cater for a changed variable name) will mean that the release will be amber in each environment for a couple of releases but will hopefully eventually go green. In order to generate some interesting telemetry we need to put one of the environments under load as follows:
- Find the public IP address of MegaStore.Web in the PRD environment by running kubectl get services --namespace=prd:
- Create a PowerShell (ps1) file with the following code (using your own IP address of course):
while ($true) {
    (New-Object Net.WebClient).DownloadString("http://23.97.208.183/")
    Start-Sleep -Milliseconds 5
}
- Run the script (in Windows PowerShell ISE for example) and as long as the output is white you know that traffic is getting to the website.
Now head over to the Azure portal and navigate to the Application Insights Resource that was created for the PRD environment earlier and under Investigate click on Search and then Click here (to see all data in the last 24 hours):
You should see something like this:
Hurrah! We have telemetry! However the icing on the cake comes when you click on an individual entry (a trace for example) and see the Kubernetes details that are being returned with the basic trace properties:
Until Next Time
It's taken me quite a lot of research and experimentation to get to this point, so that's it for now! In this post I showed you how to get started monitoring your Dockerized .NET Core 2.1 applications running in AKS using Application Insights. The emphasis has been very much on getting started though, as Application Insights is a big beast and I've only scratched the surface in this post. Do bear in mind that some of the NuGets used in this post are in beta and some pain is to be expected.
As I publish this blog post VSTS has had a name change to Azure DevOps so that's the title of this series having to change again!
Cheers—Graham
Upgrade a Dockerized ASP.NET Core Application to the Latest Version of .NET Core
In the combined worlds of .NET Core and Docker things are changing pretty quickly and at some point you may well find yourself wanting to upgrade your Dockerized ASP.NET Core application. If you are upgrading a production application then you'll certainly want to follow the official guidance. In my case and for the purposes of this blog post I'm more concerned with the upgrade from a Docker perspective. It's not difficult, however there are a few steps which can leave you scratching your head if you miss them out, so I'm documenting my process for upgrading as it will certainly help me in the future and hopefully someone else as well.
Upgrading ASP.NET Core
- Download and install the latest version of .NET Core from here. From a command prompt run dotnet --list-runtimes to show what you have installed. In my case the latest version was 2.1.2.
- Ensure you are running the latest version of Visual Studio 2017. At the time of writing version 15.8.0 had just been released.
- Open your VS solution and from the Application tab of the Properties page of each project you want to upgrade change the Target framework to the required version:
- Using your technique of choice now upgrade all of the NuGet packages for the solution.
Upgrading Docker files
This is the bit which will have you scratching your head if your Docker files are targeting an earlier version of .NET Core than the version you have just upgraded to as your solution will build but not run under Docker. The error message (something like "It was not possible to find any compatible framework version. The specified framework ‘Microsoft.NETCore.App', version ‘2.1.0' was not found.") makes complete sense when you remember it is being generated from a container running an earlier version of .NET Core.
The answer of course is to change the Docker files in your solution to refer to an image running a later version of .NET Core. However, this is also a great opportunity to upgrade your Docker files to the latest specification used in new Visual Studio projects, as it does seem to change on every release. I do this by simply creating a new ASP.NET Core project in Visual Studio and then working out what needs to change in the Docker file I'm upgrading. In my case this saw my Docker file change from
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY MegaStore.sln ./
COPY MegaStore.Web/MegaStore.Web.csproj MegaStore.Web/
RUN dotnet restore -nowarn:msb3202,nu1503
COPY . .
WORKDIR /src/MegaStore.Web
RUN dotnet build -c Release -o /app

FROM build AS publish
RUN dotnet publish -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MegaStore.Web.dll"]
to
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["MegaStore.Web/MegaStore.Web.csproj", "MegaStore.Web/"]
RUN dotnet restore "MegaStore.Web/MegaStore.Web.csproj"
COPY . .
WORKDIR "/src/MegaStore.Web"
RUN dotnet build "MegaStore.Web.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "MegaStore.Web.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MegaStore.Web.dll"]
The obvious changes to the specification are the removal of -nowarn:msb3202,nu1503 and changes to the Docker syntax. I'm not sure what improvements changes to the syntax bring however it makes sense to me to keep up with the latest thinking from the folks writing the Docker files for Visual Studio projects.
On the face of it your project should now run as it did before the upgrade. However in my case I was still getting error messages as per this GitHub issue. The problem for me was an outdated microsoft/dotnet:2.1-aspnetcore-runtime image, and running docker pull microsoft/dotnet:2.1-aspnetcore-runtime got things running again. Probably just something peculiar to my machine due to all the testing I do, but if you run in to this then hopefully this will do the trick.
Cheers -- Graham
Deploy a Dockerized ASP.NET Core Application to Azure Kubernetes Service Using a VSTS CI/CD Pipeline: Part 3
In this blog post series I'm working my way through the process of deploying and running an ASP.NET Core application on Microsoft's hosted Kubernetes environment. Formerly known as Azure Container Service (AKS), it has recently been renamed Azure Kubernetes Service, which is why the title of my blog series has changed slightly. In previous posts in this series I covered the key configuration elements both on a developer workstation and in Azure and VSTS and then how to actually deploy a simple ASP.NET Core application to AKS using VSTS. This is the full series of posts to date:
In this post I introduce MegaStore (just a fictional name), a more complicated ASP.NET Core application (in the sense that it has more moving parts), and I show how to deploy MegaStore to an AKS cluster using VSTS. Future posts will use MegaStore as I work through more advanced Kubernetes concepts. To follow along with this post you will need to have completed the following, variously from parts 1 and 2:
Introducing MegaStore
MegaStore was inspired by Elton Stoneman's evolution of NerdDinner for his excellent book Docker on Windows, which I have read and can thoroughly recommend. The concept is a sales application that rather than saving a ‘sale' directly to a database, instead adds it to a message queue. A handler monitors the queue and pulls new messages for saving to an Azure SQL Database. The main components are as follows:
- MegaStore.Web—an ASP.NET Core MVC application with a CreateSale method in the HomeController that gets called every time there is a hit on the home page.
- NATS message queue—to which a new sale is published.
- MegaStore.SaveSalehandler—a .NET Core console application that monitors the NATS message queue and saves new messages.
- Azure SQL Database—I recently heard Brendan Burns comment in a podcast that hardly anybody designing a new cloud application should be managing storage themselves. I agree and for simplicity I have chosen to use Azure SQL Database for all my environments including development.
You can clone MegaStore from my GitHub repository here.
In order to run the complete application you will first need to create an Azure SQL Database. The easiest way is probably to create a new database (which also creates a server at the same time) via the portal and then manage it with SQL Server Management Studio. The high-level procedure is as follows:
- In the portal create a new database called MegaStoreDev and at the same time create a new server (name needs to be unique). To keep costs low I start with the Basic configuration knowing I can scale up and down as required.
- Still in the portal add a client IP to the firewall so you can connect from your development machine.
- Connect to the server/database in SSMS and create a new table called dbo.Sale:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Sale](
    [SaleID] [bigint] IDENTITY(1001,1) NOT NULL,
    [CreatedOn] [datetime] NOT NULL,
    [Description] [varchar](100) NOT NULL
) ON [PRIMARY]
GO
- In Security > Logins create a New Login called sales_user_dev, noting the password.
- In Databases > MegaStoreDev > Security > Users create a New User called sales_user mapped to the sales_user_dev login and with the db_owner role.
In order to avoid exposing secrets via GitHub the credentials to access the database are stored in a file called db-credentials.env which I've not committed to the repo. You'll need to create this file in the docker-compose project in your VS solution and add the following, modified for your server name and database credentials:
DB_CONNECTION_STRING=Server=tcp:megastore.database.windows.net,1433;Initial Catalog=MegaStoreDev;Persist Security Info=False;User ID=sales_user_dev;Password=mystrongpwd;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
If you are using version control make sure you exclude db-credentials.env from being committed.
With docker-compose set as the startup project and Docker for Windows running set to Linux containers you should now be able to run the application. If everything is working you should be able to see sales being created in the database.
To understand how the components are configured you need to look at docker-compose.yml and docker-compose.override.yml. Image building is handled by docker-compose.yml, which can't have anything else in it otherwise VSTS complains if you want to use the compose file to build the images. The configuration of the components is specified in docker-compose.override.yml which gets merged with docker-compose.yml at run time. Notice the k8s folder. This contains the configuration files needed to deploy the application to AKS.
By now you may be wondering if MegaStore should be running locally under Kubernetes rather than in Docker via docker-compose. It's a good question and the answer is probably yes. However at the time of writing there isn't a great story to tell about how Visual Studio integrates with Kubernetes on a developer workstation (ie to allow debugging as is possible with Docker) so I'm purposely ignoring this for the time being. This will change over time though, and I will cover this when I think there is more to tell.
Create Azure SQL Databases for Different Release Pipeline Environments
I'll be creating a release pipeline consisting of DAT and PRD environments. I explain more about these below but to support these environments you'll need to create two new databases—MegaStoreDat and MegaStorePrd. You can do this either through the Azure portal or through SQL Server Management Studio, however be aware that if you use SSMS you'll end up on the standard pricing tier rather than the cheaper basic tier. Either way, you then use SQL Server Management Studio to create dbo.Sale and set up security as described above, ensuring that you create different logins for the different environments.
Create a Build in VSTS
Once everything is working locally the next step is to switch over to VSTS and create a build. I'm assuming that you've cloned my GitHub repo to your own GitHub account however if you are doing it another way (your repo is in VSTS for example) you'll need to amend accordingly.
- Create a new Build definition in VSTS. The first thing you get asked is to select a repository—link to your GitHub account and select the MegaStore repo:
- When you get asked to Choose a template go for the Empty process option.
- Rename the build to something like MegaStore and under Agent queue select your private build agent.
- In the Triggers tab check Enable continuous integration.
- In the Options tab set Build number format to $(Date:yyyyMMdd)$(Rev:.rr), or something meaningful to you based on the available options described here.
- In the Tasks tab use the + icon to add two Docker Compose tasks and a Publish Build Artifacts task. Note that when configuring the tasks below only the required entries and changes to defaults are listed.
- Configure the first Docker Compose task as follows:
- Display name = Build service images
- Action = Build service images
- Azure subscription = [name of existing Azure Resource Manager endpoint]
- Azure Container Registry = [name of existing Azure Container Registry]
- Additional Image Tags = $(Build.BuildNumber)
- Configure the second Docker Compose task as follows:
- Display name = Push service images
- Azure subscription = [name of existing Azure Resource Manager endpoint]
- Azure Container Registry = [name of existing Azure Container Registry]
- Action = Push service images
- Additional Image Tags = $(Build.BuildNumber)
- Configure the Publish Build Artifacts task as follows:
- Display name = Publish k8s config
- Path to publish = k8s
- Artifact name = k8s-config
- Artifact publish location = Visual Studio Team Services/TFS
You should now be able to test the build by committing a minor change to the source code. The build should pass and if you look in the Repositories section of your Container Registry you should see megastoreweb and megastoresavesalehandler repositories with newly created images.
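You can also verify the push from the command line with the Azure CLI, if you have it installed (a quick check; substitute your own registry name):

# List the repositories in the registry and the tags pushed for the web image
az acr repository list --name myregistry --output table
az acr repository show-tags --name myregistry --repository megastoreweb --output table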
Create a DAT Release Environment in VSTS
With the build working it's now time to create the release pipeline, starting with an environment I call DAT which is where automated acceptance testing might take place. At this point there is a style choice to be made for creating Kubernetes Secrets and ConfigMaps. They can be configured from files or from literal values. I've gone down the literal values route since the files route needs to specify the namespace and this would require either a separate file for each namespace creating a DRY problem or editing the config files as part of the release pipeline. To me the literal values technique seems cleaner. Either way, as far as I can tell there is no way to update a Secret or ConfigMap via a VSTS Deploy to Kubernetes task as it's a two step process and the task can't handle this. The workaround is a task to delete the Secret or ConfigMap and then a task to create it. You'll see that I've also chosen to explicitly create the image pull secret. This is partly because of a bug in the Deploy to Kubernetes task however it also avoids having to repeat a lot of the Secrets configuration in Deploy to Kubernetes tasks that deploy service or deployment configurations.
- Create a new release definition in VSTS, electing to start with an empty process and rename it MegaStore.
- In the Pipeline tab click on Add artifact and link the build that was just created which in turn makes the k8s-config artifact from step 9 above available in the release.
- Click on the lightning bolt to enable the Continuous deployment trigger.
- Still in the Pipeline tab rename Environment 1 to DAT, with the overall changes resulting in something like this:
- In the Tasks tab click on Agent phase and under Agent queue select your private build agent.
- In the Variables tab create the following variables with Release Scope:
- AcrAuthenticationSecretName = prmcrauth (or the name you are using for imagePullSecrets in the Kubernetes config files)
- AcrName = [unique name of your Azure Container Registry, eg mine is prmcr]
- AcrPassword = [password of your Azure Container Registry from Settings > Access keys], use the padlock to make it a secret
- In the Variables tab create the following variables with DAT Scope:
- DatDbConn = Server=tcp:megastore.database.windows.net,1433;Initial Catalog=MegaStoreDat;Persist Security Info=False;User ID=sales_user;Password=mystrongpwd;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30; (you will need to alter this connection string for your own Azure SQL server and database)
- DatEnvironment = dat (ie in lower case)
- In the Tasks tab add 15 Deploy to Kubernetes tasks and disable all but the first one so the release can be tested after each task is configured. Note that when configuring the tasks below only the required entries and changes to defaults are listed.
- Configure the first Deploy to Kubernetes task as follows:
- Display name = Delete image pull secret
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = delete
- Arguments = secret $(AcrAuthenticationSecretName)
- Control Options > Continue on error = checked
- Configure the second Deploy to Kubernetes task as follows:
- Display name = Create image pull secret
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = create
- Arguments = secret docker-registry $(AcrAuthenticationSecretName) --namespace=$(DatEnvironment) --docker-server=$(AcrName).azurecr.io --docker-username=$(AcrName) --docker-password=$(AcrPassword) --docker-email=fred@bloggs.com (note that the email address can be anything you like)
- Configure the third Deploy to Kubernetes task as follows:
- Display name = Delete ASPNETCORE_ENVIRONMENT config map
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = delete
- Arguments = configmap aspnetcore.env
- Control Options > Continue on error = checked
- Configure the fourth Deploy to Kubernetes task as follows:
- Display name = Create ASPNETCORE_ENVIRONMENT config map
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = create
- Arguments = configmap aspnetcore.env --from-literal=ASPNETCORE_ENVIRONMENT=$(DatEnvironment)
- Configure the fifth Deploy to Kubernetes task as follows:
- Display name = Delete DB_CONNECTION_STRING secret
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = delete
- Arguments = secret db.connection
- Control Options > Continue on error = checked
- Configure the sixth Deploy to Kubernetes task as follows:
- Display name = Create DB_CONNECTION_STRING secret
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = create
- Arguments = secret generic db.connection --from-literal=DB_CONNECTION_STRING="$(DatDbConn)"
- Configure the seventh Deploy to Kubernetes task as follows:
- Display name = Delete MESSAGE_QUEUE_URL config map
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = delete
- Arguments = configmap message.queue
- Control Options > Continue on error = checked
- Configure the eighth Deploy to Kubernetes task as follows:
- Display name = Create MESSAGE_QUEUE_URL config map
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = create
- Arguments = configmap message.queue --from-literal=MESSAGE_QUEUE_URL=nats://message-queue-service.$(DatEnvironment):4222
- Configure the ninth Deploy to Kubernetes task as follows:
- Display name = Create message-queue service
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = apply
- Use Configuration files = checked
- Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/message-queue-service.yaml
- Configure the tenth Deploy to Kubernetes task as follows:
- Display name = Create megastore-web service
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = apply
- Use Configuration files = checked
- Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/megastore-web-service.yaml
- Configure the eleventh Deploy to Kubernetes task as follows:
- Display name = Create message-queue deployment
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = apply
- Use Configuration files = checked
- Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/message-queue-deployment.yaml
- Configure the twelfth Deploy to Kubernetes task as follows:
- Display name = Create megastore-web deployment
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = apply
- Use Configuration files = checked
- Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/megastore-web-deployment.yaml
- Configure the thirteenth Deploy to Kubernetes task as follows:
- Display name = Update megastore-web with latest image
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = set
- Arguments = image deployment/megastore-web-deployment megastoreweb=$(AcrName).azurecr.io/megastoreweb:$(Build.BuildNumber)
- Configure the fourteenth Deploy to Kubernetes task as follows:
- Display name = Create megastore-savesalehandler deployment
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = apply
- Use Configuration files = checked
- Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/megastore-savesalehandler-deployment.yaml
- Configure the fifteenth Deploy to Kubernetes task as follows:
- Display name = Update megastore-savesalehandler with latest image
- Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
- Namespace = $(DatEnvironment)
- Command = set
- Arguments = image deployment/megastore-savesalehandler-deployment megastoresavesalehandler=$(AcrName).azurecr.io/megastoresavesalehandler:$(Build.BuildNumber)
That's a heck of a lot of configuration, so what exactly have we built?
The first eight tasks deal with the configuration that supports the services and deployments:
- The image pull secret stores the credentials to the Azure Container Registry so that deployments that need to pull images from the ACR can authenticate.
- The ASPNETCORE_ENVIRONMENT config map sets the environment for ASP.NET Core. I don't do anything with this but it could be handy for troubleshooting purposes.
- The DB_CONNECTION_STRING secret stores the connection string to the Azure SQL database and is used by the megastore-savesalehandler-deployment.yaml configuration.
- The MESSAGE_QUEUE_URL config map stores the URL to the NATS message queue and is used by the megastore-web-deployment.yaml and megastore-savesalehandler-deployment.yaml configurations.
As mentioned above, a limitation of the VSTS Deploy to Kubernetes task means that to update Secrets and ConfigMaps they need to be deleted first and then created again. This does mean that an exception is thrown the first time a delete task is run, however the Continue on error option ensures that the release doesn't fail.
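Expressed in the same YAML sketch style as above, the pattern for the MESSAGE_QUEUE_URL config map would be a delete task with Continue on error enabled followed by a create task. The config map name and the $(MessageQueueUrl) variable are assumptions for illustration:

```yaml
# Sketch of the delete-then-create pattern for a config map, assuming the
# Kubernetes@1 task. The config map name and $(MessageQueueUrl) variable are assumptions.
- task: Kubernetes@1
  displayName: Delete MESSAGE_QUEUE_URL config map
  continueOnError: true   # the first ever run fails because the config map doesn't exist yet
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: megastore-aks   # assumed service connection name
    namespace: $(DatEnvironment)
    command: delete
    arguments: configmap message-queue-url
- task: Kubernetes@1
  displayName: Create MESSAGE_QUEUE_URL config map
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: megastore-aks
    namespace: $(DatEnvironment)
    command: create
    arguments: configmap message-queue-url --from-literal=MESSAGE_QUEUE_URL=$(MessageQueueUrl)
```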
The remaining seven tasks deal with the deployment and configuration of the components (other than the Azure SQL database) that make up the MegaStore application:
- The NATS message queue requires a service so that other components can talk to it, plus a deployment that specifies the pod specification for the image.
- The MegaStore.Web front end requires a service so that it is exposed to the outside world, plus a deployment that specifies the pod specification for the image (both services are sketched after this list).
- The MegaStore.SaveSaleHandler monitoring component only needs a deployment that specifies the pod specification for the image, as nothing connects to it directly.
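As a rough guide to the difference between the two services mentioned above, the message queue only needs to be reachable from inside the cluster, whereas the web front end needs an external IP address, which is what the LoadBalancer service type provides. A minimal sketch follows; names and ports are assumptions rather than the repo's actual manifests:

```yaml
# Illustrative sketch only - names and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: message-queue-service
spec:
  selector:
    app: message-queue
  ports:
    - port: 4222          # default NATS client port
---
apiVersion: v1
kind: Service
metadata:
  name: megastore-web-service
spec:
  type: LoadBalancer      # provisions an external IP via the Azure load balancer
  selector:
    app: megastore-web
  ports:
    - port: 80
      targetPort: 80
```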
If everything has been configured correctly then triggering a release should result in a megastore-web-service being created. You can check that the deployment was successful by executing kubectl get services --namespace=dat to get the external IP address of the LoadBalancer, which you can paste into a browser to confirm that the ASP.NET Core website is running. On the backend, you can use SQL Server Management Studio to connect to the database and confirm that records are being created in dbo.Sale.
If you are running into problems, you can run the Kubernetes Dashboard to find out what is failing. Typically it's deployments that fail, and navigating to Workloads > Deployments can highlight the failing deployment. You can find out what the error is from the New Replica Set panel by clicking on the Logs icon, which brings up a new browser tab with a command-line-style output of the error. If there is no error it displays any Console.WriteLine output. Very neat:
Create a PRD Release Environment in VSTS
With a DAT environment created we can now create other environments on the route to production. These could be whatever else is needed to test the application, but here I'm just going to create a production environment that I'll call PRD. I described this process in my previous post so here I'll just list the high level steps:
- Clone the DAT environment and rename it PRD.
- In the Variables tab rename the cloned DatDbConn and DatEnvironment variables (the ones with PRD scope) to PrdDbConn and PrdEnvironment and change their values accordingly.
- In the Tasks tab visit each task and change all references to $(DatDbConn) and $(DatEnvironment) to $(PrdDbConn) and $(PrdEnvironment). All Namespace fields will need changing, and many of the tasks that use the Arguments field will need attention.
- Trigger a build and check that the deployment was successful by executing kubectl get services --namespace=prd to get the external IP address of the LoadBalancer, which you can paste into a browser to confirm that the ASP.NET Core website is running.
Wrapping Up
Although the final result is a CI/CD pipeline that certainly works, there are more tasks than I'm happy with due to the need to delete and then recreate Secrets and ConfigMaps, and this also adds quite a bit of overhead to the time it takes to deploy to an environment. There's bound to be a more elegant way of doing this that either exists now and I just don't know about it, or that will exist in the future. Do post in the comments if you have thoughts.
Although I'm three posts in, I've barely scratched the surface of the different topics that I could cover, so there's plenty more to come in this series. Next time will probably be around health and / or monitoring.
Cheers—Graham