Deploy a Dockerized Application to Azure Kubernetes Service using Azure YAML Pipelines 6 – Telemetry and Diagnostics

Posted by Graham Smith on May 26, 2020

This is the sixth post in a series where I'm taking a fresh look at how to deploy a dockerized application to Azure Kubernetes Service (AKS) using Azure Pipelines after having previously blogged about this in 2018. The list of posts in this series is as follows:

  1. Getting Started
  2. Terraform Development Experience
  3. Terraform Deployment Pipeline
  4. Running a Dockerized Application Locally
  5. Application Deployment Pipelines
  6. Telemetry and Diagnostics (this post)

One of the problems with running applications in containers in an orchestration system such as Kubernetes is that it can be harder to understand what is happening when things go wrong. So while instrumenting your application for telemetry and diagnostic information should be fairly high on your to-do list anyway, it is even more important when running applications in containers. Whilst there are lots of third-party offerings in the telemetry and diagnostics space, in this post I take a look at what's available for those wanting to stick with the Microsoft experience. If you want to follow along you can clone or fork my repo here, and if you haven't already done so please take a look at the first post to understand the background, what this series hopes to cover and the tools mentioned in this post.

Azure DevOps Environments

If you are following along with this series you may recall that in the last post we configured an Azure DevOps Pipeline Environment for the Kubernetes cluster. It turns out that these are great for quickly taking a peek at the health of the components deployed to a cluster. For example, this is what's displayed for the MegaStore.SaveSaleHandler deployment and pods:

It gets better though because you can drill in to the pods and view the log for each pod. This is the log from the message-queue-deployment pod:

Of course, pipeline environments only really tell you what's going on at that moment in time (or maybe for the previous few minutes depending on how busy the logs are). In order to capture sufficient retrospective data to be useful requires the services of a dedicated tool.

Application Insights

From the docs: Application Insights, a feature of Azure Monitor, is an extensible Application Performance Management (APM) service for developers and DevOps professionals. When using it in conjunction with an application, as we are here, there are several configuration options to address. I give an overview below, but everything is implemented in the sample application here.

Instrumentation keys

When using Application Insights with an application that is deployed to different environments it's important to take steps to ensure that telemetry from different environments is not mixed up together. The principal technique to avoid this is to have separate Application Insights resource instances, each with its own instrumentation key. Each stage of the deployment pipeline is then configured to make the appropriate instrumentation key available, and the application running in that stage of the pipeline sends telemetry back using that key. The Terraform configuration developed in a previous post created an Application Insights resource instance for each of the three environments the MegaStore application runs in:

When working with containers probably the easiest way to make an instrumentation key available to applications is via an environment variable named APPINSIGHTS_INSTRUMENTATIONKEY. An ASP.NET Core application component will automatically recognise APPINSIGHTS_INSTRUMENTATIONKEY; in other components it may need to be read and passed to the SDK manually. The MegaStore application contains a helper class (MegaStore.Helper.Env) to surface environment variables to calling code.
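As an illustration, a minimal sketch of that kind of helper (the real class in the repo may differ in detail, and the property name here is an assumption):

```csharp
using System;

namespace MegaStore.Helper
{
    // Sketch of an environment variable helper.
    public static class Env
    {
        // Application Insights SDKs look for this well-known variable name.
        public static string AppInsightsInstrumentationKey =>
            Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY") ?? string.Empty;
    }
}
```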

Server-Side Telemetry

Each component of an application that is required to generate server-side telemetry needs, at the very least, to consume one of the Application Insights SDKs as a NuGet package. The MegaStore.Web ASP.NET Core component is configured with Microsoft.ApplicationInsights.AspNetCore and the MegaStore.SaveSaleHandler .NET Core console application component with Microsoft.ApplicationInsights.WorkerService.

These components are configured via IServiceCollection. For a .NET Core console application the code is along the following lines.
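A minimal sketch rather than the repo's exact code, assuming the Microsoft.ApplicationInsights.WorkerService package is installed and the instrumentation key arrives via the environment variable described above:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.Extensions.DependencyInjection;

internal static class Program
{
    private static void Main()
    {
        var services = new ServiceCollection();

        // Registers Application Insights for a non-web (worker/console) component;
        // the SDK picks up APPINSIGHTS_INSTRUMENTATIONKEY from the environment.
        services.AddApplicationInsightsTelemetryWorkerService();

        var serviceProvider = services.BuildServiceProvider();
        var telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>();
        // ... run the message handling loop, sending telemetry as it goes ...
    }
}
```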

For an ASP.NET Core application the code is similar; however, IServiceCollection is supplied via the ConfigureServices method of the Startup class.
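Again a sketch rather than the repo's exact code, this time assuming the Microsoft.ApplicationInsights.AspNetCore package:

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers request, dependency and exception tracking for the web app;
        // the instrumentation key again comes from APPINSIGHTS_INSTRUMENTATIONKEY.
        services.AddApplicationInsightsTelemetry();

        services.AddControllersWithViews();
    }
}
```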

Client Side Telemetry

For web applications you may want to generate client-side usage telemetry, and in ASP.NET Core applications this is achieved through two configuration steps (the result is sketched after this list):

  1. In _ViewImports.cshtml add @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet
  2. In _Layout.cshtml add @Html.Raw(JavaScriptSnippet.FullScript) at the end of the <head> section but before any other script.
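Putting the two steps together, the relevant part of _Layout.cshtml ends up looking something like this (a sketch; surrounding markup omitted):

```cshtml
<head>
    @* ...meta tags, styles and so on... *@

    @* Emits the Application Insights JavaScript snippet that reports page
       views and client-side errors back to the configured resource. *@
    @Html.Raw(JavaScriptSnippet.FullScript)

    @* Any other scripts come after the snippet. *@
</head>
```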

You can read more about this here.

Kubernetes Enhancements

Since the deployed version of MegaStore runs under Kubernetes we can take advantage of the Microsoft.ApplicationInsights.Kubernetes NuGet package to enhance the standard Application Insights telemetry with Kubernetes-related information. With the NuGet package installed you simply call the AddApplicationInsightsKubernetesEnricher extension method on IServiceCollection.
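For example, in the same service configuration shown earlier:

```csharp
// Adds Kubernetes context (pod, replica set, deployment, node and so on)
// to every telemetry item this component sends.
services.AddApplicationInsightsKubernetesEnricher();
```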

Visualising Application Components with Cloud Role

Application Insights uses an Application Map to visualise the components of a system. It will name each component automatically, but it's a good idea to set the name explicitly. This is achieved through a CloudRoleTelemetryInitializer class, which you will need to add to each component that needs tracking.
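A sketch of such an initializer (the version in the repo may differ in detail):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Stamps every telemetry item with an explicit cloud role name so the
// Application Map shows a meaningful name for this component.
public class CloudRoleTelemetryInitializer : ITelemetryInitializer
{
    private readonly string roleName;

    public CloudRoleTelemetryInitializer(string roleName)
    {
        this.roleName = roleName;
    }

    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.Cloud.RoleName = roleName;
    }
}
```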

The important part of the code above is setting the RoleName. The initializer is then registered via IServiceCollection with a single line of code.
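A sketch of the registration, with an illustrative role name:

```csharp
// "megastore-web" is an illustrative name; use one name per component.
services.AddSingleton<ITelemetryInitializer>(new CloudRoleTelemetryInitializer("megastore-web"));
```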

Custom Telemetry

Adding your own custom telemetry is achieved with the TelemetryClient class. In an ASP.NET Core application TelemetryClient can be obtained in a controller through dependency injection as described here. In a .NET Core console app it's resolved from IServiceProvider. The implementation in MegaStore.SaveSaleHandler follows this pattern.
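A sketch of the resolution step (the repo contains the full implementation):

```csharp
// Requires Microsoft.ApplicationInsights and Microsoft.Extensions.DependencyInjection.
// After AddApplicationInsightsTelemetryWorkerService has been called on the
// ServiceCollection, resolve a fully configured TelemetryClient from it:
IServiceProvider serviceProvider = services.BuildServiceProvider();
TelemetryClient telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>();
```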

There are then several options for generating telemetry, including TelemetryClient.TrackEvent, TelemetryClient.TrackTrace and TelemetryClient.TrackException.
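For example (the message and event names here are illustrative):

```csharp
telemetryClient.TrackTrace("Dequeued message from message queue"); // diagnostic breadcrumb
telemetryClient.TrackEvent("SaleSaved");                           // business-level event

try
{
    // ... save the sale to the database ...
}
catch (Exception ex)
{
    telemetryClient.TrackException(ex); // captures the exception and its stack trace
    throw;
}
```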

Generating Data

With all this configuration out of the way we can now start generating data. The first step is to find the IP addresses of the MegaStore.Web home pages for the qa and prd environments. One way is to bring up the Kubernetes Dashboard by running the following command: az aks browse --resource-group yourResourceGroup --name yourAksCluster.

Now switch to the desired namespace and navigate to Discovery and Load Balancing > Services to see the IP address of megastore-web-service.
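Alternatively, kubectl will show the same details from the command line (the qa namespace is used here for illustration):

```bash
kubectl get service megastore-web-service --namespace qa
```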

My preferred way of generating traffic to these web pages is with a PowerShell snippet run from Azure Cloud Shell (which stops your own machine from being overloaded if you really crank things up by reducing the Start-Sleep value).
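The snippet is along these lines (the IP address is a placeholder and the sleep interval controls the request rate):

```powershell
# Simple load generator: request the home page in a loop.
$uri = 'http://<megastore-web-service-external-ip>'
while ($true) {
    Invoke-WebRequest -Uri $uri -UseBasicParsing | Out-Null
    Start-Sleep -Milliseconds 500
}
```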

As things stand you will probably mostly see 'routine' telemetry. If you want to simulate exceptions you can change the name of one of the columns in the database table dbo.Sale.

Visualising Data

Without any further configuration there are several areas of Application Insights that will now start displaying data. Here is just a small selection of what's available:

Overview Panel

Application Map Panel

Live Metrics Panel

Search Panel

Whilst all these overview representations of data are very useful, it is in the detail that things get most interesting. Drilling in to an individual Trace, for example, shows a useful set of standard Trace properties:

And also a set of custom Trace properties courtesy of Microsoft.ApplicationInsights.Kubernetes:

Drilling in to a synthetic exception (due to a changed column name in the database) provides details of the exception and also the stack trace:

These are just a few examples of what's available, and a fuller list is available here. And this is just the application monitoring side of the wider Azure Monitor platform; a good starting point to see where Application Insights fits in to the bigger picture is the overview page here.

That's it folks!

That's it for this mini series! As I said in the first post, the ideas presented in this series are not meant to be the definitive, one and only, way of deploying a Dockerized ASP.NET Core application to Azure Kubernetes Service. Rather, they are intended to show my journey and hopefully give you ideas for doing things differently and better. To give you one example, I'm uneasy about having Kubernetes deployments described in static YAML files. Making modifications by hand somehow feels error prone and inefficient. There are other options though, and this post has a good explanation of the possibilities. From that post we learn about a tool called Kustomize, and there is an Azure DevOps Kubernetes manifest task that uses Kustomize. I've not explored this task yet but it looks like a good next step in understanding how to evolve Kubernetes deployments.

If you have your own ideas for evolving the ideas in this series do leave a comment!

Cheers -- Graham

Versioning .NET Core Assemblies in Azure DevOps isn’t Straightforward (and Probably Won’t be in Other CI/CD Tools Either)

Posted by Graham Smith on June 26, 2019

As part of ongoing work to enhance an existing Azure DevOps CI/CD pipeline that builds and deploys an ASP.NET Core application I thought I'd spend a pleasant 5 minutes versioning the .NET Core assemblies with the pipeline's build number. A couple of hours and 20+ test builds later...

Out of the box, creating a new build in Azure Pipelines using the ASP.NET Core template in the classic editor results in five tasks of which four are concerned with dotnet commands:

A quick look at the documentation for dotnet build, and then at this awesome blog post that explains the dizzying array of versioning options, made it pretty clear that adding /p:Version=$(Build.BuildNumber) as a command line parameter to dotnet build should suffice as a good starting point. Except it didn't, with File version and Product version stubbornly remaining at their default values:

I established that /p:Version= works fine from a command line, so what's going on? After a bit of research and testing I discovered that unless you tell it otherwise dotnet publish (and dotnet test for that matter) compiles the application before doing its thing of publishing files to a folder. The way the Azure Pipelines tasks are configured means that dotnet publish is effectively cancelling out the effect of dotnet build. (And since dotnet test also cancels out the effect of dotnet build, it leaves me wondering what the point of including dotnet build is in the first place.) As part of this research I also discovered that build, test and publish all do a restore unless told otherwise, again making me wonder what the point of the Restore task is. So out of the box the four .NET Core tasks result in lots of duplication, and for someone like me they were the cause of head-scratching as to why assembly versioning doesn't work.

So based on a few hours of testing here is what I think the arguments of the different tasks need to be (for visual tasks or as YAML; a YAML sketch follows the two lists below) to avoid duplication and implement assembly versioning.

Firstly, if you want to include an explicit Restore task:

  • build = --configuration $(BuildConfiguration) --no-restore /p:Version=$(Build.BuildNumber)
  • test = --configuration $(BuildConfiguration) --no-build
  • publish = --configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory) --no-build

Secondly, if you want to omit an explicit Restore task:

  • test = --configuration $(BuildConfiguration)
  • publish = --configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory) /p:Version=$(Build.BuildNumber)
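As YAML, the first variant looks something like this; the tasks are the standard DotNetCoreCLI@2 task, but the display names and project globs are illustrative:

```yaml
steps:
- task: DotNetCoreCLI@2
  displayName: Restore
  inputs:
    command: restore
    projects: '**/*.csproj'

- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    command: build
    projects: '**/*.csproj'
    arguments: '--configuration $(BuildConfiguration) --no-restore /p:Version=$(Build.BuildNumber)'

- task: DotNetCoreCLI@2
  displayName: Test
  inputs:
    command: test
    projects: '**/*Tests*.csproj'
    arguments: '--configuration $(BuildConfiguration) --no-build'

- task: DotNetCoreCLI@2
  displayName: Publish
  inputs:
    command: publish
    publishWebProjects: true
    arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory) --no-build'
```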

In the first version, build creates the binaries which are then used by test and publish, with the --no-build switch implicitly setting the --no-restore flag. I haven't tested it, but that presumably means that --configuration $(BuildConfiguration) for test and publish is redundant.

Update: a friend and former colleague tweeted that --configuration is still needed for test and publish.

In the second version test and publish both create their own sets of binaries. (Is that the right thing to do from a purist CI/CD perspective? Maybe, maybe not.)

I did my testing on a Microsoft-hosted build agent and, whilst it felt like both options above were quicker than the default settings, I can't be certain without rigorous testing on a self-hosted agent with no other load. Either way though, it feels good to have optimised the tasks and I finally got assembly versioning working. Are there other optimisations? Have I missed something? Please leave a comment!

Cheers -- Graham

Upgrade a Dockerized ASP.NET Core Application to the Latest Version of .NET Core

Posted by Graham Smith on August 15, 2018

In the combined worlds of .NET Core and Docker things are changing pretty quickly, and at some point you may well find yourself wanting to upgrade your Dockerized ASP.NET Core application. If you are upgrading a production application then you'll certainly want to follow the official guidance. In my case, and for the purposes of this blog post, I'm more concerned with the upgrade from a Docker perspective. It's not difficult, however there are a few steps which can leave you scratching your head if you miss them out, so I'm documenting my process for upgrading as it will certainly help me in the future and hopefully someone else as well.

Upgrading ASP.NET Core

  1. Download and install the latest version of .NET Core from here. From a command prompt run dotnet --list-runtimes to show what you have installed. In my case the latest version was 2.1.2.
  2. Ensure you are running the latest version of Visual Studio 2017. At the time of writing version 15.8.0 had just been released.
  3. Open your VS solution and from the Application tab of the Properties page of each project you want to upgrade change the Target framework to the required version (the project file sketch after this list shows the element this edits).
  4. Using your technique of choice now upgrade all of the NuGet packages for the solution.
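For reference, the Target framework change in step 3 amounts to editing this element in each project file (a sketch assuming an upgrade to .NET Core 2.1):

```xml
<!-- Illustrative fragment of a .csproj file -->
<PropertyGroup>
  <TargetFramework>netcoreapp2.1</TargetFramework>
</PropertyGroup>
```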

Upgrading Docker files

This is the bit which will have you scratching your head if your Docker files are targeting an earlier version of .NET Core than the version you have just upgraded to, as your solution will build but not run under Docker. The error message (something like "It was not possible to find any compatible framework version. The specified framework 'Microsoft.NETCore.App', version '2.1.0' was not found.") makes complete sense when you remember it is being generated from a container running an earlier version of .NET Core.

The answer of course is to change the Docker files in your solution to refer to an image running a later version of .NET Core. However, this is also a great opportunity to upgrade your Docker files to the latest specification used in new Visual Studio projects, as it does seem to change on every release. I do this by simply creating a new ASP.NET Core project in Visual Studio and then working out what needs to change in the Docker file I'm upgrading. In my case this saw my Docker file change from
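something like this (a representative sketch of the older VS-generated format; MegaStore.Web and the COPY paths stand in for the real project):

```dockerfile
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY MegaStore.Web/MegaStore.Web.csproj MegaStore.Web/
RUN dotnet restore -nowarn:msb3202,nu1503
COPY . .
WORKDIR /src/MegaStore.Web
RUN dotnet build -c Release -o /app

FROM build AS publish
RUN dotnet publish -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MegaStore.Web.dll"]
```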

to
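something like this (again a sketch, based on the Docker file template in new VS 2017 15.8 projects; the same project names are assumed):

```dockerfile
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY MegaStore.Web/MegaStore.Web.csproj MegaStore.Web/
RUN dotnet restore MegaStore.Web/MegaStore.Web.csproj
COPY . .
WORKDIR /src/MegaStore.Web
RUN dotnet build MegaStore.Web.csproj -c Release -o /app

FROM build AS publish
RUN dotnet publish MegaStore.Web.csproj -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MegaStore.Web.dll"]
```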

The obvious changes to the specification are the removal of -nowarn:msb3202,nu1503 and changes to the Docker syntax. I'm not sure what improvements the syntax changes bring; however, it makes sense to me to keep up with the latest thinking from the folks writing the Docker files for Visual Studio projects.

On the face of it your project should now run as it did before the upgrade. However, in my case I was still getting error messages as per this GitHub issue. The problem for me was an outdated microsoft/dotnet:2.1-aspnetcore-runtime image, and running docker pull microsoft/dotnet:2.1-aspnetcore-runtime got things running again. Probably just something peculiar to my machine due to all the testing I do, but if you run in to this then hopefully this will do the trick.

Cheers -- Graham