Azure DevOps Hidden Gems #8 – Turn Azure Board Queries into Dashboard Chart Items

Posted by Graham Smith on December 10, 2019

I've been working with what we now call Azure DevOps for many years and I'm constantly amazed by features I never knew existed or which I vaguely knew existed but hadn't fully appreciated. In this blog post series I'm attempting to shine a light on some of these hidden gems for the benefit of others. The full list of posts is here and if you have any suggestions for other posts please leave a comment!

I was recently helping someone proficient in Jira find the equivalent feature in Azure DevOps. They were trying to find an Azure DevOps Dashboard widget that would display a pie chart of work items segmented by owner. Whilst there are a couple of widgets that report against work items, they are quite limited and likely not what you want. It's not really a problem though, because a powerful and flexible solution exists!

As the title of the blog has already given away, it's possible to turn Azure Board Queries into charts that can be displayed on Dashboards. The first step is to navigate to Boards > Queries and create a query:

Then, and crucially, save the query as a Shared Query:

Now in the query menu bar switch from Editor to Charts and click New Chart:

There are plenty of options to choose from but I've created a pie chart grouped by Assigned To:

Click OK to save the chart and then use the ellipsis to Add to dashboard:

You will now be able to select the dashboard you want the chart to appear on. You can make further edits to the chart from the dashboard using the widget's ellipsis but note that if you do so the chart will become unlinked from the query. There's not really much more to it but if you do want to dig deeper the official documentation is here.

Hope this helps!

Cheers -- Graham

Azure DevOps Hidden Gems #7 – Keyboard Shortcuts

Posted by Graham Smith on November 26, 2019

I've been working with what we now call Azure DevOps for many years and I'm constantly amazed by features I never knew existed or which I vaguely knew existed but hadn't fully appreciated. In this blog post series I'm attempting to shine a light on some of these hidden gems for the benefit of others. The full list of posts is here and if you have any suggestions for other posts please leave a comment!

Until recently if you'd asked me if Azure DevOps supported keyboard shortcuts I'd have guessed at yes because, well, it's a Microsoft product and that's what they would do; however, I'd have had to perform a web search to get any more details. So imagine my surprise and delight when I stumbled on the Keyboard shortcuts for Azure DevOps and Team Explorer page in the official documentation and realised that Azure DevOps is positively teeming with keyboard shortcuts!

My enthusiasm was initially tempered slightly by the last updated date of the page (January 2017 as I write), which made me feel that the page hadn't received any love for a while, although that seems to be an oversight as the page has since been updated with Azure DevOps references. It was also tempered by the fact that some of the shortcuts don't seem to work. In particular, the shortcuts at the organisation level didn't work for me. Perhaps this is because the UI is in a state of flux and the shortcuts haven't caught up—who knows?

However, once you have drilled in to a project then shortcuts (or most of them) certainly do work and can really speed up your navigation, both between the different core areas of Azure DevOps and within an area. In particular I love the Global g-series shortcuts for moving between core areas:

These g-series shortcuts can really make you zip around Azure DevOps as if you had written the UI yourself! Within each area of Azure DevOps there are more shortcuts for that area; for example, these are the ones for the Repos area:

Of course the problem with shortcuts is remembering them. If you are working with Azure DevOps on a reasonably regular basis as I am then the g-series shortcuts are definitely worth memorising. If you spend a lot of time in one of the specific areas then it may well pay off to master the shortcuts for that area as well.

Hope this helps!

Cheers -- Graham

Setting up a Raspberry Pi Kubernetes Cluster (with Blinkt! Strips that Show Number of Pods per Node!) Using k3sup

Posted by Graham Smith on November 19, 2019

If, like me, you are interested in the worlds of both Raspberry Pi and Kubernetes you may have built or considered building a Raspberry Pi Kubernetes cluster (see here for just one of many examples). I built a three-unit cluster in early 2018 using Raspberry Pi 3 Model B+ boards and bootstrapped Kubernetes using an early version of Alex Ellis' guide, and it was all pretty straightforward. By itself it's not a great thing to demo (in my case at my local Raspberry Jam, for example) as there is no display, so a nice improvement is to fit Pimoroni Blinkt! LED strips to the GPIO pins of each unit and then use the guide here to make the LEDs light up according to the number of pods that have been deployed. The Blinkt! improvement went fine and was working nicely—right up to the day when I decided to upgrade the Raspberry Pi OS from Raspbian Stretch to Raspbian Buster (which came out to support the new Raspberry Pi 4 model).

The first problem was that Docker wouldn't install on Buster, but that was solved through a post by Alex Ellis. However I encountered other problems, such as the swapfile not turning off between reboots and (critically) the Weave networking pods failing to start, and I spent a lot of time messing around to no avail. The sensible option would have been to revert to Raspbian Stretch but by now I had the bit between my teeth and I wasn't giving up lightly. After even more messing about trying different configurations and getting nowhere I decided to follow Alex Ellis' advice and try k3s—a stripped-down version of Kubernetes from Rancher Labs (the k3s name is a twist on the often-used k8s abbreviation for Kubernetes). In fact, you can make things even easier by using Alex's k3sup tool to automate most of the process.

TL;DR: I had everything up-and-running on Raspbian Buster in next to no time at all, including the Blinkt! LED strips displaying the number of pods on each node!

If you are looking to learn bare-metal Kubernetes installation then k3s/k3sup may not be for you. But if you just want to get a cluster configured with minimal fuss then it's just the ticket. As always there were a few twists and turns so here is a write-up of what worked for me, although I'm not documenting every single step because it's already well covered by Alex. I'm using a Windows 10 development machine so this write-up is from that perspective; however, since Alex's documentation is more Linux/macOS focused, between the two everyone should be able to follow along.

Install k3s Using k3sup

I used three guides that together provided a complete picture for installing k3s on the Raspberry Pi platform using k3sup:

  1. Will it cluster? k3s on your Raspberry Pi
  2. Kubernetes Homelab with Raspberry Pi and k3sup
  3. k3sup

Use the first guide to (if necessary) build your cluster and then prepare each Pi. In addition to setting the GPU memory split, changing the hostname and changing the password for each Pi, I also expanded the filesystem to use all of the SD card. You will need the IP addresses of your master and worker nodes, so setting static IP addresses is a good way to go.

Follow the first guide up to and including Enable container features. I simply used sudo nano /boot/cmdline.txt to edit the file, being careful not to add an extra line, since everything in cmdline.txt must stay on a single line.
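For reference, the Enable container features step amounts to appending cgroup settings to the end of that single line. From memory of the guide it's these parameters, but do check the guide itself for the current list:

    cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory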

Use guides 2 and 3 to install k3s using k3sup. The key point to understand here is that you run k3sup from your development machine. As I'm running Windows 10 it was a case of grabbing the Windows binary from the k3sup releases page and copying it to my working folder for this project. On Windows bootstrapping the master node is simply a matter of opening a command prompt at your working folder and running this command, replacing $SERVER with the IP address of the master node:
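The command itself was originally shown as a screenshot; based on the k3sup documentation it takes this shape, assuming the default pi user and that your SSH public key has already been copied to each Pi:

    k3sup install --ip $SERVER --user pi

As well as installing k3s on the master node, this conveniently saves the cluster's kubeconfig file into the working folder, which we'll use shortly.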

Bootstrapping the worker nodes is similarly straightforward:
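Again as a sketch under the same assumptions:

    k3sup join --ip $AGENT --server-ip $SERVER --user pi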

where $AGENT is the IP of the worker node and $SERVER the IP of the master node.

Communicating with the Cluster using kubectl

The k3s installation includes kubectl so from this point on it's just like working with any standard Kubernetes cluster. You'll obviously need kubectl installed on your development machine, and then you configure kubectl to talk to the cluster using whichever of the several techniques works best for you, using the kubeconfig file that is handily copied to the working folder on your workstation. In my case I chose simply to copy kubeconfig to ~\.kube\ (which had been previously created through working with Azure Kubernetes Service but you can create it yourself) and rename kubeconfig to config to end up with C:\Users\Graham\.kube\config. Running kubectl get nodes establishes that everything is (hopefully!) working correctly.
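On Windows the whole dance is something like this, run from the working folder (the paths assume my username, so adjust to suit):

    mkdir %USERPROFILE%\.kube
    copy kubeconfig %USERPROFILE%\.kube\config
    kubectl get nodes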

Configuring the Cluster so Pimoroni Blinkt! LED strips Indicate the Number of Pods per Node

The master guide to follow is here. With the Blinkt! LED strips installed you'll need to download the following files from the guide's repo to your working folder:

  • kubernetes/blinkt-k8s-controller-rbac.yaml
  • kubernetes/blinkt-k8s-controller-ds.yaml

You'll also need to create a manifest containing a deployment. Create a file in the working folder called deployment.yaml and copy the following:
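The manifest isn't reproduced here, but as a minimal sketch that matches the behaviour described below (five replicas, scheduled only on nodes labelled for Blinkt!) something like this should work—note that the deployment name and container image are my assumptions rather than the guide's exact values:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: blinkt-demo
    spec:
      replicas: 5                  # five pods means five LEDs
      selector:
        matchLabels:
          app: blinkt-demo
      template:
        metadata:
          labels:
            app: blinkt-demo
        spec:
          nodeSelector:
            deviceType: blinkt     # only schedule on nodes labelled in step 1 below
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.1   # any lightweight multi-arch image will do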

Now run the following commands against the cluster, from a command prompt at the working folder:

  1. kubectl label node $NODE_NAME deviceType=blinkt (where $NODE_NAME is the name of each node with a Blinkt! strip)
  2. kubectl create -f blinkt-k8s-controller-rbac.yaml
  3. kubectl create -f blinkt-k8s-controller-ds.yaml
  4. kubectl apply -f deployment.yaml

At this point you should see five LEDs light up according to how Kubernetes has decided which nodes the five pods should run on. You can now open deployment.yaml in your favourite code editor and play around with the number of replicas, repeating the final command above after saving each change to the file. Watch in joy as pods are created with a green flash and destroyed with a red flash, and settle on a satisfying blue (which you can change in blinkt-k8s-controller-ds.yaml) for running pods.

Final Thoughts

I've been thrilled with how easy k3sup makes installing k3s, and even if you do want to experience the pain (sorry, thrill) of the kubeadm procedure on Raspberry Pi I would still recommend you check out Alex's posts mentioned here and others on his blog, as they offer tremendous extra value and learning.

Cheers -- Graham

Azure DevOps Hidden Gems #6 – Use the Manual Intervention Task to Pause a Stage of the Release Pipeline

Posted by Graham Smith on September 17, 2019

I've been working with Azure DevOps quite a lot recently (having used its predecessors for many years) and I'm constantly amazed by features I never knew existed or which I vaguely knew existed but hadn't fully appreciated. In this blog post series I'm attempting to shine a light on some of these hidden gems for the benefit of others. The full list of posts is here and if you have any suggestions for other posts please leave a comment!

If you've created a release pipeline in Azure DevOps you probably know that there is rich functionality for approvals and gates to control a deployment between stages of the pipeline. Approvals are as you would imagine: a requirement for one or more people to approve either that a release stage is allowed to proceed or that a release stage has completed successfully. Gates are slightly different. From the docs: "Gates allow you to configure automated calls to external services, where the results are used to approve or reject a deployment. You can use gates to ensure that the release meets a wide range of criteria, without requiring user intervention.":

That's all well and good for controlling the deployment between stages of a pipeline. But what if you need to control the flow within a stage of a pipeline?

A colleague and I had this requirement recently when designing a release pipeline to manage the creation and updating of resources in Azure using Terraform, the cross-platform infrastructure as code technology from HashiCorp. One of the useful features of Terraform is the ability to call a command (terraform plan) that will work out which resources are going to be created, destroyed or updated for a given Terraform configuration—a bit like the -WhatIf parameter in PowerShell, if you are familiar with that. The problem in our scenario is that we need to halt the pipeline part way through a stage so someone can look at what Terraform is about to do and decide if it makes sense, and abort the stage if it doesn't. There is a solution of course, and it's the Manual Intervention task.
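For anyone who hasn't used it, the command in question works along these lines (a sketch; the plan file name is arbitrary):

    terraform plan -out=tfplan    # preview what will be created, changed or destroyed
    terraform apply tfplan        # later, apply exactly the plan that was reviewed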

The task is simplicity itself to use, although it does need to run in the context of an Agentless job. Don't forget to add this or you will search in vain for the Manual Intervention task:

With the Manual Intervention task added to the Agentless job it's just a matter of setting a few properties:

When the pipeline is running it halts at the Manual Intervention task and waits for an intervention to either Resume or Reject the release:

If you weren't aware of this task you might be tempted to split a stage into two stages to handle this scenario. Whilst this would probably work, to my mind it's messy and inelegant, and you should certainly check out the Manual Intervention task first.

Hope this helps!

Cheers -- Graham

Azure DevOps Hidden Gems #5 – Only Download Artifacts Needed for Stages of a Release

Posted by Graham Smith on July 24, 2019

I've been working with Azure DevOps quite a lot recently (having used its predecessors for many years) and I'm constantly amazed by features I never knew existed or which I vaguely knew existed but hadn't fully appreciated. In this blog post series I'm attempting to shine a light on some of these hidden gems for the benefit of others. The full list of posts is here and if you have any suggestions for other posts please leave a comment!

If you are as impatient as I am then builds and releases can never finish quickly enough, and consequently I am always delighted to find a potential optimisation. My jaw dropped as I read about this one—how come I don't remember ever having previously read about it or even seen it?

The optimisation relates to classic releases (ie ones comprising visual tasks), although there is an equivalent for YAML releases. By default all the artefacts of a build are downloaded for a classic release, but what if you don't need everything? Then your release is probably taking longer than necessary! The good news is as of Sprint 131 we've been able to select just the artefacts that are needed for each stage of the pipeline.

To achieve this, open a classic release for editing. Under Tasks select the first stage and click on Agent job (or whatever the Run on agent is called):

Now in the right-hand pane scroll down to Artifact download and click the down arrow to show all the artifacts from the build. Simply deselect the ones that aren't required:

The specifics above aren't really important, but for completeness this is my QA stage where I want to deploy a website and then run automated acceptance tests. I don't need anything else. The great thing though is that you can now repeat this for other stages. In my example the next stage is PRD where I'm only deploying the website to the live environment. I don't need the acceptance tests, so I can deselect them. Great!

You can find the official documentation here, which also links to the equivalent way to do this in YAML pipelines.
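For reference, in YAML pipelines this is handled with the download keyword, where you name just the artifacts a stage needs. A minimal sketch (the artifact name is my example):

    steps:
    - download: current      # download from the current run
      artifact: WebSite      # just this artifact; omit this line to download them all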

Hope this helps!

Cheers -- Graham

Azure DevOps Hidden Gems #4 – Understand Build Agents by Installing One Locally on Your Development Machine

Posted by Graham Smith on July 12, 2019

I've been working with Azure DevOps quite a lot recently (having used its predecessors for many years) and I'm constantly amazed by features I never knew existed or which I vaguely knew existed but hadn't fully appreciated. In this blog post series I'm attempting to shine a light on some of these hidden gems for the benefit of others. The full list of posts is here and if you have any suggestions for other posts please leave a comment!

If you've ever examined the logs generated by the agents in Azure Pipelines that do all the actual work you might have puzzled over what exactly is going on behind the scenes as your code is built and deployed. I know I have! We can see from build and release tasks that there are variables such as $(Build.ArtifactStagingDirectory) and $(System.DefaultWorkingDirectory) that point to folders where things are happening, and that it's all taking place in a folder hierarchy with seemingly cryptic folder names such as D:\a\1\s or D:\a\r1\a. But what exactly is happening in all these different folders?

If you are using Microsoft-hosted agents then they are pretty much black boxes and there is no way to peel back the covers and see what's going on. You can though output a list of all the variables and their values—see below. If you are using Self-hosted agents and you have appropriate permissions to remote to the server then you might have better luck seeing what's going on. However, if your server is a critical part of your build and release process, or it's a headless server, or the agent is running in a Docker container, then it's probably not a good idea to go poking around, or at the very least there are extra hurdles you don't want to contend with.

A simple answer to this is to install an agent on your local machine. You can then play around to your heart's content safe in the knowledge that you have full visibility of what's happening and that you won't break a critical system. The process is pretty straightforward as follows:

  1. Create a dedicated Agent Pool in Azure DevOps at Organization Settings > Pipelines > Agent Pools > New agent pool.
  2. From the same location download the agent for your OS.
  3. Create a folder (such as c:\build-agent if you are on Windows) and unzip the contents of the agent download to this folder.
  4. Follow the instructions for configuring the agent which are available for Linux, macOS and Windows (a sketch of the Windows flavour follows this list). Don't forget to choose the Agent Pool you created earlier and run the agent as a service as recommended.
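On Windows, configuration boils down to running config.cmd from the agent folder. Here's a sketch with assumed values: the organisation URL, pool name and agent name are placeholders to replace with your own, and the PAT is a personal access token you generate in Azure DevOps:

    :: run from the agent folder in an elevated prompt; all values below are examples
    .\config.cmd --unattended ^
      --url https://dev.azure.com/yourorg ^
      --auth pat --token <your-PAT> ^
      --pool LocalDev --agent LocalAgent ^
      --runAsService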

Those steps are all that's required to get an agent up-and-running. Next up is to start using the agent to build and perhaps release an application. Chances are that you have a test project handy but if not it's quick to create one and get it configured for build—I usually create an ASP.NET Core application. I won't go through that process here except to say that whether you use something that already exists or you create something from scratch obviously you need to configure your build (and release) to use the Agent Pool you created earlier.

You will also need the tools that are used to build (and release) your application installed locally on your workstation. In the .NET world, if you have Visual Studio installed then you've probably got everything you need for a simple demo application. However if you are using any specialist tools such as Selenium for automated tests then there will be more to do. Exactly what you need is obviously tool specific, but you can get an idea from the Microsoft-hosted agents. For example, if you need chromedriver.exe then by looking at the Details tab of one of the hosted agents you can see that the path of chromedriver.exe is set by an environment variable called ChromeWebDriver:

In this case all you need to do is create a folder, copy chromedriver.exe to the folder and create a system environment variable to point to the folder. (You might have to reboot for the new variable to be recognised.)
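From an elevated command prompt that might look like this (the folder path is just my example):

    mkdir C:\tools\chromedriver
    copy chromedriver.exe C:\tools\chromedriver\
    setx ChromeWebDriver "C:\tools\chromedriver" /M    # /M makes it a system-wide variable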

With build (and perhaps release) configured you can now poke around in the folder structure of your agent to see exactly what is happening and where. A great diagnostic tip for any build or release is to output all the environment variables and their values to the logs. On Windows simply create a command line task and have it execute cmd /k set. On Linux use printenv | sort with a Bash script. I use this technique as a standard component of builds and releases, and if you are using Microsoft-hosted agents printenv | sort works universally, presumably because on Windows agents there is some sort of PowerShell alias at work.
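In YAML form it's a one-liner; a sketch:

    steps:
    - script: printenv | sort              # or 'cmd /k set' on a Windows self-hosted agent
      displayName: Dump environment variables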

Hope this helps!

Cheers -- Graham

Azure DevOps Hidden Gems #3 – Pull Request Validation Builds AND Releases

Posted by Graham Smith on July 4, 2019

I've been working with Azure DevOps quite a lot recently (having used its predecessors for many years) and I'm constantly amazed by features I never knew existed or which I vaguely knew existed but hadn't fully appreciated. In this blog post series I'm attempting to shine a light on some of these hidden gems for the benefit of others. The full list of posts is here and if you have any suggestions for other posts please leave a comment!

If you are using git in Azure Repos you can protect a branch (master for example) with a branch policy that forces any changes to master to come in via a pull request to merge code from another branch. Branch policies have a fantastic wealth of options, and whilst they are definitely a gem I don't think they are exactly hidden:

One of the options available from Protect this branch is the ability to run a validation build against an ad hoc merge of the source and destination branches. This allows the proposed merge to be subjected to unit tests and anything else you might have in place to help with code quality. Typically you'll want to use the build that is normally run as part of the deployment pipeline, but of course not all tasks will need to run—there's probably no point in deploying artifacts for a validation build, for example. This is where my previous tip comes into play—the ability to run tasks conditionally according to custom conditions.
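As a sketch of the sort of thing I mean, a custom condition like this on an artifact-publishing task would skip it whenever the build was triggered by a pull request (Build.Reason is a predefined variable):

    and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))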

The ability to have a proposed merge validated by a build is great, but there's more! It's also possible to extend this concept to one or more stages of the release pipeline. For example, if the first stage of your release pipeline is configured to run automated acceptance tests you can have these run against the proposed merge following a successful validation build. Brilliant!

You can find the instructions for configuring validation releases here and a great walkthrough of how to configure an end-to-end scenario by Microsoft's Olivier Léger here. I've used the validation build and release feature and I love it, so do give it a try if it's a fit for your scenario.

Hope this helps!

Cheers -- Graham

Versioning .NET Core Assemblies in Azure DevOps isn’t Straightforward (and Probably Won’t be in Other CI/CD Tools Either)

Posted by Graham Smith on June 26, 2019

As part of ongoing work to enhance an existing Azure DevOps CI/CD pipeline that builds and deploys an ASP.NET Core application I thought I'd spend a pleasant 5 minutes versioning the .NET Core assemblies with the pipeline's build number. A couple of hours and 20+ test builds later...

Out of the box, creating a new build in Azure Pipelines using the ASP.NET Core template in the classic editor results in five tasks, of which four are concerned with dotnet commands:

A quick look at the documentation for dotnet build, and then at this awesome blog post that explains the dizzying array of options, made it pretty clear that adding /p:Version=$(Build.BuildNumber) as a command line parameter to dotnet build should suffice as a good starting point. Except it didn't, with File version and Product version stubbornly remaining at their default values:

I established that /p:Version= works fine from a command line, so what's going on? After a bit of research and testing I discovered that unless you tell it otherwise dotnet publish (and dotnet test for that matter) compiles the application before doing its thing of publishing files to a folder. The way the Azure Pipelines tasks are configured means that dotnet publish is effectively cancelling out the effect of dotnet build. (And since dotnet test also cancels out the effect of dotnet build, it leaves me wondering what the point of including dotnet build is in the first place.) As part of this research I also discovered that build, test and publish all do a restore unless told otherwise, again making me wonder what the point of the Restore task is. So out of the box it seems the four .NET Core tasks result in lots of duplication and, for someone like me, head-scratching as to why assembly versioning doesn't work.

So based on a few hours of testing here is what I think the arguments of the different tasks need to be (for visual tasks or as YAML) to avoid duplication and implement assembly versioning.

Firstly, if you want to include an explicit Restore task:

  • build = --configuration $(BuildConfiguration) --no-restore /p:Version=$(Build.BuildNumber)
  • test = --configuration $(BuildConfiguration) --no-build
  • publish = --configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory) --no-build

Secondly, if you want to omit an explicit Restore task:

  • test = --configuration $(BuildConfiguration)
  • publish = --configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory) /p:Version=$(Build.BuildNumber)

In the first version build creates the binaries which are then used by test and publish, with the --no-build switch implicitly setting the --no-restore flag. I haven't tested it but that presumably means that --configuration $(BuildConfiguration) for test and publish is redundant.

Update: A friend and former colleague tweeted to say that --configuration is still needed for test and publish.
In the second version test and publish both create their own sets of binaries. (Is that the right thing to do from a purist CI/CD perspective? Maybe, maybe not.)
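For completeness, here is roughly what the first version looks like in YAML, using the DotNetCoreCLI@2 task (the project globs are my assumptions; adjust for your solution layout):

    steps:
    - task: DotNetCoreCLI@2
      displayName: Restore
      inputs:
        command: restore
        projects: '**/*.csproj'
    - task: DotNetCoreCLI@2
      displayName: Build
      inputs:
        command: build
        projects: '**/*.csproj'
        arguments: '--configuration $(BuildConfiguration) --no-restore /p:Version=$(Build.BuildNumber)'
    - task: DotNetCoreCLI@2
      displayName: Test
      inputs:
        command: test
        projects: '**/*Tests.csproj'
        arguments: '--configuration $(BuildConfiguration) --no-build'
    - task: DotNetCoreCLI@2
      displayName: Publish
      inputs:
        command: publish
        publishWebProjects: true
        arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory) --no-build'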

I did my testing on a Microsoft-hosted build agent and whilst it felt like both options above were quicker than the default settings I can't be certain without rigorous testing on a self-hosted agent with no other load. Either way though, it feels good to have optimised the tasks and I finally got assembly versioning working. Are there other optimisations? Have I missed something? Please leave a comment!

Cheers -- Graham

Azure DevOps Hidden Gems #2 – Run Build or Release Tasks According to Custom Conditions

Posted by Graham Smith on June 24, 2019

I've been working with Azure DevOps quite a lot recently (having used its predecessors for many years) and I'm constantly amazed by features I never knew existed or which I vaguely knew existed but hadn't fully appreciated. In this blog post series I'm attempting to shine a light on some of these hidden gems for the benefit of others. The full list of posts is here and if you have any suggestions for other posts please leave a comment!

Imagine this scenario: you have a code branch on which you want to run an all-singing, all-dancing build packed full of tasks, and another branch where you only want to run a subset of those tasks. Cloning the build and stripping out unwanted tasks to create a second build is the answer, right? Not necessarily! It turns out that most tasks can be set to run conditionally, according to criteria that you specify.

To configure this feature (I'm illustrating using visual tasks but there is a YAML equivalent) open the task and head over to Control Options. For Run this task select Custom conditions and then enter your conditions in Custom condition:

In the build task example above the task will only run if the build is succeeding and the build is running against the master branch. For any other branches it will be skipped.
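For reference, the condition in that example is written like this (it's the stock example from the documentation):

    and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))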

To understand the full capabilities of this fantastic feature you should take a look at the Conditions overview page and then the Expressions page, which has a full guide to the conditions syntax. I'll blog soon about a specific scenario where this feature is exactly what is needed to avoid creating a second build and the potential maintenance issue that a second build causes.

Hope this helps!

Cheers -- Graham

Azure DevOps Hidden Gems #1 – Use Secure Files in a Build or Release Pipeline

Posted by Graham Smith on June 19, 2019

I've been working with Azure DevOps quite a lot recently (having used its predecessors for many years) and I'm constantly amazed by features I never knew existed or which I vaguely knew existed but hadn't fully appreciated. In this blog post series I'm attempting to shine a light on some of these hidden gems for the benefit of others. The full list of posts is here and if you have any suggestions for other posts please leave a comment!

If you've created a Build or Release pipeline in Azure DevOps you've probably used the Variables feature to store either plain text or secret variables that can be passed in to the build or pipeline:

This works well for plain text, but what if you have more complicated requirements, such as secrets contained in a file that can't simply be copied as plain text into a standard variable? Sure, there are solutions external to Azure DevOps that you could use (Azure Key Vault for example) but you could end up using a sledgehammer to crack a nut. No matter though, as Azure DevOps provides a solution through Secure Files. You can find this by navigating to Pipelines > Library and then clicking the Secure Files tab:

In the screenshot above I've used + Secure file to upload a file called config (which in this particular case is a file that contains credentials for connecting to an Azure Kubernetes Service cluster). Secure files are made available in the build or pipeline through the use of the Download Secure File task, which places the file in the $(Agent.TempDirectory) directory of the Azure Pipelines Agent. The file can then be used on a command line where a parameter is expecting a file, for example:
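The original snippet was shown as a screenshot, but as a sketch the downloaded file can be passed wherever a command expects a path, along these lines (the secret name and the $(ApiKey) variable are my examples):

    kubectl --kubeconfig $(Agent.TempDirectory)/config create secret generic app-secrets --from-literal=apiKey=$(ApiKey)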

This is obviously a very specific example (an incomplete extract of a Bash script that is using kubectl to create secrets on a Kubernetes cluster) but hopefully you get the idea of how secret files can be used. Once the build or release has completed the file gets deleted—a good thing on a self-hosted agent although Microsoft-hosted agents are destroyed anyway after use.

Hope this helps!

Cheers -- Graham