Deploy a Dockerized Application to Azure Kubernetes Service using Azure YAML Pipelines 4 – Running a Dockerized Application Locally

Posted by Graham Smith on April 20, 2020

This is the fourth post in a series where I'm taking a fresh look at how to deploy a dockerized application to Azure Kubernetes Service (AKS) using Azure Pipelines after having previously blogged about this in 2018. The list of posts in this series is as follows:

  1. Getting Started
  2. Terraform Development Experience
  3. Terraform Deployment Pipeline
  4. Running a Dockerized Application Locally (this post)
  5. Application Deployment Pipelines
  6. Telemetry and Diagnostics

In this post I explain the components of the sample application I wrote to accompany this (and the previous) blog series and how to run the application locally. If you want to follow along you can clone / fork my repo here, and if you haven't already done so please take a look at the first post to understand the background, what this series hopes to cover and the tools mentioned in this post. Additionally, this post assumes you have created the infrastructure—or at least the Azure SQL dev database—described in the previous Terraform posts.

MegaStore Application

The sample application is called MegaStore and is about as simple as it gets in terms of a functional application. It's a .NET Core 3.1 application, and the idea is that a sales record (beers from breweries local to me, if you are interested) is created in the presentation tier and eventually persisted to a database via a message queue. The core components are:

  • MegaStore.Web: a skeleton ASP.NET Core application that creates a 'sales' record every time the home page is accessed and places it on a message queue.
  • NATS message queue: this is an instance of the nats image on Docker Hub using the default configuration.
  • MegaStore.SaveSaleHandler: a .NET Core console application that monitors the NATS message queue for new records and saves them to an Azure SQL database using EF Core.

When running locally in Visual Studio 2019 these application components work together using Docker Compose, which is a separate project in the Visual Studio solution. There are two configuration files in use which get merged together:

  • docker-compose.yml: contains the configuration for megastore.web and megastore.savesalehandler which is common to running the application both locally and in the deployment pipeline.
  • docker-compose.override.yml: contains additional configuration that is only needed locally.

There are a few steps you'll need to complete to run MegaStore locally.

Azure SQL dev Database

First configure the Azure SQL dev database created in the previous post. Using SQL Server Management Studio (SSMS), log in to Azure SQL, where Server name will be something like yourservername-asql.database.windows.net and Login and Password are the values supplied to the asql_administrator_login_name and asql_administrator_login_password Terraform variables. Once logged in, create the following objects using the files in the repo's sql folder (use Ctrl+Shift+M in SSMS to show the Template Parameters dialog to add the dev suffix):

  • A SQL login called sales_user_dev based on create-login-template.sql. Make a note of the password.
  • In the dev database a user called sales_user and a table called Sale based on configure-database-template.sql.

Note: if you are having problems logging in to Azure SQL from SSMS make sure you have correctly set a firewall rule to allow your local workstation to connect.

Docker Environment File

Next create a Docker environment file to store the database connection string. In Visual Studio create a file called db-credentials.env in the docker-compose project. All on one line, add the following connection string, substituting your own values for the server name and the sales_user_dev password:
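For illustration only (the variable name is a guess -- check the repo for the exact key MegaStore expects -- and the server, database and password values are placeholders):

    ConnectionStrings__SalesConnection=Server=tcp:yourservername-asql.database.windows.net,1433;Database=megastore-dev;User Id=sales_user_dev;Password=YourPasswordHere;Encrypt=True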

Note: since this file contains sensitive data it's important that you don't add it to version control. The .gitignore file that's part of the repo is configured to ignore db-credentials.env.

Application Insights Key

In order to collect Application Insights telemetry from a locally running MegaStore you'll need to edit docker-compose.override.yml to contain the instrumentation key for the dev instance of the Application Insights resource that was created in the previous post. You can find this in the Azure Portal in the Overview pane of the Application Insights resource:

I'll write more about Application Insights in a later post but in the meantime if you want to know more see this post from my previous 2018 series. It's largely the same with a few code changes for newer ways of doing things with updated NuGet packages.

Set docker-compose as Startup

The startup project in Visual Studio needs to be set to docker-compose by right-clicking docker-compose in the Solution Explorer and selecting Set as Startup Project:

Up and Running

You should now be able to run MegaStore using F5, which should result in a localhost-plus-port-number web page opening in your browser. Docker Desktop will need to be running, although I've noticed that newer versions of Visual Studio offer to start it automatically if required. Notice in Visual Studio the handy Containers window that gives some insight into what's happening:

To establish that everything is working, open SSMS and run select-from-sales.sql (in the sql folder in the repo) against the dev database. You should see a new 'beer' sales record. If you want to create more records you can keep reloading the web page in your browser, or run the generate-web-traffic.ps1 PowerShell snippet that's in the repo's pipeline folder, making sure that the URL is something like http://localhost:32768/ (your port number will likely be different).

To view Application Insights telemetry (from the Azure Portal) whilst running MegaStore locally you may need to be aware of services running on your network that could cause interference. In my case I could run Live Metrics and see activity in most of the graphs, however I initially couldn't use the Search feature to see trace and request telemetry (the screenshot is what I was expecting to see):

I initially thought this might be a firewall issue but it wasn't: it turned out to be the Pi-hole ad blocking service I have running on my network. It's easy to disable Pi-hole for a few minutes, or you can figure out which URLs need whitelisting. The bigger picture though is that if you don't see telemetry—particularly in a corporate scenario—you may have to do some investigation.

That's it for now! Next time we'll look at deploying MegaStore to AKS using Azure Pipelines.

Cheers -- Graham

A Better Way of Deploying a Dockerized Application to Azure Kubernetes Service Using Azure Pipelines

Posted by Graham Smith on January 21, 2019

Throughout 2018 I wrote a mini blog post series aimed at providing specific and detailed guidance on how to create a CI/CD pipeline using VSTS/Azure DevOps to deploy a dockerized ASP.NET Core application to Azure Kubernetes Service (AKS):

Whilst the resulting solution works, I wasn't entirely happy with several aspects, and I've spent a great deal of time thinking and tinkering to come up with something better. In this blog post I explain what I wasn't happy with and how my new solution addresses most of my concerns. You don't necessarily need to read the posts above as I'm going to provide some context, but doing so will probably make things much clearer if you are planning to implement any of my suggestions.

The sample application I've been using to deploy to Kubernetes consists of the following components:

  • ASP.NET Core web application, that sends messages to a
  • NATS message queue service, which pushes messages to a
  • .NET Core message queue handler application, which saves messages to an
  • Azure SQL database

Apart from the database all the components run as docker containers. The container images are built in an Azure Pipelines build pipeline and the images pushed to an Azure Container Registry (ACR). An Azure Pipelines release pipeline then deploys the necessary services and deployments to AKS, which causes the images to be pulled from ACR and instantiated as containers inside pods. My release pipeline consists of two environments: dat (developer automated test where automated acceptance tests might take place) and prd (production). That's just arbitrary of course and in a live scenario the pipeline can have whatever environments are needed.

My sample application is called MegaStore and you can find the code on GitHub here. In the rest of this post I explain my areas of concern and how I addressed them.

Azure Pipelines Tasks

Whilst there is no doubt that Azure Pipelines Tasks are great for quickly building a pipeline and definitely make it easier for those less familiar with the technology behind a task to get started, I now see some tasks as more of a curse than a blessing. I've particularly taken issue with tasks that wrap a command line application (such as docker or kubectl), which results in the task becoming something of a Swiss Army Knife. Why have I taken issue? There are several reasons, some specific to the Swiss Army Knife variety and some applying to tasks in general:

  • There is often a need to set mandatory fields in 'Swiss Army Knife' tasks even though those parameters will not be used by the chosen sub-command. Where there are multiple instances of the same task in use this becomes very tedious and is a potential maintenance problem when something changes. (Yes, I know tasks can be cloned but this doesn't make me any happier.)
  • Tasks by their nature only allow you to do what they have been coded to do and you can sometimes find yourself in a blind alley. For example, at the time of writing the only way I know of updating an existing Kubernetes ConfigMap without deleting it first and re-creating it is with a piped command, for example:
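    A sketch of the kind of piped command I mean (resource and key names are illustrative) -- the standard pattern of generating the manifest with a dry run and piping it to kubectl apply:

        kubectl create configmap message-queue-url --from-literal=messageQueueUrl=nats://message-queue:4222 \
          --dry-run=client -o yaml | kubectl apply -f -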

    Running a command such as this isn't possible with the current Deploy to Kubernetes Azure DevOps task, which is very limiting.
  • Speaking of command lines, my next issue is that tasks abstract you from what is actually going on behind the scenes. For simple tasks such as copying files this might be fine, however I've become frustrated at the way tasks such as Docker or Deploy to Kubernetes 'hide' what they are doing, and the way that makes fine-tuning that little bit harder. Additionally, for me it's also a lost learning opportunity—a missed chance to learn the full syntax of a command because the task is constructing it on your behalf.
  • Another big issue is that tasks such as Docker or Deploy to Kubernetes offer nothing in the way of code reusability, and break the DRY principle in multiple dimensions (ie there is scope for repetition within an environment and also across environments). To illustrate, the release pipeline in my 2018 mini blog series consisted of no fewer than 30 Deploy to Kubernetes tasks across two environments, resulting in a great deal of repetition.
  • Finally, the use of tasks in the current version of Azure Pipelines releases means that you don't have your 'code' under proper version control. I know there are changes coming that will help to address this, and whilst they will be welcome I think there is an opportunity to do better.

So what's my solution to all this? Very simply, get rid of multiple Swiss Army Knife tasks and implement Bash scripts running from a single Bash task. I started off by using the Inline script feature of Bash tasks but this didn't help with getting code in to version control and I also quickly realised that there were big code reusability opportunities to be had across environments by using File Path scripts. By using Bash scripts stored in the repo I solved all the issues mentioned above and in the case of the release portion of the pipeline I reduced the number of tasks from 15 in each environment to two! What follows are the techniques I used to achieve this for the Docker builds and Kubernetes deployments.

Converting Docker builds to use a Bash script was reasonably straightforward so I'll start by discussing the first problem I encountered when converting Deploy to Kubernetes tasks to Bash scripts, which was how to authenticate to Kubernetes. Tasks rely on the creation of a Kubernetes service connection (Project Settings > Service connections) and I'd been using the Kubeconfig version which involves pasting in the contents of the Kubeconfig file that gets created (if you run the appropriate command) when you set up an AKS cluster:

By tracing the logging output of the Deploy to Kubernetes tasks I could see what was happening: a Kubeconfig file was being saved to disk and referenced in a kubectl command using the --kubeconfig parameter that points to the file on disk. I could successfully pass the file in from an Artifact as a proof of concept, but how to store the Kubeconfig contents securely and create the file dynamically? The obvious choice was a secret variable, however that didn't work because it destroyed the Kubeconfig formatting, which is important in the re-hydrated file on disk. After a lot of fiddling I finally turned to LoECDA, who are super-responsive via Twitter, and very quickly the suggestion came back to try using Secure files (Pipelines > Library > Secure files). This worked perfectly: a file is first uploaded to the Secure files area and is then available for download using the Download Secure File task. The file is downloaded to a temporary folder which can be referenced as the $AGENT_TEMPDIRECTORY variable in a Bash script. Great!

Next up was sorting out the practicalities of using Bash scripts in Bash tasks. I created a deployment (dep) folder in the repo to hold the scripts and then arranged for this folder to be available as an Artifact created directly from the GitHub repo:

I used VS Code to create the Bash files, however in order for a file to be executed as a Bash script it needs its permissions set to make it executable (chmod +x). This needs to be done from a Linux environment and there are several possibilities for achieving this, including Windows Subsystem for Linux if you are on Windows 10. I chose to go with Azure Cloud Shell, which can be configured to run either a Bash or a PowerShell command line in the cloud! Once that was configured it was a case of cloning my repo, navigating to the dep folder and running chmod +x some-filename.sh. There's no GUI in Azure Cloud Shell so it does involve using git commands to push the changes back to GitHub. If this is new to you then git add *, git commit -m "Commit message" and git push origin master are what you need. To authenticate you'll likely need to use a personal access token unless you go to the bother of setting up SSH. It gets to be a bit of a pain having to enter credentials every time you want to push to GitHub, however the git config credential.helper store command will save credentials across Azure Cloud Shell sessions to make life easier.
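Pulled together, an Azure Cloud Shell session to do all this looks something like the following (the repo URL and file name are placeholders):

    git clone https://github.com/youraccount/yourrepo.git
    cd yourrepo/dep
    chmod +x some-filename.sh
    git config credential.helper store    # cache the PAT across Cloud Shell sessions
    git add *
    git commit -m "Make deployment scripts executable"
    git push origin master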

Finding out what commands needed to be executed in the Bash scripts required a bit of detective work, and involved a combination of understanding what the task was attempting to accomplish and then looking at the build or release logs to see the actual output. With the basic command figured out this exercise offered the opportunity to do a bit of fine tuning. For example, I'd been tagging my docker images with the latest tag but it turns out that this isn't a great idea for release pipelines. By writing the actual command myself I was able to get exactly what I wanted.

I describe how I organised the Bash scripts to move away from a monolithic pipeline below. In this section I want to describe the tips and tricks I used to actually write the Bash scripts. Generally, the scripts make heavy use of variables to make them applicable to all release environments, however there are some essential things to know:

  • Variables created as part of Azure DevOps pipelines can be used as variables (ie passed in to a script), however with the exception of secrets they are also created as environment variables which are available directly in scripts. This means that a variable created as MyVariable is available as $MYVARIABLE directly in a Bash script (environment variable names are upper case, and any periods in a variable name are converted to underscores to ensure valid syntax).
  • Variables created as part of Azure DevOps pipelines can have the same name as long as they are scoped to a different environment. So you can have two variables called MyVariable with different values for each environment and simply refer to $MYVARIABLE in the Bash script, ie no need to pass $MYVARIABLE in as a parameter to the script for different environments.
  • As mentioned above, secrets are not created as environment variables and must be passed in to a script via the Arguments field, with a variable declared in the script to accept the incoming parameter (see the sketch after this list). Important: as of the time of writing a secret needs to be passed in to the Arguments field as $(MYSECRET), ie with parentheses around the actual parameter name. If you omit the parentheses the secret is not passed in. A non-secret parameter doesn't require parentheses and I have queried whether this is a bug here.
  • Later in this post I explain how I break up a monolithic pipeline in to multiple pipelines, which results in the same variables being needed in different pipelines. By using Variable Groups I was able to avoid repeated variable declarations and manage many variables from just one location.
  • In addition to variables that are created manually, built-in variables are also available as environment variables in the script. The ones I've used are $AGENT_TEMPDIRECTORY to define the download location of the Kubeconfig file from the Secure files area, $RELEASE_ENVIRONMENTNAME to refer to the environment (ie dat or prd) and $BUILD_BUILDNUMBER, used to tag docker images with a unique build number in the build process and then to refer to them by that unique name in the release. However, there are many built-in variables available to use—see here for details, but remember that for use in Bash scripts you should change the text to upper case and must replace periods with underscores.
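To make the points above concrete, here is a minimal sketch of how a release script might consume these values (the variable, secret and file names are invented for illustration and are not the repo's actual ones):

    #!/bin/bash
    # Secrets are not exposed as environment variables, so this one arrives as a script argument,
    # passed from the Bash task's Arguments field as $(DbPassword)
    DB_PASSWORD=$1

    # Non-secret pipeline variables arrive as environment variables: a pipeline variable called
    # AcrName is available as $ACRNAME, and built-in variables work the same way
    echo "Deploying build $BUILD_BUILDNUMBER to the $RELEASE_ENVIRONMENTNAME environment for registry $ACRNAME"

    # The kubeconfig downloaded by the Download Secure File task sits in the agent's temp folder
    KUBECONFIG_FILE=$AGENT_TEMPDIRECTORY/kubeconfig
    kubectl get nodes --kubeconfig $KUBECONFIG_FILE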

I'm not a Bash scripting expert and I'm sure my scripts would be considered very rudimentary. The great thing though is that you can do whatever you like now the code is a script. Possibilities might include adding error handling or refactoring further using functions. There's potential to really go to town here.

Monolithic Pipeline

At the time of writing this article in early 2019 there aren't that many blog post examples of implementing a CI/CD pipeline to deploy an application to Kubernetes. Furthermore, the posts that do exist tend, not unreasonably, to use a simplistic application scenario to illustrate the concepts. Typically, this involves deploying the whole application as part of a single pipeline, and indeed this is the route I took with my 2018 blog post mini series. However, it quickly became apparent to me that this is an unsatisfactory arrangement for two main reasons:

  • Just one change to one of the application components would cause all the components of the application to be redeployed (or more correctly the parts of the application that have their docker images built by the pipeline).
  • A change to the Kubernetes configuration would also trigger a redeployment of all of the application components. Sometimes this is necessary but often it's not.

These issues arise because the trigger for the build component of the pipeline is set as the root of the GitHub repo, so if anything changes in the repo a build is triggered. Clearly not an optimal situation.

My solution to this problem is to divide the monolithic pipeline in to multiple pipelines that correspond to the individual components of the overall application. Then with a bit of refactoring of the codebase it's possible to use a very nifty feature of Azure Pipelines that allows a build to be triggered from one or more specific folders (or files for that matter) in the repo, ie a much more granular solution.

One complication that I had to cater for is that the pipeline isn't just building docker images and marshalling them in to the Kubernetes cluster: additionally, the pipeline is configuring Kubernetes elements such as Namespaces, Secrets and ConfigMaps.

Through the use of Bash scripts as described above the number of tasks needed is drastically reduced: just one Bash task for the builds and two tasks for releases (a Download Secure File task to copy the kubeconfig file to disk and a Bash task to host the bash script). All scripts are Namespace/environment aware.

In terms of Azure Pipelines build and release pipelines my current CI/CD solution is as follows:

megastore.init.release

This is a release that is not associated with a build and its sole purpose is to configure a Kubernetes Namespace in preparation for the deployment of the application. As such, this component is only intended to be run to either initialise a new Kubernetes cluster or (rarely) if one of the configuration items needs to change (in which case elements of the application will likely have to be redeployed for the configuration to be built in to the appropriate pods).

The configuration handled by megastore.init.release is as follows:

  • Creation of a Namespace for a corresponding Azure Pipelines environment.
  • Creation (or update) of ACR credentials (as a specialised Secret) that allow Deployments to pull docker images from ACR.
  • Creation (or update) of the message queue URL as a ConfigMap.
  • Creation (or update) of the Application Insights instrumentation key as a ConfigMap.

This configuration is handled by init.sh.
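To give a rough idea of what a script like init.sh does, here is a sketch (the resource names, pipeline variables and kubeconfig file name are illustrative assumptions, not the repo's actual code):

    #!/bin/bash
    NAMESPACE=$RELEASE_ENVIRONMENTNAME
    KCFG="--kubeconfig $AGENT_TEMPDIRECTORY/kubeconfig"
    ACR_PASSWORD=$1    # secret, passed in via the Bash task's Arguments field

    # Namespace for the corresponding Azure Pipelines environment
    kubectl create namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply $KCFG -f -

    # ACR credentials as a docker-registry Secret so Deployments can pull images from ACR
    kubectl create secret docker-registry acr-credentials --namespace $NAMESPACE \
      --docker-server=$ACRNAME.azurecr.io --docker-username=$ACRNAME --docker-password=$ACR_PASSWORD \
      --dry-run=client -o yaml | kubectl apply $KCFG -f -

    # Message queue URL and Application Insights instrumentation key as ConfigMaps
    kubectl create configmap message-queue-url --namespace $NAMESPACE \
      --from-literal=messageQueueUrl=$MESSAGEQUEUEURL --dry-run=client -o yaml | kubectl apply $KCFG -f -
    kubectl create configmap app-insights --namespace $NAMESPACE \
      --from-literal=instrumentationKey=$INSTRUMENTATIONKEY --dry-run=client -o yaml | kubectl apply $KCFG -f -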

megastore.message-queue.release

This is another release that is not associated with a build, and in this case the requirement is to deploy the NATS message queue service. The absence of a build is due to the docker image being pulled from Docker Hub. The downside of not having a build associated with the release is that if any of the NATS configuration changes the release needs to be triggered manually. I see this as an infrequent requirement though. The message queue service doesn't have any dependencies on any other part of the application and so is the first component to be deployed following the initial Kubernetes configuration.

The configuration handled by megastore.message-queue.release is as follows:

  • Deployment of the Kubernetes Service for the message queue.
  • Deployment of the Kubernetes Deployment for the message queue.

This configuration is handled by message-queue.sh.
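A sketch of what such a script might look like (the file names and kubeconfig name are illustrative):

    #!/bin/bash
    KCFG="--kubeconfig $AGENT_TEMPDIRECTORY/kubeconfig"
    kubectl apply $KCFG --namespace $RELEASE_ENVIRONMENTNAME -f message-queue-service.yaml
    kubectl apply $KCFG --namespace $RELEASE_ENVIRONMENTNAME -f message-queue-deployment.yaml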

megastore.savesalehandler.build and megastore.savesalehandler.release

This build and linked release are responsible for deploying a new version of the .NET Core message queue handler application, which receives messages from the message queue and saves them to an Azure SQL database. The docker image is built and uploaded to ACR using this generic Bash script. This in turn triggers megastore.savesalehandler.release, which deals with the following configuration:

  • Creation (or update) of the database connection string as a Secret.
  • Deployment of the Kubernetes Deployment for the message queue handler component.
  • Update the image for the Deployment to the latest version using the unique tag for the build that triggered the release.

This configuration is handled by megastore-savesalehandler.sh. The build is triggered through the Azure Pipelines Path filters feature:

Using the Path filters feature ensures that the build will only be triggered for continuous integration if a file in the specified folder is changed.
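To give a flavour of what a release script such as megastore-savesalehandler.sh needs to do, here is a sketch (the names are illustrative, not the repo's actual code):

    #!/bin/bash
    NAMESPACE=$RELEASE_ENVIRONMENTNAME
    KCFG="--kubeconfig $AGENT_TEMPDIRECTORY/kubeconfig"
    DB_CONNECTION_STRING=$1    # secret, passed in via the Bash task's Arguments field

    # Database connection string as a Secret (create or update)
    kubectl create secret generic db-connection --namespace $NAMESPACE \
      --from-literal=connectionString="$DB_CONNECTION_STRING" \
      --dry-run=client -o yaml | kubectl apply $KCFG -f -

    # Deployment for the message queue handler, then roll it to the image tagged with this build
    kubectl apply $KCFG --namespace $NAMESPACE -f megastore-savesalehandler-deployment.yaml
    kubectl set image $KCFG --namespace $NAMESPACE deployment/megastore-savesalehandler \
      megastore-savesalehandler=$ACRNAME.azurecr.io/megastore-savesalehandler:$BUILD_BUILDNUMBER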

megastore.web.build and megastore.web.release

This build and linked release are responsible for deploying a new version of the ASP.NET Core web application which sends messages to the message queue service. As with the message queue handler, the docker image is built and uploaded to ACR using this generic Bash script. The build triggers the megastore.web.release which deals with the following configuration:

  • Creation (or update) of the ASPNETCORE_ENVIRONMENT environment variable as a ConfigMap.
  • Deployment of the Kubernetes Deployment for the web component.
  • Deployment of the Kubernetes Service for the web component.
  • Update the image for the Deployment to the latest version using the unique tag for the build that triggered the release.

This configuration is handled by megastore-web.sh and once again the build is triggered through the Azure Pipelines Path filters feature:

As before, using the Path filters feature ensures that the build will only be triggered for continuous integration if a file in the specified folder is changed.

And Finally...

In breaking down a monolithic pipeline in to multiple pipelines I exposed the problem of what to do with the shared helper library of functions that is used by both the megastore.web and megastore.savesalehandler components, because if this code changes one or sometimes both components will need redeploying. I think the answer is that helper libraries like these do not belong in the Visual Studio solution and instead should be developed separately and distributed and referenced as NuGet packages.

One of my aspirations is to get as much pipeline configuration in the GitHub repo as possible and you might well ask why I'm not using yaml files. Apart from the fact that I just haven't had time to look at this in detail yet, at the time of writing it's only a partial solution as it's only available for the build portion of the pipeline. This will change hopefully later this year when the release portion of the pipeline is supported, and at that point I'll make the switch.

That's it for now! Whether you are deploying to AKS or somewhere else I hope this post has provided you with ideas to supercharge your Azure DevOps pipelines.

Cheers -- Graham

Upgrade a Dockerized ASP.NET Core Application to the Latest Version of .NET Core

Posted by Graham Smith on August 15, 2018

In the combined worlds of .NET Core and Docker things are changing pretty quickly and at some point you may well find yourself wanting to upgrade your Dockerized ASP.NET Core application. If you are upgrading a production application then you'll certainly want to follow the official guidance. In my case, and for the purposes of this blog post, I'm more concerned with the upgrade from a Docker perspective. It's not difficult, however there are a few steps that can leave you scratching your head if you miss them out, so I'm documenting my process for upgrading as it will certainly help me in the future and hopefully someone else as well.

Upgrading ASP.NET Core

  1. Download and install the latest version of .NET Core from here. From a command prompt run dotnet --list-runtimes to show what you have installed. In my case the latest version was 2.1.2.
  2. Ensure you are running the latest version of Visual Studio 2017. At the time of writing version 15.8.0 had just been released.
  3. Open your VS solution and from the Application tab of the Properties page of each project you want to upgrade change the Target framework to the required version:
  4. Using your technique of choice now upgrade all of the NuGet packages for the solution.

Upgrading Docker files

This is the bit that will have you scratching your head if your Docker files are targeting an earlier version of .NET Core than the version you have just upgraded to, as your solution will build but not run under Docker. The error message (something like "It was not possible to find any compatible framework version. The specified framework 'Microsoft.NETCore.App', version '2.1.0' was not found.") makes complete sense when you remember it is being generated from a container running an earlier version of .NET Core.

The answer of course is to change the Docker files in your solution to refer to an image running a later version of .NET Core. However, this is also a great opportunity to upgrade your Docker files to the latest specification used in new Visual Studio projects, as it does seem to change on every release. I do this by simply creating a new ASP.NET Core project in Visual Studio and then working out what needs to change in the Docker file I'm upgrading. In my case this saw my Docker file change from

to

The obvious changes to the specification are the removal of -nowarn:msb3202,nu1503 and changes to the Docker syntax. I'm not sure what improvements the changes to the syntax bring, however it makes sense to me to keep up with the latest thinking from the folks writing the Docker files for Visual Studio projects.

On the face of it your project should now run as it did before the upgrade. However, in my case I was still getting error messages as per this GitHub issue. The problem for me was an outdated microsoft/dotnet:2.1-aspnetcore-runtime image, and running docker pull microsoft/dotnet:2.1-aspnetcore-runtime got things running again. Probably just something peculiar to my machine due to all the testing I do, but if you run in to this then hopefully this will do the trick.

Cheers -- Graham

Deploy a Dockerized ASP.NET Core Application to Kubernetes on Azure Using a VSTS CI/CD Pipeline: Part 1

Posted by Graham Smith on February 20, 2018

Over the past 18 months or so I've written a handful of blog posts about deploying Docker containers using Visual Studio Team Services (VSTS). The first post covered deploying a container to a Linux VM running Docker and other posts covered deploying containers to a cluster running DC/OS—all running in Microsoft Azure. Fast forward to today and everything looks completely different from when I wrote that first post: Docker is much more mature with features such as multi-stage builds dramatically streamlining the process of building source code and packaging it in to containers, and Kubernetes has emerged as a clear leader in the container orchestration battle and looks set to be a game-changing technology. (If you are new to Kubernetes I have a Getting Started blog post here with plenty of useful learning resources and tips for getting started.)

One of the key questions that's been on my mind recently is how to use Kubernetes as part of a CI/CD pipeline, specifically using VSTS to deploy to Microsoft's Azure Container Service (AKS), which is now targeted at managing hosted Kubernetes environments. So in a new series of posts I'm going to be examining that very question, with each post building on previous posts as I drill deeper in to the details. In this post I'm starting as simply as I possibly can whilst still answering the key question of how to use VSTS to deploy to Kubernetes. Consequently I'm ignoring the Kubernetes experience on the development workstation, I only deploy a very simple application to one environment, and I'm not looking at scaling or rolling updates. All this will come later, but in the meantime I hope you'll find that this walkthrough will whet your appetite for learning more about CI/CD and Kubernetes.

Development Workstation Configuration

These are the main tools you'll need on a Windows 10 Pro development workstation (I've documented the versions of certain tools at the time of writing but in general I'm always on the latest version):

  • Visual Studio 2017—version 15.5.6 with the ASP.NET and web development workload.
  • Docker for Windows—stable channel 17.12.0-ce.
  • Windows Subsystem for Linux (WSL)—see here for installation details. I'm still using Bash on Ubuntu on Windows that I installed before WSL moved to the Microsoft Store and in this post I assume you are using Ubuntu. The aim of installing WSL is to run Azure CLI, although technically you don't need WSL as Azure CLI will run happily under a Windows command prompt. However using WSL facilitates running Azure CLI commands from a Bash script.
  • Azure CLI on Windows Subsystem for Linux—see here for installation (and subsequent upgrade) instructions. There are several ways to login to Azure from the CLI however I've found that the interactive log-in works well since once you're logged-in you remain so for quite a long time (many days for me so far). Use az -v to check which version you are on (2.0.27 was latest at time of writing).
  • kubectl on Azure CLI—the kubectl CLI is used to interact with a Kubernetes cluster. Install using sudo az aks install-cli.

Create Services in Microsoft Azure

There are several services you will need to set up in Microsoft Azure:

  • Azure Container Registry—see here for an overview and links to the various methods for creating an ACR. I use the Standard SKU for the better performance and increased storage.
  • Azure Container Service (AKS) cluster—see here for more details about AKS and how to create a cluster, however you may find it easier to use my script below. I started off by creating a cluster and then destroying it after each use, until I did some tests and found that a one-node cluster was costing pennies per day rather than the pounds per day I had assumed it would, and now I just keep the cluster running.
    • From a WSL Bash prompt run nano create_k8s_cluster.sh to bring up the nano editor with a new empty file. Copy and paste (by pressing right mouse key) the following script:
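      (The script isn't reproduced here; the following sketch shows its general shape -- the variable values, node count and location are illustrative, and the final line assumes WSL so that the config file ends up in C:\Users\Public as referenced later.)

      #!/bin/bash
      subscription="your-subscription-id"
      resourceGroup="k8s-aspnetcore-rg"
      clusterName="k8s-aspnetcore-aks"
      location="westeurope"

      az account set --subscription "$subscription"
      az group create --name $resourceGroup --location $location
      az aks create --resource-group $resourceGroup --name $clusterName --node-count 1 --generate-ssh-keys
      az aks get-credentials --resource-group $resourceGroup --name $clusterName
      cp ~/.kube/config /mnt/c/Users/Public/config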
    • Change the variables to suit your requirements. If you only have one Azure subscription you can delete the lines that set a particular subscription as the default, otherwise use az account list to list your subscriptions to find the ID.
    • Exit out of nano making sure you save the changes (Ctrl +X, Y) and then apply permissions to make it executable by running chmod 700 create_k8s_cluster.sh.
    • Next run the script using ./create_k8s_cluster.sh.
    • Once the cluster is fully up and running you can show the Kubernetes dashboard using az aks browse --resource-group $resourceGroup --name $clusterName.
    • You can also start to use the kubectl CLI to explore the cluster. Start with kubectl get nodes and then have a look at this cheat sheet for more commands to run.
    • The cluster will probably be running an older version of Kubernetes—you can check and find the procedure for upgrading here.
  • Private VSTS Agent on Linux—you can use the hosted agent (called Hosted Linux Preview at time of writing) but I find it runs very slowly; additionally, because a new agent is used every time you perform a build, it has to pull docker images down each time, which adds to the slowness. In a future post I'll cover running a VSTS agent from a Docker image running on the Kubernetes cluster, but for now you can create a private Linux agent running on a VM using these instructions. Although they date back to October 2016 they still work fine (I've checked them and tweaked them slightly).
    • Since we will only need this agent to build using Docker you can skip steps 5b, 5c and 5d.
    • Install a newer version of Git—I used these instructions.
    • Install docker-compose using these instructions and choosing the Linux tab.
    • Make the docker-user a member of the docker group by executing usermod -aG docker ${USER}.

Create VSTS Endpoints

In order to talk to the various Azure services you will need to create the following endpoints in VSTS (from the cog icon on the toolbar choose Services > New Service Endpoint):

  • Azure Resource Manager—to point to your MSDN subscription. You'll need to authenticate as part of the process.
  • Kubernetes Service Connection—to point to your Kubernetes cluster. You'll need the FQDN to the cluster (prepended with https://) which you can get from the Azure CLI by executing az aks show --resource-group $resourceGroup --name $clusterName, passing in your own resource group and cluster names. You'll also need the contents of the kubeconfig file. If you used the script above to create the cluster then the script copied the config file to C:\Users\Public and you can use Notepad to copy the contents.

Configure a CI Build

The first step to deploying containers to a Kubernetes cluster is to configure a CI build that creates a container and then pushes the container to a Docker registry—Azure Container Registry in this case.

Create a Sample App
  • Within an existing Team Project create a new Git repository (Code > $current repository$ > New repository) called k8s-aspnetcore. Feel free to select the options to add a README and a VisualStudio .gitignore.
  • Clone this repo on your development workstation:
    • Open PowerShell at the desired root folder.
    • Copy the URL from the VSTS code view of the new repository.
    • At the PowerShell prompt execute git clone along with the pasted URL.
  • Make sure Docker for Windows is running.
  • In Visual Studio create an ASP.NET Core Web Application in the folder the git clone command created.
  • Choose an MVC app and enable Docker support for Linux.
  • You should now be able to run your application using the green Docker run button on the Standard toolbar. What is interesting here is that the build process is using a multi-stage Dockerfile, ie the tooling to build the application is running from a Docker container. See Steve Lasker's post here for more details.
  • In the root of the repository folder create a folder named k8s-config, which we'll use later to store Kubernetes configuration files. In Visual Studio create a New Solution Folder with the same name and back in the file system folder create empty files named service.yaml and deployment.yaml. In Visual Studio add these files as existing items to the newly created solution folder.
  • The final step here is to commit the code and sync it with VSTS.
Create a VSTS Build
  • In VSTS create a new build based on the repository created above and start with an empty process.
  • After the wizard stage of the setup supply an appropriate name for the build and select the Agent queue created above if you are using the recommended private agent or Hosted Linux Preview if not.
  • Go ahead and perform a Save & queue to make sure this initial configuration succeeds.
  • In the Phase 1 panel use + to add two Docker Compose tasks and one Publish Build Artifacts task.
  • If you want to be able to perform a Save & queue after configuring each task (recommended) then right-click the second and third tasks and disable them.
  • Configure the first Docker Compose task as follows:
    • Display name = Build service images
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Docker Compose File = **/docker-compose.yml
    • Project Name = $(Build.Repository.Name)
    • Qualify Image Names = checked
    • Action = Build service images
    • Additional Image Tags = $(Build.BuildId)
    • Include Latest Tag = checked
  • Configure the second Docker Compose task as follows:
    • Display name = Push service images
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Docker Compose File = **/docker-compose.yml
    • Project Name = $(Build.Repository.Name)
    • Qualify Image Names = checked
    • Action = Push service images
    • Additional Image Tags = $(Build.BuildId)
    • Include Latest Tag = checked
  • Configure the Publish Build Artifacts task as follows:
    • Display name = Publish k8s config
    • Path to publish = k8s-config (this is the folder we created earlier in the repository root folder)
    • Artifact name = k8s-config
    • Artifact publish location = Visual Studio Team Services/TFS
  • Finally, in the Triggers section of the build editor check Enable continuous integration so that the build will trigger on a commit from Visual Studio.

So what does this build do? The first Docker Compose task uses the docker-compose.yml file to work out what images need building as specified by Dockerfile file(s) for different services. We only have one service (k8s-aspnetcore) but there could (and usually would) be more. With the image built on the VSTS agent the second Docker Compose task pushes the image to the Azure Container Registry. If you navigate to this ACR in the Azure portal and drill in to the Repositories section you should see your image. The build also publishes the yaml configuration files needed to deploy to the cluster.

Configure a Release Pipeline

We are now ready to configure a release to deploy the image that's hosted in ACR to our Kubernetes cluster. Note that you'll need to complete all of this section before you can perform a release.

Create a VSTS Release Definition
  • In VSTS create a new release definition, starting with an empty process and changing the name to k8s-aspnetcore.
  • In the Artifacts panel click on Add artifact and wire-up the build we created above.
  • With the build now added as an artifact click on the lightning bolt to enable the Continuous deployment trigger.
  • In the default Environment 1 click on 1phase, 0 task and in the Agent phase click on + to create three Deploy to Kubernetes tasks.
  • Configure the first Deploy to Kubernetes task as follows:
    • Display name = Create Service
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = apply
    • Use Configuration files = checked
    • Configuration File = $(System.DefaultWorkingDirectory)/k8s-aspnetcore/k8s-config/service.yaml
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any secret word of your choosing, to be used consistently across all tasks]
  • Configure the second Deploy to Kubernetes task as follows:
    • Display name = Create Deployment
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = apply
    • Use Configuration files = checked
    • Configuration File = $(System.DefaultWorkingDirectory)/k8s-aspnetcore/k8s-config/deployment.yaml
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any secret word of your choosing, to be used consistently across all tasks]
  • Configure the third Deploy to Kubernetes task as follows:
    • Display name = Update with Latest Image
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = set
    • Arguments = image deployment/k8s-aspnetcore-deployment k8s-aspnetcore=$yourAcrNameHere$.azurecr.io/k8s-aspnetcore:$(Build.BuildId)
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any secret word of your choosing, to be used consistently across all tasks]
  • Make sure you save the release but don't bother testing it out just yet as it won't work.
Create the Kubernetes configuration
  • In Visual Studio paste the following code in to the service.yaml file created above.
  • Paste the following code in to the deployment.yaml file created above. The code is for my ACR so you will need to amend accordingly (a rough sketch of both files appears after the explanation below).
  • You can now commit these changes and then head over to VSTS to check that the release was successful.
  • If the release was successful you should be able to see the ASP.NET Core website in your browser. You can find the IP address by executing kubectl get services from wherever you installed kubectl.
  • Another command you might try running is kubectl describe deployment $nameOfYourDeployment, where $nameOfYourDeployment is the metadata > name in deployment.yaml. A quick tip here is that if you only have one deployment you only need to type the first letter of it.
  • It's worth noting that splitting the service and deployment configurations in to separate files isn't necessarily a best practice however I'm doing it here to try and help clarify what's going on.

In terms of a very high level explanation of what we've just configured in the release pipeline, for a simple application such as an ASP.NET Core website we need to deploy two key objects:

  1. A Kubernetes Service which (in our case) is configured with an external IP address and acts as an abstraction layer for Pods which are killed off and recreated every time a new release is triggered. This is handled by the first Deploy to Kubernetes task.
  2. A Kubernetes Deployment which describes the nature of the deployment—number of Pods (via Replica Sets), how they will be upgraded and so on. This is handled by the second Deploy to Kubernetes task.

On first deployment these two objects are all that is needed to perform a release. However, because of the declarative nature of these objects they do nothing on subsequent releases if they haven't changed. This is where the third Deploy to Kubernetes task comes in to play—ensuring that after the first release subsequent releases do cause the container to be updated.
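For orientation, the service.yaml and deployment.yaml referred to above look roughly like the following sketch (shown together here for brevity; the names, ports and image reference are placeholders chosen to match the kubectl set image arguments above rather than the actual repo contents):

    apiVersion: v1
    kind: Service
    metadata:
      name: k8s-aspnetcore-service
    spec:
      type: LoadBalancer
      ports:
      - port: 80
      selector:
        app: k8s-aspnetcore
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: k8s-aspnetcore-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: k8s-aspnetcore
      template:
        metadata:
          labels:
            app: k8s-aspnetcore
        spec:
          containers:
          - name: k8s-aspnetcore
            image: yourAcrNameHere.azurecr.io/k8s-aspnetcore:latest
            ports:
            - containerPort: 80
          imagePullSecrets:
          - name: yoursecretname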

Wrapping Up

That concludes our initial look at CI/CD with VSTS and Azure Container Service (AKS)! As I mentioned at the beginning of the post I've purposely tried to keep this walkthrough as simple as possible, so watch out for the next installment where I'll build on what I've covered here.

Cheers—Graham

Continuous Delivery with Containers – Say Goodbye to IIS Express and LocalDB, with Visual Studio 2017, Docker and Windows Containers

Posted by Graham Smith on May 10, 2017

A view I've heard expressed a few times recently, and which I completely agree with, is that we need to be discovering problems with our applications as far to the left as possible, since it's much cheaper to fix problems there than further down the line towards—or even in—production. So with this in mind, is it just me who feels slightly uneasy that in the Visual Studio world the development and debugging of applications destined for Windows servers tends to be on Windows desktop machines, using lightweight counterparts of server applications such as IIS Express to host ASP.NET websites and LocalDB to host SQL Server databases? With this setup it seems like we could be storing up trouble for later in the pipeline...

Whether my unease is justified or not, I need feel troubled no more since the world of containers offers us a solution! Since Docker for Windows now supports Windows Containers and Visual Studio 2017 has support for Docker built in, we can now develop server applications on Windows 10 and run and debug them on the exact same operating systems they will run on in production.

In this post I take my version of Contoso University that I've been using for several years now and amend it so that in the developer inner loop phase (ie everything that happens before code is checked in to the build server) the website runs in a Windows Server 2016 container running IIS (rather than IIS Express) and the SQL Server Database Project runs on SQL Server 2016 (rather than LocalDB).

Development Environment

The world of containers is evolving rapidly and the tooling might have changed by the time you read this. At the time of writing my environment is as follows:

  • Windows 10 Professional version 1703 (OS Build 15063.250)
  • Visual Studio Enterprise 2017 version 15.1 (26403.7) with the ASP.NET and web development workload
  • Docker for Windows 17.03.1-ce running Windows containers (I recommend the stable channel as at the time of writing the edge version had a bug that caused a problem for Docker support in Visual Studio)

Depending on the speed of your internet connection you might want to docker pull the following images if you are planning on following along:
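Based on the images discussed below, the pulls in question are most likely these:

    docker pull microsoft/aspnet
    docker pull microsoft/mssql-server-windows-developer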

It's perhaps worth saying here that I'm using these images for convenience because they are available on Docker Hub. In a production scenario you probably wouldn't want to rely on an image as fully formed as microsoft/aspnet and you would probably start with microsoft/windowsservercore or microsoft/nanoserver and have full control of what is installed. You definitely wouldn't start with microsoft/mssql-server-windows-developer of course.

The Contoso University sample application is essentially the same as Microsoft's version except I've changed the database from Entity Framework Code First to a SQL Server Database Project. I've also changed the application to work with SQL Server authentication (rather than Windows authentication) thus removing the need for a domain controller to supply a domain account. You can get the starting point code from here and the final code here.

Adding Initial Docker Support

The first step towards Dockerizing Contoso University is to add initial Docker support for the ASP.NET web application (out-of-the-box support for SQL Server Database Projects isn't available). This is as simple as right-clicking the ContosoUniversity.Web project and choosing Add > Docker Support. This has three main visible effects:

  • A new docker-compose 'project' is added at Solution level and is made the Startup Project. This project contains several .yml files.
  • A Dockerfile file and a (nested) .dockerignore file are added to ContosoUniversity.Web.
  • The toolbar button that normally launches a browser has now switched to launching Docker:

The Dockerfile added to ContosoUniversity.Web is based on the microsoft/aspnet image so at this point you should now be able to run the application using the Docker toolbar button and have the website run in a Windows Container based on that image. The database side of things isn't working at this stage of course—Web.config is pointing to LocalDB and the container running the website can't see LocalDB.

To understand what has been created, open a PowerShell session and run docker images followed by docker ps. You should see that an image called contosouniversity.web has been created with a dev tag, and that this image has been used to create a container called something like dockercompose362878786_contosouniversity.web_1.

Adding Docker Support for the SQL Server Database Project

Adding Docker support for the SQL Server Database Project requires the following steps:

  1. Manually add a Dockerfile file and .dockerignore file to the root of ContosoUniversity.Database. Given that these files don't have file extensions and that database projects are quite prescriptive about what they think you should be adding it's easier to add them outside of Visual Studio and then add them in as existing items. (Note that if you are using Windows Explorer you'll need to create .dockerignore as .dockerignore.—Windows will drop the trailing period).
  2. Optionally, close Visual Studio and reopen the solution folder in a text editor such as Visual Studio Code. Open ContosoUniversity.Database.sqlproj and search for the Dockerfile and .dockerignore entries. Change them to look as follows to achieve the nested file effect in Visual Studio:
  3. .dockerignore just needs to contain an asterisk—meaning everything should be ignored.
  4. Dockerfile should contain the following code:
  5. Switching to the docker-compose ‘project', docker-compose.yml should be amended to the following:
  6. A change is also needed to docker-compose.vs.debug.yml which should be amended to the following:

At this point you should be able to run the application using the Docker toolbar button and again see the website running—in a Windows container. However this time a second image (contosouniversity.database, tagged with dev) and corresponding container (named something like dockercompose362878786_contosouniversity.database_1) will have been created, with the container now running SQL Server. This is a newly minted instance of SQL Server and doesn't have a database for our website to connect to, which is the next issue to address.

Connecting the Contoso University Website to its Database

These next steps assume you are following on from the previous section, ie that the website is open in a browser and that Visual Studio is still debugging.

  1. Leave the browser open but stop debugging in Visual Studio.
  2. In ContosoUniversity.Web edit Web.config so that the connection string Data Source points to contosouniversity.database:
  3. In a PowerShell session, find the IP address of the container running SQL Server using docker inspect and passing in enough of the container's ID to make it unique:
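     An illustrative form of the command (the Go template just extracts the container's IP address; replace abc with the first few characters of the container ID shown by docker ps):

         docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' abc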
  4. In ContosoUniversity.Database edit ContosoUniversity.publish.xml so that the Target database connection points to the IP address of the SQL Server container and change the authentication to SQL Server Authentication. The User Name should be sa (yes—I know) and the password should be the same as the one specified in the Dockerfile used to build the database image. Save the profile and then click Publish.
  5. Back in the web browser running the Contoso University website, click on one of the menu bar links (eg Departments) that causes a database query. If everything has worked you should now have a fully functioning application.

Understanding the Developer Inner Loop Workflow

At this point we have achieved our aim of running and debugging both the website and database components of Contoso University in containers running operating systems that are the same as would be used in production. Once the images and containers have been created they will—as far as my testing is concerned—continue to be used as long as nothing changes. This is the case even if Visual Studio, Docker or even the workstation are restarted. The great thing is that any changes made to the containers—for example updating the database schema—will be preserved. Of course, if something changes in one of the Dockerfile files the images and containers will be rebuilt and in the case of the database the publish file will need to be updated with a new IP address and the database will need to be published again from scratch. Also, if the solution is cleaned (ie Build > Clean Solution) the containers are removed and rebuilt, again necessitating publishing the database from scratch. Overall though, the developer inner loop workflow feels quite slick.

Next Steps

As things stand the compose and Dockerfile files are not ready to be used in a continuous delivery pipeline. The website Dockerfile for example has Contoso University being deployed as the Default Web Site rather than a ContosoUniversity website and the database Dockerfile doesn't cater for any persistent storage. There is also the problem of checking in the database project's publish profile with an IP address specific to one developer's workstation—a real pain for other developers. I'll address these issues as part of getting Contoso University working in a Docker-based continuous delivery pipeline in the next post in this series.

Cheers -- Graham

Continuous Delivery with Containers – Azure CLI Command for Creating a Docker Release Pipeline with VSTS Part 2

Posted by Graham Smith on March 14, 2017

In my previous post I described my experience of working through Microsoft's Continuous Integration and Deployment of Multi-Container Docker Applications to Azure Container Service tutorial which is a walkthrough of how to use an Azure CLI 2.0 command to create a VSTS deployment pipeline to push Docker images to an Azure Container Registry and then deploy and run them on an Azure Container Service running a DC/OS cluster. Whilst it's great to be able to issue some commands and have stuff magically appear it's unlikely that you would use this approach to create production-grade infrastructure: having precise control over naming things is one good reason. Another problem with commands that create infrastructure is that you don't always get a good sense of what they are up to, and that's what I found with the az container release create command.

So I spent quite a bit of time 'reverse engineering' az container release create in order to understand what it's doing, and in this post I describe, step-by-step, how to build what the command creates. In doing so I gained first-hand experience of what I think will be an important pattern for the future -- running VSTS agents in a container. If your infrastructure is in place it's quick and easy to set up, and if you want more agents it takes just seconds to scale to as many as you need. In fact, once I had figured out what was going on I found that working with Azure Container Service and DC/OS was pretty straightforward and even a great deal of fun. Perhaps it's just me, but I found being able to create 50 VSTS agents at the 'flick of a switch' put a big smile on my face. Read on to find out just how awesome all this is...

Getting Started

If you haven't already worked through Microsoft's tutorial and my previous post I strongly recommend those as a starting point so you understand the big picture. Either way, you'll need to have the Azure CLI 2.0 installed and also to have forked the sample code to your own GitHub account and renamed it to something shorter (I used TwoServiceApp). My previous post has all the details. If you already have the Azure CLI installed do make sure you've updated it (pip install azure-cli --upgrade) since version 2.0 was recently officially released.

Creating the Azure Infrastructure

You'll need to create the following infrastructure in Azure:

  • A dedicated resource group (not strictly necessary but helps considerably with cleaning up the 30+ resources that get created).
  • An Azure container registry.
  • An Azure container service configured with a DC/OS cluster.

The Azure CLI 2.0 commands to create all this are as follows:
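
These are indicative rather than exact -- the names are examples of my own and, depending on your CLI version, az acr create may also ask for a SKU or storage account:

  # Resource group to hold everything, making cleanup a one-liner later on
  az group create --name TwoServiceAppRg --location westeurope

  # Azure container registry with the admin user enabled
  az acr create --name twoserviceappacr --resource-group TwoServiceAppRg --admin-enabled true

  # Azure container service running a DC/OS cluster (DC/OS is the default orchestrator anyway)
  az acs create --name TwoServiceAppAcs --resource-group TwoServiceAppRg --orchestrator-type dcos --generate-ssh-keys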

The az acs create command in particular is doing a huge amount of work behind the scenes, and if configuring a container service for a production environment you'd most likely want greater control over the names of all the resources that are created. I'm not worried about that here and the output of these commands is fine for my research purposes. If you do want to delve further you can examine the automation script for the top level resources these commands create.

Configuring VSTS

Over in your VSTS account you'll need to attend to the following items:

  • Create a new team project (I called mine TwoServiceApp) configured for Git. (A new project isn't strictly necessary but it helps when cleaning up.)
  • Create an Agent Pool called TwoServiceApp. You can get to the page that manages agent pools from the agent queues tab of your team project:
  • Create a service endpoint of type Github that grants VSTS access to your GitHub account. The procedure is detailed here -- I used the personal access token method and called the connection TwoServiceAppGh.
  • Create a service endpoint of type Docker Registry that grants access to the Azure container registry created above. I describe the process in this blog post and called the endpoint TwoServiceAppAcr.
  • Create a personal access token (granting permission to all scopes) and store the value for later use.
  • Ensure the Docker Integration extension is installed from the Marketplace.

Create a VSTS Agent

This is where the fun begins because we're going to create a VSTS agent in DC/OS using a Docker container. Yep -- you read that right! If you've only ever created an agent on ‘bare metal' servers then you need to forget everything you know and prepare for awesomeness. Not least because if you suddenly feel that you want a dozen agents a quick configuration setting will have them created for you in a flash!

The first step is to configure your workstation to connect to the DC/OS cluster running in your Azure container service. There are several ways to do this but I followed these instructions (Connect to a DC/OS or Swarm cluster > Create an SSH tunnel on Windows) to configure PuTTY to create an SSH tunnel. The host name will be something like azureuser@twoserviceappacsmgmt.westeurope.cloudapp.azure.com (you can get the master FQDN from the overview blade of your Azure container service and the default login name used by az acs create is azureuser) and you will need to have created a private key in .ppk format using PuTTYGen. Once you have successfully connected (you actually SSH to a DC/OS master VM) you should be able to browse to these URLs:

  • DC/OS -- http://localhost
  • Marathon -- http://localhost/marathon
  • Mesos -- http://localhost/mesos

If you followed the Microsoft tutorial then much of what you see will be familiar, although there will be nothing configured of course. To create the application that will run the agent you'll need to be in Marathon:

Clicking Create Application will display the configuration interface:

Whilst it is possible to work through all of the pages and enter in the required information, a faster way is to toggle to JSON Mode and paste in the following script (overwriting what's there):
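
The script below is a reconstruction from memory rather than a verbatim copy, so treat it as a starting point: the image name and the VSTS_* environment variables come from Microsoft's instructions for the vsts-agent image on Docker Hub, the CPU and memory values are arbitrary, and the Docker socket mapping is my assumption (it's what lets the containerised agent talk to the Docker Engine on the node it lands on). JSON doesn't allow comments, so the angle-bracket values are placeholders to replace:

  {
    "id": "/vsts-agents/twoserviceapp-agent",
    "instances": 1,
    "cpus": 1,
    "mem": 1024,
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "microsoft/vsts-agent"
      },
      "volumes": [
        {
          "containerPath": "/var/run/docker.sock",
          "hostPath": "/var/run/docker.sock",
          "mode": "RW"
        }
      ]
    },
    "env": {
      "VSTS_ACCOUNT": "<your VSTS account name>",
      "VSTS_POOL": "TwoServiceApp",
      "VSTS_TOKEN": "<your personal access token>"
    }
  }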

You will need to amend some of the settings for your environment:

  • id -- choose an appropriate name for the application (note that /vsts-agents/ creates a folder for the application).
  • VSTS_POOL -- the name of the agent pool created above.
  • VSTS_TOKEN -- the personal access token created above.
  • VSTS_ACCOUNT -- the name of your VSTS account (ie if the URL is https://myvstsaccount.visualstudio.com then use myvstsaccount).

It will only take a few seconds to create the application after which you should see something that looks like this:

For fun, click on the Scale Application button and enter a number of instances to scale to. I scaled to 50 and it literally took just a few seconds to configure them all. The result, shown below, is pretty awesome in my book for just a few seconds' work:

Scaling down again is even quicker -- pretty much instant in Marathon and VSTS was very quick to get back to displaying just one agent. With the fun over, what have we actually built here?

The concept is that rather than configure an agent by hand in the traditional way, we are making use of one of the Docker images Microsoft has created specifically to contain the agent and build tools. You can examine all the different images from this page on Docker Hub. Looking at the Marathon configuration code above in the context of the instructions for using the VSTS agent images it's hopefully clear that the configuration is partially around hosting the image and creating the container and partially around passing variables in to the container to configure the agent to talk to your VSTS account and a specific agent pool.

Create a Build Definition

We're now at a point where we can switch back to VSTS and create a build definition in our team project. Most of the tasks are of the Docker Compose type and you can get further details here. Start with an empty process and name the definition TwoServiceApp. On the Options tab set the Default agent queue to be TwoServiceApp. On the tasks tab in Get sources configure the build to point to your GitHub account:

Now add and configure the following tasks (only values that need adding or amending, or which need a special mention are listed):

Task #1 -- Docker Compose
  • Display name = Build repository
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.ci.build.yml
  • Action = Run a specific service image
  • Service name = ci-build

Save the definition and queue a build. The source code will be pulled down and then the instructions in the ci-build node of docker-compose.ci.build.yml will be executed which will cause service-b to be built.

Task #2 -- Docker Compose
  • Display name = Build service images
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Qualify Image Names = checked
  • Action = Build service images
  • Additional Image Tags = $(Build.BuildId) $(Build.SourceBranchName) $(Build.SourceVersion) (on separate lines)
  • Include Source Tags = checked
  • Include Latest Tag = checked

Save the definition and queue a build. The addition of this task causes Docker images to be created in the agent container for service-a and service-b.

Task #3 -- Docker Compose
  • Display name = Push service images
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Qualify Image Names = checked
  • Action = Push service images
  • Additional Image Tags = $(Build.BuildId) $(Build.SourceBranchName) $(Build.SourceVersion) (on separate lines)
  • Include Source Tags = checked
  • Include Latest Tag = checked

Save the definition and queue a build. The addition of this task causes the Docker images to be pushed to the Azure container registry.

Task #4 -- Docker Compose
  • Display name = Write service image digests
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Qualify Image Names = checked
  • Action = Write service image digests
  • Image Digest Compose File = $(Build.StagingDirectory)/docker-compose.images.yml

Save the definition and queue a build. The addition of this task creates immutable identifiers for the previously built images which provide a guaranteed way of referring back to a specific image in the container registry. The identifiers are stored in a file called docker-compose.images.yml, the contents of which will look something like:
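
The registry name, compose project prefix and digests below are purely illustrative -- yours will differ:

  version: '2'
  services:
    service-a:
      image: twoserviceappacr.azurecr.io/twoserviceapp_service-a@sha256:1f8e04c0d8a4...   # illustrative digest
    service-b:
      image: twoserviceappacr.azurecr.io/twoserviceapp_service-b@sha256:9b2a7e55c31d...   # illustrative digest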

Task #5 -- Docker Compose
  • Display name = Combine configuration
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Additional Docker Compose Files = $(Build.StagingDirectory)/docker-compose.images.yml
  • Qualify Image Names = checked
  • Action = Combine configuration
  • Remove Build Options = checked

Save the definition and queue a build. The addition of this task creates a new docker-compose.yml that is a composite of the original docker-compose.yml and docker-compose.images.yml. The contents will look something like:
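
Again this is indicative only -- the digests are made up and I've guessed at the cache image -- but the point to note is that the build options have gone and every image is pinned to a registry digest:

  version: '2'
  services:
    mycache:
      image: redis   # assumption -- use whatever the sample's docker-compose.yml actually specifies
    service-a:
      image: twoserviceappacr.azurecr.io/twoserviceapp_service-a@sha256:1f8e04c0d8a4...
      ports:
        - "8080:80"
    service-b:
      image: twoserviceappacr.azurecr.io/twoserviceapp_service-b@sha256:9b2a7e55c31d...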

This is the file that is used by the release definition to deploy the services to DC/OS.

Task #6 -- Copy Files
  • Display name = Copy Files to: $(Build.StagingDirectory)
  • Contents = **/docker-compose.env.*.yml
  • Target Folder = $(Build.StagingDirectory)

Save the definition but don't bother queuing a build since as things stand this task doesn't have any files to copy over. Instead, the task comes into play when using environment files (see later).

Task #7 -- Publish Build Artifacts
  • Display name = Publish Artifact: docker-compose
  • Path to Publish = $(Build.StagingDirectory)
  • Artifact Name = docker-compose
  • Artifact Type = Server

Save the definition and queue a build. The addition of this task creates the build artifact containing the contents of the staging directory, which happen to be docker-compose.yml and docker-compose.images.yml, although only docker-compose.yml is needed. The artifact can be downloaded of course so you can examine the contents of the two files for yourself.

Create a Release Definition

Create a new empty release definition and configure the Source to point to the TwoServiceApp build definition, the Queue to point to the TwoServiceApp agent queue and check the Continuous deployment option:

With the definition created, edit the name to TwoServiceApp, rename the default environment to Dev and rename the default phase to AcsDeployPhase:

Add a Docker Deploy task to the AcsDeployPhase and configure it as follows (only values that need changing are listed):

  • Display Name = Deploy to ACS DC/OS
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Target Type = Azure Container Service (DC/OS)
  • Docker Compose File = **/docker-compose.yml
  • ACS DC/OS Connection Type = Direct

The final result should be as follows:

Trigger a release and then switch over to DC/OS (ie at http://localhost) and the Services page. Drill down through the Dev folder and the three services defined in docker-compose.yml should now be deployed and running:

To complete the exercise the Dev environment can now be cloned (click the ellipsis in the Dev environment to show the menu) to create Test and Production environments with manual approvals. If you want to view the sample application in action follow the View the application instructions in the Microsoft tutorial.

At this point there is no public endpoint for the production instance of TwoServiceApp. To remedy that follow the Expose public endpoint for production instructions in the Microsoft tutorial. Additionally, you will need to amend the production version of the Docker Deploy task so the Additional Docker Compose Files section contains docker-compose.env.production.yml.

Final Thoughts

Between Microsoft's tutorial and my two posts relating to it you have seen a glimpse of the powerful tools that are available for hosting and orchestrating containers. Yes, this has all been using Linux containers but indications are that similar functionality -- if perhaps not using exactly the same tools -- is on the way for Windows containers. Stay tuned!

Cheers -- Graham

Continuous Delivery with Containers – Azure CLI Command for Creating a Docker Release Pipeline with VSTS Part 1

Posted by Graham Smith on January 30, 20176 Comments (click here to comment)

One of the aims of my blog series on Continuous Delivery with Containers is to try and understand how best to use Visual Studio Team Services with Docker, so I was very interested to learn that Azure CLI 2.0 has a command to create a VSTS deployment pipeline to push Docker images to an Azure Container Registry and then deploy and run them on an Azure Container Service running a DC/OS cluster. Even better, Microsoft have written a tutorial (Continuous Integration and Deployment of Multi-Container Docker Applications to Azure Container Service) on how to use this command.

Whilst I'm somewhat sceptical about using generic scaffolding tooling to create production-ready workloads (I find that the naming conventions used are usually unsuitable for example) there is no doubt that they are great for quickly building proofs of concept and also for learning (what are hopefully!) best practices. It was with this aim that, armed with a large cup of tea, I sat down one afternoon to plough my way through the tutorial. It was a great learning experience; however, I went down some blind alleys getting the pipeline working and then ended up doing quite a lot of head scratching (due to my ignorance, I hasten to add) to fully understand what had been created.

So in this post I'm writing up my experience of working through the tutorial with notes that I hope will help anyone else using it. In a follow-up post I'll attempt to document what the az container release create command actually creates and configures. Just a reminder that with this tutorial we're still very much in the Linux container world. Whilst this might be frustrating for those eager to see advanced tutorials based on Windows containers the learning focus here is mostly Docker and VSTS so the fact that the containers are running Linux shouldn't put you off.

On a final note before we get started, I'm using a Windows 10 Professional workstation with the beta version (1.13.0 at the time of writing) of Docker for Windows installed and running.

Getting Started with the Azure CLI

The tutorial requires version 2.0 of Azure CLI which is based on Python. The Azure CLI installation documentation suggests running Azure CLI in Docker but don't go down that path as it's a dead end as far as the tutorial is concerned. Instead follow these installation steps:

  1. Install the latest version of Python from here.
  2. From a command prompt upgrade pip (package management system for Python) using the python -m pip install --upgrade pip command.
  3. Install Azure CLI 2.0 using pip install azure-cli. (If you have previously installed Azure CLI 2.0 you should check for an upgrade using pip install azure-cli --upgrade.)
  4. Check Azure CLI is working using the az command. You should see this:

The next step is to actually log in to the Azure CLI. The process is as follows:

  1. At a command prompt type az login.
  2. Navigate to https://aka.ms/devicelogin in a browser.
  3. Supply the one-time authentication code supplied by the az login command.
  4. Complete the authentication process using your Azure credentials.

If you have multiple subscriptions you may need to set the default subscription:

  1. At the command prompt type az account list to show details of all your accounts.
  2. Each account has an isDefault property which will tell you the default account.
  3. If you need to make a change use az account set --subscription <Id> -- you can copy and paste the subscription Id from the accounts list.

Creating the Azure Container Service Cluster with DC/OS

This step is pretty straightforward and the tutorial doesn't need any further explanation. My commands to create the resource group and the ACS cluster were:
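
These are indicative only -- substitute your own names and pick a region where you have spare cores:

  az group create --name TwoServiceAppRg --location westus

  # DC/OS is the default orchestrator type so the flag is arguably redundant
  az acs create --name TwoServiceAppAcs --resource-group TwoServiceAppRg --orchestrator-type dcos --generate-ssh-keys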

Be aware that the az acs create command results in a request to provision 18 cores. This might exceed your quota for a given region, even if you have previously contacted Microsoft Support to request an increase in the total number of cores allowed for your subscription (which you might have to do anyway if you have cores already provisioned). I found that choosing a region where I didn't have any cores provisioned fixed a quotaExceeded exception that I was getting.

For simplicity I used the --generate-ssh-keys option to save having to do this manually. This creates id_rsa and id_rsa.pub files (ie a private / public key pair) in C:\Users\<username>\.ssh.

A word of warning -- if you are using an Azure subscription with MSDN credits be aware that an ACS cluster will eat your credits at an alarming rate. As of the time of writing this post I've not found a reliable way of turning everything off and turning it back on again with everything fully working (specifically the build agent). Consequently I tend to delete the resource group and the VSTS project when I'm finished using them and then recreate them from scratch when I next need them. If you do this do be aware that if you have multiple Azure subscriptions the az account set --subscription <Id> command to set the default subscription can't be relied upon to be ‘sticky', and you can find yourself creating stuff in a different subscription by mistake.

Working with the Sample Code

The tutorial uses sample code that consists of an Angular.js-based web app (with a Node.js backend) that calls a separate .NET Core application, and these are deployed as two separate services. The problem I found was that the name of the GitHub repo (container-service-dotnet-continuous-integration-multi-container) is extremely long and is used to name some of the artefacts that get created by the Azure CLI container release command. This makes for some very unwieldy names which I found somewhat irksome. You can fix this as follows:

  1. Fork the sample code to your own GitHub account.
  2. Switch to the Settings tab:
  3. Use the Rename option to give the forked repo a more manageable name -- I chose TwoServiceApp.
  4. Clone the repo to your workstation in your preferred way -- for me this involved opening a command prompt at C:\Source\GitHub and running git clone https://github.com/GrahamDSmith/TwoServiceApp.git.

At this point it's probably a good idea to get the sample app working locally which will help with understanding how multi-container Docker deployments work. If you want to examine the source code then Visual Studio Code is an ideal tool for the job. To run the application the first step is to build the .NET Core component. At a command prompt at the root of the application run the following command:
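
From memory the command is as below -- the important thing is that the service name matches the ci-build node in the file:

  docker-compose -f docker-compose.ci.build.yml run ci-build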

This runs docker-compose with a specific .yml file, and executes the instructions at the ci-build node. The really neat thing about this command is that it uses a Docker container to build the .NET Core app (service-b), which means your workstation doesn't need .NET Core to be installed for this to work. Looking at the key parts of the docker-compose.ci.build.yml file:

  • image: microsoft/dotnet:1.0.0-preview2.1-sdk -- this specifies which official Microsoft Docker image for .NET Core on Linux should be used.
  • volumes: - ./service-b:/src -- this causes the local service-b folder on your workstation to be ‘mirrored' to a folder named src in the container that will be created from the microsoft/dotnet:1.0.0-preview2.1-sdk image.
  • working_dir: /src -- set the working directory in the container to src.
  • command: /bin/bash -c "dotnet restore && dotnet publish -c Release -o bin ." -- this is the command to build and publish service-b.

Because the service-b folder on your workstation is mirrored to the src folder in the running container the result of the build command is copied from the container to your workstation. Pretty nifty!
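
Putting those fragments together, the file presumably looks something like this (the version line and overall layout are my assumptions):

  version: '2'
  services:
    ci-build:
      image: microsoft/dotnet:1.0.0-preview2.1-sdk
      volumes:
        - ./service-b:/src
      working_dir: /src
      command: /bin/bash -c "dotnet restore && dotnet publish -c Release -o bin ."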

To actually run the application now run this command:
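
Plain docker-compose up does the job -- add --build to force the images to be rebuilt or -d to run detached:

  docker-compose up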

By convention docker-compose will look for a docker-compose.yml file so there is no need to specify it. On examining docker-compose.yml it should be pretty easy to see what's going on -- three services (service-a, service-b and mycache) are specified and service-a and service-b are built according to their respective Dockerfile instructions. Both service-a and service-b containers are set to listen on port 80 at runtime and in addition service-a is accessible to the host (ie your workstation) on port 8080. Consequently, you should be able to navigate to http://localhost:8080 in your browser and see the app running.

Creating the Deployment Pipeline

This step is straightforward and the tutorial doesn't need any further explanation. One extra step I included was to create an Azure Container Registry instance in the same resource group used to create the Azure Container Service. Despite repeated attempts, for some reason I couldn't create this at the command line so ended up creating it through the portal. The command though should look similar to this:
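
The shape of the command is roughly as follows, though be aware that older CLI versions may also insist on a storage account or SKU for the registry:

  az acr create --name twoserviceappacr --resource-group TwoServiceAppRg --admin-enabled true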

To facilitate easy teardown I also created a dedicated project in VSTS called TwoServiceApp. The command to create the pipeline (GitHub token made up of course) was then as follows:
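
The command had roughly this shape -- I'm quoting the option names from memory so do verify them against az container release create --help:

  # option names are indicative -- check --help for the version of the CLI you are running
  az container release create --target-name TwoServiceApp --target-resource-group TwoServiceAppRg --registry-name twoserviceappacr --remote-url https://github.com/<your GitHub account>/TwoServiceApp.git --remote-access-token <your GitHub personal access token> --vsts-account-name <your VSTS account> --vsts-project-name TwoServiceApp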

This command results in the creation of build and release definitions in VSTS (along with other supporting items) and a deploy of the image to a Dev environment.

Viewing the Application

To view the application as deployed to the Dev environment you need to launch the DC/OS dashboard. The tutorial instructions are easy to follow, however you might get tripped up by the instructions for configuring Pageant since the instructions direct you to "Launch PuttyGen and load the private SSH key used to create the ACS cluster (%homepath%\id_rsa)". On my machine at least the id_rsa file was created at %homepath%\.ssh\id_rsa rather than %homepath%\id_rsa. If you persist with the instructions you eventually end up running the application in the Dev environment, but if like me you are new to cluster technologies such as DC/OS it all feels like some kind of sorcery.

A final observation here is that the configuration to launch the DC/OS dashboard requires your browser's proxy to be set. This knocked out the Internet connection for all my other browser tabs, and caused alarm for a few seconds when I realised that the tab I was using to edit my WordPress blog wouldn't save. If you launched the DC/OS dashboard from the command line (using az acs dcos browse --name TwoServiceAppAcs --resource-group TwoServiceAppRg) you need to use CTRL+C from the command line to close the session. In an emergency head over to Windows Settings > Network & Internet > Proxy to reset things back to normal.

Until Next Time

That concludes the write-up of my notes for use with the Continuous Integration and Deployment of Multi-Container Docker Applications to Azure Container Service tutorial. If you work through the tutorial and have any further tips that might be of use please do post in the comments.

In the next post I'll start to document what the az container release create command actually creates and configures.

Cheers -- Graham

Continuous Delivery with Containers – Amending a VSTS / Docker Hub Deployment Pipeline with Azure Container Registry

Posted by Graham Smith on December 1, 2016No Comments (click here to comment)

In this blog series on Continuous Delivery with Containers I'm documenting what I've learned about Docker and containers (both the Linux and Windows variety) in the context of continuous delivery with Visual Studio Team Services. It's a new journey for me so do let me know in the comments if there is a better way of doing things!

In the previous post in this series I explained how to use VSTS and Docker to build and deploy an ASP.NET Core application to a Linux VM running in Azure. It's a good enough starting point but one of the first objections anyone working in a private organisation is likely to raise is the publishing of Docker images to the public Docker Hub. One answer is to pay for a private repository in the Docker Hub but for anyone using Azure a more appealing option might be the Azure Container Registry. This is a new offering from Microsoft -- it's still in preview and some of the supporting tooling is only partially baked. The core product is perfectly functional though so in this post I'm going to be amending the pipeline I built in the previous post with Azure Container Registry to find out how it differs from Docker Hub. If you want to follow along with this post you'll need to make sure you have a working pipeline as I describe in my previous post.

Create an Azure Container Registry

At the time of writing there is no PowerShell experience for ACR so unless you want to use the CLI 2.0 it's a case of using the portal. I quite like the CLI but to keep things simple I'm using the portal. For some reason ACR is a marketplace offering so you'll need to add it from New > Marketplace > Containers > Container Registry (preview). Then follow these steps:

  1. Create a new resource group that will contain all the ACR resources -- I called mine PrmAcrResourceGroup.
  2. Create a new standard storage account for the ACR -- I called mine prmacrstorageaccount. Note that at the time of writing ACR is only available in a few regions in the US and the storage account needs to be in the same region. I chose West US.
  3. Create a new container registry using the resource group and storage account just created -- I called mine PrmContainerRegistry. As above, the registry and storage account need to be in the same location. You will also need to enable the Admin user:
    azure-portal-create-container-registry

Add a New Docker Registry Connection

This registry connection will be used to replace the connection made in the previous post to Docker Hub. The configuration details you need can be found in the Access key blade of the newly created container registry:

azure-portal-container-registry-access-key-blade

Use these settings to create a new Docker Registry connection in the VSTS team project:

vsts-services-endpoints-azure-container-registry

Amend the Build

Each of the three Docker tasks that form part of the build needs amending as follows:

  • Docker Registry Connection = <name of the Azure Container Registry connection>
  • Image Name = aspnetcorelinux:$(Build.BuildNumber)
  • Qualify Image Name = checked

One of the most crucial amendments turned out to be the Qualify Image Name setting. The purpose of this setting is to prefix the image name with the registry hostname, but if left unchecked it seems to default to Docker Hub. This causes an error during the push as the task tries to push to Docker Hub which of course fails because the registry connection has authenticated to ACR rather than Docker Hub:

vsts-docker-push-error

It was obvious once I'd twigged what was going on but it had me scratching my head for a little while!

Final Push

With the amendments made you can now trigger a new build, which should work exactly as before except now the docker image is pushed to -- and run from -- your ACR instance rather than Docker Hub.

Your next question is probably going to be how can I get a list of the repositories I've created in ACR? Don't bother looking in the portal since -- at the time of writing at least -- there is no functionality there to list repositories. Instead one of the guys at Microsoft has created a separate website which, once you've authenticated, shows you this information:

acr-portal

If you want to do a bit more you can use the CLI 2.0. The syntax to list repositories for example is az acr repository list -n <Azure Container Registry name>.

It's early days yet, however ACR is looking like a great option for anyone needing a private container registry and for whom an Azure option makes sense. Do have a look at the documentation and also at Steve Lasker's Connect(); video here.

Cheers -- Graham

Continuous Delivery with Containers – Use Visual Studio Team Services and Docker to Build and Deploy ASP.NET Core to Linux

Posted by Graham Smith on October 27, 20168 Comments (click here to comment)

In this blog series on Continuous Delivery with Containers I'm documenting what I've learned about Docker and containers (both the Linux and Windows variety) in the context of continuous delivery with Visual Studio Team Services. The Docker and containers world is mostly new to me and I have only the vaguest idea of what I'm doing so feel free to let me know in the comments if I get something wrong.

Although the Windows Server Containers feature is now a fully supported part of Windows it is still extremely new in comparison to containers on Linux. It's not surprising then that even in the world of the Visual Studio developer the tooling is most mature for deploying containers to Linux and that I chose this as my starting point for doing something useful with Docker. As I write this the documentation for deploying containers with Visual Studio Team Services is fragmented and almost non-existent. The main references I used for this post were:

However to my mind none of these blogs cover the whole process to any satisfactory depth and in any case they are all somewhat out of date. In this post I've therefore tried to piece all of the bits of the jigsaw together that form the end-to-end process of creating an ASP.NET Core app in Visual Studio and debugging it whilst running on Linux, all the way through to using VSTS to deploy the app in a container to a target node running Linux. I'm not attempting to teach the basics of Docker and containers here and if you need to get up to speed with this see my Getting Started post here.

Install the Tooling for the Visual Studio Development Inner Loop

In order to get your development environment properly configured you'll need to be running a version of Windows that is supported by Docker for Windows and have the following tooling installed:

You'll also need a VSTS account and an Azure subscription.

Create an ASP.NET Core App

I started off by creating a new Team Project in VSTS called Containers and then from the Code tab creating a New repository using Git called AspNetCoreLinux:

vsts-code-new-repository

Over in Visual Studio I then cloned this repository to my source control folder (in my case to C:\Source\VSTS\AspNetCoreLinux as I prefer a short filepath) and added .gitignore and .gitattributes files (see here if this doesn't make sense) and committed and synced the changes. Then from File > New > Project I created an ASP.NET Core Web Application (.NET Core) application called AspNetCoreLinux using the Web Application template (not shown):

visual-studio-create-new-asp-net-core-application

Visual Studio will restore the packages for the project after which you can run it with F5 or Ctrl+F5.

The next step is to install support for Docker by right-clicking the project and choosing Add > Docker Support. You should now see that the Run dropdown has an option for Docker:

visual-studio-run-dropdown

With Docker selected and Docker for Windows running (with Shared Drives enabled!) you will now be running and debugging the application in a Linux container. For more information about how this works see the resources on the Visual Studio Tools for Docker site or my list of resources here. Finally, if everything is working don't forget to commit and sync the changes.

Provision a Linux Build VM

In order to build the project in VSTS we'll need a build machine. We'll provision this machine using the Azure driver for Docker Machine, which offers a very neat way of provisioning a Linux VM in Azure with Docker already installed. You can learn more about Docker Machine from these sources:

To complete the following steps you'll need the Subscription ID of the Azure subscription you intend to use which you can get from the Azure portal.

  1. At a command prompt enter the docker-machine create command to provision the build VM (a sketch of the command appears just after this list).

    By default this will create a Standard A2 VM running Ubuntu called vstsbuildvm (note that "Container names must be 3-63 characters in length and may contain only lower-case alphanumeric characters and hyphen. Hyphen must be preceded and followed by an alphanumeric character.") in a resource group called VstsBuildDeployRG in the West US datacentre (make sure you use your own Azure Subscription ID). It's fully customisable though and you can see all the options here. In particular I've added the option for the VM to be created with a static public IP address as without that there's the possibility of certificate problems when the VM is shut down and restarted with a different IP address.
  2. Azure now wants you to authenticate. The procedure is explained in the output of the command window, and requires you to visit https://aka.ms/devicelogin and enter the one-time code:
    command-prompt-docker-machine-create
    Docker Machine will then create the VM in Azure and configure it with Docker and also generate certificates at C:\Users\<yourname>\.docker\machine. Do have a poke around the subfolders of this path as some of the files are needed later on and it will also help to understand how connections to the VM are handled.
  3. This step isn't strictly necessary right now, but if you want to run Docker commands from the current command prompt against the Docker Engine running on the new VM you'll need to configure the shell by first running docker-machine env vstsbuildvm. This will print out the environment variables that need setting and the command (@FOR /f "tokens=*" %i IN ('docker-machine env vstsbuildvm') DO @%i) to set them. These settings only persist for the life of the command prompt window so if you close it you'll need to repeat the process.
  4. In order to configure the internals of the VM you need to connect to it. Although in theory you can use the docker-machine ssh vstsbuildvm command to do this in practice the shell experience is horrible. Much better is to use a tool like PuTTY. Donovan Brown has a great explanation of how to get this working about half way down this blog post. Note that the folder in which the id_rsa file resides is C:\Users\<yourname>\.docker\machine\machines\<yourvmname>. A tweak worth making is to set the DNS name for the server as I describe in this post so that you can use a fixed host name in the PuTTY profile for the VM rather than an IP address.
  5. With a connection made to the VM you need to issue the following commands to get it configured with the components to build an ASP.NET Core application:
    1. Upgrade the VM with sudo apt-get update && sudo apt-get dist-upgrade.
    2. Install .NET Core following the instructions here, making sure to use the instructions for Ubuntu 16.04.
    3. Install npm with sudo apt -y install npm.
    4. Install Bower with sudo npm install -g bower.
  6. Next up is installing the VSTS build agent for Linux following the instructions for Team Services here. In essence (ie do make sure you follow the instructions) the steps are:
    1. Create and switch to a downloads folder using mkdir Downloads && cd Downloads.
    2. At the Get Agent page in VSTS select the Linux tab and the Ubuntu 16.04-x64 option and then the copy icon to copy the URL download link to the clipboard:
      vsts-download-agent-get-agent
    3. Back at the PuTTY session window type sudo wget followed by a space and then paste the URL from the clipboard. Run this command to download the agent to the Downloads folder.
    4. Go up a level using cd .. and then make and switch to a folder for the agent using mkdir myagent && cd myagent.
    5. Extract the compressed agent file to myagent using tar zxvf ~/Downloads/vsts-agent-ubuntu.16.04-x64-2.108.0.tar.gz (note the exact file name will likely be different).
    6. Install the Ubuntu dependencies using sudo ./bin/installdependencies.sh.
    7. Configure the agent using ./config.sh after first making sure you have created a personal access token to use. I created my agent in a pool I created called Linux.
    8. Configure the agent to run as a service using sudo ./svc.sh install and then start it using sudo ./svc.sh start.
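
For reference, the docker-machine create command mentioned at the start of this section looked something like this -- the Azure driver option names are from memory, so check docker-machine create --driver azure --help if they don't match your version:

  # option names are indicative and may vary between Docker Machine versions
  docker-machine create --driver azure --azure-subscription-id <your Azure subscription ID> --azure-resource-group VstsBuildDeployRG --azure-location westus --azure-size Standard_A2 --azure-static-public-ip vstsbuildvm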

If the procedure was successful you should see the new agent showing green in the VSTS Agent pools tab:

vsts-agent-pools

Provision a Linux Target Node VM

Next we need a Linux VM we can deploy to. I used the same syntax as for the build VM calling the machine vstsdeployvm:

Apart from setting the DNS name for the server as I describe in this post there's not much else to configure on this server except for updating it using sudo apt-get update && sudo apt-get dist-upgrade.

Gearing Up to Use the Docker Integration Extension for VSTS

Configuration activities now shift over to VSTS. The first thing you'll need to do is install the Docker Integration extension for VSTS from the Marketplace. The process is straightforward and wizard-driven so I won't document the steps here.

Next up is creating three service end points -- two of the Docker Host type (ie our Linux build and deploy VMs) and one of type Docker Registry. These are created by selecting Services from the Settings icon and then Endpoints and then the New Service Endpoint dropdown:

vsts-services-endpoints-docker

To create a Docker Host endpoint:

  1. Connection Name = whatever suits -- I used the name of my Linux VM.
  2. Server URL = the DNS name of the Linux VM in the format tcp://your.dns.name:2376.
  3. CA Certificate = contents of C:\Users\<yourname>\.docker\machine\machines\<yourvmname>\ca.pem.
  4. Certificate = contents of C:\Users\<yourname>\.docker\machine\machines\<yourvmname>\cert.pem.
  5. Key = contents of C:\Users\<yourname>\.docker\machine\machines\<yourvmname>\key.pem.

The completed dialog (in this case for the build VM) should look similar to this:

vsts-services-endpoints-docker-host

Repeat this process for the deploy VM.

Next, if you haven't already done so you will need to create an account at Docker Hub. To create the Docker Registry endpoint:

  1. Connection Name = whatever suits -- I used my name
  2. Docker Registry = https://index.docker.io/v1/
  3. Docker ID = username for Docker Hub account
  4. Password = password for Docker Hub account

The completed dialog should look similar to this:

vsts-services-endpoints-docker-hub

Putting Everything Together in a Build

Now the fun part begins. To keep things simple I'm going to run everything from a single build, however in a more complex scenario I'd use both a VSTS build and a VSTS release definition. From the VSTS Build & Release tab create a new build definition based on an Empty template. Use the AspNetCoreLinux repository, check the Continuous integration box and select Linux for the Default agent queue (assuming you create a queue named Linux as I've done):

vsts-create-new-build-definition

Using Add build step add two Command Line tasks and three Docker tasks:

vsts-add-tasks

In turn right-click all but the first task and disable them -- this will allow the definition to be saved without having to complete all the tasks.

The configuration for Command Line task #1 is:

  • Tool = dotnet
  • Arguments = restore -v minimal
  • Advanced > Working folder = src/AspNetCoreLinux (use the ellipsis to select)

Save the definition (as AspNetCoreLinux) and then queue a build to make sure there are no errors. This task restores the packages specified in project.json.

The configuration for Command Line task #2 is:

  • Tool = dotnet
  • Arguments = publish -c $(Build.Configuration) -o $(Build.StagingDirectory)/app/
  • Advanced > Working folder = src/AspNetCoreLinux (use the ellipsis to select)

Enable the task and then queue a build to make sure there are no errors. This task publishes the application to $(Build.StagingDirectory)/app (which equates to home/docker-user/myagent/_work/1/a/app).

The configuration for Docker task #1 is:

  • Docker Registry Connection = <name of your Docker registry connection>
  • Action = Build an image
  • Docker File = $(Build.StagingDirectory)/app/Dockerfile
  • Build Context = $(Build.StagingDirectory)/app
  • Image Name = <your Docker ID>/aspnetcorelinux:$(Build.BuildNumber)
  • Docker Host Connection = vstsbuildvm (or your Docker Host name for the build server)
  • Working Directory = $(Build.StagingDirectory)/app

Enable the task and then queue a build to make sure there are no errors. If you run sudo docker images on the build machine you should see the image has been created.

The configuration for Docker task #2 is:

  • Docker Registry Connection = <name of your Docker registry connection>
  • Action = Push an image
  • Image Name = <your Docker ID>/aspnetcorelinux:$(Build.BuildNumber)
  • Advanced Options > Docker Host Connection = vstsbuildvm (or your Docker Host name for the build server)
  • Advanced Options > Working Directory = $(System.DefaultWorkingDirectory)

Enable the task and then queue a build to make sure there are no errors. If you log in to Docker Hub you should see the image under your profile.

The configuration for Docker task #3 is:

  • Docker Registry Connection = <name of your Docker registry connection>
  • Action = Run an image
  • Image Name = <your Docker ID>/aspnetcorelinux:$(Build.BuildNumber)
  • Container Name = aspnetcorelinux$(Build.BuildNumber) (slightly different from above!)
  • Ports = 80:80
  • Advanced Options > Docker Host Connection = vstsdeployvm (or your Docker Host name for the deploy server)
  • Advanced Options > Working Directory = $(System.DefaultWorkingDirectory)

Enable the task and then queue a build to make sure there are no errors. If you navigate to the URL of your deployment server (eg http://vstsdeployvm.westus.cloudapp.azure.com/) you should see the web application running. As things stand though if you want to deploy again you'll need to stop the container first.

That's all for now...

Please do be aware that this is only a very high-level run-through of this toolchain and there are many gaps to be filled: how does a website work with databases, how to host a website on something other than the Kestrel server used here and how to secure containers that should be private are just a few of the many questions in my mind. What's particularly exciting though for me is that we now have a great solution to the problem of developing a web app on Windows 10 but deploying it to Windows Server, since although this post was about Linux, Docker for Windows supports the same way of working with Windows Server Core and Nano Server (currently in beta). So I hope you found this a useful starting point -- do watch out for my next post in this series!

Cheers -- Graham

Getting Started with Containers and Docker for Visual Studio Developers

Posted by Graham Smith on October 18, 2016No Comments (click here to comment)

Docker and containers are one of the hot topics of the development world right now and there's no sign of them going away. With the recent launch of Windows Server 2016 and with it the Windows Server Containers feature, the world of containers is one that can't be ignored by developers on the Windows platform who don't want to get left behind. I've spent quite a bit of time over the summer learning Docker and attempting to understand how containers fit into the Visual Studio developer workflow and the continuous delivery pipeline -- a precursor for my new blog series on Continuous Delivery with Containers. I've found quite a lot of useful resources and in this Getting Started post I've listed the ones which I think are the most useful. I've also added some narrative for anyone who is trying to make sense of all the different tooling as it took me a little while to get this clear in my mind.

Docker

Docker is an open-source technology for managing containers -- not to be confused with Docker Inc, the company which is the original author and primary sponsor of the Docker open source project. Since containers have their roots in the Linux world it's not surprising that most of the in-depth resources for learning Docker come from the Linux world. The question for those of us who predominantly use Windows and who don't want to have to install Linux is how to get going? Two possibilities are the Katacoda browser-based labs and Docker for Windows (both covered below). These essentially allow you to learn in a predominantly Linux world (no bad thing) but if this is too much you could go straight to Docker on Windows, although I think you'll be missing out.

Docker on Windows

For the past couple of years Microsoft has been contributing to the Docker open source project to bring the benefits of Docker to Windows Server. For some time now it's been possible to install Docker on Windows Server 2016 and use it to manage Windows Server containers. Here's the thing though: I'm using the term Docker on Windows to specifically differentiate it from Docker for Windows. Whilst clearly related they are two different things and the lack of anyone pointing this out in blog posts and documentation caused confusion for me in my early dealings with Docker on Windows. One more point to note is that there are two types of containers for Windows -- Windows Server Containers and Hyper-V Containers. At the time of writing both types run on Windows Server but only Hyper-V Containers run on Windows 10. Whilst I'm in pointing-things-out mode I might as well mention that there is a PowerShell module for managing containers as an alternative to the Docker command-line interface, as you might see it being used in some blog posts.

Docker for Windows

Docker for Windows -- not to be confused with Docker on Windows -- is a technology that leverages Hyper-V on Windows 10 (certain versions only at time of writing) to host a lightweight Linux VM. Docker commands can then be issued against this VM from the Windows 10 host. It's a great tool for learning about Docker and an essential element of the toolkit required for Visual Studio developers wanting to start working with containers as part of the developer workflow.

Visual Studio Tools for Docker and Developer Workflow

For Visual Studio developers we're now getting to the really good stuff. Visual Studio Tools for Docker enables support for running ASP.NET Core applications on the lightweight Linux VM that is at the heart of Docker for Windows. It even supports debugging. What's even more exciting is that at the time of writing the latest beta version of Docker for Windows supports Windows Containers so as well as running an ASP.NET Core application under development against Linux you can now also run it against Windows Server (Core or Nano Server). Why is this exciting? Well, has it ever bothered you that you might be developing a web application on Windows 10 and hosting it in IIS Express but in production it will be running on Windows Server 2012/16 hosted in full-fat IIS? I know it's bothered me and the excitement is that this toolchain promises to fix all that.

Time to Down Tools

A resource that doesn't really fit into any of the categories above is the Containers Channel on Channel 9. It's got a great mix of content and is worth keeping an eye on for new additions.

I hope you find these resources useful and if you have any great discoveries worth sharing please let me know in the comments. Do be sure to keep an eye on my new Continuous Delivery with Containers blog series where I'll be documenting my journey to understand how containers can play a part in the continuous delivery pipeline.

Cheers -- Graham