Continuous Delivery with Containers – Say Goodbye to IIS Express and LocalDB, with Visual Studio 2017, Docker and Windows Containers

Posted by Graham Smith on May 10, 2017

A view I've heard expressed a few times recently, and which I completely agree with, is that we need to be discovering problems with our applications as far to the left as possible since it's much cheaper to fix problems there than further down the line towards—or even in—production. So with this in mind, is it just me who feels slightly uneasy that in the Visual Studio world the development and debugging of applications destined for Windows servers tends to be done on Windows desktop machines using lightweight counterparts of server applications such as IIS Express to host ASP.NET websites and LocalDB to host SQL Server databases? With this setup it seems like we could be storing up trouble for later in the pipeline...

Whether my unease is justified or not, I need feel troubled no more since the world of containers offers us a solution! Now that Docker for Windows supports Windows Containers and Visual Studio 2017 has Docker support built in, we can develop server applications on Windows 10 and run and debug them on the exact same operating systems they will run on in production.

In this post I take my version of Contoso University that I've been using for several years now and amend it so that in the developer inner loop phase (ie everything that happens before code is checked in to the build server) the website runs in a Windows Server 2016 container running IIS (rather than IIS Express) and the SQL Server Database Project runs on SQL Server 2016 (rather than LocalDB).

Development Environment

The world of containers is evolving rapidly and the tooling might have changed by the time you read this. At the time of writing my environment is as follows:

  • Windows 10 Professional version 1703 (OS Build 15063.250)
  • Visual Studio Enterprise 2017 version 15.1 (26403.7) with the ASP.NET and web development workload
  • Docker for Windows 17.03.1-ce running Windows containers (I recommend the stable channel as at the time of writing the edge version had a bug that caused a problem for Docker support in Visual Studio)

Depending on the speed of your internet connection you might want to docker pull the following images if you are planning on following along:
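
Based on the images referenced in the rest of this post, that means something along these lines (image names as used at the time of writing; tags omitted):

    docker pull microsoft/aspnet
    docker pull microsoft/mssql-server-windows-developer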

It's perhaps worth saying here that I'm using these images for convenience because they are available on Docker Hub. In a production scenario you probably wouldn't want to rely on an image as fully formed as microsoft/aspnet and you would probably start with microsoft/windowsservercore or microsoft/nanoserver and have full control of what is installed. You definitely wouldn't start with microsoft/mssql-server-windows-developer of course.

The Contoso University sample application is essentially the same as Microsoft's version except I've changed the database from Entity Framework Code First to a SQL Server Database Project. I've also changed the application to work with SQL Server authentication (rather than Windows authentication) thus removing the need for a domain controller to supply a domain account. You can get the starting point code from here and the final code here.

Adding Initial Docker Support

The first step towards Dockerizing Contoso University is to add initial Docker support for the ASP.NET web application (out-of-the-box support for SQL Server Database Projects isn't available). This is as simple as right-clicking the ContosoUniversity.Web project and choosing Add > Docker Support. This has three main visible effects:

  • A new docker-compose ‘project' is added at Solution level and is made the Startup Project. This project contains several .yml files.
  • A Dockerfile file and a (nested) .dockerignore file are added to ContosoUniversity.Web.
  • The toolbar button that normally launches a browser has now switched to launching Docker:

The Dockerfile added to ContosoUniversity.Web is based on the microsoft/aspnet image so at this point you should now be able to run the application using the Docker toolbar button and have the website run in a Windows Container based on that image. The database side of things isn't working at this stage of course—Web.config is pointing to LocalDB and the container running the website can't see LocalDB.
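
For reference, the Dockerfile that the tooling generates for a full .NET Framework ASP.NET project typically looks something like the sketch below; treat it as an illustration of the pattern rather than the exact generated file, since the tooling changes frequently:

    FROM microsoft/aspnet
    ARG source
    WORKDIR /inetpub/wwwroot
    COPY ${source:-obj/Docker/publish} .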

To understand what has been created, open a PowerShell session and run docker images followed by docker ps. You should see that an image called contosouniversity.web has been created with a dev tag, and that this image has been used to create a container called something like dockercompose362878786_contosouniversity.web_1.

Adding Docker Support for the SQL Server Database Project

Adding Docker support for the SQL Server Database Project requires the following steps:

  1. Manually add a Dockerfile file and .dockerignore file to the root of ContosoUniversity.Database. Given that these files don't have file extensions and that database projects are quite prescriptive about what they think you should be adding it's easier to add them outside of Visual Studio and then add them in as existing items. (Note that if you are using Windows Explorer you'll need to create .dockerignore as .dockerignore.—Windows will drop the trailing period).
  2. Optionally, close Visual Studio and reopen the solution folder in a text editor such as Visual Studio Code. Open ContosoUniversity.Database.sqlproj and search for the Dockerfile and .dockerignore entries. Change them so that .dockerignore is nested under Dockerfile in Visual Studio.
  3. .dockerignore just needs to contain an asterisk—meaning everything should be ignored.
  4. Dockerfile needs to build a SQL Server image for the database.
  5. Switching to the docker-compose ‘project', docker-compose.yml should be amended to add a service for the database alongside the web service.
  6. A change is also needed to docker-compose.vs.debug.yml, which needs a matching entry for the database service. Hedged sketches of the changes in steps 2 and 4–6 follow this list.
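
The sketches below illustrate the pattern rather than exact file contents: the sa password is a placeholder, and compose version numbers and paths may differ from what the tooling generates for you. To nest .dockerignore under Dockerfile the .sqlproj entries end up looking something like this:

    <None Include="Dockerfile" />
    <None Include=".dockerignore">
      <DependentUpon>Dockerfile</DependentUpon>
    </None>

The database Dockerfile is based on the SQL Server image referenced earlier and sets the environment variables that image expects:

    FROM microsoft/mssql-server-windows-developer
    ENV ACCEPT_EULA=Y
    # placeholder password: use your own
    ENV sa_password=YourStrong!Passw0rd

docker-compose.yml gains a second service for the database:

    version: '2.1'   # the version the tooling generates may differ

    services:
      contosouniversity.web:
        image: contosouniversity.web
        build:
          context: ./ContosoUniversity.Web
          dockerfile: Dockerfile
      contosouniversity.database:
        image: contosouniversity.database
        build:
          context: ./ContosoUniversity.Database
          dockerfile: Dockerfile

and docker-compose.vs.debug.yml needs a matching entry so that the database image is tagged dev for debugging, along the lines of:

    services:
      contosouniversity.database:
        image: contosouniversity.database:dev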

At this point you should be able to run the application using the Docker toolbar button and again see the website running—in a Windows container. However this time a second image (contosouniversity.database, tagged with dev) and corresponding container (named something like dockercompose362878786_contosouniversity.database_1) will have been created, with the container now running SQL Server. This is a newly minted instance of SQL Server and doesn't have a database for our website to connect to, which is the next issue to address.

Connecting the Contoso University Website to its Database

These next steps assume you are following on from the previous section, ie that the website is open in a browser and that Visual Studio is still debugging.

  1. Leave the browser open but stop debugging in Visual Studio.
  2. In ContosoUniversity.Web edit Web.config so that the connection string Data Source points to contosouniversity.database (see the sketch after this list).
  3. In a PowerShell session, find the IP address of the container running SQL Server using docker inspect, passing in enough of the container's ID to make it unique (an example follows this list).
  4. In ContosoUniversity.Database edit ContosoUniversity.publish.xml so that the Target database connection points to the IP address of the SQL Server container and change the authentication to SQL Server Authentication. The User Name should be sa (yes—I know) and the password should be the same as the one specified in the Dockerfile used to build the database image. Save the profile and then click Publish.
  5. Back in the web browser running the Contoso University website, click on one of the menu bar links (eg Departments) that causes a database query. If everything has worked you should now have a fully functioning application.
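
As an illustration (the connection string name below is an assumption based on the standard Contoso University sample, and the password is whatever you set in the database Dockerfile):

    <add name="SchoolContext"
         connectionString="Data Source=contosouniversity.database;Initial Catalog=ContosoUniversity;User ID=sa;Password=YourStrong!Passw0rd;MultipleActiveResultSets=True"
         providerName="System.Data.SqlClient" />

The docker inspect step looks something like this, using just enough of the container ID to be unique:

    docker ps
    docker inspect --format "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" a1b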

Understanding the Developer Inner Loop Workflow

At this point we have achieved our aim of running and debugging both the website and database components of Contoso University in containers running operating systems that are the same as would be used in production. Once the images and containers have been created they will—as far as my testing is concerned—continue to be used as long as nothing changes. This is the case even if Visual Studio, Docker or even the workstation are restarted. The great thing is that any changes made to the containers—for example updating the database schema—will be preserved. Of course, if something changes in one of the Dockerfile files the images and containers will be rebuilt and in the case of the database the publish file will need to be updated with a new IP address and the database will need to be published again from scratch. Also, if the solution is cleaned (ie Build > Clean Solution) the containers are removed and rebuilt, again necessitating publishing the database from scratch. Overall though, the developer inner loop workflow feels quite slick.

Next Steps

As things stand the compose and Dockerfile files are not ready to be used in a continuous delivery pipeline. The website Dockerfile for example has Contoso University being deployed as the Default Web Site rather than a ContosoUniversity website and the database Dockerfile doesn't cater for any persistent storage. There is also the problem of checking in the database project's publish profile with an IP address specific to one developer's workstation—a real pain for other developers. I'll address these issues as part of getting Contoso University working in a Docker-based continuous delivery pipeline in the next post in this series.

Cheers -- Graham

Continuous Delivery with Containers – Azure CLI Command for Creating a Docker Release Pipeline with VSTS Part 2

Posted by Graham Smith on March 14, 2017

In my previous post I described my experience of working through Microsoft's Continuous Integration and Deployment of Multi-Container Docker Applications to Azure Container Service tutorial which is a walkthrough of how to use an Azure CLI 2.0 command to create a VSTS deployment pipeline to push Docker images to an Azure Container Registry and then deploy and run them on an Azure Container Service running a DC/OS cluster. Whilst it's great to be able to issue some commands and have stuff magically appear it's unlikely that you would use this approach to create production-grade infrastructure: having precise control over naming things is one good reason. Another problem with commands that create infrastructure is that you don't always get a good sense of what they are up to, and that's what I found with the az container release create command.

So I spent quite a bit of time ‘reverse engineering' az container release create in order to understand what it's doing and in this post I describe, step-by-step, how to build what the command creates. In doing so I gained first-hand experience of what I think will be an important pattern for the future -- running VSTS agents in a container. If your infrastructure is in place it's quick and easy to set up and if you want more agents it takes just seconds to scale to as many as you need. In fact, once I had figured out what was going on I found that working with Azure Container Service and DC/OS was pretty straightforward and even a great deal of fun. Perhaps it's just me but I found being able to create 50 VSTS agents at the ‘flick of a switch' put a big smile on my face. Read on to find out just how awesome all this is...

Getting Started

If you haven't already worked through Microsoft's tutorial and my previous post I strongly recommend those as a starting point so you understand the big picture. Either way, you'll need to have the Azure CLI 2.0 installed and also to have forked the sample code to your own GitHub account and renamed it to something shorter (I used TwoSampleApp). My previous post has all the details. If you already have the Azure CLI installed do make sure you've updated it (pip install azure-cli --upgrade) since version 2.0 was recently officially released.

Creating the Azure Infrastructure

You'll need to create the following infrastructure in Azure:

  • A dedicated resource group (not strictly necessary but helps considerably with cleaning up the 30+ resources that get created).
  • An Azure container registry.
  • An Azure container service configured with a DC/OS cluster.

The Azure CLI 2.0 commands to create all this are as follows:
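
A sketch of the commands is below; the resource names match those used elsewhere in this series, and the exact flags may vary slightly between CLI versions:

    # resource group to hold everything
    az group create --name TwoServiceAppRg --location westeurope

    # container registry (enable the admin user so VSTS can authenticate against it)
    az acr create --name TwoServiceAppAcr --resource-group TwoServiceAppRg --sku Basic --admin-enabled true

    # container service running a DC/OS cluster
    az acs create --name TwoServiceAppAcs --resource-group TwoServiceAppRg --orchestrator-type dcos --dns-prefix twoserviceappacs --generate-ssh-keys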

The az acs create command in particular is doing a huge amount of work behind the scenes, and if configuring a container service for a production environment you'd most likely want greater control over the names of all the resources that are created. I'm not worried about that here and the output of these commands is fine for my research purposes. If you do want to delve further you can examine the automation script for the top level resources these commands create.

Configuring VSTS

Over in your VSTS account you'll need to attend to the following items:

  • Create a new team project (I called mine TwoServiceApp) configured for Git. (A new project isn't strictly necessary but it helps when cleaning up.)
  • Create an Agent Pool called TwoServiceApp. You can get to the page that manages agent pools from the agent queues tab of your team project:
  • Create a service endpoint of type Github that grants VSTS access to your GitHub account. The procedure is detailed here -- I used the personal access token method and called the connection TwoServiceAppGh.
  • Create a service endpoint of type Docker Registry that grants access to the Azure container registry created above. I describe the process in this blog post and called the endpoint TwoServiceAppAcr.
  • Create a personal access token (granting permission to all scopes) and store the value for later use.
  • Ensure the Docker Integration extension is installed from the Marketplace.

Create a VSTS Agent

This is where the fun begins because we're going to create a VSTS agent in DC/OS using a Docker container. Yep -- you read that right! If you've only ever created an agent on ‘bare metal' servers then you need to forget everything you know and prepare for awesomeness. Not least because if you suddenly feel that you want a dozen agents a quick configuration setting will have them created for you in a flash!

The first step is to configure your workstation to connect to the DC/OS cluster running in your Azure container service. There are several ways to do this but I followed these instructions (Connect to a DC/OS or Swarm cluster, Create an SSH tunnel on Windows) to configure PuTTY to create an SSH tunnel. The host name will be something like azureuser@twoserviceappacsmgmt.westeurope.cloudapp.azure.com (you can get the master FQDN from the overview blade of your Azure container service and the default login name used by az acs create is azureuser) and you will need to have created a private key in .ppk format using PuTTYGen. Once you have successfully connected (you actually SSH to a DC/OS master VM) you should be able to browse to these URLs:

  • DC/OS -- http://localhost
  • Marathon -- http://localhost/marathon
  • Mesos -- http://localhost/mesos

If you followed the Microsoft tutorial then much of what you see will be familiar, although there will be nothing configured of course. To create the application that will run the agent you'll need to be in Marathon:

Clicking Create Application will display the configuration interface:

Whilst it is possible to work through all of the pages and enter in the required information, a faster way is to toggle to JSON Mode and paste in the following script (overwriting what's there):
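
A minimal sketch of the sort of Marathon application definition involved is shown below, assuming one of the microsoft/vsts-agent images on Docker Hub (the page linked to later in this post lists the available images); the resource values are illustrative and the placeholders need replacing as described after the sketch:

    {
      "id": "/vsts-agents/twoserviceapp",
      "instances": 1,
      "cpus": 1,
      "mem": 1024,
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "microsoft/vsts-agent",
          "forcePullImage": true
        }
      },
      "env": {
        "VSTS_ACCOUNT": "myvstsaccount",
        "VSTS_TOKEN": "<personal access token>",
        "VSTS_POOL": "TwoServiceApp"
      }
    }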

You will need to amend some of the settings for your environment:

  • id -- choose an appropriate name for the application (note that /vsts-agents/ creates a folder for the application).
  • VSTS_POOL -- the name of the agent pool created above.
  • VSTS_TOKEN -- the personal access token created above.
  • VSTS_ACCOUNT -- the name of your VSTS account (ie if the URL is https://myvstsaccount.visualstudio.com then use myvstsaccount).

It will only take a few seconds to create the application after which you should see something that looks like this:

For fun, click on the Scale Application button and enter a number of instances to scale to. I scaled to 50 and it literally took just a few seconds to configure them all. The result, shown below, is pretty awesome in my book for just a few seconds' work:

Scaling down again is even quicker -- pretty much instant in Marathon and VSTS was very quick to get back to displaying just one agent. With the fun over, what have we actually built here?

The concept is that rather than configure an agent by hand in the traditional way, we are making use of one of the Docker images Microsoft has created specifically to contain the agent and build tools. You can examine all the different images from this page on Docker Hub. Looking at the Marathon configuration code above in the context of the instructions for using the VSTS agent images it's hopefully clear that the configuration is partially around hosting the image and creating the container and partially around passing variables in to the container to configure the agent to talk to your VSTS account and a specific agent pool.

Create a Build Definition

We're now at a point where we can switch back to VSTS and create a build definition in our team project. Most of the tasks are of the Docker Compose type and you can get further details here. Start with an empty process and name the definition TwoServiceApp. On the Options tab set the Default agent queue to be TwoServiceApp. On the tasks tab in Get sources configure the build to point to your GitHub account:

Now add and configure the following tasks (only values that need adding or amending, or which need a special mention are listed):

Task #1 -- Docker Compose
  • Display name = Build repository
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.ci.build.yml
  • Action = Run a specific service image
  • Service name = ci-build

Save the definition and queue a build. The source code will be pulled down and then the instructions in the ci-build node of docker-compose.ci.build.yml will be executed which will cause service-b to be built.

Task #2 -- Docker Compose
  • Display name = Build service images
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Qualify Image Names = checked
  • Action = Build service images
  • Additional Image Tags = $(Build.BuildId) $(Build.SourceBranchName) $(Build.SourceVersion) (on separate lines)
  • Include Source Tags = checked
  • Include Latest Tag = checked

Save the definition and queue a build. The addition of this task causes Docker images to be created in the agent container for service-a and service-b.

Task #3 -- Docker Compose
  • Display name = Push service images
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Qualify Image Names = checked
  • Action = Push service images
  • Additional Image Tags = $(Build.BuildId) $(Build.SourceBranchName) $(Build.SourceVersion) (on separate lines)
  • Include Source Tags = checked
  • Include Latest Tag = checked

Save the definition and queue a build. The addition of this task causes the Docker images to be pushed to the Azure container registry.

Task #4 -- Docker Compose
  • Display name = Write service image digests
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Qualify Image Names = checked
  • Action = Write service image digests
  • Image Digest Compose File = $(Build.StagingDirectory)/docker-compose.images.yml

Save the definition and queue a build. The addition of this task creates immutable identifiers for the previously built images which provide a guaranteed way of referring back to a specific image in the container registry. The identifiers are stored in a file called docker-compose.images.yml, the contents of which will look something like:
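
As an illustration only (the registry, repository names and digests will differ, and the version line follows whatever the source compose file declares), the entries for the two built services have this general shape:

    version: '2'
    services:
      service-a:
        image: myregistry.azurecr.io/twoserviceapp_service-a@sha256:<digest>
      service-b:
        image: myregistry.azurecr.io/twoserviceapp_service-b@sha256:<digest>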

Task #5 -- Docker Compose
  • Display name = Combine configuration
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Additional Docker Compose Files = $(Build.StagingDirectory)/docker-compose.images.yml
  • Qualify Image Names = checked
  • Action = Combine configuration
  • Remove Build Options = checked

Save the definition and queue a build. The addition of this task creates a new docker-compose.yml that is a composite of the original docker-compose.yml and docker-compose.images.yml. The contents will look something like:
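
Again purely as an illustration of the shape of the combined file (the image references and port mapping below are placeholders):

    version: '2'
    services:
      service-a:
        image: myregistry.azurecr.io/twoserviceapp_service-a@sha256:<digest>
        ports:
          - "8080:80"
      service-b:
        image: myregistry.azurecr.io/twoserviceapp_service-b@sha256:<digest>
      mycache:
        image: redis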

This is the file that is used by the release definition to deploy the services to DC/OS.

Task #6 -- Copy Files
  • Display name = Copy Files to: $(Build.StagingDirectory)
  • Contents = **/docker-compose.env.*.yml
  • Target Folder = $(Build.StagingDirectory)

Save the definition but don't bother queuing a build since as things stand this task doesn't have any files to copy over. Instead, the task comes into play when using environment files (see later).

Task #7 -- Publish Build Artifacts
  • Display name = Publish Artifact: docker-compose
  • Path to Publish = $(Build.StagingDirectory)
  • Artifact Name = docker-compose
  • Artifact Type = Server

Save the definition and queue a build. The addition of this task creates the build artefact containing the contents of the staging directory, which happen to be docker-compose.yml and docker-compose.images.yml, although only docker-compose.yml is needed. The artifact can be downloaded of course so you can examine the contents of the two files for yourself.

Create a Release Definition

Create a new empty release definition and configure the Source to point to the TwoServiceApp build definition, the Queue to point to the TwoServiceApp agent queue and check the Continuous deployment option:

With the definition created, edit the name to TwoServiceApp, rename the default environment to Dev and rename the default phase to AcsDeployPhase:

Add a Docker Deploy task to the AcsDeployPhase and configure it as follows (only values that need changing are listed):

  • Display Name = Deploy to ACS DC/OS
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Target Type = Azure Container Service (DC/OS)
  • Docker Compose File = **/docker-compose.yml
  • ACS DC/OS Connection Type = Direct

The final result should be as follows:

Trigger a release and then switch over to DC/OS (ie at http://localhost) and the Services page. Drill down through the Dev folder and the three services defined in docker-compose.yml should now be deployed and running:

To complete the exercise the Dev environment can now be cloned (click the ellipsis in the Dev environment to show the menu) to create Test and Production environments with manual approvals. If you want to view the sample application in action follow the View the application instructions in the Microsoft tutorial.

At this point there is no public endpoint for the production instance of TwoServiceApp. To remedy that follow the Expose public endpoint for production instructions in the Microsoft tutorial. Additionally, you will need to amend the production version of the Docker Deploy task so the Additional Docker Compose Files section contains docker-compose.env.production.yml.

Final Thoughts

Between Microsoft's tutorial and my two posts relating to it you have seen a glimpse of the powerful tools that are available for hosting and orchestrating containers. Yes, this has all been using Linux containers but indications are that similar functionality -- if perhaps not using exactly the same tools -- is on the way for Windows containers. Stay tuned!

Cheers -- Graham

Continuous Delivery with Containers – Azure CLI Command for Creating a Docker Release Pipeline with VSTS Part 1

Posted by Graham Smith on January 30, 2017

One of the aims of my blog series on Continuous Delivery with Containers is to try and understand how best to use Visual Studio Team Services with Docker, so I was very interested to learn that Azure CLI 2.0 has a command to create a VSTS deployment pipeline to push Docker images to an Azure Container Registry and then deploy and run them on an Azure Container Service running a DC/OS cluster. Even better, Microsoft have written a tutorial (Continuous Integration and Deployment of Multi-Container Docker Applications to Azure Container Service) on how to use this command.

Whilst I'm somewhat sceptical about using generic scaffolding tooling to create production-ready workloads (I find that the naming conventions used are usually unsuitable for example) there is no doubt that they are great for quickly building proof of concepts and also for learning (what are hopefully!) best practices. It was with this aim that, armed with a large cup of tea, I sat down one afternoon to plough my way through the tutorial. It was a great learning experience, however I went down some blind alleys to get the pipeline working and then ended up doing quite a lot of head scratching (due to my ignorance I hasten to add) to fully understand what had been created.

So in this post I'm writing-up my experience of working through the tutorial with notes that I hope will help anyone else using it. In a follow-up post I'll attempt to document what the az container release create command actually creates and configures. Just a reminder that with this tutorial we're still very much in the Linux container world. Whilst this might be frustrating for those eager to see advanced tutorials based on Windows containers the learning focus here is mostly Docker and VSTS so the fact that the containers are running Linux shouldn't put you off.

On a final note before we get started, I'm using a Windows 10 Professional workstation with the beta version (1.13.0 at the time of writing) of Docker for Windows installed and running.

Getting Started with the Azure CLI

The tutorial requires version 2.0 of Azure CLI which is based on Python. The Azure CLI installation documentation suggests running Azure CLI in Docker but don't go down that path as it's a dead end as far as the tutorial is concerned. Instead follow these installation steps:

  1. Install the latest version of Python from here.
  2. From a command prompt upgrade pip (package management system for Python) using the python -m pip install --upgrade pip command.
  3. Install Azure CLI 2.0 using pip install azure-cli. (If you have previously installed Azure CLI 2.0 you should check for an upgrade using pip install azure-cli --upgrade.)
  4. Check Azure CLI is working using the az command. You should see this:

The next step is to actually log in to the Azure CLI. The process is as follows:

  1. At a command prompt type az login.
  2. Navigate to https://aka.ms/devicelogin in a browser.
  3. Supply the one-time authentication code supplied by the az login command.
  4. Complete the authentication process using your Azure credentials.

If you have multiple subscriptions you may need to set the default subscription:

  1. At the command prompt type az account list to show details of all your accounts.
  2. Each account has an isDefault property which will tell you the default account.
  3. If you need to make a change use az account set --subscription <Id> -- you can copy and paste the subscription Id from the accounts list.

Creating the Azure Container Service Cluster with DC/OS

This step is pretty straightforward and the tutorial doesn't need any further explanation. My commands to create the resource group and the ACS cluster were:
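
Something along these lines (a sketch rather than the exact commands; flags may vary slightly between CLI versions, and you should pick a region where you have enough spare cores):

    az group create --name TwoServiceAppRg --location westeurope

    az acs create --name TwoServiceAppAcs --resource-group TwoServiceAppRg --orchestrator-type dcos --dns-prefix twoserviceapp --generate-ssh-keys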

Be aware that the az acs create command results in a request to provision 18 cores. This might exceed your quota for a given region, even if you have previously contacted Microsoft Support to request an increase in the total number of cores allowed for your subscription (which you might have to do anyway if you have cores already provisioned). I found that choosing a region where I didn't have any cores provisioned fixed a quotaExceeded exception that I was getting.

For simplicity I used the --generate-ssh-keys option to save having to do this manually. This creates id_rsa and id_rsa.pub files (ie a private / public key pair) in C:\Users\<username>\.ssh.

A word of warning -- if you are using an Azure subscription with MSDN credits be aware that an ACS cluster will eat your credits at an alarming rate. As of the time of writing this post I've not found a reliable way of turning everything off and turning it back on again with everything fully working (specifically the build agent). Consequently I tend to delete the resource group and the VSTS project when I'm finished using them and then recreate them from scratch when I next need them. If you do this do be aware that if you have multiple Azure subscriptions the az account set --subscription <Id> command to set the default subscription can't be relied upon to be ‘sticky', and you can find yourself creating stuff in a different subscription by mistake.

Working with the Sample Code

The tutorial uses sample code that consists of an Angular.js-based web app (with a Node.js backend) that calls a separate .NET Core application, and these are deployed as two separate services. The problem I found was that the name of the GitHub repo (container-service-dotnet-continuous-integration-multi-container) is extremely long and is used to name some of the artefacts that get created by the Azure CLI container release command. This makes for some very unwieldy names which I found somewhat irksome. You can fix this as follows:

  1. Fork the sample code to your own GitHub account.
  2. Switch to the Settings tab:
  3. Use the Rename option to give the forked repo a more manageable name -- I chose TwoServiceApp.
  4. Clone the repo to your workstation in your preferred way -- for me this involved opening a command prompt at C:\Source\GitHub and running git clone https://github.com/GrahamDSmith/TwoServiceApp.git.

At this point it's probably a good idea to get the sample app working locally which will help with understanding how multi-container Docker deployments work. If you want to examine the source code then Visual Studio Code is an ideal tool for the job. To run the application the first step is to build the .NET Core component. At a command prompt at the root of the application run the following command:
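
The command follows the usual docker-compose pattern of naming the build file and the ci-build service it defines (a sketch, assuming the file and service names described below):

    docker-compose -f docker-compose.ci.build.yml run ci-build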

This runs docker-compose with a specific .yml file, and executes the instructions at the ci-build node. The really neat thing about this command is that it uses a Docker container to build the .NET Core app (service-b), which means your workstation doesn't need the .NET Core to be installed for this to work. Looking at the key parts of the docker-compose.ci.build.yml file:

  • image: microsoft/dotnet:1.0.0-preview2.1-sdk -- this specifies that this particular Microsoft official Docker image for .NET Core on Linux should be used.
  • volumes: -- ./service-b:/src -- this causes the local service-b folder on your workstation to be ‘mirrored' to a folder named src in the container that will be created from the microsoft/dotnet:1.0.0-preview2.1-sdk image.
  • working_dir: /src -- set the working directory in the container to src.
  • command: /bin/bash -c "dotnet restore && dotnet publish -c Release -o bin ." -- this is the command to build and publish service-b.

Because the service-b folder on your workstation is mirrored to the src folder in the running container the result of the build command is copied from the container to your workstation. Pretty nifty!

To actually run the application now run this command:
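
As the next paragraph explains, docker-compose picks up docker-compose.yml by convention, so this is simply something like:

    docker-compose up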

By convention docker-compose will look for a docker-compose.yml file so there is no need to specify it. On examining docker-compose.yml it should be pretty easy to see what's going on -- three services (service-a, service-b and mycache) are specified and service-a and service-b are built according to their respective Dockerfile instructions. Both service-a and service-b containers are set to listen on port 80 at runtime and in addition service-a is accessible to the host (ie your workstation) on port 8080. Consequently, you should be able to navigate to http://localhost:8080 in your browser and see the app running.

Creating the Deployment Pipeline

This step is straightforward and the tutorial doesn't need any further explanation. One extra step I included was to create an Azure Container Registry instance in the same resource group used to create the Azure Container Service. Despite repeated attempts, for some reason I couldn't create this at the command line so ended up creating it through the portal. The command though should look similar to this:
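
The sketch below assumes a registry name of TwoServiceAppAcr (ACR names must be alphanumeric) in the same resource group; flags may vary by CLI version:

    az acr create --name TwoServiceAppAcr --resource-group TwoServiceAppRg --sku Basic --admin-enabled true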

To facilitate easy teardown I also created a dedicated project in VSTS called TwoServiceApp. The command to create the pipeline (GitHub token made up of course) was then as follows:
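
The flag names below are assumptions reconstructed from the preview-era tutorial rather than a verbatim command, so do verify them against az container release create --help before running anything:

    az container release create ^
        --target-name TwoServiceAppAcs ^
        --target-resource-group TwoServiceAppRg ^
        --registry-name TwoServiceAppAcr ^
        --remote-url https://github.com/<your-github-account>/TwoServiceApp ^
        --remote-access-token <your GitHub personal access token> ^
        --vsts-project-name TwoServiceApp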

This command results in the creation of build and release definitions in VSTS (along with other supporting items) and a deploy of the image to a Dev environment.

Viewing the Application

To view the application as deployed to the Dev environment you need to launch the DC/OS dashboard. The tutorial instructions are easy to follow, however you might get tripped-up by the instructions for configuring Pageant since the instructions direct you to "Launch PuttyGen and load the private SSH key used to create the ACS cluster (%homepath%\id_rsa)". On my machine at least the id_rsa file was created at %homepath%\.ssh\id_rsa rather than %homepath%\id_rsa. If you persist with the instructions you eventually end up running the application in the Dev environment, but if like me you are new to cluster technologies such as DC/OS it all feels like some kind of sorcery.

A final observation here is that the configuration to launch the DC/OS dashboard requires your browser's proxy to be set. This knocked out the Internet connection for all my other browser tabs, and caused alarm for a few seconds when I realised that the tab I was using to edit my WordPress blog wouldn't save. If you launched the DC/OS dashboard from the command line (using az acs dcos browse --name TwoServiceAppAcs --resource-group TwoServiceAppRg) you need to use CTRL+C at the command line to close the session. In an emergency head over to Windows Settings > Network & Internet > Proxy to reset things back to normal.

Until Next Time

That concludes the write-up of my notes for use with the Continuous Integration and Deployment of Multi-Container Docker Applications to Azure Container Service tutorial. If you work through the tutorial and have any further tips that might be of use please do post in the comments.

In the next post I'll start to document what the az container release create command actually creates and configures.

Cheers -- Graham

Continuous Delivery with Containers – Amending a VSTS / Docker Hub Deployment Pipeline with Azure Container Registry

Posted by Graham Smith on December 1, 2016

In this blog series on Continuous Delivery with Containers I'm documenting what I've learned about Docker and containers (both the Linux and Windows variety) in the context of continuous delivery with Visual Studio Team Services. It's a new journey for me so do let me know in the comments if there is a better way of doing things!

In the previous post in this series I explained how to use VSTS and Docker to build and deploy an ASP.NET Core application to a Linux VM running in Azure. It's a good enough starting point but one of the first objections anyone working in a private organisation is likely to have is publishing Docker images to the public Docker Hub. One answer is to pay for a private repository in the Docker Hub but for anyone using Azure a more appealing option might be the Azure Container Registry. This is a new offering from Microsoft -- it's still in preview and some of the supporting tooling is only partially baked. The core product is perfectly functional though so in this post I'm going to be amending the pipeline I built in the previous post with Azure Container Registry to find out how it differs from Docker Hub. If you want to follow along with this post you'll need to make sure  you have a working pipeline as I describe in my previous post.

Create an Azure Container Registry

At the time of writing there is no PowerShell experience for ACR so unless you want to use the CLI 2.0 it's a case of using the portal. I quite like the CLI but to keep things simple I'm using the portal. For some reason ACR is a marketplace offering so you'll need to add it from New > Marketplace > Containers > Container Registry (preview). Then follow these steps:

  1. Create a new resource group that will contain all the ACR resources -- I called mine PrmAcrResourceGroup.
  2. Create a new standard storage account for the ACR -- I called mine prmacrstorageaccount. Note that at the time of writing ACR is only available in a few regions in the US and the storage account needs to be in the same region. I chose West US.
  3. Create a new container registry using the resource group and storage account just created -- I called mine PrmContainerRegistry. As above, the registry and storage account need to be in the same location. You will also need to enable the Admin user:
    (screenshot: creating the container registry in the Azure portal)

Add a New Docker Registry Connection

This registry connection will be used to replace the connection made in the previous post to Docker Hub. The configuration details you need can be found in the Access key blade of the newly created container registry:

(screenshot: the Access key blade of the container registry in the Azure portal)

Use these settings to create a new Docker Registry connection in the VSTS team project:

(screenshot: the new Docker Registry service endpoint in VSTS)

Amend the Build

Each of the three Docker tasks that form part of the build need amending as follows:

  • Docker Registry Connection = <name of the Azure Container Registry connection>
  • Image Name = aspnetcorelinux:$(Build.BuildNumber)
  • Qualify Image Name = checked

One of the most crucial amendments turned out to be the Qualify Image Name setting. The purpose of this setting is to prefix the image name with the registry hostname, but if left unchecked it seems to default to Docker Hub. This causes an error during the push as the task tries to push to Docker Hub which of course fails because the registry connection has authenticated to ACR rather than Docker Hub:

(screenshot: the failed docker push in the VSTS build log)

It was obvious once I'd twigged what was going on but it had me scratching my head for a little while!

Final Push

With the amendments made you can now trigger a new build, which should work exactly as before except now the docker image is pushed to -- and run from -- your ACR instance rather than Docker Hub.

Your next question is probably going to be how can I get a list of the repositories I've created in ACR? Don't bother looking in the portal since -- at the time of writing at least -- there is no functionality there to list repositories. Instead one of the guys at Microsoft has created a separate website which, once authenticated, shows you this information:

(screenshot: the ACR repositories listed in the separate website)

If you want to do a bit more you can use the CLI 2.0. The syntax to list repositories for example is az acr repository list -n <Azure Container Registry name>.

It's early days yet however the ACR is looking like a great option for anyone needing a private container registry and for whom an Azure option makes sense. Do have a look at the documentation and also at Steve Lasker's Connect(); video here.

Cheers -- Graham

Continuous Delivery with Containers – Use Visual Studio Team Services and Docker to Build and Deploy ASP.NET Core to Linux

Posted by Graham Smith on October 27, 2016

In this blog series on Continuous Delivery with Containers I'm documenting what I've learned about Docker and containers (both the Linux and Windows variety) in the context of continuous delivery with Visual Studio Team Services. The Docker and containers world is mostly new to me and I have only the vaguest idea of what I'm doing so feel free to let me know in the comments if I get something wrong.

Although the Windows Server Containers feature is now a fully supported part of Windows it is still extremely new in comparison to containers on Linux. It's not surprising then that even in the world of the Visual Studio developer the tooling is most mature for deploying containers to Linux and that I chose this as my starting point for doing something useful with Docker. As I write this the documentation for deploying containers with Visual Studio Team Services is fragmented and almost non-existent. The main references I used for this post were:

However to my mind none of these blogs cover the whole process to any satisfactory depth and in any case they are all somewhat out of date. In this post I've therefore tried to piece all of the bits of the jigsaw together that form the end-to-end process of creating an ASP.NET Core app in Visual Studio and debugging it whilst running on Linux, all the way through to using VSTS to deploy the app in a container to a target node running Linux. I'm not attempting to teach the basics of Docker and containers here and if you need to get up to speed with this see my Getting Started post here.

Install the Tooling for the Visual Studio Development Inner Loop

In order to get your development environment properly configured you'll need to be running a version of Windows that is supported by Docker for Windows and have the following tooling installed:

You'll also need a VSTS account and an Azure subscription.

Create an ASP.NET Core App

I started off by creating a new Team Project in VSTS called Containers and then from the Code tab creating a new repository using Git called AspNetCoreLinux:

(screenshot: creating the AspNetCoreLinux Git repository from the Code tab in VSTS)

Over in Visual Studio I then cloned this repository to my source control folder (in my case to C:\Source\VSTS\AspNetCoreLinux as I prefer a short filepath) and added .gitignore and .gitattributes files (see here if this doesn't make sense) and committed and synced the changes. Then from File > New > Project I created an ASP.NET Core Web Application (.NET Core) application called AspNetCoreLinux using the Web Application template (not shown):

(screenshot: creating the ASP.NET Core Web Application in Visual Studio)

Visual Studio will restore the packages for the project after which you can run it with F5 or Ctrl+F5.

The next step is to install support for Docker by right-clicking the project and choosing Add > Docker Support. You should now see that the Run dropdown has an option for Docker:

(screenshot: the Docker option in the Visual Studio Run dropdown)

With Docker selected and Docker for Windows running (with Shared Drives enabled!) you will now be running and debugging the application in a Linux container. For more information about how this works see the resources on the Visual Studio Tools for Docker site or my list of resources here. Finally, if everything is working don't forget to commit and sync the changes.

Provision a Linux Build VM

In order to build the project in VSTS we'll need a build machine. We'll provision this machine in Azure using the Azure driver for Docker Machine which offers a very neat way for provisioning a Linux VM with Docker installed in Azure. You can learn more about Docker Machine from these sources:

To complete the following steps you'll need the Subscription ID of the Azure subscription you intend to use which you can get from the Azure portal.

  1. At a command prompt enter the docker-machine create command to provision the build VM in Azure (a hedged example is sketched just after this list):

    By default this will create a Standard A2 VM running Ubuntu called vstsbuildvm (note that "Container names must be 3-63 characters in length and may contain only lower-case alphanumeric characters and hyphen. Hyphen must be preceded and followed by an alphanumeric character.") in a resource group called VstsBuildDeployRG in the West US datacentre (make sure you use your own Azure Subscription ID). It's fully customisable though and you can see all the options here. In particular I've added the option for the VM to be created with a static public IP address as without that there's the possibility of certificate problems when the VM is shut down and restarted with a different IP address.
  2. Azure now wants you to authenticate. The procedure is explained in the output of the command window, and requires you to visit https://aka.ms/devicelogin and enter the one-time code:
    (screenshot: docker-machine create output at the command prompt)
    Docker Machine will then create the VM in Azure and configure it with Docker and also generate certificates at C:\Users\<yourname>\.docker\machine. Do have a poke a round the subfolders of this path as some of the files are needed later on and it will also help to understand how connections to the VM are handled.
  3. This step isn't strictly necessary right now, but if you want to run Docker commands from the current command prompt against the Docker Engine running on the new VM you'll need to configure the shell by first running docker-machine env vstsbuildvm. This will print out the environment variables that need setting and the command (@FOR /f "tokens=*" %i IN ('docker-machine env vstsbuildvm') DO @%i) to set them. These settings only persist for the life of the command prompt window so if you close it you'll need to repeat the process.
  4. In order to configure the internals of the VM you need to connect to it. Although in theory you can use the docker-machine ssh vstsbuildvm command to do this in practice the shell experience is horrible. Much better is to use a tool like PuTTY. Donovan Brown has a great explanation of how to get this working about half way down this blog post. Note that the folder in which the id_rsa file resides is C:\Users\<yourname>\.docker\machine\machines\<yourvmname>. A tweak worth making is to set the DNS name for the server as I describe in this post so that you can use a fixed host name in the PuTTY profile for the VM rather than an IP address.
  5. With a connection made to the VM you need to issue the following commands to get it configured with the components to build an ASP.NET Core application:
    1. Upgrade the VM with sudo apt-get update && sudo apt-get dist-upgrade.
    2. Install .NET Core following the instructions here, making sure to use the instructions for Ubuntu 16.04.
    3. Install npm with sudo apt -y install npm.
    4. Install Bower with sudo npm install -g bower.
  6. Next up is installing the VSTS build agent for Linux following the instructions for Team Services here. In essence (ie do make sure you follow the instructions) the steps are:
    1. Create and switch to a downloads folder using mkdir Downloads && cd Downloads.
    2. At the Get Agent page in VSTS select the Linux tab and the Ubuntu 16.04-x64 option and then the copy icon to copy the URL download link to the clipboard:
      (screenshot: the Get Agent page in VSTS)
    3. Back at the PuTTY session window type sudo wget followed by a space and then paste the URL from the clipboard. Run this command to download the agent to the Downloads folder.
    4. Go up a level using cd .. and then make and switch to a folder for the agent using mkdir myagent && cd myagent.
    5. Extract the compressed agent file to myagent using tar zxvf ~/Downloads/vsts-agent-ubuntu.16.04-x64-2.108.0.tar.gz (note the exact file name will likely be different).
    6. Install the Ubuntu dependencies using sudo ./bin/installdependencies.sh.
    7. Configure the agent using ./config.sh after first making sure you have created a personal access token to use. I created my agent in a pool I created called Linux.
    8. Configure the agent to run as a service using sudo ./svc.sh install and then start it using sudo ./svc.sh start.
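
The docker-machine create command referred to in step 1 looks something like the following; the Azure driver options shown are the ones the description above implies, and the full list is in the Docker Machine documentation:

    docker-machine create --driver azure ^
      --azure-subscription-id <your Azure subscription ID> ^
      --azure-resource-group VstsBuildDeployRG ^
      --azure-location westus ^
      --azure-size Standard_A2 ^
      --azure-static-public-ip ^
      vstsbuildvm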

If the procedure was successful you should see the new agent showing green in the VSTS Agent pools tab:

(screenshot: the VSTS Agent pools tab showing the new agent green)

Provision a Linux Target Node VM

Next we need a Linux VM we can deploy to. I used the same syntax as for the build VM calling the machine vstsdeployvm:
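
In other words, something like the following (same assumptions as the build VM command above):

    docker-machine create --driver azure ^
      --azure-subscription-id <your Azure subscription ID> ^
      --azure-resource-group VstsBuildDeployRG ^
      --azure-location westus ^
      --azure-static-public-ip ^
      vstsdeployvm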

Apart from setting the DNS name for the server as I describe in this post there's not much else to configure on this server except for updating it using sudo apt-get update && sudo apt-get dist-upgrade.

Gearing Up to Use the Docker Integration Extension for VSTS

Configuration activities now shift over to VSTS. The first thing you'll need to do is install the Docker Integration extension for VSTS from the Marketplace. The process is straightforward and wizard-driven so I won't document the steps here.

Next up is creating three service end points -- two of the Docker Host type (ie our Linux build and deploy VMs) and one of type Docker Registry. These are created by selecting Services from the Settings icon and then Endpoints and then the New Service Endpoint dropdown:

(screenshot: the New Service Endpoint dropdown in VSTS)

To create a Docker Host endpoint:

  1. Connection Name = whatever suits -- I used the name of my Linux VM.
  2. Server URL = the DNS name of the Linux VM in the format tcp://your.dns.name:2376.
  3. CA Certificate = contents of C:\Users\<yourname>\.docker\machine\machines\<yourvmname>\ca.pem.
  4. Certificate = contents of C:\Users\<yourname>\.docker\machine\machines\<yourvmname>\cert.pem.
  5. Key = contents of C:\Users\<yourname>\.docker\machine\machines\<yourvmname>\key.pem.

The completed dialog (in this case for the build VM) should look similar to this:

(screenshot: the completed Docker Host endpoint dialog)

Repeat this process for the deploy VM.

Next, if you haven't already done so you will need to create an account at Docker Hub. To create the Docker Registry endpoint:

  1. Connection Name = whatever suits -- I used my name
  2. Docker Registry = https://index.docker.io/v1/
  3. Docker ID = username for Docker Hub account
  4. Password = password for Docker Hub account

The completed dialog should look similar to this:

(screenshot: the completed Docker Registry endpoint dialog)

Putting Everything Together in a Build

Now the fun part begins. To keep things simple I'm going to run everything from a single build, however in a more complex scenario I'd use both a VSTS build and a VSTS release definition. From the VSTS Build & Release tab create a new build definition based on an Empty template. Use the AspNetCoreLinux repository, check the Continuous integration box and select Linux for the Default agent queue (assuming you create a queue named Linux as I've done):

(screenshot: the new build definition settings in VSTS)

Using Add build step add two Command Line tasks and three Docker tasks:

(screenshot: the two Command Line tasks and three Docker tasks added to the build)

In turn right-click all but the first task and disable them -- this will allow the definition to be saved without having to complete all the tasks.

The configuration for Command Line task #1 is:

  • Tool = dotnet
  • Arguments = restore -v minimal
  • Advanced > Working folder = src/AspNetCoreLinux (use the ellipsis to select)

Save the definition (as AspNetCoreLinux) and then queue a build to make sure there are no errors. This task restores the packages specified in project.json.

The configuration for Command Line task #2 is:

  • Tool = dotnet
  • Arguments = publish -c $(Build.Configuration) -o $(Build.StagingDirectory)/app/
  • Advanced > Working folder = src/AspNetCoreLinux (use the ellipsis to select)

Enable the task and then queue a build to make sure there are no errors. This task publishes the application to $(Build.StagingDirectory)/app (which equates to home/docker-user/myagent/_work/1/a/app).

The configuration for Docker task #1 is:

  • Docker Registry Connection = <name of your Docker registry connection>
  • Action = Build an image
  • Docker File = $(Build.StagingDirectory)/app/Dockerfile
  • Build Context = $(Build.StagingDirectory)/app
  • Image Name = <your Docker ID>/aspnetcorelinux:$(Build.BuildNumber)
  • Docker Host Connection = vstsbuildvm (or your Docker Host name for the build server)
  • Working Directory = $(Build.StagingDirectory)/app

Enable the task and then queue a build to make sure there are no errors. If you run sudo docker images on the build machine you should see the image has been created.

The configuration for Docker task #2 is:

  • Docker Registry Connection = <name of your Docker registry connection>
  • Action = Push an image
  • Image Name = <your Docker ID>/aspnetcorelinux:$(Build.BuildNumber)
  • Advanced Options > Docker Host Connection = vstsbuildvm (or your Docker Host name for the build server)
  • Advanced Options > Working Directory = $(System.DefaultWorkingDirectory)

Enable the task and then queue a build to make sure there are no errors. If you log in to Docker Hub you should see the image under your profile.

The configuration for Docker task #3 is:

  • Docker Registry Connection = <name of your Docker registry connection>
  • Action = Run an image
  • Image Name = <your Docker ID>/aspnetcorelinux:$(Build.BuildNumber)
  • Container Name = aspnetcorelinux$(Build.BuildNumber) (slightly different from above!)
  • Ports = 80:80
  • Advanced Options > Docker Host Connection = vstsdeployvm (or your Docker Host name for the deploy server)
  • Advanced Options > Working Directory = $(System.DefaultWorkingDirectory)

Enable the task and then queue a build to make sure there are no errors. If you navigate to the URL of your deployment server (eg http://vstsdeployvm.westus.cloudapp.azure.com/) you should see the web application running. As things stand though if you want to deploy again you'll need to stop the container first.

That's all for now...

Please do be aware that this is only a very high-level run-through of this toolchain and there are many gaps to be filled: how does a website work with databases, how to host a website on something other than the Kestrel server used here and how to secure containers that should be private are just a few of the many questions in my mind. What's particularly exciting though for me is that we now have a great solution to the problem of developing a web app on Windows 10 but deploying it to Windows Server, since although this post was about Linux, Docker for Windows supports the same way of working with Windows Server Core and Nano Server (currently in beta). So I hope you found this a useful starting point -- do watch out for my next post in this series!

Cheers -- Graham

Getting Started with Containers and Docker for Visual Studio Developers

Posted by Graham Smith on October 18, 2016

Docker and containers are one of the hot topics of the development world right now and there's no sign of them going away. With the recent launch of Windows Server 2016 and with it the Windows Server Containers feature the world of containers is one that can't be ignored by developers on the Windows platform who don't want to get left behind. I've spent quite a bit of time over the summer learning Docker and attempting to understand how containers fit in to the Visual Studio developer workflow and the continuous delivery pipeline -- a precursor for my new blog series on Continuous Delivery with Containers. I've found quite a lot of useful resources and in this Getting Started post I've listed the ones which I think are the most useful. I've also added some narrative for anyone who is trying to make sense of all the different tooling as it took me a little while to get this clear in my mind.

Docker

Docker is an open-source technology for managing containers -- not to be confused with Docker Inc, the company which is the original author and primary sponsor of the Docker open source project. Since containers have their roots in the Linux world it's not surprising that most of the in-depth resources for learning Docker come from the Linux world. The question for those of us who predominantly use Windows and who don't want to have to install Linux is how to get going? Two possibilities are the Katacoda browser-based labs and Docker for Windows (both covered below). These essentially allow you to learn in a predominantly Linux world (no bad thing) but if this is too much you could go straight to Docker on Windows, although I think you'll be missing out.

Docker on Windows

For the past couple of years Microsoft has been contributing to the Docker open source project to bring the benefits of Docker to Windows Server. For some time now it's been possible to install Docker on Windows Server 2016 and use it to manage Windows Server containers. Here's the thing though: I'm using the term Docker on Windows to specifically differentiate it from Docker for Windows. Whilst clearly related they are two different things and the lack of anyone pointing this out in blog posts and documentation caused confusion for me in my early dealings with Docker on Windows. One more point to note is that there are two types of containers for Windows -- Windows Server Containers and Hyper-V Containers. At the time of writing both types run on Windows Server but only Hyper-V Containers run on Windows 10. Whilst I'm in pointing-things-out mode I might as well mention that there is a PowerShell module for managing containers as an alternative to the Docker command-line interface, as you might see it being used in some blog posts.

Docker for Windows

Docker for Windows -- not to be confused with Docker on Windows -- is a technology that leverages Hyper-V on Windows 10 (certain versions only at time of writing) to host a lightweight Linux VM. Docker commands can then be issued against this VM from the Windows 10 host. It's a great tool for supporting learning about Docker and an essential element of the toolkit required for Visual Studio developers wanting to start working with containers as part of the developer workflow.

Visual Studio Tools for Docker and Developer Workflow

For Visual Studio developers we're now getting to the really good stuff. Visual Studio Tools for Docker enables support for running ASP.NET Core applications on the lightweight Linux VM that is at the heart of Docker for Windows. It even supports debugging. What's even more exciting is that at the time of writing the latest beta version of Docker for Windows supports Windows Containers so as well as running an ASP.NET Core application under development against Linux you can now also run it against Windows Server (Core or Nano Server). Why is this exciting? Well, has it ever bothered you that you might be developing a web application on Windows 10 and hosting it in IIS Express but in production it will be running on Windows Server 2012/16 hosted in full-fat IIS? I know it's bothered me and the excitement is that this toolchain promises to fix all that.

Time to Down Tools

A resource that doesn't really fit in to any of the categories above is The Containers Channel on Channel 9. It's got a great mix of content and is worth keeping an eye on for new additions.

I hope you find these resources useful and if you have any great discoveries worth sharing please let me know in the comments. Do be sure to keep an eye on my new Continuous Delivery with Containers blog series where I'll be documenting my journey to understand how containers can play a part in the continuous delivery pipeline.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Instrument for Telemetry with Application Insights

Posted by Graham Smith on October 4, 2016 -- 4 Comments (click here to comment)

If you get to the stage where you are deploying your application on a very frequent basis and you are relying on automated tests for the bulk of your quality assurance then a mechanism to alert you when things go wrong in production is essential. There are many excellent tools that can help with this, however anyone working with ASP.NET websites (such as the one used in this blog series) and who has access to Azure can get going very quickly using Application Insights. I should qualify that by saying that whilst it is possible to get up-and-running very quickly with Application Insights there is a bit more work to do to make Application Insights a useful part of a continuous delivery pipeline. In this post in my blog series on Continuous Delivery with TFS / VSTS we take a look at doing just that!

The Big Picture

My aim in this post is to get telemetry from the Contoso University sample ASP.NET application running a) on my developer workstation, b) in the DAT environment and c) in the DQA environment. I'm not bothering with the PRD environment as it's essentially the same as DQA. (If you haven't been following along with this series please see this post for an explanation of the environments in my pipeline.) I also want to configure my web servers running IIS to send server telemetry to Azure.

Azure Portal Configuration

The starting point is some foundation work in Azure. We need to create three Application Insights resources inside three different resource groups representing the development workstation, the DAT environment and the DQA environment. A resource group for the development workstation doesn't exist yet so the first step is to create a new resource group called PRM-DEV. Then create an Application Insights resource in each of the three resource groups -- I used the same names as the resource groups. For the DAT environment for example:

azure-portal-create-application-insights-resource

The final result should look something like this (note I added the resource group column to the table):

azure-portal-application-insights-resources

Add the Application Insights SDK

With the Azure foundation work out of the way we can now turn our attention to adding the Application Insights SDK to the Contoso University ASP.NET application. (You can get the starting code from my GitHub repository here.) Application Insights is a NuGet package but it can be added by right-clicking the web project and choosing Add Application Insights Telemetry:

visual-studio-add-application-insights-telemetry

You are then presented with a configuration dialog which will allow you to select the correct Azure subscription and then the Application Insights resource -- in this case the one for the development environment:

visual-studio-configure-application-insights-telemetry

You can then run Contoso University and see telemetry appear in both Visual Studio and the Azure portal. There is a wealth of information available so do explore the links to understand the extent.

Configure for Multiple Environments

As things stand we have essentially hard-coded Contoso University with an instrumentation key to send telemetry to just one Application Insights resource (PRM-DEV). Instrumentation keys are specific to one Application Insights resource so if we were to leave things as they are then a deployment of the application to the delivery pipeline would cause each environment to send its telemetry to the PRM-DEV Application Insights resource which would cause utter confusion. To fix this the following procedure can be used to amend an ASP.NET MVC application so that an instrumentation key can be passed in as a configuration variable as part of the deployment process:

  1. Add an iKey entry to the appSettings section of Web.config (don't forget to use your own instrumentation key value from ApplicationInsights.config).
  2. Add a transform to Web.Release.config that replaces the value with a token (__IKEY__) that can be used by Release Management.
  3. Add code to Application_Start in Global.asax.cs to read the iKey setting and apply it at runtime (a sketch of the idea follows this list).
  4. As part of the Application Insights SDK installation Views\Shared\_Layout.cshtml is altered with some JavaScript that adds the iKey to each page. This isn't dynamic, so the JavaScript instrumentationKey line needs altering to make it dynamic by picking up the same setting.
  5. Remove or comment out the InstrumentationKey section in ApplicationInsights.config.
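The original listings aren't reproduced here, but the essence of steps 1 and 3 is small enough to sketch. The snippet below is illustrative rather than the exact code from the repository -- the iKey key name and the null check are assumptions -- and it simply reads the instrumentation key from appSettings at startup and hands it to the Application Insights SDK:

    // Web.config (step 1) carries the key in appSettings, e.g.
    //   <appSettings>
    //     <add key="iKey" value="your-instrumentation-key-from-ApplicationInsights.config" />
    //   </appSettings>
    // and the Web.Release.config transform (step 2) swaps the value for the __IKEY__ token.

    using System.Configuration;
    using Microsoft.ApplicationInsights.Extensibility;

    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // Step 3: apply the configured instrumentation key at runtime rather than
            // relying on the value hard-coded in ApplicationInsights.config.
            var instrumentationKey = ConfigurationManager.AppSettings["iKey"];
            if (!string.IsNullOrWhiteSpace(instrumentationKey))
            {
                TelemetryConfiguration.Active.InstrumentationKey = instrumentationKey;
            }

            // ...the usual MVC registration code (routes, bundles and so on) stays as-is...
        }
    }

With something along these lines in place the same build can be pointed at a different Application Insights resource purely through configuration.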

As a final step run the application to ensure that Application Insights is still working. The code that accompanies this post can be downloaded from my GitHub account here.

Amend Release Management

As things stand a release build of Contoso University will have a tokenised iKey entry in the appSettings section of Web.config, with the value replaced by the __IKEY__ token.

When the build is deployed to the DAT and DQA environments the __IKEY__ token needs replacing with the instrumentation key for the respective resource group. This is achieved as follows:

  1. In the ContosoUniversity release definition click on the ellipsis of the DAT environment and choose Configure Variables. This will bring up a dialog to add an InstrumentationKey variable:
    web-portal-contosouniversity-release-definition-add-instrumentationkey-variable
  2. The value for InstrumentationKey can be copied from the Azure portal. Navigate to Application Insights and then to the resource (PRM-DAT in the above screenshot) and then Configure > Properties where Instrumentation Key is to be found.
  3. The preceding process should be repeated for the DQA environment.
  4. Whilst still editing the release definition, edit the Website configuration tasks of both environments so that the Deployment > Script Arguments field takes a new parameter at the end called $(InstrumentationKey):
    web-portal-contosouniversity-release-definition-add-instrumentationkey-parameter-to-website-task
  5. In Visual Studio with the ContosoUniversity solution open, edit ContosoUniversity.Web\Deploy\Website.ps1 to accept the new InstrumentationKey as a parameter, add it to the $configurationData block and use it in the ReplaceWebConfigTokens DSC configuration.
  6. Check in the code changes so that a build and release are triggered and then check that the Application Insights resources in the Azure portal are displaying telemetry.

Install Release Annotations

A handy feature that became available in early 2016 was the ability to add Release Annotations, which is a way to identify releases in the Application Insights Metrics Explorer. Getting this set up is as follows:

  1. Release Annotations is an extension for VSTS or TFS 2015.2 and later and needs to be installed from the marketplace via this page. I installed it for my VSTS account.
  2. In the release definition, for each environment (I'm just showing the DAT environment below) add two variables -- ApplicationId and ApiKey but leave the window open for editing:
    web-portal-contosouniversity-release-definition-add-variables-to-environment
  3. In a separate browser window, navigate to the Application Insights resource for that environment in the Azure portal and then to the API Access section.
  4. Click on Create API key and complete the details as follows:
    azure-portal-create-application-insights-api-key
  5. Clicking Generate key will do just that:
    azure-portal-copy-application-insights-application-id-and-api-key
  6. You should now copy the Application ID value and the API key value (both highlighted in the screenshot above) to the respective text boxes in the browser window where the release definition environment variables window should still be open. After marking the ApiKey as a secret with the padlock icon this window can now be closed.
  7. The final step is to add a Release Annotation task to the release definition:
    web-portal-contosouniversity-release-definition-add-release-annotation-task
  8. The Release Annotation is then edited with the ApplicationId and ApiKey variables:
    web-portal-contosouniversity-release-definition-edit-release-annotation-task
  9. The net result of this can be seen in the Application Insights Metrics Explorer following a successful release where the release is displayed as a blue information icon:
    azure-portal-copy-application-insights-metrics-explorer-release-properties
  10. Clicking the icon opens the Release Properties window which displays rich details about the release.

Install the Application Insights Status Monitor

Since we are running our web application under IIS even more telemetry can be gleaned by installing the Application Insights Status Monitor:

  1. On the web servers running IIS download and install Status Monitor.
  2. Sign in to your Azure account in the configuration dialog.
  3. Use Configure settings to choose the correct Application Insights resource.
  4. Add the domain account the website is running under (via the application pool) to the Performance Monitor Users local security group.

The Status Monitor window should finish looking something like this:

application-insights-status-monitor

See this documentation page to learn about the extra telemetry that will appear.

Wrapping Up

In this post I've only really covered configuring the basic components of Application Insights. In reality there's a wealth of other items to configure and the list is bound to grow. Here's a quick list I've come up with to give you a flavour:

This list however doesn't include the huge number of options for configuring Application Insights itself. There's enough to keep anyone interested in this sort of thing busy for weeks. The documentation is a great starting point -- check out the sidebar here.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Automated Acceptance Tests with SpecFlow and Selenium Part 2

Posted by Graham Smith on July 25, 2016 -- One Comment (click here to comment)

In part one of this two-part mini-series I covered how to get acceptance tests written using Selenium working as part of the deployment pipeline. In that post the focus was on configuring the moving parts needed to get some existing acceptance tests up-and-running with the new Release Management tooling in TFS or VSTS. In this post I make good on my promise to explain how to use SpecFlow and Selenium together to write business readable web tests as opposed to tests that probably only make sense to a developer.

If you haven't used SpecFlow before then I highly recommend taking the time to understand what it can do. The SpecFlow website has a good getting started guide here however the best tutorial I have found is Jason Roberts' Automated Business Readable Web Tests with Selenium and SpecFlow Pluralsight course. Pluralsight is a paid-for service of course but if you don't have a subscription then you might consider taking up the offer of the free trial just to watch Jason's course.

As I started to integrate SpecFlow into my existing Contoso University sample application for this post I realised that the way I had originally written the Selenium-based page object model using a fluent API didn't work well with SpecFlow. Consequently I re-wrote the code to be more in line with the style used in Jason's Pluralsight course. Both versions are on GitHub -- you can find the ‘before' code here and the ‘after' code here. The instructions that follow are written from the perspective of someone updating the ‘before' version of the code.

Install SpecFlow Components

To support SpecFlow development, components need to be installed at two levels. With the Contoso University sample application open in Visual Studio (actually not necessary for the first item):

  • At the Visual Studio application level the SpecFlow for Visual Studio 2015 extension should be installed.
  • At the Visual Studio solution level the ContosoUniversity.Web.AutoTests project needs to have the SpecFlow NuGet package installed.

You may also find if using MSTest that the specFlow section of App.config in ContosoUniversity.Web.AutoTests needs to have an <unitTestProvider name="MsTest" /> element added.

Update the Page Object Model

In order to see all the changes I made to update my page object model to a style that worked well with SpecFlow please examine the ‘after' code here. To illustrate the style of my updated model, I created a CreateDepartmentPage class in ContosoUniversity.Web.SeFramework along the following lines.
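The listing below is a simplified sketch rather than the exact class in the repository -- the element IDs and the idea of passing an IWebDriver into the constructor are assumptions -- but it shows the shape of the new style:

    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    namespace ContosoUniversity.Web.SeFramework
    {
        public class CreateDepartmentPage
        {
            private readonly IWebDriver _driver;

            public CreateDepartmentPage(IWebDriver driver)
            {
                _driver = driver;
            }

            // Each form field is exposed as a simple property...
            public string Name
            {
                set { _driver.FindElement(By.Id("Name")).SendKeys(value); }
            }

            public string Budget
            {
                set { _driver.FindElement(By.Id("Budget")).SendKeys(value); }
            }

            public string Administrator
            {
                set { new SelectElement(_driver.FindElement(By.Id("InstructorID"))).SelectByText(value); }
            }

            // ...and submitting the form is a plain method rather than part of a fluent chain.
            public void Create()
            {
                _driver.FindElement(By.CssSelector("input[type='submit']")).Click();
            }
        }
    }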

The key difference is that rather than being a fluent API the model now consists of separate properties that more easily map to SpecFlow statements.

Add a Basic SpecFlow Test

To illustrate some of the power of SpecFlow we'll first add a basic test and then make some improvements to it. The test should be added to ContosoUniversity.Web.AutoTests -- if you are using my ‘before' code you'll want to delete the existing C# class files that contain the tests written for the earlier page object model.

  • Right-click ContosoUniversity.Web.AutoTests and choose Add > New Item. Select SpecFlow Feature File and call it Department.feature.
  • Replace the template text in Department.feature with a feature containing a New Department Created Successfully scenario (the comment at the top of the sketch after this list gives the general shape).
  • Right-click Department.feature in the code editor and choose Generate Step Definitions, which will display the following dialog:
    visual-studio-specflow-generate-step-definitions
  • By default this will create a DepartmentSteps.cs file that you should save in ContosoUniversity.Web.AutoTests.
  • DepartmentSteps.cs now needs to be fleshed out with code that refers back to the page object model; a sketch of the general shape follows this list.
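The sketch below is illustrative rather than the exact class from the ‘after' repository -- the scenario text in the comment, the Driver helper and the page object member names are assumptions -- but it shows how the pieces hang together:

    using TechTalk.SpecFlow;

    // The scenario in Department.feature is along these lines (illustrative):
    //   Feature: Department
    //   Scenario: New Department Created Successfully
    //     Given I am on the Create Department page
    //     And I enter a name of Engineering
    //     And I enter a budget of 1000
    //     And I enter an administrator with name of Kapoor
    //     When I create the department
    //     Then the department is created successfully

    [Binding]
    public class DepartmentSteps
    {
        private CreateDepartmentPage _createDepartmentPage;

        [BeforeScenario]
        public void Initialise()
        {
            // Spin up the browser via the framework and create the page object for the test.
            Driver.Initialize();
            _createDepartmentPage = new CreateDepartmentPage(Driver.Instance);
        }

        [AfterScenario]
        public void Cleanup()
        {
            // Close the browser down again.
            Driver.Close();
        }

        [Given(@"I enter a budget of (.*)")]
        public void GivenIEnterABudgetOf(int p0) // note the auto-generated (poorly named) parameter
        {
            _createDepartmentPage.Budget = p0.ToString();
        }

        [Given(@"I enter an administrator with name of Kapoor")]
        public void GivenIEnterAnAdministratorWithNameOfKapoor() // not parameterised as generated
        {
            _createDepartmentPage.Administrator = "Kapoor";
        }

        // ...remaining Given/When/Then steps omitted for brevity...
    }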

If you take a moment to examine the code you'll see the following features:

  • The presence of methods with the BeforeScenario and AfterScenario attributes to initialise the test and clean up afterwards.
  • Since we specified a value for Budget in Department.feature a step method with a (poorly named) parameter was created for reusability.
  • Although we specified a name for the Administrator the step method wasn't parameterised.

As things stand though this test is complete and you should see a NewDepartmentCreatedSuccessfully test in Test Explorer which when run (don't forget IIS Express needs to be running) should turn green.

Refining the SpecFlow Test

We can make some improvements to DepartmentSteps.cs as follows:

  • The GivenIEnterABudgetOf method can have its parameter renamed to budget.
  • The GivenIEnterAnAdministratorWithNameOfKapoor method can be parameterised by changing it as follows:
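As a sketch of the change (shown against the illustrative step class above rather than the exact repository code):

    // Before (as generated):
    [Given(@"I enter an administrator with name of Kapoor")]
    public void GivenIEnterAnAdministratorWithNameOfKapoor()
    {
        _createDepartmentPage.Administrator = "Kapoor";
    }

    // After -- the regex in the attribute captures the name and the method takes it as a parameter,
    // so the step can be reused with any administrator name:
    [Given(@"I enter an administrator with name of (.*)")]
    public void GivenIEnterAnAdministratorWithNameOf(string administrator)
    {
        _createDepartmentPage.Administrator = administrator;
    }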

In the preceding change, note that both the attribute and the method name change.

Updating the Build Definition

In order to start integrating SpecFlow tests into the continuous delivery pipeline the first step is to update the build definition -- specifically the AcceptanceTests artifact that was created in the previous post, which needs amending to include TechTalk.SpecFlow.dll as a new item of the Contents property. A successful build should result in this DLL appearing in the Artifacts Explorer window for the AcceptanceTests artifact:

web-portal-contosouniversity-artifacts-explorer

Update the Test Plan with a new Test Case

If you are running your tests using the test assembly method then you should find that they just run without any further amendment. If on the other hand you are using the test plan method then you will need to remove the test cases based on the old Selenium tests and add a new test case (called New Department Created Successfully to match the scenario name) and edit it in Visual Studio to make it automated.

And Finally

Do be aware that I've only really scratched the surface in terms of what SpecFlow can do and there's plenty more functionality for you to explore. Whilst it's not really the subject of this post it's worth pointing out that when deciding to adopt acceptance tests as part of your continuous delivery pipeline you should do so in a considered way. If you don't it's all too easy to wake up one day to realise you have hundreds of tests which take many hours to run and which require a significant amount of time to maintain. To this end do have a listen to Dave Farley's QCon talk on Acceptance Testing for Continuous Delivery.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Automated Acceptance Tests with SpecFlow and Selenium Part 1

Posted by Graham Smith on June 15, 2016 -- 7 Comments (click here to comment)

In the previous post in this series we covered using Release Management to deploy PowerShell DSC scripts to target nodes that both configured the nodes for web and database roles and then deployed our sample application. With this done we are now ready to do useful work with our deployment pipeline, and the big task for many teams is going to be running automated acceptance tests to check that previously developed functionality still works as expected as an application undergoes further changes.

I covered how to create a page object model framework for running Selenium web tests in my previous blog series on continuous delivery here. The good news is that nothing much has changed and the code still runs fine, so to learn about how to create a framework please refer to this post. However one thing I didn't cover in the previous series was how to use SpecFlow and Selenium together to write business readable web tests and that's something I'll address in this series. Specifically, in this post I'll cover getting acceptance tests working as part of the deployment pipeline and in the next post I'll show how to integrate SpecFlow.

What We're Hoping to Achieve

The acceptance tests are written using Selenium which is able to automate ‘driving' a web browser to navigate to pages, fill in forms, click on submit buttons and so on. Whilst these tests are created on developer workstations and are thus able to run there, the typical scenario is that the number of tests quickly mounts, making it impractical to run them locally. In any case running them locally is of limited use since what we really want to know is if checked-in code changes from team members have broken any tests.

The solution is to run the tests in an environment that is part of the deployment pipeline. In this blog series I call that the DAT (development automated test) environment, which is the first stage of the pipeline after the build process. As I've explained previously in this blog series, the DAT environment should be configured in such a way as to minimise the possibility of tests failing due to factors other than code issues. I solve this for web applications by having the database, web site and test browser all running on the same node.

Make Sure the Tests Work Locally

Before attempting to get automated tests running in the deployment pipeline it's a good idea to confirm that the tests are running locally. The steps for doing this (in my case using Visual Studio 2015 Update 2 on a workstation with FireFox already installed) are as follows:

  1. If you don't already have a working Contoso University sample application available:
    1. Download the code that accompanies this post from my GitHub site here.
    2. Unblock and unzip the solution to a convenient location and then build it to restore NuGet packages.
    3. In ContosoUniversity.Database open ContosoUniversity.publish.xml and then click on Publish to create the ContosoUniversity database in LocalDB.
  2. Run ContosoUniversity.Web (and in so doing confirm that Contoso University is working) and then, leaving the application running in the browser, switch back to Visual Studio and from the Debug menu choose Detach All. This leaves IIS Express running, which FireFox needs in order to be able to navigate to any of the application's URLs.
  3. From the Test menu navigate to Playlist > Open Playlist File and open AutoWebTests.playlist which lives under ContosoUniversity.Web.AutoTests.
  4. In Test Explorer two tests (Can_Navigate_To_Departments and Can_Create_Department) should now appear and these can be run in the usual way. FireFox should open and run each test which will hopefully turn green.

Edit the Build to Create an Acceptance Tests Artifact

The first step to getting tests running as part of the deployment pipeline is to edit the build to create an artifact containing all the files needed to run the tests on a target node. This is achieved by editing the ContosoUniversity.Rel build definition and adding a Copy Publish Artifact task. This should be configured as follows:

  • Copy Root = $(build.stagingDirectory)
  • Contents =
    • ContosoUniversity.Web.AutoTests.*
    • ContosoUniversity.Web.SeFramework.*
    • Microsoft.VisualStudio.QualityTools.UnitTestFramework.*
    • WebDriver.*
  • Artifact Name = AcceptanceTests
  • Artifact Type = Server

After queuing a successful build the AcceptanceTests artifact should appear on the build's Artifacts tab:

web-portal-contosouniversity-rel-build-artifacts-acceptance-tests

Edit the Release to Deploy the AcceptanceTests Artifact

Next up is copying the AcceptanceTests artifact to a target node -- in my case a server called PRM-DAT-AIO. This is no different from the previous post where we copied database and website artifacts and is a case of adding a Windows Machine File Copy task to the DAT environment of the ContosoUniversity release and configuring it appropriately:

web-portal-contosouniversity-release-definition-copy-acceptance-tests-files

Deploy a Test Agent

The good news for those of us working in the VSTS and TFS 2015 worlds is that test controllers are a thing of the past because Agents for Microsoft Visual Studio 2015 handle communicating with VSTS or TFS 2015 directly. The agent needs to be deployed to the target node and this is handled by adding a Visual Studio Test Agent Deployment task to the DAT environment. The configuration of this task is very straightforward (see here) however you will probably want to create a dedicated domain service account for the agent service to run under. The process is slightly different between VSTS and TFS 2015 Update 2.1 in that in VSTS the machine details can be entered directly in the task whereas in TFS there is a requirement to create a Test Machine Group.

Running Tests -- Test Assembly Method

In order to actually run the acceptance tests we need to add a Run Functional Tests task to the DAT pipeline directly after the Visual Studio Test Agent Deployment task. Examining this task reveals two ways to select the tests to be run -- Test Assembly or Test Plan. Test Assembly is the most straightforward and needs very little configuration:

  • Test Machine Group (TFS) or Machines (VSTS) = Group name or $(TargetNode-DAT-AIO)
  • Test Drop Location = $(TargetNodeTempFolder)\AcceptanceTests
  • Test Selection = Test Assembly
  • Test Assembly = **\*test*.dll
  • Test Run Title = Acceptance Tests

As you will see though there are many more options that can be configured -- see the help page here for details.

Before you create a build to test these settings out you will need to make sure that the node where the tests are to be run from is specified in Driver.cs, which lives in ContosoUniversity.Web.SeFramework. You will also need to ensure that FireFox is installed on this node. I've been struggling to reliably automate the installation of FireFox, which turned out to be just as well because I was trying to automate the installation of the latest version from the Mozilla site. That's a bad idea because the latest version at the time of writing (47.0) doesn't work with the latest (at the time of writing) version of Selenium (2.53.0). Automated installation efforts for FireFox therefore need to centre around installing a Selenium-compatible version, which makes things easier since the installer can be pre-downloaded to a known location. I ran out of time and installed FireFox 46.1 (compatible with Selenium 2.53.0) manually but this is something I'll revisit. Disabling automatic updates in FireFox is also essential to ensure you don't get out of sync with Selenium.
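I won't reproduce Driver.cs here, but as a rough sketch (the member names and exact responsibilities are assumptions -- the real class may well differ) the important point is that the browser setup and the address of the node under test live in one place:

    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;

    namespace ContosoUniversity.Web.SeFramework
    {
        public static class Driver
        {
            public static IWebDriver Instance { get; private set; }

            // The node the tests run against -- PRM-DAT-AIO in the DAT environment used in this series.
            public static string BaseUrl = "http://prm-dat-aio";

            public static void Initialize()
            {
                // Requires a Selenium-compatible version of FireFox on the node (46.x for Selenium 2.53.0).
                Instance = new FirefoxDriver();
            }

            public static void Close()
            {
                if (Instance != null)
                {
                    Instance.Quit();
                }
            }
        }
    }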

When you finally get your tests running you can see the results from the web portal by navigating to Test > Runs. You should hopefully see something similar to this:

web-portal-contosouniversity-test-run-summary

Running Tests -- Test Plan Method

The first question you might ask about the Test Plan method is why bother if the Test Assembly method works? Of course, if the Test Assembly method gives you what you need then feel free to stick with that. However you might need to use the Test Plan method if a test plan already exists and you want to continue using it. Another reason is the possibility of more flexibility in choosing which tests to run. For example, you might organise your tests into logical areas using static suites and then use query-based suites to choose subsets of tests, perhaps with the use of tags.

To use the Test Plan method, in the web portal navigate to Test > Test Plan and then:

  1. Use the green cross to create a new test plan called Acceptance Tests.
  2. Use the down arrow next to Acceptance Tests to create a New static suite called Department:
    web-portal-contosouniversity-create-test-suite
  3. Within the Department suite use the green cross to create two new test cases called Can_Navigate_To_Departments and Can_Create_Department (no other configuration necessary):
    web-portal-contosouniversity-create-test-case
  4. Making a note of the test case IDs, switch to Visual Studio and in Team Explorer > Work Items search for each test case in turn to open it for editing.
  5. For each test case, click on Associated Automation (screenshot below is VSTS and looks slightly different from TFS) and then click on the ellipsis to bring up the Choose Test dialogue where you can choose the correct test for the test case:
    visual-studio-test-case-associated-automation
  6. With everything saved switch back to the web portal Release hub and edit the Run Functional Tests task as follows:
    1. Test Selection = Test Plan
    2. Test Plan = Acceptance Tests
    3. Test Suite = Acceptance Tests\Department

With the configuration complete trigger a new release and if everything has worked you should be able to navigate to Test > Runs and see output similar to the Test Assembly method.

That's it for now. In the next post in this series I'll look at adding SpecFlow into the mix to make the acceptance tests business readable.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Server Configuration and Application Deployment with Release Management

Posted by Graham Smith on May 2, 2016 -- 2 Comments (click here to comment)

At this point in my blog series on Continuous Delivery with TFS / VSTS we have finally reached the stage where we are ready to start using the new web-based release management capabilities of VSTS and TFS. The functionality has been in VSTS for a little while now but only came to TFS with Update 2 of TFS 2015 which was released at the end of March 2016.

Don't tell my wife but I'm having a torrid love affair with the new TFS / VSTS Release Management. It's flippin' brilliant! Compared to the previous WPF desktop client it's a breath of fresh air: easy to understand, quick to set up and a joy to use. Sure there are some improvements that could be made (and these will come in time) but for the moment, for a relatively new product, I'm finding the experience extremely agreeable. So let's crack on!

Setting the Scene

The previous posts in this series set the scene for this post but I'll briefly summarise here. We'll be deploying the Contoso University sample application which consists of an ASP.NET MVC website and a SQL Server database which I've converted to a SQL Server Database Project so deployment is by DACPAC. We'll be deploying to three environments (DAT, DQA and PRD) as I explain here and not only will we be deploying the application we'll first be making sure the environments are correctly configured with PowerShell DSC using an adaptation of the procedure I describe here.

My demo environment in Azure is configured as a Windows domain and includes an instance of TFS 2015 Update 2 which I'll be using for this post as it's the lowest common denominator, although I will point out any VSTS specifics where needed. We'll be deploying to newly minted Windows Server 2012 R2 VMs which have been joined to the domain, configured with WMF 5.0 and had their domain firewall turned off -- see here for details. (Note that if you are using versions of Windows Server earlier than 2012 that don't have remote management turned on you have a bit of extra work to do.) My TFS instance is hosting the build agent and as such the agent can ‘see' all the machines in the domain. I'm using Integrated Security to allow the website to talk to the database, and I use three different domain accounts (CU-DAT, CU-DQA and CU-PRD) to illustrate passing different credentials to different environments. I assume you have these set up in advance.

As far as development tools are concerned I'm using Visual Studio 2015 Update 2 with PowerShell Tools installed and Git for version control within a TFS / VSTS team project. It goes without saying that for each release I'm building the application only once and as part of the build any environment-specific configuration is replaced with tokens. These tokens are replaced with the correct values for that environment as that same tokenised build moves through the deployment pipeline.

Writing Server Configuration Code Alongside Application Code

A key concept I am promoting in this blog post series is that configuring the servers that your application will run on should not be an afterthought and neither should it be a manual click-through-GUI process. Rather, you should be configuring your servers through code and that code should be written at the same time as you write your application code. Furthermore the server configuration code should live with your application code. To start then we need to configure Contoso University for this way of working. If you are following along you can get the starting point code from here.

  1. Open the ContosoUniversity solution in Visual Studio and add new folders called Deploy to the ContosoUniversity.Database and ContosoUniversity.Web projects.
  2. In ContosoUniversity.Database\Deploy create two new files: Database.ps1 and DbDscResources.ps1. (Note that SQL Server Database Projects are a bit fussy about what can be created in Visual Studio so you might need to create these files in Windows Explorer and add them in as new items.)
  3. Database.ps1 should contain the following code:
  4. DbDscResources.ps1 should contain the following code:
  5. In ContosoUniversity.Web\Deploy create two new files: Website.ps1 and WebDscResources.ps1.
  6. Website.ps1 should contain the following code:
  7. WebDscResources.ps1 should contain the following code:
  8. In ContosoUniversity.Database\Scripts move Create login and database user.sql to the Deploy folder and remove the Scripts folder.
  9. Make sure all these files have their Copy to Output Directory property set to Copy always. For the files in ContosoUniversity.Database\Deploy the Build Action property should be set to None.

The Database.ps1 and Website.ps1 scripts contain the PowerShell DSC to both configure servers for either IIS or SQL Server and then to deploy the actual component. See my Server Configuration as Code with PowerShell DSC post for more details. (At the risk of jumping ahead to the deployment part of this post, the bits to be deployed are copied to temp folders on the target nodes -- hence references in the scripts to C:\temp\$whatever$.)

In the case of the database component I'm using the xDatabase custom DSC resource to deploy the DACPAC. I came across a problem with this resource where it wouldn't install the DACPAC using domain credentials, despite the credentials having the correct permissions in SQL Server. I ended up having to install SQL Server using Mixed Mode authentication and installing the DACPAC using the sa login. I know, I know!

My preferred technique for deploying website files is plain xcopy. For me the requirement is to clear the old files down and replace them with the new ones. After some experimentation I ended up with code to stop IIS, remove the web folder, copy the new web folder from its temp location and then restart IIS.

Both the database and website have files with configuration tokens that needed replacing as part of the deployment. I'm using the xReleaseManagement custom DSC resource which takes a hash table of tokens (in the __TOKEN_NAME__ format) to replace.
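The resource itself is PowerShell DSC, but the underlying idea is easy to illustrate. The following few lines of C# are purely illustrative (the file path and token values are examples, not the resource's implementation) and show the kind of transformation that happens at deployment time:

    using System.Collections.Generic;
    using System.IO;

    class TokenReplacementIllustration
    {
        static void Main()
        {
            // Values supplied per environment at deployment time (example token and value only).
            var tokens = new Dictionary<string, string>
            {
                { "__TOKEN_NAME__", "value-for-this-environment" }
            };

            // The tokenised file produced by the build (example path).
            var path = @"C:\temp\Website\Web.config";

            var contents = File.ReadAllText(path);
            foreach (var token in tokens)
            {
                // Swap each __TOKEN_NAME__ placeholder for its environment-specific value.
                contents = contents.Replace(token.Key, token.Value);
            }
            File.WriteAllText(path, contents);
        }
    }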

In order to use custom resources on target nodes the custom resources need to be in place before attempting to run a configuration. I had hoped to use a push server technique for this but it was not to be since for this post at least I'm running the DSC configurations on the actual target nodes and the push server technique only works if the MOF files are created on a staging machine that has the custom resources installed. Instead I'm copying the custom resources to the target nodes just prior to running the DSC configurations and this is the purpose of the DbDscResources.ps1 and WebDscResources.ps1 files. The custom resources live on a UNC that is available to target nodes and get there by simply copying them from a machine where they have been installed (C:\Program Files\WindowsPowerShell\Modules is the location) to the UNC.

Create a Release Build

With the Visual Studio solution now configured (don't forget to commit the changes) we need to create a build to check that initial code quality checks have passed and if so to publish the database and website components ready for deployment. Create a new build definition called ContosoUniversity.Rel and follow this post to configure the basics and this post to create a task to run unit tests. Note that for the Visual Studio Build task the MSBuild Arguments setting is /p:OutDir=$(build.stagingDirectory) /p:UseWPP_CopyWebApplication=True /p:PipelineDependsOnBuild=False /p:RunCodeAnalysis=True. This gives us a _PublishedWebsites\ContosoUniversity.Web folder (that contains all the web files that need to be deployed) and also runs the transformation to tokenise Web.config. Additionally, since we are outputting to $(build.stagingDirectory) the Test Assembly setting of the Visual Studio Test task needs to be $(build.stagingDirectory)\**\*UnitTests*.dll;-:**\obj\**. At some point we'll want to version our assemblies but I'll return to that in another post.

One important step that has changed since my earlier posts is that the Restore NuGet Packages option in the Visual Studio Build task has been deprecated. The new way of doing this is to add a NuGet Installer task as the very first item and then in the Visual Studio Build task (in the Advanced section in VSTS) uncheck Restore NuGet Packages.

To publish the database and website as components -- or Artifacts (I'm using the TFS spelling) as they are known -- we use the Copy and Publish Build Artifacts tasks. The database task should be configured as follows:

  • Copy Root = $(build.stagingDirectory)
  • Contents =
    • ContosoUniversity.Database.d*
    • Deploy\Database.ps1
    • Deploy\DbDscResources.ps1
    • Deploy\Create login and database user.sql
  • Artifact Name = Database
  • Artifact Type = Server

Note that the Contents setting can take multiple entries on separate lines and we use this to be explicit about what the database artifact should contain. The website task should be configured as follows:

  • Copy Root = $(build.stagingDirectory)\_PublishedWebsites
  • Contents = **\*
  • Artifact Name = Website
  • Artifact Type = Server

Because we are specifying a published folder of website files that already has the Deploy folder present there's no need to be explicit about our requirements. With all this done the build should look similar to this:

web-portal-contosouniversity-rel-build

In order to test the successful creation of the artifacts, queue a build and then -- assuming the build was successful -- navigate to the build and click on the Artifacts link. You should see the Database and Website artifact folders and you can examine the contents using the Explore link:

web-portal-contosouniversity-rel-build-artifacts

Create a Basic Release

With the artifacts created we can now turn our attention to creating a basic release to get them copied on to a target node and then perform a deployment. Switch to the Release hub in the web portal and use the green cross icon to create a new release definition. The Deployment Templates window is presented and you should choose to start with an Empty template. There are four immediate actions to complete:

  1. Provide a Definition name -- ContosoUniversity for example.
  2. Change the name of the environment that has been added to DAT.
    web-portal-contosouniversity-release-definition-initial-tasks
  3. Click on Link to a build definition to link the release to the ContosoUniversity.Rel build definition.
    web-portal-contosouniversity-release-definition-link-to-build-definition
  4. Save the definition.

Next up we need to add two Windows Machine File Copy tasks to copy each artifact to one node called PRM-DAT-AIO. (As a reminder the DAT environment as I define it is just one server which hosts both the website and the database and where automated testing takes place.) Although it's possible to use just one task here the result of selecting artifacts differs according to the selected node in the artifact tree. At the node root, folders are created for each artifact but go one node lower and they aren't. I want a procedure that works for all environments which is as follows:

  1. Click on Add tasks to bring up the Add Tasks window. Use the Deploy link to filter the list of tasks and Add two Windows Machine File Copy tasks:
    web-portal-contosouniversity-release-definition-add-task
  2. Configure the properties of the tasks as follows:
    1. Edit the names (use the pencil icon) to read Copy Database files and Copy Website files respectively.
    2. Source = $(System.DefaultWorkingDirectory)/ContosoUniversity.Rel/Database or $(System.DefaultWorkingDirectory)/ContosoUniversity.Rel/Website accordingly (use the ellipsis to select)
    3. Machines = PRM-DAT-AIO.prm.local
    4. Admin login = Supply a domain account login that has admin privileges for PRM-DAT-AIO.prm.local
    5. Password = Password for the above domain account
    6. Destination folder = C:\temp\Database or C:\temp\Website accordingly
    7. Advanced Options > Clean Target = checked
  3. Click the ellipsis in the DAT environment and choose Deployment conditions.
    web-portal-contosouniversity-release-definition-environment-settings-deployment-conditions
  4. Change the Trigger to After release creation and click OK to accept.
  5. Save the changes and trigger a release using the green cross next to Release. You'll be prompted to select a build as part of the process:
    web-portal-contosouniversity-release-definition-environment-create-release
  6. If the release succeeds a C:\temp folder containing the artifact folders will have been created on PRM-DAT-AIO.
  7. If the release fails switch to the Logs tab to troubleshoot. Permissions and whether the firewall has been configured to allow WinRM are the likely culprits. To preserve my sanity I do everything as domain admin and I have the domain firewall turned off. The usual warnings about these not necessarily being best practices in non-test environments apply!

Whilst you are checking the C:\temp folder on the target node have a look inside the artifact folders. They should both contain a Deploy folder that contains the PowerShell scripts that will be executed remotely using the PowerShell on Target Machines task. You'll need two of these tasks, one for each artifact, configured as follows:

  1. Add two PowerShell on Target Machines tasks so that each one follows its corresponding Windows Machine File Copy task.
  2. Edit the names (use the pencil icon) to read Configure Database and Configure Website respectively.
  3. Configure the properties of the tasks as follows:
    1. Machines = PRM-DAT-AIO.prm.local
    2. Admin login = Supply a domain account that has admin privileges for PRM-DAT-AIO.prm.local
    3. Password = Password for the above domain account
    4. Protocol = HTTP
    5. Deployment > PowerShell Script = C:\temp\Database\Deploy\Database.ps1 or C:\temp\Website\Deploy\Website.ps1 accordingly
    6. Deployment > Initialization Script = C:\temp\Database\Deploy\DbDscResources.ps1 or C:\temp\Website\Deploy\WebDscResources.ps1 accordingly
  4. With reference to the parameters required by C:\temp\Database\Deploy\Database.ps1 configure Deployment > Script Arguments for the Database task as follows:
    1. $domainSqlServerSetupLogin = Supply a domain login that has privileges to install SQL Server on PRM-DAT-AIO.prm.local
    2. $domainSqlServerSetupPassword = Password for the above domain login
    3. $sqlServerSaPassword = Password you want to use for the SQL Server sa account
    4. $domainUserForIntegratedSecurityLogin = Supply a domain login to use for integrated security (PRM\CU-DAT in my case for the DAT environment)
    5. The finished result will be similar to: ‘PRM\Graham' ‘YourSecurePassword' ‘YourSecurePassword' ‘PRM\CU-DAT'
  5. With reference to the parameters required by C:\temp\Website\Deploy\Website.ps1 configure Deployment > Script Arguments for the Website task as follows:
    1. $domainUserForIntegratedSecurityLogin = Supply a domain login to use for integrated security (PRM\CU-DAT in my case for the DAT environment)
    2. $domainUserForIntegratedSecurityPassword = Password for the above domain account
    3. $sqlServerName = machine name for the SQL Server instance (PRM-DAT-AIO in my case for the DAT environment)
    4. The finished result will be similar to: ‘PRM\CU-DAT' ‘YourSecurePassword' ‘PRM-DAT-AIO'

At this point you should be able to save everything and the release should look similar to this:

web-portal-contosouniversity-release-definition-environment-create-release-all-tasks-added

Go ahead and trigger a new release. This should result in the PowerShell scripts being executed on the target node and IIS and SQL Server being installed, as well as the Contoso University application. You should be able to browse the application at http://prm-dat-aio. Result!

Variable Quality

Although we now have a working release for the DAT environment it will hopefully be obvious that there are serious shortcomings with the way we've configured the release. Passwords in plain view is one issue and repeated values is another. The latter issue is doubly concerning when we start creating further environments.

The answer to this problem is to create custom variables at both a ‘release' level and at the ‘environment' level. Pretty much every text box seems to take a variable so you can really go to town here. It's also possible to create compound values based on multiple variables -- I used this to separate the location of the C:\temp folder from the rest of the script location details. It's worth having a bit of a think about your variable names in advance of using them because if you change your mind you'll need to edit every place they were used. In particular, if you edit the declaration of secret variables you will need to click the padlock to clear the value and re-enter it. This tripped me up until I added Write-Verbose statements to output the parameters in my DSC scripts and realised that passwords were not being passed through (they are asterisked so there is no security concern). (You do get the scriptArguments as output to the console but I find having them each on a separate line easier.)

Release-level variables are created in the Configuration section and if they are passwords can be secured as secrets by clicking the padlock icon. The release-level variables I created are as follows:

web-portal-contosouniversity-release-definition-release-variables

Environment-level variables are created by clicking the ellipsis in the environment and choosing Configure Variables. I created the following:

web-portal-contosouniversity-release-definition-environment-variables

The variables can then be used to reconfigure the release as per this screen shot which shows the PowerShell on Target Machines Configure Database task:

web-portal-contosouniversity-release-definition-tasks-using-variables

The other tasks are obviously configured in a similar way, and notice how some fields use more than one variable. Nothing has actually changed by replacing hard-coded values with variables, so triggering another release should be successful.

Environments Matter

With a successful deployment to the DAT environment we can now turn our attention to the other stages of the deployment pipeline -- DQA and PRD. The good news here is that all the work we did for DAT can be easily cloned for DQA which can then be cloned for PRD. Here's the procedure for DQA which, don't forget, is a two-node deployment:

  1. In the Configuration section create two new release-level variables:
    1. TargetNode-DQA-SQL = PRM-DQA-SQL.prm.local
    2. TargetNode-DQA-IIS = PRM-DQA-IIS.prm.local
  2. In the DAT environment click on the ellipsis and select Clone environment and name it DQA.
  3. Change the two database tasks so the Machines property is $(TargetNode-DQA-SQL).
  4. Change the two website tasks so the Machines property is $(TargetNode-DQA-IIS).
  5. In the DQA environment click on the ellipsis and select Configure variables and make the following edits:
    1. Change DomainUserForIntegratedSecurityLogin to PRM\CU-DQA
    2. Click on the padlock icon for the DomainUserForIntegratedSecurityPassword variable to clear it then re-enter the password and click the padlock icon again to make it a secret. Don't miss this!
    3. Change SqlServerName to PRM-DQA-SQL
  6. In the DQA environment click on the ellipsis and select Deployment conditions and set Trigger to No automated deployment.

With everything saved and assuming the PRM-DQA-SQL and PRM-DQA-IIS nodes are running the release can now be triggered. Assuming the deployment to DAT was successful the release will wait for DQA to be manually deployed (almost certainly what is required as manual testing could be going on here):

web-portal-contosouniversity-release-definition-manual-deploy-of-DQA

To keep things simple I didn't assign any approvals for this release (ie they were all automatic) but do bear in mind there is some rich and flexible functionality available around this. If all is well you should be able to browse Contoso University on http://prm-dqa-iis. I won't describe cloning DQA to create PRD as it's very similar to the process above. Just don't forget to re-enter cloned password values! Do note that in the Environment Variables view of the Configuration section you can view and edit (but not create) the environment-level variables for all environments:

web-portal-contosouniversity-release-definition-all-environment-variables

This is a great way to check that variables are the correct values for the different environments.

And Finally...

There's plenty more functionality in Release Management that I haven't described but that's as far as I'm going in this post. One message I do want to get across is that the procedure I describe in this post is not meant to be a statement on the definitive way of using Release Management. Rather, it's designed to show what's possible and to get you thinking about your own situation and some of the factors that you might need to consider. As just one example, if you only have one application then the Visual Studio solution for the application is probably fine for the DSC code that installs IIS and SQL Server. However if you have multiple similar applications then almost certainly you don't want all that code repeated in every solution. Moving this code to the point at which the nodes are created could be an option here -- or perhaps there is a better way!

That's it for the moment but rest assured there's lots more to be covered in this series. If you want the final code that accompanies this post I've created a release here on my GitHub site.

Cheers -- Graham