Upgrading a TFS 2013 Continuous Delivery Platform to TFS 2015

Posted by Graham Smith on August 25, 2015

In a previous post I described the high-level process for upgrading the continuous delivery platform I've been using for my Continuous Delivery with TFS blog post series to TFS 2013 Update 5. With the release of TFS 2015 on 6 August 2015 it's now time to upgrade this platform to the brand new version. See my previous post for the list of servers that participate in my continuous delivery platform, and as with the upgrade to TFS 2013 Update 5 all servers had been patched with the latest Windows Updates.

Team Foundation Server

There is a wealth of advice for upgrading to TFS 2015 and if you are upgrading a production instance you'll probably want to do some research. A good starting point is the ALM Rangers' TFS Upgrade -- New Elements of TFS 2015 that influence an upgrade. I can also recommend listening to Radio TFS episode 95 -- Out and About with TFS 2015, and don't forget to check out the show links.

Since I was upgrading a non-production instance I dove straight in, mounted the ISO and started the setup routine. After installing the bits the Upgrade Wizard started automatically and, as with previous upgrades, each page of the wizard was straightforward.

tfs-upgrade-wizard

One addition to the wizard is a page to configure the new Team Foundation Build. Although in the fullness of time anyone upgrading will want to move to the new build system it is quite likely that you will need to keep existing XAML builds running as you gradually move over. This is fully supported but you will need to enable this from the Team Foundation Server Administration Console under Additional Tools and Components > XAML Build Configuration. The good news is that settings from the previous version of TFS are remembered. You can read the release notes for TFS 2015 here and the list of known issues here.

Visual Studio

Although TFS 2015 is backwards compatible with some prior versions of Visual Studio many teams are likely to want to move to Visual Studio 2015 at the same time as moving to TFS 2015. Visual Studio 2015 installs side-by-side with previous versions and whilst the installation process is straightforward there are some other things to consider since the installation routine now offers third party products as well as the ability to select just the Microsoft components you require. I chose to keep things light but did take the opportunity to install PowerShell and Git-based components as they are areas I want to explore for future blogs.

visual-studio-enterprise-install

As a test of Visual Studio 2015 I opened up my Contoso University sample application (previously created in Visual Studio 2013) and the only issue was with the SQL Server Database Project whose connection string was referring to a previous version of LocalDB. This was a quick fix as per Bill Wagner's post -- don't forget to update Web.config as well if this affects you. Whilst Contoso University worked fine locally, running an existing XAML build on the new TFS 2015 instance wasn't quite as successful and partially failed with TF900547: The directory containing the assemblies for the Visual Studio Test Runner is not valid ‘C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow'. I'd pretty much expected something like this since I hadn't installed Visual Studio 2015 yet on my all-in-one TFS machine which hosts the build agent, and performing the install (Microsoft components only) fixed this problem. (As an aside, I always make sure Visual Studio instances on a build agent server are registered with a licence key to avoid trial period expiry problems.) You can read the release notes for Visual Studio 2015 here and the list of known issues here.

Release Management

As discussed in my previous post there was no Update 5 to Release Management 2013 and instead Release Management 2015 was released and is backwards compatible with TFS 2013. Since I'd already upgraded to Release Management 2015 there was nothing much for me to do except test that the build definition that started my release pipeline still worked -- which it did! However, the automated web tests part of the pipeline (based on the MTM Automated Tests Manager tool) was still running against 2013 Agents and Microsoft Test Manager 2013 and naturally I wanted to get these components upgraded.

Microsoft Test Manager and Agents for Visual Studio

Microsoft Test Manager 2015 (aka Test Professional 2015) is installed as part of Visual Studio 2015 Enterprise however it is also available as a separate installation for testers who don't use Visual Studio or where it is needed to run the bundled tcm.exe. The latter scenario applies to me as I have a Windows 8.1 web client machine that runs automated web tests written using Selenium that require FireFox, Microsoft Test Manager and Agents for Visual Studio (Test Agent component) to be installed. I upgraded this machine (please keep reading as most of what I did was unnecessary) by first uninstalling Microsoft Test Manager 2013 and the Agents for Visual Studio 2013 and then installing and configuring the 2015 replacements. One key difference was that at the time of writing there was no upgrade to the Test Controller component of the Agents for Visual Studio 2015 so the Test Agent was shipping as a standalone exe. However, when I came to configure the Test Agent there was no Test Agent Configuration Tool for Agents for Visual Studio 2015. No matter since I knew it would be possible to partially configure the agent from Microsoft Test Manager 2015 (Lab Center > Environments and then choose to repair). However, all this did was install Agents for Visual Studio 2013. (Note that when it does this the service will run under Network Service. If you want to use a domain account you need to use the Test Agent Configuration Tool on the machine where the test agent is installed.) So a lot of work for nothing and it looks like we'll need to wait for a new Test Controller before the Test Agent can be upgraded.

Last Piece of the Puzzle

I wasn't done yet though, since having upgraded Microsoft Test Manager on the web client it turned out that the PowerShell script that forms part of the MTM Automated Tests Manager tool in Release Management 2015 hasn't been updated to work with the Visual Studio 2015 installation path. The proper way to fix this would be to create a new tool in Release Management (since the built-in tools can't be edited) with an updated script that includes a reference to the VS140COMNTOOLS system environment variable, which allows the script to locate tcm.exe. Since I'm due to retire my current continuous delivery infrastructure in the near future (as I prepare for a new blog post series based on TFS 2015 Update 1, which should contain the new web-based Release Management) I opted for the quick and dirty trick of pointing the VS120COMNTOOLS system environment variable to C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\. Probably not something for a production environment but good enough to get my automated web tests working and showing green. Always a relief...
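If you want to copy this workaround, the system environment variable can be set from an elevated PowerShell prompt -- a quick sketch, assuming the default Visual Studio 2015 installation path:

```powershell
# Quick-and-dirty workaround (not for production): point VS120COMNTOOLS at the
# Visual Studio 2015 tools folder so the unmodified RM tool script can find tcm.exe.
# Requires an elevated prompt because the variable is set at Machine scope.
[Environment]::SetEnvironmentVariable(
    'VS120COMNTOOLS',
    'C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\',
    'Machine')
```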

Cheers -- Graham

Continuous Delivery with VSO: Executing Automated Web Tests with Microsoft Test Manager

Posted by Graham Smith on April 9, 2015

In this fourth post in my series on continuous delivery with VSO we take a look at executing automated web tests with Microsoft Test Manager. There are quite a few moving parts involved in getting all this working so it's worth me explaining the overall aim before diving in with the specifics.

Overview

The tests we want to run are automated web tests written using the Selenium framework. I first wrote these tests for my Continuous Delivery with TFS blog post series and you can read about how to create the tests here and how to run them using MTM and TFS here. The goal in this post is to run these tests using MTM and VSO, triggered as part of the DAT stage of the pipeline from RM. The tests are run from a client workstation that is configured with MTM (a requirement at the time of writing) and the Microsoft Test Agent. I've used Selenium's Firefox driver in the test code so Firefox is also required on the client machine.

In terms of what actually happens, RM first copies the complete build over to the client workstation and then executes a PowerShell script that runs TCM.exe, a command-line utility that lets you run tests that are part of a test plan. Precisely what happens next is under-the-bonnet stuff, but it's along these lines: the test controller is informed that there is work to be done, and it in turn informs the test agent on the client machine that it needs to run tests. The test agent knows from the test plan which tests to run and in which DLL they live, and has access to the DLLs in the local copy of the build folder. Each test first starts Firefox and then connects to the web server running the deployed Contoso University and performs the automation specified in the test.

In many ways the process of getting all this to work with VSO rather than TFS is very similar and because of that I don't go in to every detail in this post and instead refer back to my TFS blog post.

Configure a Test Controller

VSO doesn't offer a test controller facility so you'll need to configure this yourself. If you have a test controller already in use then it's simplicity itself to repurpose it to point to your VSO account using the Browse button. If you are starting from scratch see here for the details but obviously ensure you connect to VSO rather than TFS. One other difference is that in order to get past some permissions problems I found it necessary to specify credentials for the lab service account -- I used the same as the service logon account.

Although I started off by repurposing an existing controller, because of permissions problems I ended up creating a dedicated build and test server as I wanted to start with a clean sheet. One thing I found was that the Visual Studio Test Controller service wouldn't automatically start after booting the OS from the Stopped (deallocated) state. The application error log was clearly reporting that the test controller wasn't able to connect to VSO. Manually starting the service was fine so presumably there was some sort of timing issue with other OS components not being ready.
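If you hit the same issue, one low-tech mitigation is simply to restart the service once the machine has finished booting -- something like this, assuming the service display name below matches your installation:

```powershell
# Restart the test controller service after boot (the display name is an assumption;
# check services.msc on your build/test server for the exact name).
Get-Service -DisplayName 'Visual Studio Test Controller' | Restart-Service
```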

Configure Microsoft Test Manager

If MTM isn't already installed on your development workstation then that's the first step. The second step is to connect MTM to your VSO account. I already had MTM installed and when I went to connect it to VSO the website was already listed. If that's not the case you can use the Add server link from the Connect to Your Team Project dialog. Navigating down to your Team Project (ContosoUniversity) enables the Connect now link which then takes you to a screen that allows you to choose between Testing Center and Lab Center. Choose the latter and then configure Lab Center as per the instructions here.

Continue following these instructions to configure Testing Center with a new test plan and test cases. Note that you need to have the Contoso University solution open in order to associate the actual tests with the test cases. You'll also need to ensure that when deployed the tests navigate to the correct URL. In the Contoso University demo application this is hard-coded and you need to make the change in Driver.cs located in the ContosoUniversity.Web.SeFramework project.

Configure a Web Client Test Machine

The client test machine needs to be created in the cloud service that was created for DAT and joined to the domain if you are using one. The required configuration is very similar to that required for TFS as described here, with the exception that the Release Management Deployment Agent isn't required and nor is the RMDEPLOYER account. Getting permissions correctly configured on this machine proved critical and I eventually realised that the Windows account that the tests will run under needs to be configured so that MTM can successfully connect to VSO with the appropriate credentials. To be clear, these are not the test account credentials themselves but rather the normal credentials you use to connect to VSO. To configure all this, once the test account has been added to the Local Administrators group and MTM has been installed and the licence key applied, you will need to log on to Windows as the test account and start MTM. Connect to VSO and supply your VSO credentials in the same way as you did for your development workstation and verify that you can navigate down to the Contoso University team project and open the test plan that was created in the previous section.

Initially I also battled with getting the test agent to register correctly with the test controller. I eventually uninstalled the test agent (which I had installed manually) and let the test controller perform the install followed by the configuration. Whether that was the real solution to the problem I don't know but it got things working for me.

Executing TCM.exe with PowerShell

As mentioned above the code that starts the tests is a PowerShell script that executes TCM.exe. As a starting point I used the script that Microsoft developed for agent-based release templates but had to modify it to make it work with RM-VSO. In particular changes were made to accommodate the way variables are passed in to the script (some implicit such as $TfsUrl or $TeamProject and some explicit such as $PlanId or $SuiteId) and to remove the optional build definition and build number parameters, which are not available to the vNext pipeline and caused errors when specified on the TCM.exe command line. The modified script (TcmExecvNext.ps1) and the original Microsoft script for comparison (TcmExec.ps1) are available in a zip here, and TcmExecvNext.ps1 should be copied to the Deploy folder in your source control root. One point to note is that for agent-based pipelines the TFS Collection URL is passed in as $TfsUrlWithCollection whereas in vNext pipelines it is passed in as $TfsUrl.
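I won't reproduce the full script here, but the core of a TcmExecvNext.ps1-style script boils down to locating tcm.exe and calling tcm run /create. A minimal sketch -- the real script also validates its parameters and waits for the test run to finish, and the way tcm.exe is located below is just one option:

```powershell
# Minimal sketch only -- the real script also validates parameters and waits for
# the test run to complete before reporting success or failure.
param(
    [string]$TfsUrl,           # passed in by RM-VSO, e.g. https://myaccount.visualstudio.com
    [string]$TeamProject,      # passed in by RM-VSO, e.g. ContosoUniversity
    [int]$PlanId,
    [int]$SuiteId,
    [int]$ConfigId,
    [string]$BuildDirectory,   # UNC path to the copied build, e.g. \\almclientwin81b\DtlDownloads\DropFolder
    [string]$TestEnvironment,
    [string]$Title
)

# Locate tcm.exe relative to the Visual Studio common tools folder.
$tcmExe = Join-Path $env:VS120COMNTOOLS '..\IDE\tcm.exe'

# Create and start a test run for the given plan, suite and configuration.
& $tcmExe run /create /title:"$Title" `
    /planid:$PlanId /suiteid:$SuiteId /configid:$ConfigId `
    /collection:"$TfsUrl" /teamproject:"$TeamProject" `
    /builddir:"$BuildDirectory" /testenvironment:"$TestEnvironment"
```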

Configure Release Management

Because we are using RM-VSO this part of the configuration is completely different from the instructions for RM-TFS. However before starting any new configuration you'll need to make a change to the component we created in the previous post. This is because TCM.exe doesn't seem to like accepting the name of a build folder if it has a space in it. Some more fiddling with PowerShell might have found a solution but I eventually changed the component's name from Drop Folder to DropFolder. Note that you'll need to visit the existing action and reselect the newly named component. Another issue which cropped up is that TCM.exe choked when the build directory parameter was supplied with a local file path. The answer was to create a share at C:\Windows\DtlDownloads\DropFolder and configure it with appropriate permissions.

The new configuration procedure for RM-VSO is as follows:

  1. From Configure Paths > Environments link the web client test machine to the DAT environment.
  2. From Configure Apps > vNext Release Templates open Contoso University\DAT>DQA.
  3. From the Toolbox drag a Deploy Using PS/DSC action to the deployment sequence to follow Deploy Web and Database and rename the action Run Automated Web Tests.
  4. Open up the properties of Run Automated Web Tests and set the Configuration Variables as follows:
    1. ServerName = choose the name of the web client test machine from the dropdown.
    2. UserName = this is the test domain account (ALM\TFSTEST in my case) that was configured for the web client test machine.
    3. Password = password for the UserName
    4. ComponentName = choose DropFolder from the dropdown.
    5. PSScriptPath = Deploy\TcmExecvNext.ps1
    6. SkipCaCheck = true
  5. Still in the properties of Run Automated Web Tests, set the Custom configuration as follows:
    1. PlanId = 8 (or whatever your Plan ID is as it is likely to be different)
    2. SuiteId = 10 (or whatever your Suite ID is as it is likely to be different)
    3. ConfigId = 1 (or whatever your Configuration ID is as it is likely to be different)
    4. BuildDirectory = \\almclientwin81b\DtlDownloads\DropFolder (your machine name may be different)
    5. TestEnvironment = ALMCLIENTWIN81B (yours may be different)
    6. Title = Automated Web Tests

Bearing in mind that the Deploy Using PS/DSC action doesn't allow itself to be resized to show all configuration values the result should look something like this:

release-management-run-automated-tests

Start a Build

From Visual Studio manually queue a new build from your build definition. If everything is in place the build should succeed and you can open Microsoft Test Manager to check the results. Navigate to Testing Center > Test > Analyze Test Runs. You should see your test run listed and double-clicking it will hopefully show the happy sight of passing tests:

microsoft-test-manager-tests-passed-vso

Testing Times

As I noted in the TFS version of this post there are a lot of moving parts to get configured and working in order to be able to trigger tests to run from RM. Making all this work with VSO took many hours working through all the details and battling with permissions problems and myriad other things that didn't work in the way I was expecting them to. Hopefully I've captured all the details you need to try this in your own environment. If you do encounter difficulties please post in the comments and I'll do what I can to help.

Cheers -- Graham

Continuous Delivery with VSO: Application Deployment with Release Management

Posted by Graham Smith on March 30, 2015

In the previous post in my blog series on implementing continuous delivery with VSO we got as far as configuring Release Management with a release path. In this post we cover the application deployment stage where we'll create the items to actually deploy the Contoso University application. In order to achieve this we'll need to create a component which will orchestrate copying the build to a temporary location on target nodes and then we'll need to create PowerShell scripts to actually install the web files to their proper place on disk and run the DACPAC to deploy any database changes. Note that although RM supports PowerShell DSC I'm not using it here and instead I'm using plain PowerShell. Why is that? It's because for what we're doing here -- just deploying components -- it feels like an unnecessary complication. Just because you can doesn't mean you should...

Sort out Build

The first thing you are going to want to sort out is build. VSO comes with 60 minutes of bundled build which disappears in no time. You can pay for more by linking your VSO account to an Azure subscription that has billing activated or the alternative is to use your own build server. This second option turns out to be ridiculously easy and Anthony Borton has a great post on how to do this starting from scratch here. However if you already have a build server configured it's a moment's work to reconfigure it for VSO. From Team Foundation Server Administration Console choose the Build Configuration node and select the Properties of the build controller. Stop the service and then use the familiar dialogs to connect to your VSO URL. Configure a new controller and agent and that's it!

Deploying PowerShell Scripts

The next piece of the jigsaw is how to get the PowerShell scripts you will write to the nodes where they should run. Several possibilities present themselves amongst which is embedding the scripts in your Visual Studio projects. From a reusability perspective this doesn't feel quite right somehow and instead I've adopted and reproduced the technique described by Colin Dembovsky here with his kind permission. You can implement this as follows:

  1. Create folders called Build and Deploy in the root of your version control for ContosoUniversity and check them in.
  2. Create a PowerShell script in the Build folder called CopyDeployFiles.ps1 and add the code that copies the Deploy folder into the build output (a sketch of this appears below).
  3. Check CopyDeployFiles.ps1 in to source control.
  4. Modify the process template of the build definition created in a previous post as follows:

2.Build > 5. Advanced > Post-build script arguments = -pathToCopy Deploy
2.Build > 5. Advanced > Post-build script path = Build/CopyDeployFiles.ps1

To explain, Post-build script path specifies that the CopyDeployFiles.ps1 script created above should be run, and Post-build script arguments feeds in the -pathToCopy argument, which is the Deploy folder we created above. The net effect of all this is that the Deploy folder and its contents get included as part of the build.
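The copy script itself is Colin Dembovsky's, so do grab the original from his post. As a rough sketch of what it boils down to -- the TF_BUILD environment variables are the standard ones available to TFS 2013 post-build scripts, but treat the rest as illustrative rather than a faithful copy:

```powershell
# Rough sketch only -- use Colin Dembovsky's original script for the real thing.
# TF_BUILD_SOURCESDIRECTORY and TF_BUILD_BINARIESDIRECTORY are the standard
# environment variables available to TFS 2013 post-build scripts.
param(
    [string]$pathToCopy   # folder relative to the sources directory, e.g. 'Deploy'
)

$source = Join-Path $env:TF_BUILD_SOURCESDIRECTORY $pathToCopy
$destination = Join-Path $env:TF_BUILD_BINARIESDIRECTORY $pathToCopy

# Copy the folder and its contents so they end up in the drop alongside the binaries.
Copy-Item -Path $source -Destination $destination -Recurse -Force
```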

Create a Component

In a multi-server world we'd create a component in RM from Configure Apps > Components for each server that we need to deploy to since a component is involved in ensuring that the build is copied to the target node. Each component would then be associated with an appropriately named PowerShell script to do the actual work of installing/copying/running tests or whatever is needed for that node. Because we are hosting IIS and SQL Server on the same machine we only actually need one component. We're getting ahead of ourselves a little but a side effect of this is that we will use only one PowerShell script for several tasks which is a bit ugly. (Okay, we could use two components but that would mean two build copy operations which feels equally ugly.)

With that noted create a component called Drop Folder and add a backslash (\) to Source > Builds with application > Path to package. The net effect of this when the deployment has taken place is the existence of a folder called Drop Folder on the target node with the contents of the original drop folder copied over to the remote folder. As long as we don't need to create configuration variables for the component it can be reused in this basic form. It probably needs a better name though.

Create a vNext Release Template

Navigate to Configure Apps > vNext Release Templates and create a new template called Contoso University\DAT>DQA based on the Contoso University\DAT>DQA release path. You'll need to specify the build definition and check Can Trigger a Release from a Build. We now need to create the workflow on the DAT design surface as follows:

  1. Right-click the Components node of the Toolbox and Add the Drop Folder component.
  2. Expand the Actions node of the Toolbox and drag a Deploy Using PS/DSC action to the Deployment Sequence. Click the pen icon to rename to Deploy Web and Database.
  3. Double click the action and set the Configuration Variables as follows:
    1. ServerName = choose the appropriate server from the dropdown.
    2. UserName = the name of an account that has permissions on the target node. I'm using the RMDEPLOYER domain account that was set up for Deployment Agents to use in agent based deployments.
    3. Password = password for the UserName
    4. ComponentName = choose Drop Folder from the dropdown.
    5. SkipCaCheck = true
  4. The Actions do not display very well so a complete screenshot is not possible but it should look something like this (note SkipCaCheck isn't shown):
    release-management-deploy-using-ps-dsc-action

At this stage we can save the template and trigger a build. If everything is working you should be able to examine the target node and observe a folder called C:\Windows\DtlDownloads\Drop Folder that contains the build.

Deploy the Bits

With the build now existing on the target node the next step is to actually get the web files in place and deploy the database. We'll do this from one PowerShell script called WebAndDatabase.ps1 that you should create in the Deploy folder created above. Every time you edit this and want it to run do make sure you check it in to version control. To actually get it to run we need to edit the Deploy Web and Database action created above. The first step is to add Deploy\WebAndDatabase.ps1 as the parameter to the PSScriptPath configuration variable. We then need to add the following custom configuration variables by clicking on the green plus sign:

  • destinationPath = C:\inetpub\wwwroot\CU-DAT
  • websiteSourcePath = _PublishedWebsites\ContosoUniversity.Web
  • dacpacName = ContosoUniversity.Database.dacpac
  • databaseServer = ALMWEBDB01
  • databaseName = CU-DAT
  • loginOrUser = ALM\CU-DAT

The first section of the script will deploy the web files to C:\inetpub\wwwroot\CU-DAT on the target node, so create this folder if you haven't already. Obviously we could get PowerShell to do this but I'm keeping things simple. I'm using functions in WebAndDatabase.ps1 to keep things neat and tidy and to make debugging a bit easier if I want to only run one function.

The first function is as follows:
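A minimal sketch of what this function might look like -- note that the __DATA_SOURCE__ and __INITIAL_CATALOG__ token names and the use of the RM-supplied $applicationPath variable (the folder the component was copied to) are my assumptions rather than a faithful reproduction:

```powershell
# Sketch only -- token names and the RM-supplied $applicationPath variable are assumptions.
function Deploy-WebSite {
    param(
        [string]$websiteSourcePath,   # e.g. _PublishedWebsites\ContosoUniversity.Web
        [string]$destinationPath,     # e.g. C:\inetpub\wwwroot\CU-DAT
        [string]$databaseServer,
        [string]$databaseName
    )

    Write-Verbose "Clearing $destinationPath" -Verbose
    Remove-Item "$destinationPath\*" -Recurse -Force

    Write-Verbose "Copying web files from $websiteSourcePath" -Verbose
    Copy-Item "$applicationPath\$websiteSourcePath\*" $destinationPath -Recurse -Force

    # Swap the tokens in the copied Web.config, leaving the originals untouched for DQA.
    $webConfig = Join-Path $destinationPath 'Web.config'
    (Get-Content $webConfig) -replace '__DATA_SOURCE__', $databaseServer `
                             -replace '__INITIAL_CATALOG__', $databaseName |
        Set-Content $webConfig
    Write-Verbose "Tokens replaced in $webConfig" -Verbose
}
```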

The code clears out the current set of web files and then copies the new set over. The tokens in Web.config get changed in the copied set so the originals can be used for the DQA stage.  Note how I'm using Write-Verbose statements with the -Verbose switch at the end. This causes the RM Deployment Log grid to display a View Log link in the Command Output column. Very handy for debugging purposes.

The second function deploys the DACPAC:
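Again a sketch rather than the definitive listing -- the SqlPackage.exe path shown is the default SQL Server 2014 location:

```powershell
# Sketch only -- SqlPackage.exe path is the default SQL Server 2014 location.
function Deploy-Dacpac {
    param(
        [string]$dacpacName,      # e.g. ContosoUniversity.Database.dacpac
        [string]$databaseServer,  # e.g. ALMWEBDB01
        [string]$databaseName     # e.g. CU-DAT
    )

    $sqlPackage = 'C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe'
    $dacpac = Join-Path $applicationPath $dacpacName

    Write-Verbose "Publishing $dacpac to $databaseServer\$databaseName" -Verbose
    & $sqlPackage /Action:Publish /SourceFile:"$dacpac" `
        /TargetServerName:"$databaseServer" /TargetDatabaseName:"$databaseName"
}
```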

The code is simply building the command to run sqlpackage.exe -- pretty straightforward. Note that the script is hardcoded to SQL Server 2014 -- more on that below.

The final function deals with the Create login and database user.sql script that lives in the Scripts folder of the ContosoUniversity.Database project. This script ensures that the necessary SQL Server login and database user exists and is tokenised so it can be used in different stages -- see this article for all the details.
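A sketch of the sort of thing involved -- the LOGIN_OR_USER and DB_NAME token names match the component configuration used elsewhere in this series, but the script's location within the drop and the use of sqlcmd are my assumptions:

```powershell
# Sketch only -- swaps the tokens in the SQL script and then runs it with sqlcmd.
function Create-LoginAndUser {
    param(
        [string]$databaseServer,
        [string]$databaseName,
        [string]$loginOrUser      # e.g. ALM\CU-DAT
    )

    # Path within the drop is an assumption.
    $script = Join-Path $applicationPath 'Scripts\Create login and database user.sql'

    # Swap the tokens for the passed-in values.
    (Get-Content $script) -replace '__LOGIN_OR_USER__', $loginOrUser `
                          -replace '__DB_NAME__', $databaseName |
        Set-Content $script

    Write-Verbose "Creating login and database user on $databaseServer" -Verbose
    & sqlcmd -S $databaseServer -d $databaseName -i "$script"
}
```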

The tokens in the SQL script are first swapped for passed-in values and then the code builds a command to run the script. Again, pretty straightforward.

Loose Ends

At this stage you should be able to trigger a build and have all of the components deploy. In order to fully test that everything is working you'll want to create and configure a web application in IIS -- this article has the details.

To create the stated aim of an initial pipeline with both a DAT and DQA stage the final step is to actually configure all of the above for DQA. It's essentially a repeat of DAT so I'm not going to describe it here but do note that you can copy and paste the Deployment Sequence:

release-management-copy-stage

One remaining aspect to cover is the subject of script reusability. With RM-TFS there is an out-of-the-box way to achieve reusability with tools and actions. This isn't available in RM-VSO and instead potential reusability comes via storing scripts outside of the Visual Studio solution. This needs some thought though since the all-in-one script used above (by necessity) only has limited reusability and in a non-demo environment you would want to consider splitting the script and co-ordinating everything from a master script. Some of this would happen anyway if the web and database servers were distinct machines but there is probably more that should be done. For example, tokens that are to be swapped-out are hard-coded in the script above which limits reusability. I've left it like that for readability but this certainly feels like the sort of thing that should be improved upon. In a similar vein the path to sqlpackage.exe is hard coded and thus tied to a specific version of SQL Server and probably needs addressing.

In the next post we'll look at executing automated web tests. Meantime if you have any thoughts on great ways to use PowerShell with RM-VSO please do share in the comments.

Cheers -- Graham

Continuous Delivery with VSO: Configuring Release Management

Posted by Graham Smith on March 15, 2015

In this post in my blog series on continuous delivery with VSO we look at configuring Release Management for Visual Studio. RM is part of the TFS ecosystem and is used to deploy our code to the different environments that constitute the delivery pipeline. It was originally built to work with TFS however the 2013.4 version released in November 2014 now works with VSO. Inevitably of course I'm going to be comparing how RM with VSO stacks up against RM with TFS.

Setting the Scene

From now on in this series of blog posts I'm going to assume that you are working in Azure and have a setup that resembles the one I created for my Continuous Delivery with TFS series of posts. If you are starting from scratch and need to catch up then these are the posts that can help:

One of the big advantages of RM-VSO is that there is no need to run a TFS instance. Additionally there is no need to run an RM server instance or Deployment Agents on target nodes since this is all taken care of, either behind the scenes in the case of the RM server or by using a different technique in the case of deploying to target nodes. Whilst the RM-VSO offering reduces the number of moving parts (which is good) it also imposes restrictions. As an example, RM-TFS allows us to reuse deployment VMs in different environments. In contrast RM-VSO doesn't allow this and consequently a multi-tenant model (eg one IIS machine hosting multiple websites) isn't possible, at least not without a substantial amount of jiggery-pokery. Does this matter? It depends... For a demo environment fewer VMs is preferable if you need to preserve your Azure credits, but in vivo you probably want separate VMs anyway. There is an easy -- if inelegant -- workaround for those that want to preserve Azure credits and I describe this below.

Configuring Azure to Work with RM

Our initial pipeline will consist of two environments: DAT (Development Automated Test) and DQA (Development Quality Assurance). Our Contoso University sample application has a web component and a database component so we'll need the services of IIS and SQL Server. With RM-TFS these can be dedicated web and database VMs that host multiple websites and databases but as mentioned above out of the box this isn't possible with RM-VSO. An additional requirement is a one-to-one mapping between RM-VSO environments and Azure cloud services. To work around all this we'll use VMs that host both IIS and SQL Server. A bit hacky for a demo setup but what to do? The procedure for setting all this up is as follows:

  • In the Azure portal create two new cloud services to host VMs for each RM-VSO environment (if you prefer scripting to the portal, a PowerShell sketch follows this list). I called mine datcloudservice.cloudapp.net and dqacloudservice.cloudapp.net -- you'll need to choose unique names for your services.
  • Now create two new VMs -- one in each cloud service. I called mine ALMWEBDB01 and ALMWEBDB02. The good news is that despite being in different cloud services these servers can be in the same virtual network, affinity group and storage account. This keeps everything neat and tidy and also means the servers can be part of your domain if you have set one up.
  • Both of these servers need to have IIS and SQL Server installed. This is fairly standard stuff so I won't be covering this here. One note of caution is that to preserve Azure credits be sure to install SQL Server from scratch rather than use an image from the gallery with SQL Server pre-installed as the latter technique is much more costly.
  • These servers also need an account adding to the local administrators group that will be used in the deployment process. I used the RMDEPLOYER domain account that was set up for Deployment Agents to use in agent based deployments. In addition RMDEPLOYER will need a login for SQL Server and appropriate permissions. The easy path in a demo environment is to grant sysadmin but clearly that may be unwise in production.

The other VM which is core to all this is your developer workstation running Visual Studio, Release Management and Microsoft Test Manager. See above for the link to getting this machine configured if necessary.

Connect Release Management to VSO

I'm making the assumption here that you already have the RM client connected to TFS and want to connect it to VSO. If you have a new install of RM client the steps will be similar. You'll need to start an already configured RM client with your TFS instance up and running otherwise it just chokes. To switch over from TFS to VSO navigate to Administration > Settings > System Settings and click on the Edit link at the end of the Release management Server URL setting:

release-management-client-change-server-url

In the Configure Services dialog that appears add in the URL of your VSO account, ie https://myaccount.visualstudio.com. You'll probably be prompted to enter credentials after which you'll be prompted to allow the client to restart. When it does you have an instance of the client 're-branded' for VSO, by which I mean there are some changes to the user interface to reflect the difference between the features supported by TFS and VSO. One immediately obvious difference is that there is no place to specify SMTP settings as VSO handles all that.

Connect Release Management to Azure and Configure an Environment

One key difference between VSO and TFS is that VSO can only deploy to Azure VMs. In order to allow this you must configure RM with your Azure subscription:

  • Download a text file containing your Azure subscription settings from here.
  • From Administration > Manage Azure click on New and fill in the Name, Subscription ID and Management Certificate Key from the text file. Pay particular attention if you have more than one Azure subscription. For the Management Certificate Key you want everything between the quotes. Get the appropriate Storage Account Name from here. Consider deleting the Azure subscription settings file when you are finished with it for security purposes.
  • Create DAT and DQA stages from Administration > Manage Pick Lists. See here for my TFS equivalent post.
  • From Configure Paths > Environments click on New vNext: Azure to create a new environment and click Link Azure Environment to bring up the Azure Environments dialog. Select your Azure subscription and then use the Link button to link the DAT cloud service.
  • With the environment created click on Link Azure Servers to link the VM hosted in the DAT cloud service:
    release-management-client-link-azure-servers
  • Note that you can't change the name of the environment -- it is fixed as the name of the cloud service.
  • Now repeat the process for the DQA cloud service, after which you should have two environments:
    release-management-azure-environments

Configure a vNext Release Path

With the environments created we can create a release path. Navigate to Configure Paths > vNext Release Paths and create a new path called Contoso University\DAT>DQA. Add two stages to it (one for DAT and another for DQA) and configure with the respective environments. You will need to add yourself or another user to the approvals workflow as the concept of groups isn't available in RM-VSO. Additionally the DAT workflow should be automated. You should end up with something similar to this:

release-management-vnext-release-path

Again there are differences between the VSO version and the TFS version, since for some reason the toggle email notification icons are missing from the VSO version. Other than that creating a release path with RM-VSO is very similar to RM-TFS.

Until Next Time

That's as far as we are going in this post. Next time we'll configure the actual release template and get to grips with using PowerShell scripts to deploy our components.

Cheers -- Graham

Continuous Delivery with TFS: Behind the Scenes of the RM Deployment Agent

Posted by Graham Smith on March 8, 2015

As with many aspects of technology understanding how something works behind the scenes can be a real boon when it comes to troubleshooting. In this post in my blog series on implementing continuous delivery with TFS we take a look at the Release Management for Visual Studio Deployment Agent, and specifically how it does its thing. Bear in mind that I don't have any inside knowledge about the Deployment Agent and this post is just based on my own experience and observations.

Basic Hook-Up

The first step in eliminating easy errors with the Deployment Agent is to ensure that it is installed correctly and can communicate with the RM Server. The key question is whether your servers are part of a domain. If they are then the easiest way to configure RM is to create a domain account (RMDEPLOYER for example) and add this to the Manage Users section of the RM Client as a Service User. On target nodes add this domain account to the Administrators group and then install the Deployment Agent specifying the domain RMDEPLOYER as the service account. See this post for a bit more detail. If your servers are not part of a domain you will need to use shadow accounts which are simply local accounts that all have the same name and password. The only difference is that you add the different shadow accounts of the different nodes to the Manage Users section of the RM Client as a Service User -- make sure you use the machine name as well as the account name ie the Windows Account field should be $MyNodeName$\RMDEPLOYER.

The test that all is working is to see the servers listed as Ready in the RM Client under Configure Paths > Servers. Something I have observed in my demo environment is that when deployment nodes boot up before my all-in-one TFS machine they don't seem to communicate. When that happens I use a PowerShell script to remotely restart the server (eg Start-AzureVM -ServiceName $cloudservicename -Name $SERVERNAME).
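If you'd rather not bounce the whole VM, an alternative is to restart just the Deployment Agent service remotely -- a sketch, assuming PowerShell remoting is enabled on the node (ALMWEB01 here is just an example node name):

```powershell
# Restart the Deployment Agent service on a node remotely (requires PowerShell remoting;
# the display name below is an assumption -- check services.msc on the node).
Invoke-Command -ComputerName ALMWEB01 -ScriptBlock {
    Get-Service -DisplayName 'Microsoft Deployment Agent' | Restart-Service
}
```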

In a production environment your circumstances could be wildly different from a clean demo setup. I can't cover all the scenarios here but if you are experiencing problems and you have a complicated environment then this post could be a good troubleshooting starting point.

Package Deployment Mechanism

When the agent is installed and running it polls the RM server for packages to deploy on a schedule that can be set in the RM client under Administration > Settings > Deployer Settings > Look for packages to deploy every. On my installation it was set at 28 seconds but if time is critical you may want to shorten that.

When the agent detects that it has a package to deploy or actions to perform it copies the necessary components over to C:\Users\RMDEPLOYER\AppData\Local\Temp\RM\T\RM (where RMDEPLOYER is the name of the account the Deployment Agent is running under which might be different in your setup). There are at least two types of folder that get created:

  • Deployer Tools. This contains any tools and their dependencies that are needed to perform tasks. These could be executables, PowerShell scripts and so on. They are organised in a folder structure that relates to their Id and version number in the RM server database. For example in my database XCopy Deployer (irxcopy.cmd) has Id = 12 Version = 2 in dbo.DeployerTool and is thus deployed to C:\Users\RMDEPLOYER\AppData\Local\Temp\RM\T\RM\DeployerTools\12\2.
  • Action or Component. These folders correspond to the actions that will take place on that node. The names are the same as the names in the Release Management client Deployment Log (from Releases > Releases). A sub folder (whose name includes the date and time and other more mysterious numbers) contains the tool, the files it will deploy or otherwise work with and a file called IR_ProcessAutoOutput.log which is the one displayed when clicking the View Log link in the Deployment Log:
    release-management-deployment-log

Component folders warrant a little bit more analysis. What exactly gets deployed to the timestamped sub-folder is dependent on how the component is configured under Configure Apps > Components, specifically the Build Drop Location. If this is configured simply with a backslash (\) then all of the drop folder is deployed. This can be further refined by specifying a specific folder, in which case the contents of that folder get deployed. For example the Contoso University\Deploy Web Site component specifies \_PublishedWebsites\ContosoUniversity.Web as the Build Drop Location folder which means that just the website files are deployed.

It's perhaps worth noting here that there are two mechanisms for the Deployment Agent to pull in files: UNC or HTTP(S). This is configured on a per-server basis in Configure Paths > Servers > Deployment Agent. UNC is much quicker than HTTP(S) but the latter method is the only choice if your node doesn't have access to the UNC path.

A final aspect to touch on is that over time the node would get choked with old deployments if no action were taken, and to guard against this the Deployment Agent runs a cleanup process on a schedule specified in Administration > Settings > Deployer Settings. This is something to look at if disk space is tight.

Debugging Package Deployment

Having described how package management works -- at least at a high level -- what are the troubleshooting options when a component is failing to deploy or run correctly? These are the logs that are available on a target node:

  • IR_ProcessAutoOutput.log -- saved to the action or component folder as above.
  • DeploymentAgent.log -- cumulative log saved to C:\Users\RMDEPLOYER\AppData\Local\Temp\Microsoft\ReleaseManagement\12.0\Logs.
  • $GUID$DeploymentAgent.log -- instance log saved to C:\Users\RMDEPLOYER\AppData\Local\Temp\Microsoft\ReleaseManagement\12.0\Logs. Not sure of the value of these since I've never seen them contain anything.

If between them these logs don't provide sufficient information for troubleshooting your problem you can increase the level of detail -- this post has the details but don't forget to restart the Microsoft Deployment Agent service. Additionally, if you have SMTP set up and working you will also receive a Deployment Failed! notification email. This can be particularly useful because it invariably contains the command line that failed. This leads on to a useful debugging technique where you rerun the failing command yourself. For example if the command was running a PowerShell script simply open the PowerShell console, switch to the correct folder and paste in the command from the email. Chances are that you will get a much more informative error message this way.

Final Thoughts

I know from personal experience that debugging RM components can be a frustrating experience. Typically it's a daft mistake with a parameter or something similar but sorting this type of problem out can really eat time. Do you have any tips for debugging components? Are there other error logs that I haven't mentioned? Please do share your experiences and findings using the comments.

Cheers -- Graham

Continuous Delivery with TFS: Save to a Folder for Stages You Can’t Yet Deploy to

Posted by Graham Smith on February 16, 2015

In previous posts in this blog series on continuous delivery with TFS our activities have been geared up to deploying the sample application to target servers -- or nodes as they are sometimes referred to. But what happens when for some reason you have an environment that's just not ready to be a target for automated deployment? Maybe the business is just not ready to go that far right now. Maybe there is some technical hurdle you have yet to overcome. On the other hand you have already experienced how easy it is to manage the configuration of your application with Release Management and how it can bring config order where once there was chaos.

If you find yourself in this position a possible interim solution is to use Release Management to create what I call Release Ready Builds™ of your application. These are builds which have the correct configuration for the environment they are destined for but which are saved to a staging disk location rather than being actually deployed to target servers. Deployment of these builds is still likely to be a manual process but that's probably what you are doing already and at least the configuration and any other jiggery-pokery can be taken care of by Release Management.

Create a New Stage

In order to illustrate this technique I'm going to add a new stage to the deployment pipeline called PRD. This will represent the production environment that for whatever reason I'm unable to deploy to using Release Management automation. Carry out the following in Release Management:

  1. Add a stage type called PRD from Administration > Manage Pick Lists > Stage Type.
  2. Create a new Agent-based environment called Contoso University\PRD from Configure Paths > Environments. I linked my web server (ALMWEB01) for convenience but in a non demo context you would probably want to use something more permanent such as a build server. This would of course entail installing the Deployment Agent on that server.
  3. Add the new environment to the Contoso University\DAT>DQA release path from Configure Paths > Agent-based Release paths. The stage should be fully automated with email notifications turned off and this is probably a good time to change the name of the release path to Contoso University\DAT>DQA>PRD.
    release-management-prd-stage
  4. Whilst in the newly renamed Contoso University\DAT>DQA>PRD release path, help to speed up the debugging process by making the DQA stage fully automated, removing any Approvers from the Approval Step and turning off email notifications.
  5. Open the Contoso University\DAT>DQA release template from Configure Apps > Agent-based Release Templates and in Properties change the name to Contoso University\DAT>DQA>PRD.
  6. On the web server (ALMWEB01 in my case) create a folder called C:\ReleaseReadyBuilds.

Configure New Components

With the PRD stage added we now need to create new components that will deploy to a folder structure. For the web application it's not really all that different because we're using XCopy anyway as the deployment method so it's just a case of specifying a new location. The database side of things is more interesting because we need to create scripts rather than run actions against a database. On top of all this we need a mechanism to ensure that each release is placed in a unique folder. To achieve this carry out the following steps in Release Management Configure Apps > Components:

  1. Create a new component called Contoso University\Script DACPAC and configure as follows:
    1. Source > Builds with application (= selected) > Path to package = \
    2.  Deployment:
      1. Tool = DACPAC Database Deployer
      2. Arguments = /Action:Script /SourceFile:"__FileName__"  /TargetServerName:"__ServerName__" /TargetDatabaseName:"__DatabaseName__" /OutputPath:"__OutputPath__"
      3. Parameters > OutputPath > Description = The location for the DACPAC script file
  2. Create a new component called Contoso University\Script Login & User SQL Script and configure as follows:
    1. Source > Builds with application (= selected) > Path to package = \Scripts
    2. Deployment:
      1. Tool = Windows Common IO
      2. Arguments = -File ./ManageWindowsIO.ps1 -Action __Action__ -FileFolderName "__FileFolderName__" -DestinationName "__DestinationName__"
      3. Parameters > DestinationName > Description = Location to copy the file to
    3. Configuration Variables:
      1. Variable Replacement Mode = Before Installation
      2. File Extension Filter = *.sql
      3. Parameter #1 = LOGIN_OR_USER | Standard | Name of login or user to create
      4. Parameter #2 = DB_NAME | Standard | Database to set security for
  3. Create a new component called Contoso University\RRB and configure as follows:
    1. Source > Builds with application (= selected) > Path to package = \
    2. Deployment:
      1. Tool = Windows Common IO
      2. Arguments = -File ./ManageWindowsIO.ps1 -Action __Action__ -FileFolderName "__FileFolderName__" -DestinationName "__DestinationName__"
      3. Parameters > DestinationName > Description = Destination name

Configure the PRD Stage

Next we need to configure the PRD stage. To achieve this carry out the following steps in the Contoso University\DAT>DQA>PRD agent-based release template:

  1. In order to speed up the debugging process visit the DAT and DQA stages and click the top left hand icon of every action or component to toggle the skip state:
    release-management-client-toggle-skip-state
  2. In the PRD stage drag the ALMWEB01 server to the Deployment Sequence.
  3. Drag the Contoso University\Deploy Web Site to ALMWEB01 and set the parameters as follows:
    1. Installation Path  = C:\ReleaseReadyBuilds\PRD\Web
    2. DATA_SOURCE = ALMSQL01
    3. INITIAL_CATALOG = CU-PRD
  4. Navigate to Toolbox > Windows OS and drag a Create Folder action below Contoso University\Deploy Web Site and set the FolderName to C:\ReleaseReadyBuilds\PRD\Database.
  5. Navigate to Toolbox > Components and right-click the Components node. Add Contoso University\Script DACPAC.
  6. Drag the Contoso University\Script DACPAC component below the Create Folder action and set the parameters as follows:
    1. FileName = ContosoUniversity.Database.dacpac
    2. ServerName = ALMSQL01
    3. DatabaseName = CU-PRD
    4. OutputPath = C:\ReleaseReadyBuilds\PRD\Database\DACPAC.sql
  7. Navigate to Toolbox > Components and right-click the Components node. Add Contoso University\Script Login & User SQL Script.
  8. Drag the Contoso University\Script Login & User SQL Script component below the Contoso University\Script DACPAC component and set the parameters as follows:
    1. Action = move
    2. FileFolderName = Create login and database user.sql
    3. DestinationName = C:\ReleaseReadyBuilds\PRD\Database\Create login and database user.sql
    4. LOGIN_OR_USER = ALM\CU-PRD
    5. DB_NAME = CU-PRD
  9. Navigate to Toolbox > Components and right-click the Components node. Add Contoso University\RRB.
  10. Drag the Contoso University\RRB component below the Contoso University\Script Login & User SQL Script component and set the parameters as follows:
    1. Action = create
    2. FileFolderName = C:\ReleaseReadyBuilds\$(BuildNumber)
  11. Drag a second Contoso University\RRB component below the first one and set the parameters as follows:
    1. Action = move
    2. FileFolderName = C:\ReleaseReadyBuilds\PRD
    3. DestinationName = C:\ReleaseReadyBuilds\$(BuildNumber)\PRD

Check the Stage Works

Navigate to Releases > Releases and manually create a release based on the latest build. Because the previous stages are in skip mode the PRD stage should finish almost immediately and you should very quickly be able to verify everything is working. Essentially you should be checking for a folder similar to C:\ReleaseReadyBuilds\ContosoUniversity_Main_Nightly_20150214.8 that contains a PRD folder with Web and Database folders containing our deployable artefacts. The magic that achieves this is the $(BuildNumber) configuration variable, which is one of several that are only available in components. If that's all working then you should revisit the DAT and DQA stages and toggle all the components and actions so that they are not skipped, but leave DQA in automated mode. Now run a couple of complete builds from the nightly build definition in Visual Studio, confirming that each PRD build is placed in its own separate folder. Finally, when that's working you can revisit the DQA stage and return it to its manual status.

By way of finishing off I want to stress that you should be doing everything possible to use the full automation capabilities of Release Management to deploy your application along the whole delivery pipeline. But when you really do have a problem with a particular stage hopefully the technique I've illustrated here will tide you over.

Cheers -- Graham

Continuous Delivery with TFS: Promoting a Release to the DQA Stage

Posted by Graham Smith on February 14, 2015

In a previous post in this series on implementing continuous delivery with TFS we arrived at the point of being able to run automated web tests on a build of the application deployed to the DAT environment. Any build that passes this stage is a candidate to be promoted to the next stage in the pipeline, and in my demo environment this is DQA (development quality assurance). The scenario I had in mind when building the demo environment was that it would be used by a Scrum team delivering features from a backlog of Product Backlog Items. Part of this development effort would include some manual testing, either of features that are impossible or impractical to test automatically ("text is a certain colour" for example), or for features where the effort of writing an automated test wouldn't make sense. In my demo environment the DQA stage is where this manual testing activity would take place.

Typically, not every build that passes the DAT stage will be promoted to the DQA stage as there may be insufficient change to warrant manual inspection. Additionally, if someone is in the middle of running some manual tests in DQA they probably don't want to find that the build they are testing has suddenly been replaced by a new one. Consequently the deployment of builds that do need to be promoted to DQA will need to be triggered manually. This leads on to questions such as who should trigger a deployment to DQA, how can they decide which build to promote if there are choices to be made and which tools should be used? (Note that I'm assuming here that any build that passes the DAT stage is a candidate for promotion to DQA. It's perfectly possible though to build in an approval step at the end of the DAT stage to require someone to explicitly indicate that a build can be promoted to DQA.)

Revisiting the Release Management Approvals Workflow

In order to understand the options that are available for answering the questions I have just posed a good starting point is to revisit the Release Management approvals workflow for the DQA stage by opening Release Management and navigating to Configure Paths > Agent-based Release Paths and opening the Contoso University\DAT>DQA release path. In a previous post we configured the DQA stage so that each stage is manual and all activity is governed by members of the Quality Assurance security group:

release-management-dqa-approvals

I am well aware that The Scrum Guide "recognizes no titles for Development Team members other than Developer" and I'm not going to go into the ins and outs of which team members should be allowed to govern the deployment of releases. Suffice it to say though that Release Management provides plenty of options for whatever scenario you are working with, and each step can be either a named individual or a security group consisting of one or more individuals.

Looking at the DQA stage image in closer detail there are some subtle details (highlighted in red) that are worth mentioning. Firstly, it's possible to toggle email notifications for each step by clicking the email icon, which changes to a brighter image when notifications are turned on. Secondly, it's possible to have multiple Approvers for the Approval Step. Note that this means that all Approvers need to approve before the build can become a candidate for the next stage. (A further stage hasn't been defined in my demo environment so the Approval Step concept doesn't make complete sense here, but you get the idea.)

Managing the Governance of the Release Process

Having looked at the options for setting up who should be allowed to govern the release to DQA process we can now turn to the actual process of managing it. Although the Release Management client allows for the management of the approvals process (Releases > My Approval Requests) there is another tool called Release Explorer (actually a web page) which is dedicated to this purpose. This simplifies things somewhat since you don't need to install the client on lots of PCs and you don't need to have lots of users needing to get to grips with the client user interface. The Release Explorer URL will be something like http://almtfsadmin:1000/ReleaseManagement -- obviously if your server name is different then so will the URL be. The first time you open Release Explorer can be a bit of a shock -- it's ugly and of limited functionality to say the least. The Release Management User Guide has instructions on how to use what functionality is present.

With DQA configured as above and with all email notifications turned on triggering a build of ContosoUniversity_Main_Nightly will result in the following process once the DAT stage is complete:

  1. Anyone in the Acceptance Step > Approver role receives a Deployment Acceptance Required! email.
  2. Release Explorer lists the release as waiting for Acceptance. If you have multiple releases listed and you have a reasonably descriptive release template name chances are that you'll need to hover over the release name to show the date and time as a popup (highlighted in red below):
    release-explorer-waiting-for-acceptance
  3. Still in Release Explorer the release can be selected and clicking the Approve button displays a dialog for confirming the actual deployment to DQA with the ability to add a comment.
  4. At the start of the deployment anyone in the Deployment Step > Owner role receives a Deployment in Progress email.
  5. At the end of the deployment anyone in the Validation Step > Validator role receives a Deployment Validation Required! email.
  6. Release Explorer lists the release as waiting for Validation. A person in the Validator role would presumably navigate to the application and make sure it is working as expected or in a more sophisticated setup start a series of automated validation smoke tests.
  7. Still in Release Explorer the release can be selected and clicking the Approve button displays a dialog for confirming the successful deployment to DQA with the ability to add a comment.
  8. Anyone in the Approval Step > Approver role receives a Deployment Approval Required! email.
  9. Release Explorer lists the release as waiting for Approval. All the people in the Approver role now need to assure themselves by whatever means that, in the words of the email, the "deployed application meets the minimum quality requirements needed to complete this stage".
  10. Still in Release Explorer the release can be selected and clicking the Approve button displays a dialog for confirming the successful release to DQA with the ability to add a comment.
  11. In Release Management under Releases > Releases the release now has the status Released.

The full process can involve a lot of emails and a lot of individual steps. It might be overkill for every stage of your deployment but it's good to know that there is a comprehensive set of options should you need them.

Which Release to Promote to DQA?

Unfortunately the workflow around determining which release to actually promote to DQA is somewhat clunky and I can only hope that this will be improved in a future version of Release Management. Of course if you just want the latest and greatest then it's easy to work that out from Release Explorer as it will be the first release in the list. Another scenario though is where the development team are coding features against PBI and Task work items and fixing bugs against Bug work items, and checking code in against those work items as appropriate. In this case someone who is responsible for manual testing may want to choose a specific release that contains certain features or fixed bugs. Each build report has a list of Associated work items, and either Visual Studio or Team Web Access can be used to inspect build reports, but unfortunately there is nothing equivalent in Release Explorer. Of course each build report only lists what has been associated since the last build, so there may be some effort required to find the build that contains all the desired work items.

Regrettably, once the desired build has been identified in Visual Studio or Team Web Access there doesn't appear to be an exact way to match it with a release listed in Release Explorer, since Release Explorer identifies a release by date and time whereas the build report uses the date and the build number for that day. Of course if only one release can become a candidate for promotion each day (because you run DAT overnight, for example) then disambiguation will be easy, but in a more fast-paced environment that may not be the case.
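One workaround is to query TFS directly so that build numbers can be lined up against build finish times. Purely as an illustration -- the assembly paths and collection URL below are assumptions based on the demo environment used in this series -- something like the following PowerShell lists recent ContosoUniversity_Main_Nightly builds with their finish times, which can then be matched against the date and time shown in Release Explorer:

    # Illustrative sketch only - assumes Visual Studio 2013 is installed so the TFS 2013
    # client object model assemblies are available at the paths below.
    Add-Type -Path "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0\Microsoft.TeamFoundation.Client.dll"
    Add-Type -Path "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0\Microsoft.TeamFoundation.Build.Client.dll"

    $tpc = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection([Uri]"http://almtfsadmin:8080/tfs/PrmCollection")
    $buildServer = $tpc.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])

    # List the ten most recent nightly builds with their status and finish time.
    $buildServer.QueryBuilds("ContosoUniversity", "ContosoUniversity_Main_Nightly") |
        Sort-Object FinishTime -Descending |
        Select-Object BuildNumber, Status, FinishTime -First 10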

That completes our examination of the workflow for promoting a release to the DQA environment. If you have any ideas for fine tuning it do let me know via the comments.

Cheers -- Graham

Continuous Delivery with TFS: Running Automated Web Tests with MTM

Posted by Graham Smith on February 3, 20152 Comments (click here to comment)

In this instalment of my series on implementing continuous delivery with TFS we pick up where we left off in the previous post and add the automated web tests we created to Microsoft Test Manager. We then look at how to schedule these tests for automatic execution through the deployment pipeline. Exciting stuff, so let's get started...

Configure a Test Controller

Our starting point is to configure our TFS admin machine as a test controller. (You could create a dedicated machine of course but in our demo environment it's an unnecessary overhead.)

  1. Create a new domain account called TFSTEST and add it to the Administrators group on your ALMTFSADMIN machine (one way of scripting this appears after this list).
  2. Download the Agents for Microsoft Visual Studio 2013. See here for 2013.4 releases.
  3. Mount the Agents for Microsoft Visual Studio 2013 iso and from the TestController folder install the test controller on ALMTFSADMIN configuring it to run with the TFSTEST account and for the appropriate Team Project Collection:
    configure-test-controller
  4. Click Apply settings to ensure that the configuration steps were successful.
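For reference, step 1 can be scripted. This is just a sketch -- it assumes the ActiveDirectory PowerShell module is available and that the domain is ALM, as used elsewhere in this series:

    # Create the TFSTEST domain account (run where the ActiveDirectory module is available).
    Import-Module ActiveDirectory
    New-ADUser -Name "TFSTEST" -SamAccountName "TFSTEST" `
        -AccountPassword (Read-Host "Password for TFSTEST" -AsSecureString) `
        -Enabled $true -PasswordNeverExpires $true

    # Then, on ALMTFSADMIN itself, add the account to the local Administrators group.
    net localgroup Administrators ALM\TFSTEST /add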

Configure a Web Client Test Machine

The big picture here is that we will deploy the Contoso University application to the DAT (Development Automated Test) environment and then run the automated tests against DAT. We won't be using our development machine and instead will make use of a dedicated web client test machine which should be configured as follows:

  1. Create a new Azure VM from a Windows 8.1 Enterprise (x64) image called ALMCLIENTWIN81 -- one scripted way of doing this appears after this list. (Note that Windows desktop editions are only available to MSDN subscribers.)
  2. Add the RMDEPLOYER and TFSTEST accounts to the Administrators group.
  3. Install the Release Management Deployment Agent and ensure it is in the Ready status in the Release Management client. (See here for details.)
  4. Install Microsoft Test Manager 2013.
  5. Mount the Agents for Microsoft Visual Studio 2013 iso (see above) and from the TestAgent folder install the test agent configuring it to run with the TFSTEST account and to register with the test controller installed above:
    configure-test-agent
  6. Click Apply settings to ensure that the configuration steps were successful.
  7. Install Firefox.
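If you prefer to script the VM creation in step 1, the classic (service management) Azure PowerShell module that was current at the time can do it along these lines. Treat this as a sketch only -- the image filter, cloud service name, size and location are all assumptions that will need adjusting for your subscription:

    # Sketch only: find a Windows 8.1 Enterprise image and create the test client VM from it.
    $image = Get-AzureVMImage |
        Where-Object { $_.Label -like "*Windows 8.1 Enterprise*" } |
        Sort-Object PublishedDate -Descending |
        Select-Object -First 1

    New-AzureQuickVM -Windows -ServiceName "almclientwin81" -Name "ALMCLIENTWIN81" `
        -ImageName $image.ImageName -AdminUsername "localadmin" `
        -Password "<a strong password>" -InstanceSize "Small" -Location "West Europe"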

Configure Microsoft Test Manager

As usual I'm assuming you have a degree of familiarity with the tooling, in this case Microsoft Test Manager 2013. If not then see my getting started with Microsoft Test Manager blog post here. You will need to have installed MTM on to your development machine and successfully connected to your ContosoUniversity Team Project. Now carry out the following steps.

  1. Open MTM and connect to Lab Center. Navigate to Controllers and ensure that ALMTFSADMIN is registered as the Test Controller and that the Test Agent on ALMCLIENTWIN81 is in the Ready status:
    lab-center-test-controller-manager
  2. Navigate to Lab and create a new Standard environment called ALMCLIENTWIN81. From the Machines page add ALMCLIENTWIN81 and give it the Web Client role and specify credentials which are in the Administrators group on ALMCLIENTWIN81. From the Verification page click Verify to initiate the verification procedure. If all verifications pass you should end up with a new environment in the ready state:
    lab-center-environments
  3. Navigate to Test Settings and create a new entry called Automated Test Run and choose Automated for the What type of tests do you want to run? question. From the Roles page choose the Web Client role which should match the environment created above. From the Data and Diagnostics page choose everything except Screen and Voice Recorder and feel free to configure each option as you see fit. Click on Finish to save the settings and close MTM.
  4. Open MTM and connect to Testing Center. If this is the first time you have connected you will need to Add a Test Plan:
    microsoft-test-manager-add-test-plan
  5. Add a test plan called Regression and then choose Select plan to open it at the Contents tab. Add a new suite called Department by right-clicking the Regression node and choosing New suite:
    microsoft-test-manager-new suite
  6. Now over in the right-hand pane create two new test cases called Can_Navigate_To_Departments and Can_Create_Department:
    microsoft-test-manager-create-new-test
  7. Note the IDs of these new test cases and switch to Visual Studio, opening the ContosoUniversity solution if it isn't already open. From Team Explorer > Work Items search for each of the two new test cases and in turn open them to carry out the next step for both test cases.
  8. Click on the Associated Automation tab and then choose the ellipsis at the right of the Automated test name field. This brings up the Choose Test dialog from where you can select the required test:
    visual-studio-associated-automation
  9. Make sure to save the test case work items after associating the automated tests.
  10. Back in Microsoft Test Manager navigate to the Run Settings tab of the test plan. Under Automated runs set Test settings = Automated Test Run and Test environment = ALMCLIENTWIN81 -- the items we created above. Make sure you Save and Close.

Configure Release Management

With our automated web tests incorporated into an MTM Test Plan the next step is to configure Release Management to run them as part of the DAT stage of the deployment pipeline. Carry out the following steps from within Release Management:

  1. From Configure Paths > Servers add ALMCLIENTWIN81 and ensure it is in the Ready status.
  2. From Configure Paths > Environments add ALMCLIENTWIN81 via the Link Existing button.
  3. From Configure Apps > Components create a new component called Contoso University\Run MTM Auto Tests and configure as follows:
    1. Source > Builds with application (= selected) > Path to package = \
    2. Deployment > Tool = MTM Automated Tests Manager
    3. Configuration Variables > Variable Replacement Mode = Only in Command
  4. From Configure Apps > Agent-based Release Templates open Contoso University\DAT>DQA and edit the Deployment Sequence for DAT as follows:
    1. From Toolbox > Servers drag ALMCLIENTWIN81 to the Deployment Sequence below ALMSQL01.
    2. From Toolbox > Components right click the node and add Contoso University\Run MTM Auto Tests.
    3. Now drag Contoso University\Run MTM Auto Tests to ALMCLIENTWIN81 and configure the component as follows (the plan, suite and configuration IDs in the next steps can be looked up with tcm.exe -- see the sketch after this list):
      1. TestRun Title = Department Tests
      2. PlanId = 28 (or whatever your Plan ID is as it is likely to be different)
      3. SuiteId = 29 (or whatever your Suite ID is as it is likely to be different)
      4. ConfigId = 1 (or whatever your Configuration ID is as it is likely to be different)
      5. TfsCollection = http://almtfsadmin:8080/tfs/PrmCollection (or whatever your Team Collection URL is as it is likely to be different)
      6. TeamProject = ContosoUniversity
      7. TestEnvironment = ALMCLIENTWIN81
      8. Build Directory = $(PackageLocation)
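If you're not sure what your plan, suite and configuration IDs are, tcm.exe (installed with Visual Studio and MTM) can list them. The commands below are illustrative only -- the tcm.exe path, IDs and collection URL are assumptions based on the demo environment used in this series:

    # Sketch only - run from PowerShell on a machine with Visual Studio 2013 / MTM installed.
    # List test plans, then the suites in plan 28, then the test configurations.
    $tcm = "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\tcm.exe"
    & $tcm plans /list /collection:http://almtfsadmin:8080/tfs/PrmCollection /teamproject:ContosoUniversity
    & $tcm suites /list /planid:28 /collection:http://almtfsadmin:8080/tfs/PrmCollection /teamproject:ContosoUniversity
    & $tcm configs /list /collection:http://almtfsadmin:8080/tfs/PrmCollection /teamproject:ContosoUniversity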

Some Final Configuration

In order for automated tests to run I've found that as a minimum the Deployment Agent account (RMDEPLOYER) needs to be in the Project Collection Administrators group. It may be possible to fine-tune this requirement but it's a level of trial and error I haven't had time to perform. This permission can be granted on your TFS admin machine from Team Foundation Server Administration Console > $(TfsAdminMachineName) > Application Tier > Team Project Collections > $(TeamProjectName) > General > Group Membership > [$(TeamProjectName)]\Project Collection Administrators > Properties.

If your Visual Studio solution contains a mix of unit test and automated test projects and the build definition is configured to run unit tests, you are going to want to ensure the automated tests are excluded, as they will fail and cause the build to report that it only partially succeeded. There are various ways to do this but the technique I've used is to modify all build definitions to change Process > Build Process Parameters > 3. Test > 1. Automated tests > 1. Test source > Test sources spec so that it matches on unittest (i.e. **\*unittest*.dll;**\*unittest*.appx) where previously it matched on just test. This obviously ties in with my project naming convention of ContosoUniversity.Web.UnitTests and ContosoUniversity.Web.AutoTests.

Start a Deployment

From Visual Studio manually queue a new build from ContosoUniversity_Main_Nightly. If everything is in place the build should succeed and you can open Microsoft Test Manager to check the results. Navigate to Testing Center > Test > Analyze Test Runs. You should see your test run listed and double-clicking it will open up the fruits of all our endeavours so far, the test run results:

microsoft-test-manager-tests-passed

Wrap-Up

If getting to this stage felt like it required a huge amount of detailed configuration, you are probably right. There are a great many moving parts to get working correctly and any simplification of the process by Microsoft in the future will be welcome.

Although we've reached a major milestone on this continuous delivery journey there is still plenty more to talk about, so do watch out for the next instalment.

Cheers -- Graham

Continuous Delivery with TFS: Configuration Tweaks to Help Bake Quality In

Posted by Graham Smith on January 19, 2015No Comments (click here to comment)

In the previous post in my series on implementing continuous delivery with TFS we got to the point of being able to deploy our sample application to each stage of the delivery pipeline. That's a great point to get to because we've now automated our deployment process and have a mechanism (which isn't perfect since it's not version controlled) for managing configuration. However there is a lot more we can and should do to help ensure that our software and ALM processes have quality baked in from the outset rather than added as an afterthought.

Configure Code Analysis

A simple first step is to configure Code Analysis for the ContosoUniversity.Web project. (We've already done this for ContosoUniversity.Database.) From the properties page of ContosoUniversity.Web navigate to the Code Analysis tab, change the Configuration to All Configurations and check Enable Code Analysis on Build. By default the rule set that is selected is Microsoft Managed Recommended Rules but this doesn't generate any code analysis issues -- at least on my project. That doesn't help illustrate the benefit so change the rule set to Microsoft All Rules. Save the setting and perform a build (i.e. F6) and the Code Analysis tool window should present us with a nice list of 'issues'.

visual-studio-code-analysis-tool-window

Check the change into version control and then edit the ContosoUniversity_Main_Checkin build definition, setting Process > Build process parameters > 2. Build > 5. Advanced > Perform code analysis = Always (belt and braces, since the AsConfigured setting ought to work but this ensures it will if anyone meddles with the project settings). Save ContosoUniversity_Main_Checkin and queue a new build. The build report should now show the issues as warnings (abridged version in screenshot):

visual-studio-build-report-showing-warnings

To keep everything synchronised perform the same change to ContosoUniversity_Main_Nightly and check that it too generates the same warnings. The combination of running code analysis when the solution builds both locally and during continuous integration should give developers enough of a clue that they should be attending to these warnings. To be fair, just setting projects up to use Microsoft All Rules and forcing them all to be fixed isn't a likely recipe for success. Some rules might not apply or might not be appropriate. And if the development team are working on an inherited brownfield application all bets are off! If a more Draconian approach is genuinely needed there is another trick up TFS's sleeve which I mention later in this post. For now you may want to switch back to Microsoft Managed Recommended Rules to keep the noise down.
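As an aside, the Code Analysis tab is really just setting MSBuild properties in the project file, so a local command-line build can also be made to run code analysis regardless of the per-project setting. The following is a hedged example rather than anything the pipeline depends on (it assumes msbuild is on the path, for example in a Visual Studio developer prompt, and that the built-in AllRules.ruleset file is wanted):

    # Force code analysis for a local build regardless of the per-project setting.
    msbuild ContosoUniversity.sln /p:RunCodeAnalysis=true /p:CodeAnalysisRuleSet=AllRules.ruleset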

Configure Unit Test Results

Unit testing is hopefully by now an ingrained aspect of your team's software development practices. But how do you keep track of progress? This is pretty easy to do as part of the TFS build and our starting point is to add a Unit Test Project.

visual-studio-add-unit-test-project

I'm calling mine ContosoUniversity.Web.UnitTests to differentiate it from the automated web tests that we'll be adding in a later post, but the take-home message here is that you should have a naming convention that works for your scenario, as these test projects can quickly get your solution into a mess if you are not pro-active about managing them. With that out of the way I want to be quite clear that this post isn't going to be a lesson in how to write unit tests and I'm simply going to do some daft things with Assert to quickly illustrate what can be achieved as part of the TFS build process. With that in mind amend the UnitTest1 class so you have two test methods with simple Assert statements:
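Anything trivially assertable will do; a minimal sketch along these lines is enough (the class and method names are purely illustrative):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    namespace ContosoUniversity.Web.UnitTests
    {
        [TestClass]
        public class UnitTest1
        {
            [TestMethod]
            public void TestMethod1()
            {
                // Deliberately trivial - just gives the build something to run and report on.
                Assert.AreEqual(1, 1);
            }

            [TestMethod]
            public void TestMethod2()
            {
                Assert.IsTrue(true);
            }
        }
    }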

Run the tests from Test Explorer and we should now have two passing tests. Check everything in to version control and queue a new build from ContosoUniversity_Main_Checkin. Examining the build report you should see that a test run was completed with 100% pass rate.

visual-studio-unit-test-report

Now change one of the tests so that it will fail (Assert.AreEqual(1, 0);), check the change in to version control and queue another build from ContosoUniversity_Main_Checkin. The build report should now show that the build only partially succeeded:

visual-studio-build-report-partially-succeeded

Additionally the Summary section now advises on the specifics of how many tests passed:

visual-studio-unit-test-report-failure

We haven't had to do anything to get this information about our unit tests and that's because out of the box the build template comes configured to run any tests that are in projects conforming to a **\*test*.dll;**\*test*.appx file specification. All this can be configured from a build definition under Process > Build process parameters > 3. Test > 1. Automated tests. Hours of fun can be had tweaking all this, but there are two other options of particular interest which you can access by clicking on the ellipsis of the 1. Test source row. This brings up the Add/Edit Test Run dialog where it's possible to set Fail build on test failure and Enable Code Coverage from the Options dropdown:

visual-studio-build-definition-add-edit-test-run

To see the effect of these settings make the changes and save the build definition. Queue a new build, which should now fail. You should also see that we now have code coverage results:

visual-studio-code-coverage-report

To be very clear this isn't the end of this story because code coverage is reporting on our unit test project which isn't what we want. To exclude the unit test project itself we need to add a .runsettings file to the solution. That's out of scope in this post but you can find more details here if you want to try it out. If you want even more details about the state your code is in you should look at something like SonarQube.

Source Control Settings

The final area that we'll look at in this post is the settings that can be accessed via Team Explorer > Settings > Team Project > Source Control. This brings up the Source Control Settings dialog where Check-in Policy > Add is of particular interest.

visua-studio-source-control-settings

This allows the following to be configured (descriptions shamelessly copied from the Description label):

  • Builds -- This policy requires that the last build was successful for each affected continuous integration build definition.
  • Changeset Comments Policy -- This policy will require users to provide check-in comments.
  • Code Analysis -- This policy requires that Code Analysis is run before check-in.
  • Custom Path Policy -- This policy scopes other policies to specific folders or file types.
  • Forbidden Patterns Policy -- This policy prevents users from checking in files with forbidden filename patterns.
  • Work Item Query Policy -- This policy allows you to specify a work item query whose results will be the only legal work items for a check-in to be associated with.
  • Work Items -- This policy requires that one or more work items be associated with every check-in.

I'm not going to explain these further than their descriptions as they are pretty self-explanatory, suffice it to say that there's a lot one can do here to ensure a robust check-in process. A word of caution though: most of these policies can be overridden, so use them wisely or your developers will start to do just that.

That's it for this post. Next up is the thrilling world of developing automated web tests with Selenium.

Cheers -- Graham

Continuous Delivery with TFS: Building the Deployment Pipeline using an Agent-based Release Template

Posted by Graham Smith on January 13, 2015No Comments (click here to comment)

In this instalment of my series on implementing continuous delivery with TFS we finally get to build the deployment pipeline with Release Management. This won't be a tutorial on how to use Release Management so if you need to get up to speed with it I have a getting started post here. One point to note is that there are two ways to implement a deployment pipeline with Release Management: using Agent-based Release Templates which use Release Management Actions, Tools and Components and using vNext Release Templates which leverage PowerShell DSC. This post focusses on Agent-based Release Templates and a future post will look at the vNext Release Templates.

Administration Settings

We are going to be deploying to DAT and DQA environments so our starting point in Release Management is to navigate to Administration > Manage Pick Lists and add entries for DAT and DQA. Now add two new groups from Administration > Manage Groups called Development and Quality Assurance. There is a lot that can be done here to lock down each group to certain stages and activities but for our demo environment it's probably overkill, although feel free to configure away if you would like. The only configuration needed is to add yourself to each group.

Configure Paths Settings -- Servers and Environments

Next we need to confirm that the two servers (ALMWEB01 and ALMSQL01) that we will be deploying to are registered and Ready via Configure Paths > Servers. Still in the Configure Paths tab move to the Environments page and configure two environments called Contoso University\DAT and Contoso University\DQA. For each environment link the ALMWEB01 and ALMSQL01 servers. The screenshot shows the configured DAT environment:

Configure Paths Settings -- Agent-based Release Paths

Now move to Configure Paths > Agent-based Release Paths and create a new path called Contoso University\DAT>DQA. Add two stages to it, one for DAT and another for DQA, and configure each with the appropriate environment. Set all the approvals for the DAT stage to the Development group and make all the steps automated. Set all the approvals for the DQA stage to the Quality Assurance group and make all the steps manual. Additionally add Quality Assurance as an approver for the Approval Step. The result should be as follows:

release-management-agent-based-release-path-with-approver-step

This will have the effect of making the DAT stage completely automated, and if the DAT stage is successful leaving anyone in the Quality Assurance group able to manually accept a build in to DQA. (We want this to be manual because if Quality Assurance is part way through some manual testing in DQA they probably don't want to have a new build automatically overwrite the one they are working with.)

Configure Apps Settings -- Components

Next up we need to create three Components: one to deploy the web site, one to deploy the DACPAC and one to run an SQL script to create environment-specific logins and associated database users. (As an aside, we need to create components any time the build location is required and a component is based on -- or inherits if you like -- an existing tool. We need to perform other tasks as well such as deleting files and we do this using Actions. Actions are usually based on a tool but since they typically take a simple parameter they can be used directly.)

The first component we are going to make will deploy the web site. From Configure Apps > Components create a new component called Contoso University\Deploy Web Site and configure as follows:

  1. Source > Builds with application (= selected) > Path to package = \_PublishedWebsites\ContosoUniversity.Web
  2. Deployment > Tool = XCopy Deployer
  3. Configuration Variables:
    1. Variable Replacement Mode = After Installation
    2. File Extension Filter = *.config
    3. Parameter #1 = DATA_SOURCE | Standard | Connection String: Data Source
    4. Parameter #2 = INITIAL_CATALOG | Standard | Connection String: Initial Catalog (see the note after this list for how these parameters map to tokens in the deployed config file)
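For the DATA_SOURCE and INITIAL_CATALOG parameters to have any effect, matching tokens need to be present in the configuration file that gets deployed, using Release Management's double-underscore token format. The connection string in Web.config would therefore contain something along these lines (the rest of the string is illustrative):

    Data Source=__DATA_SOURCE__;Initial Catalog=__INITIAL_CATALOG__;Integrated Security=True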

The second component we need will deploy the DACPAC. From Configure Apps > Components create a new component called Contoso University\Deploy DACPAC and configure as follows:

  1. Source > Builds with application (= selected) > Path to package = \
  2. Deployment > Tool = DACPAC Database Deployer

The third component will run an SQL script. From Configure Apps > Components create a new Component called Contoso University\Run Login & User SQL Script (not an ideal name but length is limited)  and configure as follows:

  1. Source > Builds with application (= selected) > Path to package = \Scripts
  2. Deployment > Tool = Database Deployer -- Execute Script
  3. Configuration Variables:
    1. Variable Replacement Mode = Before Installation
    2. File Extension Filter = *.sql
    3. Parameter #1 = LOGIN_OR_USER | Standard | Name of login or user to create
    4. Parameter #2 = DB_NAME | Standard | Database to set security for

Configure Apps Settings -- Agent-based Release Templates

We now use these three components in a Release Template. Create a new template called Contoso University\DAT>DQA from Configure Apps > Agent-based Release Templates and set the Release Path to Contoso University\DAT>DQA. Now edit the Build Definition and select ContosoUniversity as the Team Project and ContosoUniversity_Main_Nightly as the Build Definition. Lastly, before closing this dialog check Can Trigger a Release from a Build?.

We are now presented with the ability to edit the Deployment Sequence for DAT as follows:

  1. Expand the Servers node of the Toolbox and drag ALMWEB01 to the Deployment Sequence area of the workflow designer.
  2. Expand the Windows OS node of the Toolbox and drag a Delete File(s) or Folder Action over to ALMWEB01, double-click it and set the FileFolderName parameter to C:\inetpub\wwwroot\CU-DAT\*.*. Use the breadcrumb trail to navigate back to ALMWEB01.
  3. Right-click the Components node of the Toolbox and Link the components we created earlier, then drag Contoso University\Deploy Web Site to ALMWEB01 so it follows the delete action. Double-click it and set the parameters as follows:
    1. Installation Path  = C:\inetpub\wwwroot\CU-DAT
    2. DATA_SOURCE = ALMSQL01
    3. INITIAL_CATALOG = CU-DAT
  4. Use the breadcrumb trail to navigate back to Deployment Sequence and drag ALMSQL01 to the workflow designer so it follows ALMWEB01.
  5. Drag the Contoso University\Deploy DACPAC component to ALMSQL01, double-click and set the parameters as follows:
    1. FileName = ContosoUniversity.Database.dacpac
    2. ServerName = ALMSQL01
    3. DatabaseName = CU-DAT
  6. Use the breadcrumb trail to navigate back to ALMSQL01 and drag the Contoso University\Run Login & User SQL Script component so it follows the Deploy DACPAC component, double-click and set the parameters as follows:
    1. ServerName = ALMSQL01
    2. ScriptName = Create login and database user.sql
    3. LOGIN_OR_USER = ALM\CU-DAT
    4. DB_NAME = CU-DAT
  7. Next create the Create login and database user.sql script as follows:
    1. In the ContosoUniversity solution navigate to the ContosoUniversity.Database project and add a folder called Scripts.
    2. Add a script file called Create login and database user.sql and set Copy to Output Directory = Copy always in the File Properties.
    3. Add the tokenised T-SQL (a minimal sketch appears below, after step 9).
    4. Check this new file in to version control.
  8. With the DAT stage configured we now want to configure the DQA stage. There is a great shortcut for this -- right-click the DAT tab and choose Copy Deployment Sequence.
    release-management-copy-stage
  9. Now right-click the DQA tab and choose Paste Deployment Sequence. Accept the confirmation dialog and the sequence appears in DQA. Each part of the sequence now needs to be changed to replace all instances of DAT with DQA. Notice how the Configuration Variables link can be clicked to open a viewer to check the configuration of different components for different stages.
    release-management-configuration-variables
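As for the tokenised T-SQL referred to in step 7, the exact content is up to you, but it needs to use Release Management's double-underscore tokens so that the LOGIN_OR_USER and DB_NAME values configured earlier are substituted before the script runs. A minimal sketch (the role memberships are illustrative and should be reviewed for your own application) might look like this:

    -- Minimal sketch: create the Windows login and matching database user if they don't
    -- already exist, then grant read/write access. Tokens are replaced by Release Management.
    IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = '__LOGIN_OR_USER__')
        CREATE LOGIN [__LOGIN_OR_USER__] FROM WINDOWS;
    GO

    USE [__DB_NAME__];
    GO

    IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = '__LOGIN_OR_USER__')
        CREATE USER [__LOGIN_OR_USER__] FOR LOGIN [__LOGIN_OR_USER__];
    GO

    EXEC sp_addrolemember N'db_datareader', N'__LOGIN_OR_USER__';
    EXEC sp_addrolemember N'db_datawriter', N'__LOGIN_OR_USER__';
    GO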

Ensuring Correct Permissions

With the DAT and DQA stages configured it's almost time to test the deployment but before we do we'll need to make sure that several permissions are in place:

  1. The first things I'll discuss are the deployment targets, i.e. ALMWEB01 and ALMSQL01. They both have the deployment agent service account (ALM\RMDEPLOYER) in the local administrators group so performing operations in the file system and so on is all taken care of. However this won't be enough for SQL Server, where (for recent versions of SQL Server at least) local administrators do not automatically get rights in SQL Server. You'll need to add ALM\RMDEPLOYER as a login to SQL Server and then think very carefully about what permissions you grant this login. If a database doesn't exist then the dbcreator role will fix that, but then the login won't be able to do anything at a higher level. In this case the securityadmin role does the trick, but if we were to add further functionality to the stage the permissions might need to change again (see the sketch after this list for one way to set this up). Clearly in a non-demo situation you will need to get your DBA involved right from the start of the continuous delivery adoption process so they fully understand what you are trying to achieve and can help to find the best way to manage the database permissions side of things.
  2. The second area for discussion is the accounts that need to be added to Release Management to make everything work. The Release Management server is running under the ALM\TFSSERVICE account so that will already be there as a Service User. This account does need the Make requests on behalf of others permission for the Project Collection -- details on how to set this are here. The two accounts that you will need to add are ALM\TFSBUILD (again in the Service User role), which needs to communicate with the Release Management server to start the deployment after the build completes, and ALM\RMDEPLOYER (also in the Service User role), which likewise needs to communicate with the Release Management server.
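Purely as an illustration of point 1, the login and server role memberships for the demo environment could be created with T-SQL along these lines (run against ALMSQL01 with sufficient rights, and reviewed with your DBA before doing anything similar for real):

    -- Illustration only: give the deployment agent account just enough server-level rights
    -- for the DACPAC deployment and the login/user script to succeed.
    CREATE LOGIN [ALM\RMDEPLOYER] FROM WINDOWS;
    ALTER SERVER ROLE [dbcreator] ADD MEMBER [ALM\RMDEPLOYER];
    ALTER SERVER ROLE [securityadmin] ADD MEMBER [ALM\RMDEPLOYER];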

Testing the Deployment

At long last it's time to test the deployment. In Visual Studio navigate to Team Explorer > Builds and queue a build of ContosoUniversity_Main_Nightly. If everything has been configured correctly you should be able to observe the deployment progressing in the Release Management client from Releases > Releases. If all the components deploy successfully (only to DAT at the moment since that is the only automated stage) you will want to check that the CU-DAT website works. From any server in the demo environment you should be able to browse to http://almweb01/CU-DAT and confirm that Contoso University is fully working.

Testing the Approvals Workflow

The final piece of the jigsaw as far as this post is concerned is the approvals workflow for the DQA stage. In the Release Management client navigate to Releases > My Approval Requests and notice that your successful release is at the DQA stage waiting to be approved. Click on the Approve button and work your way past the confirmation dialog, and the deployment to the DQA environment then starts. Back in Releases > My Approval Requests you will then see two more approval requests -- one to validate the deployment and one to approve it. Finally, back in Releases > Releases we see that the release is Released!

It's been quite a journey but that's it for this post. Watch out for future posts where we add extra functionality and features to the pipeline.

Cheers -- Graham