Continuous Delivery with TFS / VSTS – Automated Acceptance Tests with SpecFlow and Selenium Part 2

Posted by Graham Smith on July 25, 2016

In part one of this two-part mini-series I covered how to get acceptance tests written using Selenium working as part of the deployment pipeline. In that post the focus was on configuring the moving parts needed to get some existing acceptance tests up-and-running with the new Release Management tooling in TFS or VSTS. In this post I make good on my promise to explain how to use SpecFlow and Selenium together to write business readable web tests as opposed to tests that probably only make sense to a developer.

If you haven't used SpecFlow before then I highly recommend taking the time to understand what it can do. The SpecFlow website has a good getting started guide here; however, the best tutorial I have found is Jason Roberts' Automated Business Readable Web Tests with Selenium and SpecFlow Pluralsight course. Pluralsight is a paid-for service of course, but if you don't have a subscription you might consider taking up the offer of the free trial just to watch Jason's course.

As I started to integrate SpecFlow into my existing Contoso University sample application for this post I realised that the way I had originally written the Selenium-based page object model, using a fluent API, didn't work well with SpecFlow. Consequently I rewrote the code to be more in line with the style used in Jason's Pluralsight course. Both versions are on GitHub -- you can find the ‘before' code here and the ‘after' code here. The instructions that follow are written from the perspective of someone updating the ‘before' code version.

Install SpecFlow Components

To support SpecFlow development, components need to be installed at two levels. With the Contoso University sample application open in Visual Studio (actually not necessary for the first item):

  • At the Visual Studio application level the SpecFlow for Visual Studio 2015 extension should be installed.
  • At the Visual Studio solution level the ContosoUniversity.Web.AutoTests project needs to have the SpecFlow NuGet package installed.

If you are using MSTest you may also find that the specFlow section of App.config in ContosoUniversity.Web.AutoTests needs an <unitTestProvider name="MsTest" /> element added.

Update the Page Object Model

In order to see all the changes I made to update my page object model to a style that worked well with SpecFlow, please examine the ‘after' code here. To illustrate the style of my updated model, I created a CreateDepartmentPage class in ContosoUniversity.Web.SeFramework with the following code:
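The full listing is in the ‘after' code on GitHub; the sketch below just gives a feel for the style. The element IDs and the static Driver helper (Instance, BaseUrl) are illustrative assumptions rather than the exact code.

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

namespace ContosoUniversity.Web.SeFramework
{
    // Page object for the Create Department page. Rather than exposing a fluent API,
    // each form field is a simple property that a SpecFlow step can set directly.
    public class CreateDepartmentPage
    {
        public static void NavigateTo()
        {
            // Driver is assumed to be the static helper that lives in this project
            Driver.Instance.Navigate().GoToUrl(Driver.BaseUrl + "/Department/Create");
        }

        public static string Name
        {
            set { Driver.Instance.FindElement(By.Id("Name")).SendKeys(value); }
        }

        public static string Budget
        {
            set { Driver.Instance.FindElement(By.Id("Budget")).SendKeys(value); }
        }

        public static string Administrator
        {
            set
            {
                // The administrator is chosen from a drop-down list
                var administrator = new SelectElement(Driver.Instance.FindElement(By.Id("InstructorID")));
                administrator.SelectByText(value);
            }
        }

        public static void CreateDepartment()
        {
            Driver.Instance.FindElement(By.CssSelector("input[type='submit']")).Click();
        }
    }
}
```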

The key difference is that rather than being a fluent API the model now consists of separate properties that more easily map to SpecFlow statements.

Add a Basic SpecFlow Test

To illustrate some of the power of SpecFlow we'll first add a basic test and then make some improvements to it. The test should be added to ContosoUniversity.Web.AutoTests -- if you are using my ‘before' code you'll want to delete the existing C# class files that contain the tests written for the earlier page object model.

  • Right-click ContosoUniversity.Web.AutoTests and choose Add > New Item. Select SpecFlow Feature File and call it Department.feature.
  • Replace the template text in Department.feature with a scenario for successfully creating a new department (a sketch of the feature file appears after this list).
  • Right-click Department.feature in the code editor and choose Generate Step Definitions which will generate the following dialog:
    visual-studio-specflow-generate-step-definitions
  • By default this will create a DepartmentSteps.cs file that you should save in ContosoUniversity.Web.AutoTests.
  • DepartmentSteps.cs now needs to be fleshed out with code that refers back to the page object model; the shape of the complete class, along with the feature file it binds to, is sketched below.
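First, the Department.feature scenario. The department name and budget figure are placeholders of my choosing, but the Kapoor administrator and the budget value tie in with the step methods discussed below:

```gherkin
Feature: Department
    In order to organise courses into departments
    As an administrator
    I want to be able to create a new department

Scenario: New Department Created Successfully
    Given I am on the Create Department page
    And I enter a name of Engineering
    And I enter a budget of 500000
    And I enter an administrator with name of Kapoor
    When I click Create
    Then I should see Engineering in the list of departments
```

And a possible shape for DepartmentSteps.cs. The Driver helper methods and the DepartmentsPage check in the final step are assumptions based on the page object model rather than the exact code:

```csharp
using ContosoUniversity.Web.SeFramework;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

namespace ContosoUniversity.Web.AutoTests
{
    [Binding]
    public class DepartmentSteps
    {
        [BeforeScenario]
        public void Initialise()
        {
            // Start the browser before each scenario (Driver helper assumed)
            Driver.Initialize();
        }

        [AfterScenario]
        public void Cleanup()
        {
            // Close the browser afterwards
            Driver.Close();
        }

        [Given(@"I am on the Create Department page")]
        public void GivenIAmOnTheCreateDepartmentPage()
        {
            CreateDepartmentPage.NavigateTo();
        }

        [Given(@"I enter a name of Engineering")]
        public void GivenIEnterANameOfEngineering()
        {
            CreateDepartmentPage.Name = "Engineering";
        }

        [Given(@"I enter a budget of (.*)")]
        public void GivenIEnterABudgetOf(int p0)
        {
            // p0 is the poorly named parameter generated because the budget was a value in the feature
            CreateDepartmentPage.Budget = p0.ToString();
        }

        [Given(@"I enter an administrator with name of Kapoor")]
        public void GivenIEnterAnAdministratorWithNameOfKapoor()
        {
            // Not parameterised by the generator, even though a name was specified
            CreateDepartmentPage.Administrator = "Kapoor";
        }

        [When(@"I click Create")]
        public void WhenIClickCreate()
        {
            CreateDepartmentPage.CreateDepartment();
        }

        [Then(@"I should see Engineering in the list of departments")]
        public void ThenIShouldSeeEngineeringInTheListOfDepartments()
        {
            // DepartmentsPage.DepartmentExists is an assumed helper on the page object model
            Assert.IsTrue(DepartmentsPage.DepartmentExists("Engineering"));
        }
    }
}
```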

If you take a moment to examine the code you'll see the following features:

  • The presence of methods with the BeforeScenario and AfterScenario attributes to initialise the test and clean up afterwards.
  • Since we specified a value for Budget in Department.feature, a step method with a (poorly named) parameter was created for reusability.
  • Although we specified a name for the Administrator the step method wasn't parameterised.

As things stand, though, this test is complete and you should see a NewDepartmentCreatedSuccessfully test in Test Explorer which, when run (don't forget IIS Express needs to be running), should turn green.

Refining the SpecFlow Test

We can make some improvements to DepartmentSteps.cs as follows:

  • The GivenIEnterABudgetOf method can have its parameter renamed to budget.
  • The GivenIEnterAnAdministratorWithNameOfKapoor method can be parameterised by changing it as follows:
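A sketch of the reworked step (the parameter name administrator is my choice):

```csharp
[Given(@"I enter an administrator with name of (.*)")]
public void GivenIEnterAnAdministratorWithNameOf(string administrator)
{
    CreateDepartmentPage.Administrator = administrator;
}
```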

In the preceding change note the change to both the attribute and the method name.

Updating the Build Definition

In order to start integrating SpecFlow tests into the continuous delivery pipeline the first step is to update the build definition. Specifically, the AcceptanceTests artifact that was created in the previous post needs amending to include TechTalk.SpecFlow.dll as a new item in the Contents property. A successful build should result in this DLL appearing in the Artifacts Explorer window for the AcceptanceTests artifact:

web-portal-contosouniversity-artifacts-explorer

Update the Test Plan with a new Test Case

If you are running your tests using the test assembly method then you should find that they just run without any further amendment. If on the other hand you are using the test plan method then you will need to remove the test cases based on the old Selenium tests and add a new test case (called New Department Created Successfully to match the scenario name) and edit it in Visual Studio to make it automated.

And Finally

Do be aware that I've only really scratched the surface in terms of what SpecFlow can do and there's plenty more functionality for you to explore. Whilst it's not really the subject of this post, it's worth pointing out that adopting acceptance tests as part of your continuous delivery pipeline is something to do in a considered way. If you don't, it's all too easy to wake up one day to realise you have hundreds of tests which take many hours to run and which require a significant amount of time to maintain. To this end do have a listen to Dave Farley's QCon talk on Acceptance Testing for Continuous Delivery.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Automated Acceptance Tests with SpecFlow and Selenium Part 1

Posted by Graham Smith on June 15, 2016

In the previous post in this series we covered using Release Management to deploy PowerShell DSC scripts to target nodes that both configured the nodes for web and database roles and then deployed our sample application. With this done we are now ready to do useful work with our deployment pipeline, and the big task for many teams is going to be running automated acceptance tests to check that previously developed functionality still works as expected as an application undergoes further changes.

I covered how to create a page object model framework for running Selenium web tests in my previous blog series on continuous delivery here. The good news is that nothing much has changed and the code still runs fine, so to learn about how to create a framework please refer to this post. However one thing I didn't cover in the previous series was how to use SpecFlow and Selenium together to write business readable web tests and that's something I'll address in this series. Specifically, in this post I'll cover getting acceptance tests working as part of the deployment pipeline and in the next post I'll show how to integrate SpecFlow.

What We're Hoping to Achieve

The acceptance tests are written using Selenium which is able to automate ‘driving' a web browser to navigate to pages, fill in forms, click on submit buttons and so on. Whilst these tests are created on, and are thus able to run on, developer workstations, the typical scenario is that the number of tests quickly mounts, making it impractical to run them locally. In any case running them locally is of limited use since what we really want to know is if checked-in code changes from team members have broken any tests.

The solution is to run the tests in an environment that is part of the deployment pipeline. In this blog series I call that the DAT (development automated test) environment, which is the first stage of the pipeline after the build process. As I've explained previously in this blog series, the DAT environment should be configured in such a way as to minimise the possibility of tests failing due to factors other than code issues. I solve this for web applications by having the database, web site and test browser all running on the same node.

Make Sure the Tests Work Locally

Before attempting to get automated tests running in the deployment pipeline it's a good idea to confirm that the tests are running locally. The steps for doing this (in my case using Visual Studio 2015 Update 2 on a workstation with FireFox already installed) are as follows:

  1. If you don't already have a working Contoso University sample application available:
    1. Download the code that accompanies this post from my GitHub site here.
    2. Unblock and unzip the solution to a convenient location and then build it to restore NuGet packages.
    3. In ContosoUniversity.Database open ContosoUniversity.publish.xml and then click on Publish to create the ContosoUniversity database in LocalDB.
  2. Run ContosoUniversity.Web (and in so doing confirm that Contoso University is working) and then, leaving the application running in the browser, switch back to Visual Studio and from the Debug menu choose Detach All. This leaves IIS Express running, which FireFox needs in order to be able to navigate to any of the application's URLs.
  3. From the Test menu navigate to Playlist > Open Playlist File and open AutoWebTests.playlist which lives under ContosoUniversity.Web.AutoTests.
  4. In Test Explorer two tests (Can_Navigate_To_Departments and Can_Create_Department) should now appear and these can be run in the usual way. FireFox should open and run each test which will hopefully turn green.

Edit the Build to Create an Acceptance Tests Artifact

The first step to getting tests running as part of the deployment pipeline is to edit the build to create an artifact containing all the files needed to run the tests on a target node. This is achieved by editing the ContosoUniversity.Rel build definition and adding a Copy and Publish Build Artifacts task. This should be configured as follows:

  • Copy Root = $(build.stagingDirectory)
  • Contents =
    • ContosoUniversity.Web.AutoTests.*
    • ContosoUniversity.Web.SeFramework.*
    • Microsoft.VisualStudio.QualityTools.UnitTestFramework.*
    • WebDriver.*
  • Artifact Name = AcceptanceTests
  • Artifact Type = Server

After queuing a successful build the AcceptanceTests artifact should appear on the build's Artifacts tab:

web-portal-contosouniversity-rel-build-artifacts-acceptance-tests

Edit the Release to Deploy the AcceptanceTests Artifact

Next up is copying the AcceptanceTests artifact to a target node -- in my case a server called PRM-DAT-AIO. This is no different from the previous post where we copied database and website artifacts and is a case of adding a Windows Machine File Copy task to the DAT environment of the ContosoUniversity release and configuring it appropriately:

web-portal-contosouniversity-release-definition-copy-acceptance-tests-files

Deploy a Test Agent

The good news for those of us working in the VSTS and TFS 2015 worlds is that test controllers are a thing of the past because Agents for Microsoft Visual Studio 2015 handle communicating with VSTS or TFS 2015 directly. The agent needs to be deployed to the target node and this is handled by adding a Visual Studio Test Agent Deployment task to the DAT environment. The configuration of this task is very straightforward (see here) however you will probably want to create a dedicated domain service account for the agent service to run under. The process is slightly different between VSTS and TFS 2015 Update 2.1 in that in VSTS the machine details can be entered directly in the task whereas in TFS there is a requirement to create a Test Machine Group.

Running Tests -- Test Assembly Method

In order to actually run the acceptance tests we need to add a Run Functional Tests task to the DAT pipeline directly after the Visual Studio Test Agent Deployment task. Examining this task reveals two ways to select the tests to be run -- Test Assembly or Test Plan. Test Assembly is the most straightforward and needs very little configuration:

  • Test Machine Group (TFS) or Machines (VSTS) = Group name or $(TargetNode-DAT-AIO)
  • Test Drop Location = $(TargetNodeTempFolder)\AcceptanceTests
  • Test Selection = Test Assembly
  • Test Assembly = **\*test*.dll
  • Test Run Title = Acceptance Tests

As you will see though there are many more options that can be configured -- see the help page here for details.

Before you create a build to test these settings out you will need to make sure that the node where the tests are to be run from is specified in Driver.cs, which lives in ContosoUniversity.Web.SeFramework. You will also need to ensure that FireFox is installed on this node. I've been struggling to reliably automate the installation of FireFox, which turned out to be just as well because I was trying to automate the installation of the latest version from the Mozilla site. This turns out to be a bad thing because the latest version at the time of writing (47.0) doesn't work with the latest (at the time of writing) version of Selenium (2.53.0). Automated installation efforts for FireFox therefore need to centre around installing a Selenium-compatible version, which makes things easier since the installer can be pre-downloaded to a known location. I ran out of time and installed FireFox 46.1 (compatible with Selenium 2.53.0) manually but this is something I'll revisit. Disabling automatic updates in FireFox is also essential to ensure you don't get out of sync with Selenium.

When you finally get your tests running you can see the results from the web portal by navigating to Test > Runs. You should hopefully see something similar to this:

web-portal-contosouniversity-test-run-summary

Running Tests -- Test Plan Method

The first question you might ask about the Test Plan method is why bother if the Test Assembly method works? Of course, if the Test Assembly method gives you what you need then feel free to stick with that. However you might need to use the Test Plan method if a test plan already exists and you want to continue using it. Another reason is the possibility of more flexibility in choosing which tests to run. For example, you might organise your tests into logical areas using static suites and then use query-based suites to choose subsets of tests, perhaps with the use of tags.

To use the Test Plan method, in the web portal navigate to Test > Test Plan and then:

  1. Use the green cross to create a new test plan called Acceptance Tests.
  2. Use the down arrow next to Acceptance Tests to create a New static suite called Department:
    web-portal-contosouniversity-create-test-suite
  3. Within the Department suite use the green cross to create two new test cases called Can_Navigate_To_Departments and Can_Create_Department (no other configuration necessary):
    web-portal-contosouniversity-create-test-case
  4. Making a note of the test case IDs, switch to Visual Studio and in Team Explorer > Work Items search for each test case in turn to open it for editing.
  5. For each test case, click on Associated Automation (screenshot below is VSTS and looks slightly different from TFS) and then click on the ellipsis to bring up the Choose Test dialogue where you can choose the correct test for the test case:
    visual-studio-test-case-associated-automation
  6. With everything saved switch back to the web portal Release hub and edit the Run Functional Tests task as follows:
    1. Test Selection = Test Plan
    2. Test Plan = Acceptance Tests
    3. Test Suite = Acceptance Tests\Department

With the configuration complete trigger a new release and if everything has worked you should be able to navigate to Test > Runs and see output similar to the Test Assembly method.

That's it for now. In the next post in this series I'll look at adding SpecFlow in to the mix to make the acceptance tests business readable.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Server Configuration and Application Deployment with Release Management

Posted by Graham Smith on May 2, 2016

At this point in my blog series on Continuous Delivery with TFS / VSTS we have finally reached the stage where we are ready to start using the new web-based release management capabilities of VSTS and TFS. The functionality has been in VSTS for a little while now but only came to TFS with Update 2 of TFS 2015 which was released at the end of March 2016.

Don't tell my wife but I'm having a torrid love affair with the new TFS / VSTS Release Management. It's flippin' brilliant! Compared to the previous WPF desktop client it's a breath of fresh air: easy to understand, quick to set up and a joy to use. Sure there are some improvements that could be made (and these will come in time) but for the moment, for a relatively new product, I'm finding the experience extremely agreeable. So let's crack on!

Setting the Scene

The previous posts in this series set the scene for this post but I'll briefly summarise here. We'll be deploying the Contoso University sample application which consists of an ASP.NET MVC website and a SQL Server database which I've converted to a SQL Server Database Project so deployment is by DACPAC. We'll be deploying to three environments (DAT, DQA and PRD) as I explain here and not only will we be deploying the application we'll first be making sure the environments are correctly configured with PowerShell DSC using an adaptation of the procedure I describe here.

My demo environment in Azure is configured as a Windows domain and includes an instance of TFS 2015 Update 2 which I'll be using for this post as it's the lowest common denominator, although I will point out any VSTS specifics where needed. We'll be deploying to newly minted Windows Server 2012 R2 VMs which have been joined to the domain, configured with WMF 5.0 and had their domain firewall turned off -- see here for details. (Note that if you are using versions of Windows server earlier than 2012 that don't have remote management turned on you have a bit of extra work to do.) My TFS instance is hosting the build agent and as such the agent can ‘see' all the machines in the domain. I'm using Integrated Security to allow the website to talk to the database and use three different domain accounts (CU-DAT, CU-DQA and CU-PRD) to illustrate passing different credentials to different environments. I assume you have these set up in advance.

As far as development tools are concerned I'm using Visual Studio 2015 Update 2 with PowerShell Tools installed and Git for version control within a TFS / VSTS team project. It goes without saying that for each release I'm building the application only once and as part of the build any environment-specific configuration is replaced with tokens. These tokens are replaced with the correct values for that environment as that same tokenised build moves through the deployment pipeline.

Writing Server Configuration Code Alongside Application Code

A key concept I am promoting in this blog post series is that configuring the servers that your application will run on should not be an afterthought and neither should it be a manual click-through-GUI process. Rather, you should be configuring your servers through code and that code should be written at the same time as you write your application code. Furthermore the server configuration code should live with your application code. To start then we need to configure Contoso University for this way of working. If you are following along you can get the starting point code from here.

  1. Open the ContosoUniversity solution in Visual Studio and add new folders called Deploy to the ContosoUniversity.Database and ContosoUniversity.Web projects.
  2. In ContosoUniversity.Database\Deploy create two new files: Database.ps1 and DbDscResources.ps1. (Note that SQL Server Database Projects are a bit fussy about what can be created in Visual Studio so you might need to create these files in Windows Explorer and add them in as new items.)
  3. Database.ps1 should contain the following code:
  4. DbDscResources.ps1 should contain the following code:
  5. In ContosoUniversity.Web\Deploy create two new files: Website.ps1 and WebDscResources.ps1.
  6. Website.ps1 should contain the following code:
  7. WebDscResources.ps1 should contain the following code:
  8. In ContosoUniversity.Database\Scripts move Create login and database user.sql to the Deploy folder and remove the Scripts folder.
  9. Make sure all these files have their Copy to Output Directory property set to Copy always. For the files in ContosoUniversity.Database\Deploy the Build Action property should be set to None.

The Database.ps1 and Website.ps1 scripts contain the PowerShell DSC to both configure servers for either IIS or SQL Server and then to deploy the actual component. See my Server Configuration as Code with PowerShell DSC post for more details. (At the risk of jumping ahead to the deployment part of this post, the bits to be deployed are copied to temp folders on the target nodes -- hence references in the scripts to C:\temp\$whatever$.)

In the case of the database component I'm using the xDatabase custom DSC resource to deploy the DACPAC. I came across a problem with this resource where it wouldn't install the DACPAC using domain credentials, despite the credentials having the correct permissions in SQL Server. I ended up having to install SQL Server using Mixed Mode authentication and installing the DACPAC using the sa login. I know, I know!

My preferred technique for deploying website files is plain xcopy. For me the requirement is to clear the old files down and replace them with the new ones. After some experimentation I ended up with code to stop IIS, remove the web folder, copy the new web folder from its temp location and then restart IIS.
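As a rough illustration only, the heart of that logic inside the Website.ps1 configuration could be a DSC Script resource along these lines. The site folder and temp folder paths shown here are assumptions and will depend on how you've configured IIS and the file copy tasks:

```powershell
Script DeployWebsiteFiles
{
    SetScript = {
        Stop-Service W3SVC                                   # stop IIS
        Remove-Item -Path 'C:\inetpub\ContosoUniversity' -Recurse -Force -ErrorAction SilentlyContinue
        Copy-Item -Path 'C:\temp\Website\ContosoUniversity.Web' -Destination 'C:\inetpub\ContosoUniversity' -Recurse -Force
        Start-Service W3SVC                                  # restart IIS
    }
    TestScript = { $false }                                  # always run, so every deployment refreshes the files
    GetScript  = { @{ Result = 'DeployWebsiteFiles' } }
}
```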

Both the database and website have files with configuration tokens that need replacing as part of the deployment. I'm using the xReleaseManagement custom DSC resource which takes a hash table of tokens (in the __TOKEN_NAME__ format) to replace.

In order to use custom resources on target nodes the custom resources need to be in place before attempting to run a configuration. I had hoped to use a push server technique for this but it was not to be since for this post at least I'm running the DSC configurations on the actual target nodes and the push server technique only works if the MOF files are created on a staging machine that has the custom resources installed. Instead I'm copying the custom resources to the target nodes just prior to running the DSC configurations and this is the purpose of the DbDscResources.ps1 and WebDscResources.ps1 files. The custom resources live on a UNC that is available to target nodes and get there by simply copying them from a machine where they have been installed (C:\Program Files\WindowsPowerShell\Modules is the location) to the UNC.
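For example, DbDscResources.ps1 boils down to something like the following sketch. The UNC path here is made up; yours will be whatever share the resources were copied to:

```powershell
# Copy the custom DSC resources needed by Database.ps1 from the UNC share
# to the PowerShell module path on the target node.
$unc = '\\your-file-server\DscResources'
foreach ($resource in @('xDatabase', 'xReleaseManagement'))
{
    Copy-Item -Path (Join-Path $unc $resource) `
              -Destination 'C:\Program Files\WindowsPowerShell\Modules' `
              -Recurse -Force
}
```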

Create a Release Build

With the Visual Studio solution now configured (don't forget to commit the changes) we need to create a build to check that initial code quality checks have passed and if so to publish the database and website components ready for deployment. Create a new build definition called ContosoUniversity.Rel and follow this post to configure the basics and this post to create a task to run unit tests. Note that for the Visual Studio Build task the MSBuild Arguments setting is /p:OutDir=$(build.stagingDirectory) /p:UseWPP_CopyWebApplication=True /p:PipelineDependsOnBuild=False /p:RunCodeAnalysis=True. This gives us a _PublishedWebsites\ContosoUniversity.Web folder (that contains all the web files that need to be deployed) and also runs the transformation to tokenise Web.config. Additionally, since we are outputting to $(build.stagingDirectory) the Test Assembly setting of the Visual Studio Test task needs to be $(build.stagingDirectory)\**\*UnitTests*.dll;-:**\obj\**. At some point we'll want to version our assemblies but I'll return to that in another post.

One important step that has changed since my earlier posts is that the Restore NuGet Packages option in the Visual Studio Build task has been deprecated. The new way of doing this is to add a NuGet Installer task as the very first item and then in the Visual Studio Build task (in the Advanced section in VSTS) uncheck Restore NuGet Packages.

To publish the database and website as components -- or Artifacts (I'm using the TFS spelling) as they are known -- we use the Copy and Publish Build Artifacts tasks. The database task should be configured as follows:

  • Copy Root = $(build.stagingDirectory)
  • Contents =
    • ContosoUniversity.Database.d*
    • Deploy\Database.ps1
    • Deploy\DbDscResources.ps1
    • Deploy\Create login and database user.sql
  • Artifact Name = Database
  • Artifact Type = Server

Note that the Contents setting can take multiple entries on separate lines and we use this to be explicit about what the database artifact should contain. The website task should be configured as follows:

  • Copy Root = $(build.stagingDirectory)\_PublishedWebsites
  • Contents = **\*
  • Artifact Name = Website
  • Artifact Type = Server

Because we are specifying a published folder of website files that already has the Deploy folder present there's no need to be explicit about our requirements. With all this done the build should look similar to this:

web-portal-contosouniversity-rel-build

In order to test the successful creation of the artifacts, queue a build and then -- assuming the build was successful -- navigate to the build and click on the Artifacts link. You should see the Database and Website artifact folders and you can examine the contents using the Explore link:

web-portal-contosouniversity-rel-build-artifacts

Create a Basic Release

With the artifacts created we can now turn our attention to creating a basic release to get them copied on to a target node and then perform a deployment. Switch to the Release hub in the web portal and use the green cross icon to create a new release definition. The Deployment Templates window is presented and you should choose to start with an Empty template. There are four immediate actions to complete:

  1. Provide a Definition name -- ContosoUniversity for example.
  2. Change the name of the environment that has been added to DAT.
    web-portal-contosouniversity-release-definition-initial-tasks
  3. Click on Link to a build definition to link the release to the ContosoUniversity.Rel build definition.
    web-portal-contosouniversity-release-definition-link-to-build-definition
  4. Save the definition.

Next up we need to add two Windows Machine File Copy tasks to copy each artifact to one node called PRM-DAT-AIO. (As a reminder the DAT environment as I define it is just one server which hosts both the website and the database and where automated testing takes place.) Although it's possible to use just one task here the result of selecting artifacts differs according to the selected node in the artifact tree. At the node root, folders are created for each artifact but go one node lower and they aren't. I want a procedure that works for all environments which is as follows:

  1. Click on Add tasks to bring up the Add Tasks window. Use the Deploy link to filter the list of tasks and Add two Windows Machine File Copy tasks:
    web-portal-contosouniversity-release-definition-add-task
  2. Configure the properties of the tasks as follows:
    1. Edit the names (use the pencil icon) to read Copy Database files and Copy Website files respectively.
    2. Source = $(System.DefaultWorkingDirectory)/ContosoUniversity.Rel/Database or $(System.DefaultWorkingDirectory)/ContosoUniversity.Rel/Website accordingly (use the ellipsis to select)
    3. Machines = PRM-DAT-AIO.prm.local
    4. Admin login = Supply a domain account login that has admin privileges for PRM-DAT-AIO.prm.local
    5. Password = Password for the above domain account
    6. Destination folder = C:\temp\Database or C:\temp\Website accordingly
    7. Advanced Options > Clean Target = checked
  3. Click the ellipsis in the DAT environment and choose Deployment conditions.
    web-portal-contosouniversity-release-definition-environment-settings-deployment-conditions
  4. Change the Trigger to After release creation and click OK to accept.
  5. Save the changes and trigger a release using the green cross next to Release. You'll be prompted to select a build as part of the process:
    web-portal-contosouniversity-release-definition-environment-create-release
  6. If the release succeeds a C:\temp folder containing the artifact folders will have been created on PRM-DAT-AIO.
  7. If the release fails switch to the Logs tab to troubleshoot. Permissions and whether the firewall has been configured to allow WinRM are the likely culprits. To preserve my sanity I do everything as domain admin and I have the domain firewall turned off. The usual warnings about these not necessarily being best practices in non-test environments apply!

Whilst you are checking the C:\temp folder on the target node have a look inside the artifact folders. They should both contain a Deploy folder that contains the PowerShell scripts that will be executed remotely using the PowerShell on Target Machines task. You'll need two of these tasks, one for each artifact, configured as follows:

  1. Add two PowerShell on Target Machines tasks to alternately follow the Windows Machine File Copy tasks.
  2. Edit the names (use the pencil icon) to read Configure Database and Configure Website respectively.
  3. Configure the properties of the task as follows:
    1. Machines = PRM-DAT-AIO.prm.local
    2. Admin login = Supply a domain account that has admin privileges for PRM-DAT-AIO.prm.local
    3. Password = Password for the above domain account
    4. Protocol = HTTP
    5. Deployment > PowerShell Script = C:\temp\Database\Deploy\Database.ps1 or C:\temp\Website\Deploy\Website.ps1 accordingly
    6. Deployment > Initialization Script = C:\temp\Database\Deploy\DbDscResources.ps1 or C:\temp\Website\Deploy\WebDscResources.ps1 accordingly
  4. With reference to the parameters required by C:\temp\Database\Deploy\Database.ps1 configure Deployment > Script Arguments for the Database task as follows:
    1. $domainSqlServerSetupLogin = Supply a domain login that has privileges to install SQL Server on PRM-DAT-AIO.prm.local
    2. $domainSqlServerSetupPassword = Password for the above domain login
    3. $sqlServerSaPassword = Password you want to use for the SQL Server sa account
    4. $domainUserForIntegratedSecurityLogin = Supply a domain login to use for integrated security (PRM\CU-DAT in my case for the DAT environment)
    5. The finished result will be similar to: ‘PRM\Graham' ‘YourSecurePassword' ‘YourSecurePassword' ‘PRM\CU-DAT'
  5. With reference to the parameters required by C:\temp\Website\Deploy\Website.ps1 configure Deployment > Script Arguments for the Website task as follows:
    1. $domainUserForIntegratedSecurityLogin = Supply a domain login to use for integrated security (PRM\CU-DAT in my case for the DAT environment)
    2. $domainUserForIntegratedSecurityPassword = Password for the above domain account
    3. $sqlServerName = machine name for the SQL Server instance (PRM-DAT-AIO in my case for the DAT environment)
    4. The finished result will be similar to: ‘PRM\CU-DAT' ‘YourSecurePassword' ‘PRM-DAT-AIO'

At this point you should be able to save everything and the release should look similar to this:

web-portal-contosouniversity-release-definition-environment-create-release-all-tasks-added

Go ahead and trigger a new release. This should result in the PowerShell scripts being executed on the target node and IIS and SQL Server being installed, as well as the Contoso University application. You should be able to browse the application at http://prm-dat-aio. Result!

Variable Quality

Although we now have a working release for the DAT environment it will hopefully be obvious that there are serious shortcomings with the way we've configured the release. Passwords in plain view is one issue and repeated values is another. The latter issue is doubly of concern when we start creating further environments.

The answer to this problem is to create custom variables at both the ‘release' level and the ‘environment' level. Pretty much every text box seems to take a variable so you can really go to town here. It's also possible to create compound values based on multiple variables -- I used this to separate the location of the C:\temp folder from the rest of the script location details. It's worth having a bit of a think about your variable names in advance of using them because if you change your mind you'll need to edit every place they were used. In particular, if you edit the declaration of secret variables you will need to click the padlock to clear the value and re-enter it. This tripped me up until I added Write-Verbose statements to output the parameters in my DSC scripts and realised that passwords were not being passed through (they are asterisked so there is no security concern). (You do get the scriptArguments as output to the console but I find having them each on a separate line easier.)

Release-level variables are created in the Configuration section and if they are passwords can be secured as secrets by clicking the padlock icon. The release-level variables I created are as follows:

web-portal-contosouniversity-release-definition-release-variables

Environment-level variables are created by clicking the ellipsis in the environment and choosing Configure Variables. I created the following:

web-portal-contosouniversity-release-definition-environment-variables

The variables can then be used to reconfigure the release as per this screen shot which shows the PowerShell on Target Machines Configure Database task:

web-portal-contosouniversity-release-definition-tasks-using-variables

The other tasks are obviously configured in a similar way, and notice how some fields use more than one variable. Nothing has actually changed by replacing hard-coded values with variables so triggering another release should be successful.

Environments Matter

With a successful deployment to the DAT environment we can now turn our attention to the other stages of the deployment pipeline -- DQA and PRD. The good news here is that all the work we did for DAT can easily be cloned for DQA, which can then be cloned for PRD. Here's the procedure for DQA which, don't forget, is a two-node deployment:

  1. In the Configuration section create two new release level variables:
    1. TargetNode-DQA-SQL = PRM-DQA-SQL.prm.local
    2. TargetNode-DQA-IIS = PRM-DQA-IIS.prm.local
  2. In the DAT environment click on the ellipsis and select Clone environment and name it DQA.
  3. Change the two database tasks so the Machines property is $(TargetNode-DQA-SQL).
  4. Change the two website tasks so the Machines property is $(TargetNode-DQA-IIS).
  5. In the DQA environment click on the ellipsis and select Configure variables and make the following edits:
    1. Change DomainUserForIntegratedSecurityLogin to PRM\CU-DQA
    2. Click on the padlock icon for the DomainUserForIntegratedSecurityPassword variable to clear it then re-enter the password and click the padlock icon again to make it a secret. Don't miss this!
    3. Change SqlServerName to PRM-DQA-SQL
  6. In the DQA environment click on the ellipsis and select Deployment conditions and set Trigger to No automated deployment.

With everything saved, and assuming the PRM-DQA-SQL and PRM-DQA-IIS nodes are running, the release can now be triggered. Assuming the deployment to DAT was successful the release will wait for DQA to be manually deployed (almost certainly what is required as manual testing could be going on here):

web-portal-contosouniversity-release-definition-manual-deploy-of-DQA

To keep things simple I didn't assign any approvals for this release (ie they were all automatic) but do bear in mind there is some rich and flexible functionality available around this. If all is well you should be able to browse Contoso University on http://prm-dqa-iis. I won't describe cloning DQA to create PRD as it's very similar to the process above. Just don't forget to re-enter cloned password values! Do note that in the Environment Variables view of the Configuration section you can view and edit (but not create) the environment-level variables for all environments:

web-portal-contosouniversity-release-definition-all-environment-variables

This is a great way to check that variables are the correct values for the different environments.

And Finally...

There's plenty more functionality in Release Management that I haven't described but that's as far as I'm going in this post. One message I do want to get across is that the procedure I describe in this post is not meant to be a statement on the definitive way of using Release Management. Rather, it's designed to show what's possible and to get you thinking about your own situation and some of the factors that you might need to consider. As just one example, if you only have one application then the Visual Studio solution for the application is probably fine for the DSC code that installs IIS and SQL Server. However if you have multiple similar applications then almost certainly you don't want all that code repeated in every solution. Moving this code to the point at which the nodes are created could be an option here -- or perhaps there is a better way!

That's it for the moment but rest assured there's lots more to be covered in this series. If you want the final code that accompanies this post I've created a release here on my GitHub site.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Infrastructure as Code with Azure Resource Manager Templates

Posted by Graham Smith on February 25, 2016

So far in this blog post series on Continuous Delivery with TFS / VSTS we have gradually worked our way to the position of having a build of our application which is almost ready to be deployed to target servers (or nodes if you prefer) in order to conduct further testing before finally making its way to production. This brings us to the question of how these nodes should be provisioned and configured. In my previous series on continuous delivery, deployment was to nodes that had been created and configured manually. However with the wealth of automation tools available to us we can -- and should -- improve on that. This post explains how to achieve the first of those tasks -- provisioning a Windows Server virtual machine using Azure Resource Manager templates. A future post will deal with the configuration side of things using PowerShell DSC.

Before going further I should point out that this post is a bit different from my other posts in the sense that it is very specific to Azure. If you are attempting to implement continuous delivery in an on premises situation chances are that the specifics of what I cover here are not directly usable. Consequently, I'm writing this post in the spirit of getting you to think about this topic with a view to investigating what's possible for your situation. Additionally, if you are not in the continuous delivery space and have stumbled across this post through serendipity I do hope you will be able to follow along with my workflow for creating templates. Once you get past the Big Picture section it's reasonably generic and you can find the code that accompanies this post at my GitHub repository here.

The Infrastructure Big Picture

In order to understand where I am going with this post it's probably helpful to understand the big picture as it relates to this blog series on continuous delivery. Our final continuous delivery pipeline is going to consist of three environments:

  • DAT -- development automated test where automated UI testing takes place. This will be an ‘all in one' VM hosting both SQL Server and IIS. Why have an all-in-one VM? It's because the purpose of this environment is to run automated tests, and if those tests fail we want a high degree of certainty that it was because of code and not any other factors such as network problems or a database timeout. To achieve that state of certainty we need to eliminate as many influencing variables as possible, and the simplest way of achieving that is to have everything running on the same VM. It breaks the rule about early environments reflecting production but if you are in an on premises situation and your VMs are on hand-me-down infrastructure and your network is busy at night (when your tests are likely running) backing up VMs and goodness knows what else then you might come to appreciate the need for an all-in-one VM for automated testing.
  • DQA -- development quality assurance where high-value manual testing takes place. This really does need to reflect production so it will consist of a database VM and a web server VM.
  • PRD -- production for the live code. It will consist of a database VM and a web server VM.

These environments map out to the following infrastructure I'll be creating in Azure:

  • PRM-DAT -- resource group to hold everything for the DAT environment
    • PRM-DAT-AIO -- all in one VM for the DAT environment
  • PRM-DQA -- resource group to hold everything for the DQA environment
    • PRM-DQA-SQL -- database VM for the DQA environment
    • PRM-DQA-IIS -- web server VM for the DQA environment
  • PRM-PRD -- resource group to hold everything for the PRD environment
    • PRM-PRD-SQL -- database VM for the PRD environment
    • PRM-PRD-IIS -- web server VM for the PRD environment

The advantage of using resource groups as containers is that an environment can be torn down very easily. This makes more sense when you realise that it's not just the VM that needs tearing down but also storage accounts, network security groups, network interfaces and public IP addresses.

Overview of the ARM Template Development Workflow

We're going to be creating our infrastructure using ARM templates which is a declarative approach, ie we declare what we want and some other system ‘makes it so'. This is in contrast to an imperative approach where we specify exactly what should happen and in what order. (We can use an imperative approach with ARM using PowerShell but we don't get any parallelisation benefits.) If you need to get up to speed with ARM templates I have a Getting Started blog post with a collection of useful links here. The problem -- for me at least -- is that although Microsoft provide example templates for creating a Windows Server VM (for instance) they are heavily parametrised and designed to work as standalone VMs, and it's not immediately obvious how they can fit in to an existing network. There's also the issue that at first glance all that JSON can look quite intimidating! Fear not though, as I have figured out what I hope is a great workflow for creating ARM templates which is both instructive and productive. It brings together a number of tools and technologies and I make the assumption that you are familiar with these. If not I've blogged about most of them before. A summary of the workflow steps with prerequisites and assumptions is as follows:

  • Create a Model VM in Azure Portal. The ARM templates that Microsoft provide tend to result in infrastructure that have different internal names compared with the same infrastructure created through the Azure Portal. I like how the portal names things and in order to help replicate that naming convention for VMs I find it useful to create a model VM in the portal whose components I can examine via the Azure Resource Explorer.
  • Create a Visual Studio Solution. Probably the easiest way to work with ARM templates is in Visual Studio. You'll need the Azure SDK installed to see the Azure Resource Group project template -- see here for more details. We'll also be using Visual Studio to deploy the templates using PowerShell and for that you'll need the PowerShell Tools for Visual Studio extension. If you are new to this I have a Getting Started blog post here. We'll be using Git in either TFS or VSTS for version control but if you are following this series we've already covered that.
  • Perform an Initial Deployment. There's nothing worse than spending hours coding only to find that what you're hoping to do doesn't work and that the problem is hard to trace. The answer of course is to deploy early and that's the purpose of this step.
  • Build the Deployment Template Resource by Resource Using Hard-coded Values. The Microsoft templates really go to town when it comes to implementing variables and parameters. That level of detail isn't required here but it's hard to see just how much is required until the template is complete. My workflow involves using hard-coded values initially so the focus can remain on getting the template working and then refactoring later.
  • Refactor the Template with Parameters, Variables and Functions. For me refactoring to remove the hard-coded values is one of the most fun and rewarding parts of the process. There's a wealth of programming functionality available in ARM templates -- see here for all the details.
  • Use the Template to Create Multiple VMs. We've proved the template can create a single VM -- what about multiple VMs? This section explores the options.

That's enough overview -- time to get stuck in!

Create a Model VM in Azure Portal

As above, the first VM we'll create using an ARM template is going to be called PRM-DAT-AIO in a resource group called PRM-DAT. In order to help build the template we'll create a model VM called PRM-DAT-AAA in a resource group called PRM-DAT via the Azure Portal. The procedure is as follows:

  • Create a resource group called PRM-DAT in your preferred location -- in my case West Europe.
  • Create a standard (Standard-LRS) storage account in the new resource group -- I named mine prmdataaastorageaccount. Don't enable diagnostics.
  • Create a Windows Server 2012 R2 Datacenter VM (size right now doesn't matter much -- I chose Standard DS1 to keep costs down) called PRM-DAT-AAA based on the PRM-DAT resource group, the prmdataaastorageaccount storage account and the prmvirtualnetwork that was created at the beginning of this blog series as the common virtual network for all VMs. Don't enable monitoring.
  • In Public IP addresses locate PRM-DAT-AAA and under configuration set the DNS name label to prm-dat-aaa.
  • In Network security groups locate PRM-DAT-AAA and add the following tag: displayName : NetworkSecurityGroup.
  • In Network interfaces locate PRM-DAT-AAAnnn (where nnn represents any number) and add the following tag: displayName : NetworkInterface.
  • In Public IP addresses locate PRM-DAT-AAA and add the following tag: displayName : PublicIPAddress.
  • In Storage accounts locate prmdataaastorageaccount and add the following tag: displayName : StorageAccount.
  • In Virtual machines locate PRM-DAT-AAA and add the following tag: displayName : VirtualMachine.

You can now explore all the different parts of this VM in the Azure Resource Explorer. For example, the public IP address should look similar to:

azure-resource-explorer-public-ip-address

Create a Visual Studio Solution

We'll be building and running our ARM template in Visual Studio. You may want to refer to previous posts (here and here) as a reminder for some of the configuration steps which are as follows:

  • In the Web Portal navigate to your team project and add a new Git repository called Infrastructure.
  • In Visual Studio clone the new repository to a folder called Infrastructure at your preferred location on disk.
  • Create a new Visual Studio Solution (not project!) called Infrastructure one level higher than the Infrastructure folder. This effectively stops Visual Studio from creating an unwanted folder.
  • Add .gitignore and .gitattributes files and perform a commit.
  • Add a new Visual Studio Project to the solution of type Azure Resource Group called DeploymentTemplates. When asked to select a template choose anything.
  • Delete the Scripts, Templates and Tools folders from the project.
  • Add a new project to the solution of type PowerShell Script Project called DeploymentScripts.
  • Delete Script.ps1 from the project.
  • In the DeploymentTemplates project add a new Azure Resource Manager Deployment Project item called WindowsServer2012R2Datacenter.json (spaces not allowed).
  • In the DeploymentScripts project add a new PowerShell Script item for the PowerShell that will create the PRM-DAT resource group with a PRM-DAT-AIO server -- I called my file Create PRM-DAT.ps1.
  • Perform a commit and sync to get everything safely under version control.

With all that configuration you should have a Visual Studio solution looking something like this:

visual-studio-infrastructure-solution

Perform an Initial Deployment

It's now time to write just enough code in Create PRM-DAT.ps1 to prove that we can initiate a deployment from PowerShell. First up is the code to authenticate to Azure PowerShell. I have the authentication code which was the output of this post wrapped in a function called Set-AzureRmAuthenticationForMsdnEnterprise, which in turn is contained in a PowerShell module file called Authentication.psm1. This file is deployed to C:\Users\Graham\Documents\WindowsPowerShell\Modules\Authentication, which then allows me to call Set-AzureRmAuthenticationForMsdnEnterprise from anywhere on my development machine. (Although this function could clearly be made more generic with the use of some parameters I've consciously chosen not to so I can check my code in to GitHub without worrying about exposing any authentication details.) The initial contents of Create PRM-DAT.ps1 should end up looking as follows:
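Something along these lines, in other words. The relative path to the template is illustrative and will depend on how you run the script:

```powershell
# Create PRM-DAT.ps1 -- just enough to prove a deployment can be started from PowerShell
Set-AzureRmAuthenticationForMsdnEnterprise

# Idempotent -- creates the resource group if it doesn't already exist
New-AzureRmResourceGroup -Name 'PRM-DAT' -Location 'West Europe' -Force

# Deploy the (currently empty) template to the resource group
New-AzureRmResourceGroupDeployment -ResourceGroupName 'PRM-DAT' `
    -TemplateFile '..\DeploymentTemplates\WindowsServer2012R2Datacenter.json' `
    -Verbose
```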

Running this code in Visual Studio should result in a successful outcome, although admittedly not much has happened because the resource group already existed and the deployment template is empty. Nonetheless, it's progress!

Build the Deployment Template Resource by Resource Using Hard-coded Values

The first resource we'll code is a storage account. In the DeploymentTemplates project open WindowsServer2012R2Datacenter.json, which as things stand just contains some boilerplate JSON for the different sections of the template that we'll be completing. What you should notice is that the JSON Outline window is now available to assist with editing the template. Right-click resources and choose Add New Resource:

visual-studio-json-outline-add-new-resource

In the Add Resource window find Storage Account and add it with the name (actually the display name) of StorageAccount:

visual-studio-json-outline-add-new-resource-storage-account

This results in boilerplate JSON being added to the template along with a variable for the actual storage account name and a parameter for the account type. We'll use a variable later but for now delete the variable and parameter that were added -- you can either use the JSON Outline window or manually edit the template.

We now need to edit the properties of the resource with actual values that can create (or update) the resource. In order to understand what to add we can use the Azure Resource Explorer to navigate down to the storageAccounts node of the MSDN subscription where we created prmdataaastorageaccount:

azure-resource-explorer-storage-accounts-prmdataaastorageaccount

In the right-hand pane of the explorer we can see the JSON that represents this concrete resource, and although the property names don't always match exactly it should be fairly easy to see how the ‘live' values can be used as a guide to populating the ones in the deployment template:

azure-resource-explorer-storage-accounts-prmdataaastorageaccount-json

So, back in the deployment template, the unassigned properties can be assigned the following values:

  • "name": "prmdataiostorageaccount"
  • "location": "West Europe"
  • "accountType": "Standard_LRS"

The resulting JSON should be similar to:
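Within the template's resources array, something along these lines; the apiVersion is whatever the tooling added and may differ:

```json
{
  "name": "prmdataiostorageaccount",
  "type": "Microsoft.Storage/storageAccounts",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "StorageAccount"
  },
  "properties": {
    "accountType": "Standard_LRS"
  }
}
```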

Save the template and switch to Create PRM-DAT.ps1 to run the deployment script which should create the storage account. You can verify this either via the portal or the explorer.

The next resource we'll create is a NetworkSecurityGroup, which has an extra twist in that at the time of writing adding it to the template isn't supported by the JSON Outline window. There are a couple of ways to go here -- either type the JSON by hand or use the Create function in the Azure Resource Explorer to generate some boilerplate JSON. This latter technique actually generates more JSON than is needed so in this case is something of a hindrance. I just typed the JSON directly and made use of the IntelliSense options in conjunction with the PRM-DAT-AAA network security group values via the Azure Resource Explorer. The JSON that needs adding is as follows:
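A sketch based on the PRM-DAT-AAA values; the default RDP rule shown here is what the portal typically creates, so treat the rule details as indicative:

```json
{
  "name": "PRM-DAT-AIO",
  "type": "Microsoft.Network/networkSecurityGroups",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "NetworkSecurityGroup"
  },
  "properties": {
    "securityRules": [
      {
        "name": "default-allow-rdp",
        "properties": {
          "priority": 1000,
          "protocol": "Tcp",
          "access": "Allow",
          "direction": "Inbound",
          "sourceAddressPrefix": "*",
          "sourcePortRange": "*",
          "destinationAddressPrefix": "*",
          "destinationPortRange": "3389"
        }
      }
    ]
  }
}
```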

Note that you'll need to separate this resource from the storage account resource with a comma to ensure the syntax is valid. Save the template, run the deployment and refresh the Azure Resource Explorer. You can now compare the new PRM-DAT-AIO and PRM-DAT-AAA network security groups in the explorer to validate the JSON that creates PRM-DAT-AIO. Note that by zooming out in your browser you can toggle between the two resources and see that it is pretty much just the etag values that are different.

The next resource to add is a public IP address. This can be added from the JSON Outline window using PublicIPAddress as the name but it also wants to add a reference to itself to a network interface which in turn wants to reference a virtual network. We are going to use an existing virtual network but we do need a network interface, so give the new network interface a name of NetworkInterface and the new virtual network can be any temporary name. As soon as the new JSON components have been added delete the virtual network and all of the variables and parameters that were added. All this makes sense when you do it -- trust me!

Once edited with the appropriate values the JSON for the public IP address should be as follows:
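Along these lines (again, the apiVersion is whatever the tooling added):

```json
{
  "name": "PRM-DAT-AIO",
  "type": "Microsoft.Network/publicIPAddresses",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "PublicIPAddress"
  },
  "properties": {
    "publicIPAllocationMethod": "Dynamic",
    "dnsSettings": {
      "domainNameLabel": "prm-dat-aio"
    }
  }
}
```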

The edited JSON for the network interface should look similar to the code that follows, but note I've replaced my MSDN subscription GUID with an ellipsis.
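A sketch follows. The resource group and subnet names in the hard-coded ids are illustrative, the number suffix on the interface name is arbitrary, and the subscription GUID stays as an ellipsis:

```json
{
  "name": "prm-dat-aio123",
  "type": "Microsoft.Network/networkInterfaces",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "NetworkInterface"
  },
  "dependsOn": [
    "Microsoft.Network/publicIPAddresses/PRM-DAT-AIO",
    "Microsoft.Network/networkSecurityGroups/PRM-DAT-AIO"
  ],
  "properties": {
    "networkSecurityGroup": {
      "id": "/subscriptions/.../resourceGroups/PRM-DAT/providers/Microsoft.Network/networkSecurityGroups/PRM-DAT-AIO"
    },
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Dynamic",
          "publicIPAddress": {
            "id": "/subscriptions/.../resourceGroups/PRM-DAT/providers/Microsoft.Network/publicIPAddresses/PRM-DAT-AIO"
          },
          "subnet": {
            "id": "/subscriptions/.../resourceGroups/PRM-CORE/providers/Microsoft.Network/virtualNetworks/prmvirtualnetwork/subnets/default"
          }
        }
      }
    ]
  }
}
```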

It's worth remembering at this stage that we're hard-coding references to other resources. We'll fix that up later on, but for the moment note that the network interface needs to know what virtual network subnet it's on (created in an earlier post), and which public IP address and network security group it's using. Also note the dependsOn section which ensures that these resources exist before the network interface is created. At this point you should be able to run the deployment and confirm that the new resources get created.

Finally we can add a Windows virtual machine resource. This is supported from the JSON Outline window, however this resource wants to reference a storage account and virtual network. The storage account exists and that should be selected, but once again we'll need to use a temporary name for the virtual network and delete it and the variables and parameters. Name the virtual machine resource VirtualMachine. Edit the JSON with appropriate hard-coded values which should end up looking as follows:
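Something like the following; the admin username and password placeholders are obviously just that, and the apiVersion is whatever the tooling added:

```json
{
  "name": "PRM-DAT-AIO",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "VirtualMachine"
  },
  "dependsOn": [
    "Microsoft.Storage/storageAccounts/prmdataiostorageaccount",
    "Microsoft.Network/networkInterfaces/prm-dat-aio123"
  ],
  "properties": {
    "hardwareProfile": {
      "vmSize": "Standard_DS1"
    },
    "osProfile": {
      "computerName": "PRM-DAT-AIO",
      "adminUsername": "PRM-DAT-AIO-admin",
      "adminPassword": "YourSecurePassword"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "2012-R2-Datacenter",
        "version": "latest"
      },
      "osDisk": {
        "name": "osdisk",
        "vhd": {
          "uri": "http://prmdataiostorageaccount.blob.core.windows.net/vhds/osdisk.vhd"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      }
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "/subscriptions/.../resourceGroups/PRM-DAT/providers/Microsoft.Network/networkInterfaces/prm-dat-aio123"
        }
      ]
    }
  }
}
```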

Running the deployment now should result in a complete working VM which you can remote in to.

The final step before going any further is to tear down the PRM-DAT resource group and check that a fully working PRM-DAT-AIO VM is created. I added a Destroy PRM-DAT.ps1 file to my DeploymentScripts project with the following code:
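It's only a couple of lines:

```powershell
# Destroy PRM-DAT.ps1 -- tear the whole environment down by removing its resource group
Set-AzureRmAuthenticationForMsdnEnterprise
Remove-AzureRmResourceGroup -Name 'PRM-DAT' -Force
```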

Refactor the Template with Parameters, Variables and Functions

It's now time to make the template reusable by refactoring all the hard-coded values. Each situation is likely to vary but in this case my specific requirements are:

  • The template will always create a Windows Server 2012 R2 Datacenter VM, but obviously the name of the VM needs to be specified.
  • I want to restrict my VMs to small sizes to keep costs down.
  • I'm happy for the VM username to always be the same so this can be hard-coded in the template, whilst I want to pass the password in as a parameter.
  • I'm adding my VMs to an existing virtual network in a different resource group and I'm making a conscious decision to hard-code these details in.
  • I want the names of all the different resources to be generated using the VM name as the base.

These requirements gave rise to the following parameters, variables and a resource function:

  • nodeName parameter -- this is used via variable conversions throughout the template to provide consistent naming of objects. My node names tend to be of the format used in this post and that's the only format I've tested. Beware if your node names are different as there are naming rules in force.
  • nodeNameToUpper variable -- used where I want to ensure upper case for my own naming convention preferences.
  • nodeNameToLower variable -- used where lower case is a requirement of ARM eg where nodeName forms part of a DNS entry.
  • vmSize parameter -- restricts the template to creating VMs that are not going to burn Azure credits too quickly and which use standard storage.
  • storageAccountName variable -- creates a name for the storage account that is based on a lower case nodeName.
  • networkInterfaceName variable -- creates a name for the network interface based on a lower case nodeName with a number suffix.
  • virtualNetworkSubnetName variable -- constructs the full resource ID of the virtual network subnet, which lives in a different resource group and so needs a bit of construction work.
  • vmAdminUsername variable -- creates a username for the VM based on the nodeName. You'll probably want to change this.
  • vmAdminPassword parameter -- the password for the VM passed-in as a secure string.
  • resourceGroup().location resource function -- a neat way to avoid hard-coding the location into the template.
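
To give a flavour of how the naming hangs together, the variables section ends up looking something like this (the exact concatenations, suffixes and network names below are illustrative assumptions rather than a literal copy of my template):

```json
"variables": {
  "nodeNameToUpper": "[toUpper(parameters('nodeName'))]",
  "nodeNameToLower": "[toLower(parameters('nodeName'))]",
  "storageAccountName": "[concat(variables('nodeNameToLower'), 'storage')]",
  "networkInterfaceName": "[concat(variables('nodeNameToLower'), '-nic-1')]",
  "virtualNetworkSubnetName": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/PRM-CORE/providers/Microsoft.Network/virtualNetworks/<your-vnet>/subnets/<your-subnet>')]",
  "vmAdminUsername": "[concat(variables('nodeNameToLower'), 'admin')]"
}
```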

Of course, these refactorings shouldn't affect the functioning of the template, and tearing down the PRM-DAT resource group and recreating it should result in the same resources being created.
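
By way of example, recreating the environment from PowerShell might then look something like this (the template file name, location and VM size are illustrative):

```powershell
# Recreate the PRM-DAT resource group and deploy the refactored template to it
$password = Read-Host "Enter the VM admin password" -AsSecureString
New-AzureRmResourceGroup -Name "PRM-DAT" -Location "West Europe" -Force
New-AzureRmResourceGroupDeployment -ResourceGroupName "PRM-DAT" `
    -TemplateFile ".\Windows Server 2012 R2 Datacenter.json" `
    -nodeName "PRM-DAT-AIO" `
    -vmSize "Standard_D2" `
    -vmAdminPassword $password
```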

What about Environments where Multiple VMs are Required?

The work so far has been aimed at creating just one VM, but what if two or more VMs are needed? It's a very good question and there are at least two answers. The first involves using the template as-is and calling New-AzureRmResourceGroupDeployment in a PowerShell Foreach loop. I've illustrated this technique in Create PRM-DQA.ps1 in the DeploymentScripts project. Whilst this works very nicely the VMs are created in series rather than in parallel and, well, who wants to wait? My first thought at creating VMs in parallel was to extend the Foreach loop idea with the -parallel switch in a PowerShell workflow. The code which I was hoping would work looks something like this:
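
(A reconstructed sketch of the idea -- the workflow name, node names and template path are illustrative.)

```powershell
# What I was hoping would work: a workflow that deploys each node in parallel
workflow New-PrmEnvironment
{
    param([string[]]$NodeNames)

    foreach -parallel ($nodeName in $NodeNames)
    {
        # Other template parameters (vmSize, vmAdminPassword) omitted for brevity
        New-AzureRmResourceGroupDeployment -ResourceGroupName "PRM-DQA" `
            -TemplateFile ".\Windows Server 2012 R2 Datacenter.json" `
            -nodeName $nodeName
    }
}

New-PrmEnvironment -NodeNames "PRM-DQA-AIO", "PRM-DQA-DC"
```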

Unfortunately it seems like this idea is a dud -- see here for the details. Instead the technique appears to be to use the copy element together with the copyIndex and length functions of ARM templates as documented here. This necessitates a minor re-write of the template to pass in and use an array of node names; however, there are complications where I've used variables to construct resource names. At the time of publishing this post I'm working through these details -- keep an eye on my GitHub repository for progress.
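
The rough shape of that approach, sketched here on a simple public IP address resource rather than on my actual template, is to take an array parameter and put a copy loop on each per-VM resource:

```json
"parameters": {
  "nodeNames": {
    "type": "array"
  }
},
"resources": [
  {
    "name": "[concat(parameters('nodeNames')[copyIndex()], '-pip')]",
    "type": "Microsoft.Network/publicIPAddresses",
    "apiVersion": "2015-06-15",
    "location": "[resourceGroup().location]",
    "copy": {
      "name": "pipCopy",
      "count": "[length(parameters('nodeNames'))]"
    },
    "properties": {
      "publicIPAllocationMethod": "Dynamic"
    }
  }
]
```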

Wrap-Up

Before actually wrapping up I'll make a quick mention of the template's outputs node. A handy use for this is debugging, for example where you are trying to construct a complicated variable and want to check its value. I've left an example in the template to illustrate.
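
Its shape is along these lines (the variable shown is just one candidate for checking):

```json
"outputs": {
  "debugVirtualNetworkSubnetName": {
    "type": "string",
    "value": "[variables('virtualNetworkSubnetName')]"
  }
}
```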

I'll finish with a question that I've been pondering while writing this post: just because we can create and configure VMs at the push of a button, does that mean we should create and configure new VMs every time we deploy our application? My thinking at the moment is probably not, because of the time it would add, but as always it depends. If you want a clean start every time you deploy then you certainly have that option, but my mind is already turning to the additional amount of time it's going to take to actually configure these VMs with IIS and SQL Server. Never say never though, as who knows what's in store for the future? As Azure (presumably) gets faster and VMs become more lightweight with the arrival of Nano Server, perhaps creating and configuring VMs from scratch as part of the deployment pipeline will be so fast that there will be no reason not to. Or maybe we'll all be using containers by then...

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Enhancing a CI Build to Help Bake Quality In

Posted by Graham Smith on February 16, 2016

In the previous instalment of this blog post series on Continuous Delivery with TFS / VSTS we created a basic CI build. In this post we enhance the CI build with further configurations that can help bake quality in to the application. Just a reminder that I’m using TFS to create my CI build as it’s the lowest common denominator. If you are using VSTS you can obviously follow along but do note that screenshots might vary slightly.

Set Branch Policies

Although it's only marginally related to the build, this is probably a good point to set branch policies for the master branch of the ContosoUniversity repository. In the Web Portal for the team project click on the cog icon at the far right of the blue banner bar:

web-portal-control-panel-icon

This will open up the Control panel at the team project administration page. Navigate to the Version Control tab and in the Repositories pane navigate down to master. In the right pane select Branch Policies:

web-portal-control-panel-branch-policies

The branch policies window contains configuration settings that block poor code from polluting the code base. The big change is that the workflow moves from being able to commit to the master branch directly to having to use pull requests to make commits. This is a great way of enforcing code reviews and I have more detail on the workflow here. In the screenshot above I've selected all the options, including selecting the ContosoUniversity.CI build to be run when a pull request is created. This blocks any pull request that would cause the build to fail. The other options are self-explanatory, although enforcing a linked work item can be a nuisance when you are testing. If you are testing on your own make sure you check Allow users to approve their own changes, otherwise this will cause you a problem.

Testing Times

The Contoso University sample application contains MSTest unit tests and we want these to be run after the build to provide early feedback of any failing tests. This is achieved by adding a new build step. On the Build tab in the Web Portal navigate to the ContosoUniversity.CI build and place it in edit mode. Click on Add build step and from the Add Tasks window filter on Test and choose Visual Studio Test.

For our simple scenario there are only three settings that need addressing:

  • Test Assembly -- we only want unit tests to run and ContosoUniversity contains other tests so changing the default setting to **\*UnitTests*.dll;-:**\obj\** fixes this.
  • Platform -- here we use the $(BuildPlatform) variable defined in the build task.
  • Configuration -- here we use the $(BuildConfiguration) variable defined in the build task.

web-portal-visual-studio-test-unit-test-configuration

With the changes saved queue the build and observe the build report advising that the tests were run and passed:

web-portal-build-build-succeeded-with-unit-tests-passing

Code Coverage

In the above screenshot you'll notice that there is no code coverage data available. This can be fixed by going back to the Visual Studio Test task and checking the Code Coverage Enabled box. Queueing a new build now gives us that data:

web-portal-build-build-succeeded-with-code-coverage-enabled

Of slight concern is that the code coverage reported from the build (2.92%) was marginally higher than that reported by analysing code coverage in Visual Studio (2.89%). Whilst the values are the same for all practical purposes, the results suggest that there is something odd going on here that warrants further investigation.

Code Analysis

A further feedback item which is desirable to have in the build report is the results of code analysis. (As a reminder, we configured this in Visual Studio in this post so that the results are available after building locally.) Displaying code analysis results in the build report is straightforward for XAML builds as this is an out-of-the-box setting -- see here. I haven't found this to be quite so easy with the new build system. There's no setting as with XAML builds, but that shouldn't be a problem since it's just an MSBuild argument. It feels like the correct argument should be /p:RunCodeAnalysis=Always (as this shouldn't care how code analysis is configured in Visual Studio), however in my testing I couldn't get this to work with any combination of the Visual Studio Build / MSBuild task and release / debug configurations. The next argument I tried was /p:RunCodeAnalysis=True. This worked with either the Visual Studio Build or MSBuild task, but to get it to work in a release configuration you will need to ensure that code analysis has been enabled for the release configuration in Visual Studio (and the change has been committed!). The biggest issue though was that I never managed to get more than 10 code analysis rules displayed in the build report when there were 85 reported in the build output. Perhaps I'm missing something here -- if you can shed any light on this please let me know!
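
If you want to sanity-check the argument locally before fiddling with the build definition, the equivalent command-line build (assuming msbuild.exe is on your path and you are in the solution folder) is something like:

```powershell
# Build the solution locally with code analysis forced on for the release configuration
& msbuild .\ContosoUniversity.sln /p:Configuration=Release /p:RunCodeAnalysis=True
```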

Don't Ignore the Feedback!

Finally, it may sound obvious but there's little point in configuring the build report to contain feedback on the quality of the build if nobody is looking at the reports and doing something to remedy problems. However you do it, this needs to be part of your team's daily routine!

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Configuring a Basic CI Build with Team Foundation Build 2015

Posted by Graham Smith on February 4, 2016

In this instalment of my blog post series on Continuous Delivery with TFS / VSTS we configure a continuous integration (CI) build for our Contoso University sample application using Team Foundation Build 2015. Although the focus of this post is on explaining how to configure a build in TFS or VSTS it is worth a few words on the bigger picture as far as builds are concerned.

One important aspect to grasp is that for a given application you are likely to need several different builds for different parts of the delivery pipeline. The focus in this post is a CI build where the main aim is the early detection of problems and additional configurations that help bake quality in. The output from the build is really just information, ie feedback. We're not going to do anything with the build itself so there is no need to capture the compiled output of the build. This is just as well since the build might run very frequently and consequently needs to have a low drain on build server resources.

In addition to a CI build a continuous delivery pipeline is likely to need other types of build. These might include one to support technical debt management (we'll be looking at using SonarQube for this in a later post but look here if you want a sneak preview of what's in store) and one or more that capture the compiled output of the build and kick-off a release to the pipeline.

Before we get going it's worth remembering that as far as new features are concerned VSTS is always a few months ahead of TFS. Consequently I'm using TFS to create my CI build as it's the lowest common denominator. If you are using VSTS you can obviously follow along but do note that screenshots might vary slightly. It's also worth pointing out that starting with TFS 2015 there is a brand new build system that is completely different from the (still supported) XAML system that has been around for the past few years. The new system is recommended for any new implementations and that's what we're using here. If you want to learn more I have a Getting Started blog post here.

Building Blocks

The new build system (which Microsoft abbreviates as TFBuild) is configured from the Web Portal rather than Visual Studio, so head over there and navigate to your team project and then to the Build tab. Click on the green plus icon in the left-hand pane which brings up the Definition Templates window. There are a couple of ways to go from here but for demonstration purposes select the Empty option:

web-portal-build-definition-templates-empty

This creates a new empty definition, which does involve a bit of extra work but is worth it the first time to help understand what's going on. Before proceeding click on Save and provide a name (I chose ContosoUniversity.CI) and optionally a comment for the version control history. Next click the green plus icon next to Add build step to display the Add Tasks window. Take a minute to marvel at the array of possibilities before choosing Visual Studio Build. This gives us a skeleton build which needs configuring by working through the different tabs:

web-portal-build-skeleton-for-configuring

There are many items that can be configured across the different tabs, but I'm restricting my explanation to the ones that are not pre-populated and which are required. You can find out more about the Visual Studio Build task here.

On the Build tab:

  • Platform relates to whether the build should be x86, x64 or any cpu. Whilst you could specify a value here the recommendation is to use a build variable (defined under Variables -- see below) as Platform is a setting used in other build tasks. An additional advantage is that the value of the variable can be changed when the build is queued. As per the documentation I specified $(BuildPlatform) as the variable.
  • Configuration is the Visual Studio Solution Configuration you want to build -- typically debug or release. This is another setting used in other build tasks and which warrants a variable and I again followed the documentation and used $(BuildConfiguration).
  • Clean forces the code to be refreshed on every build and is recommended to avoid possible between-build discrepancies.

web-portal-visual-studio-build-build-tab

On the Repository tab:

  • Clean here appears to be the same as Clean on the Build tab. I'm not sure why there is duplication, or why it is a check box on the Build tab and a dropdown on this tab, but set it to true.

web-portal-visual-studio-build-repository-tab

On the Variables tab:

  • Add a variable named BuildPlatform, specify a value of any cpu and check Allow at Queue Time.
  • Add a variable named BuildConfiguration, specify a value of release and check Allow at Queue Time.

On the Triggers tab:

  • Continuous Integration should be checked.

web-portal-visual-studio-build-triggers-tab

That should be enough configuration to get the build to go green. Perform a save and click on Queue build (next to the save button). You will see the output of the build process which should result in the build succeeding:

web-portal-build-build-succeeded

It's All in the Name

At the moment our build doesn't have a name, so to fix that first head over to the Variables tab and add MajorVersion and MinorVersion variables and give them values of 1 and 0 respectively. Also check the Allow at Queue Time boxes. Now on the General tab enter $(BuildDefinitionName)_$(MajorVersion).$(MinorVersion).$(Year:yyyy)$(DayOfYear)$(Rev:.r) in the Build number format text box. Save the definition and queue a new build. The name should be something like ContosoUniversity.CI_1.0.2016019.2. One nice touch is that the revision number is reset on a daily basis, providing an easy way to keep track of the builds on a given day.

At this point we have got the basics of a CI build configured and working nicely. In the next post we look at further configurations focussed on helping to bake quality in to the application.

Cheers -- Graham

Getting Started with Team Foundation Build 2015

Posted by Graham Smith on February 4, 2016

Hopefully by now everyone who works with TFS and / or VSTS knows that there is a new build system. There's no immediate panic as the XAML builds we've all been working with are still supported, but for any new implementations using TFS 2015 or VSTS, TFBuild 2015 is the way forward. If you haven't yet had a chance to investigate the new build system then I encourage you to check out my pick of the best links for getting started:

If you are interested in buying a book which covers TFBuild 2015 then I can recommend Continuous Delivery with Visual Studio ALM 2015. I've read it and it's excellent.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Configuring a Sample Application for Git in Visual Studio 2015

Posted by Graham Smith on January 21, 2016

In this instalment of my blog post series on Continuous Delivery with TFS / VSTS we configure a sample application to work with Git in Visual Studio 2015 and perform our first commit. This post assumes that nothing has changed since the last post in the series.

Configure Git in Visual Studio 2015

In order to start working with Git in Visual Studio there are some essential and non-essential settings to configure. From the Team Explorer -- Home page choose Settings > Git > Global Settings. Visual Studio will have filled in as many settings as possible and the panel will look similar to this:

visual-studio-git-settings

Complete the form as you see fit. The one change I make is to trim Repos from the Default Repository Location (to try and reduce max filepath issues) and then from Windows Explorer create the Source folder with child folders named GitHub, TFS and VSTS. This way I can keep repositories from the different Git hosts separate and avoid name clashes if I have repositories with the same name in different hosts.
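
If you prefer to script that folder structure, something along these lines (assuming C:\Source as the root) does the job:

```powershell
# Create a root Source folder with a child folder per Git host
New-Item -ItemType Directory -Force -Path 'C:\Source\GitHub', 'C:\Source\TFS', 'C:\Source\VSTS'
```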

Clone the Repository

In the previous post we created ContosoUniversity (the name of the sample application we'll be working with) repositories in both TFS and VSTS. Before we go any further we need to clone the ContosoUniversity repository to our development workstation.

If you are working with TFS we finished up last time with the Team Explorer -- Home page advising that we must clone the repository. However, at the time of writing Visual Studio 2015.1 seemed to have a bug (TFS only) where it was still remembering the PRM repository that we deleted and wasn't allowing the ContosoUniversity repository to be cloned. Frustratingly, after going down several blind avenues the only way I found to fix this was to temporarily recreate the PRM repository from the Web Portal. With this done, clicking on the Manage Connections icon on the Team Explorer toolbar now shows the ContosoUniversity repository:

visual-studio-display-repository-for-cloning

Double-click the repository which will return you to the Team Explorer home page with links to clone the repository. Clicking one of these links now shows the Clone Repository details with the correct remote settings:

visual-studio-clone-repository-correct-settings

Make sure you change the local settings to the correct folder structure (if you are following my convention this will be ...\Source\TFS\ContosoUniversity) and click on Clone. Finally, return to the Web Portal and delete the PRM repository.

The procedure for cloning the ContosoUniversity repository if you are working with VSTS is similar but without the bug. However in the previous post we hadn't got as far as connecting VSTS to Visual Studio. The procedure for doing this is the same as for TFS however when you get to the Add Team Foundation Server dialog you need to supply the URI for your VSTS subscription rather than the TFS instance name:

visual-studio-add-team-foundation-server-for-vsts

With your VSTS subscription added you will eventually end up back at the Team Explorer -- Home page where you can clone the repository as for TFS but with a local path of ...\Source\VSTS\ContosoUniversity.

Add the Contoso University Sample Application

The Contoso University sample application I use is an ASP.NET MVC application backed by a SQL Server database. Its origins are the MVC Getting Started area of Microsoft's www.asp.net site, however I've made a few changes, in particular to how the database side of things is managed. I'll be explaining a bit more about all this in a later post but for now you can download the source code from my GitHub repository here using the Download ZIP button. Once downloaded, unblock the file and extract the contents.

One of the key things about version control -- and Git in particular -- is that you want to avoid polluting it with files that are not actually part of the application (such as Visual Studio settings files) or files that can be recreated from the source code (such as binaries). There can be lots of this stuff but handily Git provides a solution by way of the .gitignore file, and Microsoft adds icing to the cake by providing a version specifically tailored to Visual Studio. Although the sample code from my GitHub repository contains a .gitignore, I've observed odd behaviours if this file isn't part of the repository before an existing Visual Studio solution is added, so my technique is to add it first. From Team Explorer -- Home navigate to Settings > Git > Repository Settings. From there click on the Add links to add an ignore file and an attributes file:

visual-studio-ignore-and-attributes-files

Switch back to Team Explorer -- Home and click on Changes. These two files should appear under Included Changes. Supply a commit message and from the Commit dropdown choose Commit and Sync:

visual-studio-sync-ignore-and-attributes-files

Now switch back to Windows Explorer and the extracted Contoso University, and drill down to the folder that contains ContosoUniversity.sln. Delete .gitattributes and .gitignore and then copy the contents of this folder to ...\Source\TFS\ContosoUniversity or ...\Source\VSTS\ContosoUniversity depending on which system you are using. You should end up with ContosoUniversity.sln in the same folder as .gitattributes and .gitignore.

Back over to Visual Studio and the Team Explorer -- Changes panel and you should see that there are Untracked Files:

visual-studio-changes-untracked-files

Click on Add All and then Commit and Sync these changes as per the procedure described above.
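
For anyone who prefers the command line, the rough equivalent of Add All followed by Commit and Sync is (the commit message is just an example):

```powershell
# Run from the root of the cloned ContosoUniversity repository
git add --all
git commit -m "Add Contoso University sample application"
git pull
git push
```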

That's it for this post. Next time we open up the Contoso University solution and get it configured to run.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Creating a Team Project and Git Repository

Posted by Graham Smith on January 12, 2016

In the previous post in this blog series on Continuous Delivery with TFS / VSTS we looked at how to commission either TFS or VSTS. In this seventh post in the series we continue from where we left off by creating a Team Project and then within that team project the Git repository that will hold the source code of our sample application. Before we do all that though we'll want to commission a developer workstation in Azure.

Commission a Developer Workstation

Although it's possible to connect an on premises developer workstation to either VSTS (straightforward) or our TFS instance (a bit trickier), to keep things realistic I prefer to create a Windows 10 VM in Azure that is part of the domain. Use the following guide to set up such a workstation:

  • If you are manually creating a VM in the portal it's not wholly clear where to find Windows 10 images. Just search for 10 though and they will appear at the top of the list. I chose Windows 10 Enterprise (x64) -- make sure you select to create in Resource Manager. (Although you can choose an image with Visual Studio already installed I prefer not to so I can have complete control.)
  • I created my VM as a Standard DS3 on premium storage in my PRM-CORE resource group, called PRM-CORE-DEV. Make sure you choose the domain's virtual network and the premium storage account.
  • With the VM created, log in and join the domain in the normal way. Install any Windows Updates, but note that since this is an Enterprise edition of Windows you will be stuck with the Windows version that the image was created with. For me this was version 10.0 (build 10240) even though the latest version is 1511 (build 10586) (run winver at a command prompt to get your version). I've burned quite a few hours trying to upgrade 10.0 to 1511 (using the 1511 ISO) with no joy -- I think this is because Azure doesn't support in-place upgrades.
  • Install the best version of Visual Studio 2015 available to you (for me it's the Enterprise edition) and configure as required. I prefer to register with a licence key as well as logging in with my MSDN account from Help > Register Product.
  • Install any updates from Tools > Extensions and Updates.

Create a Team Project

The procedure for creating a Team Project differs depending on whether you are using VSTS or TFS. If you are using VSTS simply navigate to the Web Portal home page of your subscription and select New:

vsts-new-team-project

This will bring up the Create New Team Project window where you can specify your chosen settings (I'm creating a project called PRM using the Scrum process template and using Git version control):

vsts-create-new-team-project

Click on Create project which will do as it suggests. That button then turns in to a Navigate to project button.

If on the other hand you are using TFS you need to create a team project from within Visual Studio, since a TFS project may have other setup work to perform for SSRS and SharePoint that doesn't apply to VSTS. The first step is to connect to TFS by navigating to Team Explorer and choosing the green Manage Connections button on the toolbar. This will display a Manage Connections link which in turn will bring up a popup menu allowing you to Connect to Team Project:

visual-studio-manage-connections

This brings up the Connect to Team Foundation Server dialog where you need to click the Servers button:

visual-studio-connect-to-team-foundation-server

This will bring up the Add/Remove Team Foundation Server dialog where you need to click the Add button:

visual-studio-add-remove-team-foundation-server

Finally we get to the Add Team Foundation Server dialog where we can add details of our TFS instance:

visual-studio-add-team-foundation-server

As long as you are using the standard path, port number and protocol you can simply type the server name and the dialog will build and display the URI in the Preview box. Click OK and work your way back down to the Connect to Team Foundation Server dialog where you can now connect to your TFS instance:

visual-studio-connect-to-team-foundation-server-instance

Ignore the message about Configure your workspace mappings (I chose Don't prompt again) and instead click on Home > Projects and My Teams > New Team Project:

visual-studio-new-team-project

This brings up a wizard to create a new team project. As with VSTS, I created a project called PRM using the Scrum process template and using Git version control. The finishing point of this process is a Team Explorer view that advises that You must clone the repository to open solutions for this project. Don't be tempted since there is another step we need to take!

Create a Repository with a Name of our Choosing

As things stand both VSTS and TFS have initialised the newly minted PRM team projects with a Git repository named PRM. In some cases this may be what you want (where the team project name is the same as the application for example) however that's not the case here since our application is called Contoso University. We'll be taking a closer look at Contoso University in the next post in this series but for now we just need to create a repository with that name. (In later posts we'll be creating other repositories in the PRM team project to hold different Visual Studio solutions.)

The procedure is the same for both VSTS and TFS. Start by navigating to the Web Portal home page for either VSTS or TFS and choose Browse under Recent projects & teams. Navigate to your team project and click on the Code link:

vsts-navigate-to-code-tab

This takes you to the Code tab which will entice you to Add some code!. Resist and instead click on the down arrow next to the PRM repository and then click on New repository:

vsts-create-new-repository

In the window that appears create a new Git repository called ContosoUniversity (no space).

In the interests of keeping things tidy we should delete the PRM repository so click the down arrow next to (the now selected) ContosoUniversity repository and choose Manage repositories. This takes you to the Version Control tab of the DefaultCollection‘s Control Panel. In the Repositories panel click on the down arrow next to the PRM repository and choose Delete repository:

vsts-delete-repository

And that's it! We're all set up for the next post in this series where we start working with the Contoso University sample application.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Commissioning TFS or VSTS

Posted by Graham Smith on December 13, 2015

[Please note that I've edited this post since first publishing it to reflect new information and / or better ways of doing things. See revision list at the end of the post.]

It's taken a few posts to get here but finally we have arrived at the point in my blog series on Continuous Delivery with TFS / VSTS where we actually get to start using TFS or VSTS -- depending on the route you choose to go down. The main focus of this post is on getting up-and-running with these tools but it's worth talking a bit first about the merits of each platform.

TFS is Microsoft's on premises ALM platform. It needs planning, provisioning of servers, and care and feeding when it's in production. With TFS you are in complete control and -- disasters notwithstanding -- all data stays in your network. VSTS on the other hand is Microsoft's hosted ALM platform. Pretty much everything is taken care of and you can be using VSTS within minutes of signing up. However, apart from being able to choose from a couple of datacentres you don't have the same control over your data as you do with TFS. Of course VSTS is secure, but if the nature of the business you are in puts restrictions on where your data can live then VSTS may not be an option.

In terms of features it's a mixed story. VSTS gets new features about every three weeks and TFS eventually catches up a few months later -- as long as you upgrade of course. However VSTS is missing some components that are present in TFS, notably reports (as in SQL Server Reporting Services, although Power BI looks like it will fill this gap) and SharePoint integration.

If there is no clear factor that helps you decide, something else to bear in mind is that there is increasing adoption of VSTS inside Microsoft itself. So if it's good enough for them...

Getting Started with VSTS

It's trivially easy to get started with VSTS and the first five users are free, so there is almost no reason to not give it a try. The process is described here and within minutes you'll have your first project created. I'll be covering connecting to and working with projects in the next post in this series but if you can't wait I'll tell you now that I'll be using the Scrum process template and choosing Git for version control.

Getting Started with TFS

In contrast to VSTS, getting started with TFS is quite a lengthy process. For an on premises installation you'll need to do some planning; however, for a POC environment in the cloud we can simplify things somewhat by using one VM to host everything. For some time now Ben Day has produced an excellent guide to installing TFS and you can find the 2015 version here. The guide pretty much covers everything you need to know in great detail and there's no point me repeating it. A high-level view of the process is as follows:

  • Create a VM in Azure. I opted for a Standard DS4 VM running Windows Server 2012 R2 Datacentre which I created in my PRM-CORE resource group as PRM-CORE-TFS using the previously created storage account and virtual network.
  • Configure a few Windows settings. The first part of Ben's guide covers installing Windows but you can skip that as it's taken care of for us by Azure. The starting point for us is the bit where Ben describes configuring a new server. One part you can skip is the Remote Desktop configuration as that's already set up for Azure VMs.
  • Install pre-requisites. Nothing much to comment on here -- Ben describes what's required with excellent screenshots.
  • Install SQL Server. Choose the latest supported version. For me this was SQL Server 2014 with SP1.
  • Install TFS 2015. Choose the latest version which for me was TFS 2015 with SP1. Be sure to create (and use!) the three domain accounts for the different parts of TFS. For me this was PRM\TFSSERVICE, PRM\TFSREPORTS and PRM\TFSBUILD. Make sure to check the User cannot change password and Password never expires settings when these accounts are created in Active Directory Users and Computers. Go with Ben's advice and skip setting up SharePoint. It's not critical and is an extra complication that's not needed.
    • One point to be aware of when installing TFS on an Azure VM is the installation wizard might suggest the D volume as the place for the file cache (D:\TfsData\ApplicationTier\_fileCache). This is a very bad thing on Azure Windows VMs as the D volume is strictly for temporary storage and you shouldn't put anything on this volume you aren't willing to lose. Make sure you change to use the C volume or some other attached disk if that's the route you've gone down.
  • Install Visual Studio 2015. Ben's guide doesn't cover this, but the easiest way to ensure all the components are available to build .NET projects is to install VS 2015 -- obviously the latest version.
    • I'm not sure if it's strictly necessary on build servers but I always licence VS installations (Help > Register Product > Unlock with a Product Key) just in case bad things happen after the 30-day trial period expires.
    • Another post-installation task I carry out is to update all the extensions from Tools > Extensions and Updates.
  • Install TFS Build. TFS Build isn't enabled as part of the core setup, which makes sense for scaled-out installations where other servers will take the build strain. (Don't forget that build has completely changed for TFS 2015, and whilst XAML builds are still supported the new way of doing things is the way forward.) For a POC installation you can get away with running build on the application tier, and installation is initiated from the Team Foundation Server Administration Console > $ServerName$ > Build:
    tfs-admin-console-build
    In reality this just opens the Team Portal Control Panel at the Agent pools tab. From here you need to deploy a Windows build agent. The details are described here and it's pretty straightforward. In essence you download a zip (that contains the agent plus a settings file) and unzip to a permanent location eg C:\Agent. Run ConfigureAgent.cmd from an admin command prompt and respond to the prompts (TFS URL will be similar to http://prm-core-tfs:8080/tfs and you want to install as a Windows Service using the build credentials previously created) and you are done. If you chose default settings you should end up with a new agent in the Default pool.

    • Since you'll almost certainly be needing the services of Nuget.exe it's a good idea to make sure it's at the latest version. Open an admin command prompt at C:\Program Files\Microsoft Team Foundation Server 14.0\Tools and run Nuget.exe update -self.

Compared with VSTS, setting up TFS was quite a bit of work -- several hours including all the installations and updates. Much of this activity will of course need to be repeated at every update. Not everyone will be able to go down the VSTS hosted route but if you can it could end up being quite a time saver.

Cheers -- Graham


Revisions:

9/1/2016 -- Updated with type of VM to reflect adoption of premium storage and expanded installing TFS build instructions.