Continuous Delivery with TFS / VSTS – Automated Acceptance Tests with SpecFlow and Selenium Part 1
In the previous post in this series we covered using Release Management to deploy PowerShell DSC scripts to target nodes, both to configure those nodes for web and database roles and to deploy our sample application. With this done we are now ready to do useful work with our deployment pipeline, and for many teams the big task is going to be running automated acceptance tests to check that previously developed functionality still works as expected as an application undergoes further changes.
I covered how to create a page object model framework for running Selenium web tests in my previous blog series on continuous delivery here. The good news is that nothing much has changed and the code still runs fine, so to learn about how to create a framework please refer to this post. However, one thing I didn't cover in the previous series was how to use SpecFlow and Selenium together to write business-readable web tests, and that's something I'll address in this series. Specifically, in this post I'll cover getting acceptance tests working as part of the deployment pipeline and in the next post I'll show how to integrate SpecFlow.
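As a quick refresher on the pattern, a page object wraps the Selenium plumbing for one page behind intention-revealing methods so that tests read as user journeys. A minimal sketch follows, with a class name and members that are illustrative rather than the actual framework code:

```csharp
using OpenQA.Selenium;

// Illustrative page object -- the real classes in
// ContosoUniversity.Web.SeFramework will differ in detail.
public class DepartmentsPage
{
    private readonly IWebDriver _driver;

    public DepartmentsPage(IWebDriver driver)
    {
        _driver = driver;
    }

    // Tests call intention-revealing methods rather than raw locators...
    public DepartmentsPage NavigateTo(string baseUrl)
    {
        _driver.Navigate().GoToUrl(baseUrl + "/Department");
        return this;
    }

    // ...so the Selenium plumbing lives in one place per page.
    public bool IsDisplayed()
    {
        return _driver.Title.Contains("Departments");
    }
}
```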
What We're Hoping to Achieve
The acceptance tests are written using Selenium, which is able to automate ‘driving’ a web browser to navigate to pages, fill in forms, click on submit buttons and so on. Whilst these tests are created on, and are thus able to run on, developer workstations, the typical scenario is that the number of tests quickly mounts, making it impractical to run them locally. In any case, running them locally is of limited use since what we really want to know is whether checked-in code changes from team members have broken any tests.
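To make that concrete, the raw WebDriver calls for a navigate, fill-in and submit cycle look something like this (the URL and element locators are illustrative, not taken from the sample application):

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

class SeleniumDemo
{
    static void Main()
    {
        // Drive FireFox through a typical navigate, fill-in and submit cycle.
        IWebDriver driver = new FirefoxDriver();
        driver.Navigate().GoToUrl("http://localhost:58735/Department/Create"); // illustrative URL
        driver.FindElement(By.Id("Name")).SendKeys("History");                 // fill in a form field
        driver.FindElement(By.CssSelector("input[type='submit']")).Click();    // submit the form
        driver.Quit();
    }
}
```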
The solution is to run the tests in an environment that is part of the deployment pipeline. In this blog series I call that the DAT (development automated test) environment, which is the first stage of the pipeline after the build process. As I've explained previously in this blog series, the DAT environment should be configured in such a way as to minimise the possibility of tests failing due to factors other than code issues. I solve this for web applications by having the database, web site and test browser all running on the same node.
Make Sure the Tests Work Locally
Before attempting to get automated tests running in the deployment pipeline it's a good idea to confirm that the tests are running locally. The steps for doing this (in my case using Visual Studio 2015 Update 2 on a workstation with FireFox already installed) are as follows:
- If you don't already have a working Contoso University sample application available:
- Download the code that accompanies this post from my GitHub site here.
- Unblock and unzip the solution to a convenient location and then build it to restore NuGet packages.
- In ContosoUniversity.Database open ContosoUniversity.publish.xml and then click on Publish to create the ContosoUniversity database in LocalDB.
- Run ContosoUniversity.Web (and in so doing confirm that Contoso University is working) and then, leaving the application running in the browser, switch back to Visual Studio and from the Debug menu choose Detach All. This leaves IIS Express running, which FireFox needs in order to navigate to any of the application's URLs.
- From the Test menu navigate to Playlist > Open Playlist File and open AutoWebTests.playlist which lives under ContosoUniversity.Web.AutoTests.
- In Test Explorer two tests (Can_Navigate_To_Departments and Can_Create_Department) should now appear and these can be run in the usual way. FireFox should open and run each test which will hopefully turn green.
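For anyone who hasn't dipped into the code, these are plain MSTest methods that happen to drive a browser. A simplified, self-contained sketch of what Can_Navigate_To_Departments might look like is below; the port number and assertion are illustrative, and the real test in ContosoUniversity.Web.AutoTests goes through the page object framework:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestClass]
public class DepartmentTests
{
    private IWebDriver _driver;

    [TestInitialize]
    public void Setup()
    {
        _driver = new FirefoxDriver();
    }

    [TestMethod]
    public void Can_Navigate_To_Departments()
    {
        // IIS Express must still be running (hence Debug > Detach All above).
        _driver.Navigate().GoToUrl("http://localhost:58735/Department"); // illustrative port
        Assert.IsTrue(_driver.Title.Contains("Departments"));            // illustrative assertion
    }

    [TestCleanup]
    public void Teardown()
    {
        _driver.Quit();
    }
}
```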
Edit the Build to Create an Acceptance Tests Artifact
The first step to getting tests running as part of the deployment pipeline is to edit the build to create an artifact containing all the files needed to run the tests on a target node. This is achieved by editing the ContosoUniversity.Rel build definition and adding a Copy Publish Artifact task. This should be configured as follows:
- Copy Root = $(build.stagingDirectory)
- Contents =
- ContosoUniversity.Web.AutoTests.*
- ContosoUniversity.Web.SeFramework.*
- Microsoft.VisualStudio.QualityTools.UnitTestFramework.*
- WebDriver.*
- Artifact Name = AcceptanceTests
- Artifact Type = Server
After queuing a successful build the AcceptanceTests artifact should appear on the build's Artifacts tab:
Edit the Release to Deploy the AcceptanceTests Artifact
Next up is copying the AcceptanceTests artifact to a target node -- in my case a server called PRM-DAT-AIO. This is no different from the previous post where we copied database and website artifacts and is a case of adding a Windows Machine File Copy task to the DAT environment of the ContosoUniversity release and configuring it appropriately:
Deploy a Test Agent
The good news for those of us working in the VSTS and TFS 2015 worlds is that test controllers are a thing of the past because Agents for Microsoft Visual Studio 2015 communicate with VSTS or TFS 2015 directly. The agent needs to be deployed to the target node, and this is handled by adding a Visual Studio Test Agent Deployment task to the DAT environment. The configuration of this task is very straightforward (see here), although you will probably want to create a dedicated domain service account for the agent service to run under. The process differs slightly between VSTS and TFS 2015 Update 2.1: in VSTS the machine details can be entered directly in the task, whereas in TFS there is a requirement to create a Test Machine Group.
Running Tests -- Test Assembly Method
In order to actually run the acceptance tests we need to add a Run Functional Tests task to the DAT pipeline directly after the Visual Studio Test Agent Deployment task. Examining this task reveals two ways to select the tests to be run -- Test Assembly or Test Plan. Test Assembly is the most straightforward and needs very little configuration:
- Test Machine Group (TFS) or Machines (VSTS) = Group name or $(TargetNode-DAT-AIO)
- Test Drop Location = $(TargetNodeTempFolder)\AcceptanceTests
- Test Selection = Test Assembly
- Test Assembly = **\*test*.dll
- Test Run Title = Acceptance Tests
As you will see, though, there are many more options that can be configured -- see the help page here for details.
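One of those options worth highlighting is the task's test filter criteria, which lets you select tests by MSTest category rather than by assembly name alone. A minimal sketch of how tests might be decorated, assuming hypothetical Smoke and Regression categories:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DepartmentTests
{
    // A filter expression such as TestCategory=Smoke in the task's filter
    // field would then run only the tests carrying that category.
    [TestMethod, TestCategory("Smoke")]
    public void Can_Navigate_To_Departments()
    {
        // ... drive the browser as described earlier ...
    }

    [TestMethod, TestCategory("Regression")]
    public void Can_Create_Department()
    {
        // ... drive the browser as described earlier ...
    }
}
```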
Before you create a build to test these settings out you will need to make sure that the node the tests are to be run from is specified in Driver.cs, which lives in ContosoUniversity.Web.SeFramework. You will also need to ensure that FireFox is installed on this node. I've been struggling to reliably automate the installation of FireFox, which turned out to be just as well because I was trying to automate the installation of the latest version from the Mozilla site. That's a bad idea because the latest version at the time of writing (47.0) doesn't work with the latest version of Selenium (2.53.0). Automated installation efforts for FireFox therefore need to centre on installing a Selenium-compatible version, which makes things easier since the installer can be pre-downloaded to a known location. I ran out of time and installed FireFox 46.0.1 (compatible with Selenium 2.53.0) manually, but this is something I'll revisit. Disabling automatic updates in FireFox is also essential to ensure you don't get out of sync with Selenium.
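For orientation, here is a minimal sketch of the sort of thing Driver.cs needs to expose -- a browser instance and the URL of the node under test. The member names and structure are illustrative assumptions, not the actual framework code:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

// Illustrative sketch only -- the real Driver.cs in
// ContosoUniversity.Web.SeFramework will differ in detail.
public static class Driver
{
    // The node the tests run against; for the DAT environment this is
    // the all-in-one server, e.g. http://prm-dat-aio.
    public const string BaseUrl = "http://prm-dat-aio";

    public static IWebDriver Instance { get; private set; }

    public static void Initialize()
    {
        // FireFox must already be installed on the node, and its version
        // must be compatible with the Selenium WebDriver version in use.
        Instance = new FirefoxDriver();
        Instance.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(10));
    }

    public static void Close()
    {
        Instance.Quit();
    }
}
```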
When you finally get your tests running you can see the results from the web portal by navigating to Test > Runs. You should hopefully see something similar to this:
Running Tests -- Test Plan Method
The first question you might ask about the Test Plan method is why bother if the Test Assembly method works? Of course, if the Test Assembly method gives you what you need then feel free to stick with that. However, you might need to use the Test Plan method if a test plan already exists and you want to continue using it. Another reason is the extra flexibility it offers in choosing which tests to run. For example, you might organise your tests into logical areas using static suites and then use query-based suites to choose subsets of tests, perhaps with the use of tags.
To use the Test Plan method, in the web portal navigate to Test > Test Plan and then:
- Use the green cross to create a new test plan called Acceptance Tests.
- Use the down arrow next to Acceptance Tests to create a New static suite called Department:
- Within the Department suite use the green cross to create two new test cases called Can_Navigate_To_Departments and Can_Create_Department (no other configuration necessary):
- Making a note of the test case IDs, switch to Visual Studio and in Team Explorer > Work Items search for each test case in turn to open it for editing.
- For each test case, click on Associated Automation (screenshot below is VSTS and looks slightly different from TFS) and then click on the ellipsis to bring up the Choose Test dialogue where you can choose the correct test for the test case:
- With everything saved, switch back to the web portal Release hub and edit the Run Functional Tests task as follows:
- Test Selection = Test Plan
- Test Plan = Acceptance Tests
- Test Suite = Acceptance Tests\Department
With the configuration complete trigger a new release and if everything has worked you should be able to navigate to Test > Runs and see output similar to the Test Assembly method.
That's it for now. In the next post in this series I'll look at adding SpecFlow into the mix to make the acceptance tests business-readable.
Cheers -- Graham