Archives by Graham Smith

Continuous Delivery with TFS: Track Technical Debt with SonarQube

Posted by Graham Smith on June 18, 2015

So far in this blog post series on building continuous delivery pipelines with the TFS ecosystem the focus on baking quality into the application has centred mainly on static code analysis, unit tests and automated web tests. But having no broken static code analysis rules and all your various types of tests showing green isn't a guarantee that there aren't problems lurking in your codebase. On the contrary, you may well be unwittingly accumulating technical debt. And unless you go looking for it, chances are you won't find out that you have a technical debt problem until it starts to cause you major problems.

For some years now the go-to tool for analysing technical debt has been SonarQube (formerly Sonar). However SonarQube hails from the open source world and it hasn't exactly been a seamless fit into the C# and TFS world. All that changed around the time of Build 2015 with the announcement that Microsoft had joined forces with the makers of SonarQube to start to address the situation. The video from Build 2015 which tells the story is well worth watching. To coincide with this announcement the ALM Rangers published a SonarQube installation guide aimed at TFS users. I used this guide to assist me in writing this blog post, which looks at how SonarQube can be set up to work with our continuous delivery pipeline. It's worth noting that the guide mentions that it's possible to use integrated security with the jTDS driver that SonarQube uses to connect to SQL Server, but I struggled for several hours before throwing in the towel. Please share in the comments if you have had success in doing this. Another difference between the guide and my setup is that the guide uses the all-in-one Brian Keller VM whereas I'm installing on distributed VMs.

Create New SonarQube Items

SonarQube runs on Java and needs quite a bit of horsepower, so the recommendation is to run it on a dedicated server. You'll need to create the following:

  • A new domain service account -- I created ALM\SONARQUBE.
  • A new VM running in your cloud service -- I called mine ALMSONARQUBE. As always in a demo Azure environment there is a desire to preserve Azure credits so I created mine as a basic A4 running Windows Server 2012 R2. Ensure the server is joined to your domain and that you add ALM\SONARQUBE to the Local Administrators group.
Install SonarQube

The following steps should be performed on ALMSONARQUBE:

  1. Download and install a Java SE Runtime Environment appropriate to your VM's OS. There are myriad download options and it can be confusing to the untrained Java eye, but on the index page look out for the JRE download button:
    java-se-jre-download
  2. Download and unblock the latest version of SonarQube from the downloads page. There isn't a separate download for Windows -- the zip contains files that allow SonarQube to run on a variety of operating systems. Unzip the download to a temp location and copy the bin, conf and other folders to an installation folder (a scripted version of this step is sketched after this list). I chose to create C:\SonarQube\Main as the root for the bin, conf and other folders, however this is slightly at odds with the ALM guide where they have a version folder under the main SonarQube folder. As this is my first time installing SonarQube I'm not sure how upgrades are handled, but my guess is that everything apart from the conf folder can be overwritten with a new version.
  3. At this point you can run C:\SonarQube\Main\bin\windows-x86-64\StartSonar.bat (you may have to shift right-click and Run as administrator) to start SonarQube against its internal database and browse to http://localhost:9000 on ALMSONARQUBE to confirm that the basic installation is working. To stop SonarQube simply close the command window opened by StartSonar.bat.
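
If you prefer to script the unzip-and-copy part of step 2, a rough PowerShell sketch is below. The zip name, temp paths and version number are examples and will differ for your download; the zip contains a single versioned root folder whose contents need to end up under C:\SonarQube\Main.

    # Sketch only: adjust the zip name and paths to match the version you downloaded
    $zip = 'C:\Temp\sonarqube-x.y.zip'
    Unblock-File $zip
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zip, 'C:\Temp\sonarqube-extracted')
    New-Item -ItemType Directory -Path 'C:\SonarQube\Main' -Force | Out-Null
    Copy-Item 'C:\Temp\sonarqube-extracted\sonarqube-x.y\*' 'C:\SonarQube\Main' -Recurse
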
Confirm SQL Server Connectivity

If you are intending to connect to a remote instance of SQL Server I highly recommend confirming connectivity from ALMSONARQUBE as the next step:

  1. On the ALMSONARQUBE machine create a new text file somewhere handy and rename the extension to udl.
  2. Open this Data Link Properties file and you will be presented with the ability to make a connection to SQL Server via a dialog that will be familiar to most developers. Enter connection details that you know work and use Test Connection to confirm connectivity.
  3. Possible remedies if you don't have connectivity are:
    1. The domain firewall is on for either or both machines. Consider turning it off as I do in my demo environment or opening up port 1433 for SQL Server.
    2. SQL Server has not been configured for the TCP/IP protocol. Open SQL Server Configuration Manager and from SQL Server Network Configuration > Protocols for MSSQLSERVER enable the TCP/IP protocol. Note that you'll need to restart the SQL Server service.
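
A quick alternative check from PowerShell on ALMSONARQUBE is to confirm that the SQL Server port is reachable. This assumes the default port of 1433 and that the database server is ALMTFSADMIN -- substitute your own server name:

    # Tests TCP connectivity from the SonarQube server to the default SQL Server port
    Test-NetConnection -ComputerName ALMTFSADMIN -Port 1433
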
Create a SonarQube Database

Carry out the following steps to create and configure a database:

  1. Create a new blank database on SQL Server 2008 or 2012 -- I called mine SonarQube. I created my database on the same instance of SQL Server that runs TFS and Release Management. That's fine in a demo environment, but in a production environment where you may be using the complimentary SQL Server licence for running TFS it may cause a licensing issue.
  2. SonarQube needs the database collation to be case-sensitive (CS) and accent-sensitive (AS). You can actually set this when you create the database but if it needs doing afterwards right-click the database in SSMS and choose Properties. On the Options page change the collation to SQL_Latin1_General_CP1_CS_AS.
  3. Still in SSMS, create a new SQL Server login from Security > Logins, ensuring that the Default language is English. Under User Mapping grant the login access to the SonarQube database and grant access to the db_owner role.
  4. On ALMSONARQUBE navigate to C:\SonarQube\Main and open sonar.properties from the conf folder in Notepad or similar. Make the following changes:
    1. Find and uncomment sonar.jdbc.username and sonar.jdbc.password and supply the credentials created in the step above.
    2. Find the Microsoft SQLServer 2008/2012 section and uncomment sonar.jdbc.url. Amend the connection string so it includes the name of the database server and the database. The final result should be something like sonar.jdbc.url=jdbc:jtds:sqlserver://ALMTFSADMIN/SonarQube;SelectMethod=Cursor.
  5. Now test connectivity by running StartSonar.bat and confirming that the database schema has been created and that browsing to http://localhost:9000 is still successful.
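
For reference, steps 1 to 3 can also be scripted with sqlcmd.exe on the database server. This is a sketch only: the login name SonarQube matches the sonar.jdbc.username value used below, the password is a placeholder you should change, and us_english corresponds to the English default language mentioned in step 3.

    # Sketch only: creates the database with the required collation, a SQL login and a db_owner user
    sqlcmd -S ALMTFSADMIN -E -Q "CREATE DATABASE SonarQube COLLATE SQL_Latin1_General_CP1_CS_AS"
    sqlcmd -S ALMTFSADMIN -E -Q "CREATE LOGIN SonarQube WITH PASSWORD = 'ChangeMe123!', DEFAULT_LANGUAGE = [us_english]"
    sqlcmd -S ALMTFSADMIN -E -d SonarQube -Q "CREATE USER SonarQube FOR LOGIN SonarQube; EXEC sp_addrolemember 'db_owner', 'SonarQube'"
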
Run SonarQube as a Service

The next piece of the installation is to configure SonarQube as a Windows service:

  1. Run C:\SonarQube\Main\bin\windows-x86-64\InstallNTService.bat (you may have to shift right-click and Run as administrator) to install the service.
  2. Run services.msc and find the SonarQube service. Open its Properties and from the Log On tab change the service to log on as the ALM\SONARQUBE domain account.
  3. Again test all is working as expected by browsing to http://localhost:9000.
Configure for C#

With a working SonarQube instance the next piece of the jigsaw is to enable it to work with C#:

  1. Head over to the C# plugin page and download and unblock the latest sonar-csharp-plugin-X.Y.jar.
  2. Copy the sonar-csharp-plugin-X.Y.jar to C:\SonarQube\Main\extensions\plugins and restart the SonarQube service.
  3. Log in to the SonarQube portal (http://localhost:9000 or http://ALMSONARQUBE:9000 if on a remote machine) as Administrator -- the default credentials are admin and admin.
  4. Navigate to Settings > System > Update Center and verify that the C# plugin is installed:
    sonarqube-update-center
Configure the SonarQube Server as a Build Agent

In order to integrate with TFS a couple of SonarQube components we haven't installed yet need access to a TFS build agent. The approach I've taken here is to have the build agent running on the actual SonarQube server itself. This keeps everything together and ensures that your build agents that might be servicing checkins are not bogged down with other tasks. From ALMSONARQUBE:

  1. Run Team Foundation Server Setup (typically by mounting the iso and running tfs_server.exe) and perform the install.
  2. At the Team Foundation Server Configuration Center dialog chose Configure Team Foundation Build Service > Start Wizard.
  3. Use the usual dialogs to connect to the appropriate Team Project Collection and then at the Build Services tab choose the Scale out build services option to add more build agents to the existing build controller on the TFS administration server.
  4. In the Settings tab supply the details of the domain service account that should be used to run the build agents.
  5. Install Visual Studio 2013.4 as it's the easiest way to get all the required bits on the build server.
  6. From within Visual Studio navigate to Tools > Extensions and Updates and then from the Updates tab update Microsoft SQL Server Update for database tooling.
  7. Update nuget.exe by opening an Administrative command prompt at C:\Program Files\Microsoft Team Foundation Server 12.0\Tools and running nuget.exe update -self.
  8. Finally, clone an existing Contoso University build definition that is based on the TfvcTemplate.12.xaml template, or create and configure a new build definition for Contoso University. I called mine ContosoUniversity_SonarQube. Queue a new build based on this template and make sure that the build is successful. You'll want to fix any glitches at this stage before proceeding.
Install the SonarQube Runner Component

The SonarQube Runner is recommended as the default launcher for analysing a project with SonarQube. Installation to ALMSONARQUBE is as follows:

  1. Create a Runner folder in C:\SonarQube.
  2. Download and unblock the latest version of sonar-runner-dist-X.Y.zip from the downloads page.
  3. Unzip the contents of sonar-runner-dist-X.Y.zip to C:\SonarQube\Runner so that the bin, conf and lib folders are in the root.
  4. Edit C:\SonarQube\Runner\conf\sonar-runner.properties by uncommenting and amending as required the following values:
    1. sonar.host.url=http://ALMSONARQUBE:9000
    2. sonar.jdbc.url=jdbc:jtds:sqlserver://ALMTFSADMIN/SonarQube;SelectMethod=Cursor
    3. sonar.jdbc.username=SonarQube
    4. sonar.jdbc.password=$PasswordForSonarQube$
  5. Create a new system variable called SONAR_RUNNER_HOME with the value C:\SonarQube\Runner.
  6. Amend the Path system variable adding in C:\SonarQube\Runner\bin.
  7. Restart the build service -- the Team Foundation Server Administration Console is just one place you can do this.
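
Steps 5 and 6 can be carried out from System Properties > Environment Variables or, if you prefer, from an administrative PowerShell prompt along these lines:

    # Sets the machine-level environment variables used by the SonarQube Runner
    [Environment]::SetEnvironmentVariable('SONAR_RUNNER_HOME', 'C:\SonarQube\Runner', 'Machine')
    $path = [Environment]::GetEnvironmentVariable('Path', 'Machine')
    [Environment]::SetEnvironmentVariable('Path', "$path;C:\SonarQube\Runner\bin", 'Machine')
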
Integration with Team Build

In order to call the SonarQube Runner from a TFS build definition a component called SonarQube.MSBuild.Runner has been developed. This needs installing on ALMSONARQUBE as follows:

  1. Create an MSBuild folder in C:\SonarQube.
  2. Download and unblock the latest version of SonarQube.MSBuild.Runner-X.Y.zip from the C# downloads page.
  3. Unzip the contents of SonarQube.MSBuild.Runner-X.Y.zip to C:\SonarQube\MSBuild so that the files are in the root.
  4. Copy SonarQube.Integration.ImportBefore.targets to C:\Program Files (x86)\MSBuild\12.0\Microsoft.Common.Targets\ImportBefore. (This folder structure may have been created as part of the Visual Studio installation. If not you will need to create it manually.)
  5. The build definition cloned/created earlier (ContosoUniversity_SonarQube) should be amended as follows:
    1. Process > 2.Build > 5. Advanced > Pre-build script arguments = /key:ContosoUniversity /name:ContosoUniversity /version:1.0
    2. Process > 2.Build > 5. Advanced > Pre-build script path = C:\SonarQube\MSBuild\SonarQube.MSBuild.Runner.exe
    3. Process > 3. Test > 2. Advanced > Post-test script path = C:\SonarQube\MSBuild\SonarQube.MSBuild.Runner.exe
  6. Configure the build definition for unit test results as per this blog post. Note though that Test assembly file specification should be set to **\*unittest*.dll;**\*unittest*.appx to avoid the automated web tests being classed as unit tests.
Show Time

With all the configuration complete it's time to queue a new build. If all is well you should see that the build report contains a SonarQube Analysis Summary section:

tfs-build-report-with-sonarqube-section

Clicking on the Analysis results link in the build report should take you to the dashboard for the newly created ContosoUniversity project in SonarQube:

sonarqube-dashboard

This project was created courtesy of the Pre-build script arguments in the build definition (/key:ContosoUniversity /name:ContosoUniversity /version:1.0). If for some reason you prefer to create the project yourself the instructions are here. Do note that the dashboard is reporting 100% unit test coverage only because my Contoso University sample application uses quick and dirty unit tests for demo purposes.

And Finally...

Between the ALM Rangers guide and the installation walkthrough above I hope you will find getting started with SonarQube and TFS reasonably straightforward. Do be aware that I found the ALM Rangers' guide to be a little confusing in places. There is the issue of integrated security with SQL Server that I never managed to crack, and then a strange reference on page 22 about sonar-runner.properties not being needed after integrating with team build, which had me scratching my head: how else would the components know how to connect to the SonarQube portal and database? It's early days though and I'm sure the documentation will improve and mature with time.

Performing the installation is just the start of the journey of course. There is a lot to explore, so do take time to work through the resources at sonarqube.org.

Cheers -- Graham

Continuous Delivery with TFS: Enable Test Impact Analysis

Posted by Graham Smith on May 28, 2015

Test Impact Analysis is a feature that first appeared with Visual Studio / Microsoft Test Manager 2010 and provides the ability to recommend tests that should be re-run in response to changes that have been made at the code level. It's a very useful feature but it does need some configuration before it can be used. In this post in my series on continuous delivery with TFS we look at the steps that need to be taken to enable TIA in our development pipeline.

Setting the Scene

The scenario I'm working with is where a new nightly build of the sample ASP.NET application has been deployed to the DAT stage and all of the automated Selenium web tests have passed. This now leaves the build ready to deploy into the DQA stage so that any manual tests (including manual tests that have been automated to run from MTM) can be run from a browser on a client workstation. With TIA configured there are then at least two places to check for any tests that are recommended for running again.

Whilst working through the configuration for TIA I discovered that TIA doesn't seem to work in a multi-tenant web server, which is what I've set up for this blog post series to keep the number of VMs to a minimum. More correctly, I suspect that TIA doesn't work where there is a separate application pool for each website in a multi-tenant web server. I haven't investigated this thoroughly but it's something to bear in mind if you are trying to get TIA working and something I'll address in my future blog post series on continuous delivery with TFS 2015. The MSDN guidance for setting up TIA is here but it's slightly at odds with the latest version of MTM and in any case doesn't have all the details.

Create a Lab Center Environment and a Test Settings Configuration

As good a starting point as any for TIA configuration is to create a new environment in the MTM Lab Center. I covered this here so I won't go through all the steps again, however the environment needs to contain the web server that hosts the DQA stage's web site and it should be configured for the Web Server role.

Now move over to Test Settings and create a new entry -- I called mine Manual Test Run. Choose Manual for the What type of tests do you want to run? question and then in the Roles page select Web Server to join the Local role which is pre-selected and mandatory. In the Data and Diagnostics page the Local role needs to be configured for ASP.NET Client Proxy for IntelliTrace and Test Impact as a minimum and the Web Server role needs to be configured for Test Impact as a minimum. Additionally, after selecting Test Impact click on Configure at the far right. In the dialog's Advanced tab ensure Collect data from ASP.NET applications running on Internet Information Services is checked.

microsoft-test-manager-manual-test-settings

Configure the Test Plan

Either use an existing test plan or create a new one and in Testing Center > Plan > Contents add a new suite called Instructor. Create a new Test Case in Instructor called Can Navigate and then add the following steps:

  1. Launch IE
  2. Type URL and hit enter
  3. Click on Instructors

Now from Testing Center > Test > Run Tests run the Can Navigate test:

microsoft-test-manager-run-test

The aim of running the test this first time is to record each step so the whole test can be replayed on future runs. This sort of automation isn't to be confused with deep automation using tools such as Selenium or CodedUI, and is instead more akin to recording macros in Microsoft Office applications. Nevertheless, the technique is very powerful due to the repeatability it offers and is also a big time saver. The process of recording the steps is a little fiddly and I recommend you follow the MSDN documentation here. The main point to remember is to mark each step as passed after successful completion so that MTM correctly associates the action with the step. Hopefully the test steps are obvious, the aim being simply to display the list of instructors.

The final configuration step for the test plan is to configure Run Settings from Testing Center > Plan:

  1. Manual runs > Test settings = Manual Test Run (created above)
  2. Manual runs > Test environment = ALMWEB01 (created above, your environment name may differ)
  3. Builds > Filter for builds = ContosoUniversity_Main_Nightly (Ready for Deployment) (or whatever build you are using)
  4. Builds > Build in use = Latest build you have marked with the Build Quality of Ready for Deployment. (Build quality is arbitrary -- just needs to be consistent.)

microsoft-test-manager-run-settings

Web Server Configuration

In order for TIA data to be collected on the web server there are several configuration steps to be completed:

  1. When environments are created by MTM Lab Center they are configured to use the Network Service account. We need to use the dedicated ALM\TFSTEST domain account (or whatever you have called yours) so on the web server run the Test Agent Configuration Tool and make the change. Additionally the TFSTEST account needs to be in the Local Administrators group on the web server.
  2. The domain account (ALM\CU-DQA) that is used as the identity for the application pool (CU-DQA) for the DQA website needs to have a local profile. You can either log on to the server as ALM\CU-DQA or use the runas /user:domain\name /profile cmd.exe command, supplying the appropriate credentials.
  3. The CU-DQA application pool needs to have the Load User Profile property set to True.
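
If you prefer the command line, the last step can also be done with appcmd.exe from an administrative prompt (the application pool name is CU-DQA as above):

    # Sets Load User Profile to True for the CU-DQA application pool
    & "$env:windir\System32\inetsrv\appcmd.exe" set apppool "CU-DQA" /processModel.loadUserProfile:true
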
Web Client Configuration

The web client machine (I'm using Windows 8.1) needs to be running MTM and also has to have the Microsoft Test Agent installed. This should be configured with the ALM\TFSTEST account (which should be in the Local Administrators group) and be registered to the test controller.

Putting it All Together

With the configuration out of the way it's now time to generate TIA data. The Can Navigate test needs one successful run on the web client against the currently selected build in order to generate a baseline. As noted above it seems that TIA doesn't play nicely with multiple application pools and I found I needed to stop the CU-DAT and CU-PRD application pools before running the test.

  1. In MTM navigate to Testing Center > Test > Run Tests and run Can Navigate as earlier.
  2. The Test Runner will fire up and you should click Start Test.
  3. The Test Runner will display the steps of the test and the VCR style controls. From the Play dropdown choose Play all:
    test-runner-play-all
  4. When the test has successfully completed use the Mark test case result dropdown to mark the test as a Pass:
    test-runner-mark-test-case-result
  5. A dialog will pop up advising that impact data is being collected and when it closes you should Save and Close the test.

Back at Testing Center > Test > Run Tests, in order to check that TIA data was generated click on View results with Can Navigate selected. In the attachments section you should see a file ending in testimpact.xml:

microsoft-test-manager-test-result-attachments

We now need to create a change to the method that displays the list of instructors. Typically this change will arise as part of fixing a bug however it will be sufficient here to fake the change in order to get TIA to work. To achieve this open up the Contoso University demo app and navigate to ContosoUniversity.Web > Controllers > InstructorController > Index. Make any non-breaking change -- I changed the OrderBy of the query that returns the instructors.

Check the code in to version control and then start a new build. If the build is successful you can confirm that Can Navigate is now flagged as an impacted test. Firstly you should see this in the build report either from Visual Studio or Team Web Access:

visual-studio-build-report-impacted-tests

(Whilst you are examining the build report mark the Build Quality as Ready for Deployment, but note that you would typically do this after confirming a successful DAT stage). Secondly, in MTM navigate to Testing Center > Track > Recommended Tests. Change the Build in use to the build that has just passed and then change Previous build to compare to the build that was in use at the time of creating the baseline. A dialog should pop up advising that there may be tests that need to be re-run. After dismissing the dialog Can Navigate should be listed under Recommended tests:

microsoft-test-manager-recommended-tests

Test Case Closed

As is often the case with continuous delivery pipeline work it seems like there is a great deal of configuration required to get a feature working and TIA is no exception. One valuable lesson is that whilst a multi-tenant web application configuration certainly saves on the number of VMs required for a demo environment it does cause problems and should almost certainly be avoided for an on-premises installation. I'll definitely be using separate web servers when I refresh my demo setup for TFS 2015. And when Windows Nano Server becomes available we won't be thinking twice about trying to save on the number of running VMs. Exciting times ahead...

Cheers -- Graham

Continuous Delivery with TFS: Configure Application Insights

Posted by Graham Smith on May 17, 2015

If you get to the stage where you are deploying your application on a very frequent basis and you are relying on automated tests for the bulk of your quality assurance then a mechanism to alert you when things go wrong becomes crucial. You should have something in place anyway of course but in practice I suspect that application monitoring is either frequently overlooked or remains stubbornly on the to-do list.

A successful continuous delivery pipeline implementation shouldn't rely on the telephone or email as the alerting system, and in this post in my blog series on implementing continuous delivery with TFS we look at how to integrate relevant parts of Microsoft's Application Insights (AI) tooling into the pipeline. If you need to get up to speed with its capabilities I have a Getting Started blog post here. As a quick refresher, AI is a suite of components that integrate with your application and servers and send telemetry to the Azure Portal. As a bonus, not only do you get details of diagnostic issues but also rich analytics on how your application is being used.

Big Picture

AI isn't just one component and in fact there are at least three main ways in which AI can be configured to provide diagnostic and analytic information:

  • Adding the Application Insights SDK to your application.
  • Installing Status Monitor on an IIS server.
  • Creating Web Tests that monitor the availability of an HTTP endpoint available on the public Internet.

One key point to appreciate with AI and continuous delivery pipelines is that unless you do something about it AI will put the data it collects from the different stages of your pipeline in one 'bucket' and you won't easily be able to differentiate what came from where. Happily there is a way to address this as we'll see below. Before starting to configure AI there are some common preparatory steps that need to be addressed so let's start with those.

Groundwork

If you have been following along with this series of blog posts you will be aware that so far we have only created DAT and DQA stages of the pipeline. Although not strictly necessary I created a PRD stage of the pipeline to represent production: if nothing else it's handy for demonstrations where your audience may expect to see the pipeline endpoint. I won't detail all the configuration steps here as they are all covered by previous blog posts however the whole exercise only took a few minutes. As things stand none of these stages exposes our sample web application to the public Internet however this is necessary for the creation of Web Tests. We can fix this in the Azure portal by adding an HTTP endpoint to the VM that runs IIS:

azure-portal-vm-endpoint

Our sample application is now available using a URL that begins with the cloud service name and includes the website name, for example http://mycloudservice.cloudapp.net/mywebapp. Be aware that this technique probably falls foul of all manner of security best practices however given that my VMs are only on for a few hours each week and it's a pure demo environment it's one I'm happy to live with.

The second item of groundwork is to create the containers that will hold the AI data for each stage of the pipeline. You will need to use the new Azure portal for this at https://portal.azure.com. First of all a disclaimer. There are several techniques at our disposal for segregating AI data as discussed in this blog post by Victor Mushkatin, and the comments of this post are worth reading as well since there are some strong opinions. I tried the tagging method but couldn't get it to work properly and as Victor says in the post this feature is at the early stages of development. In his post Victor creates a new Azure Resource for each pipeline stage however that seemed overly complicated for a demo environment. Instead I opted to create multiple Application Insights Resources in one Azure Resource group. As an aside, resource groups are fairly new to Azure and for any new Azure deployment they should be carefully considered as part of the planning process. For existing deployments you will find that your cloud service is listed as a resource group (containing your VMs) and I chose to use this as the group to contain the Application Insights Resources. Creating new AI resources is very straightforward. Start with the New button and then choose Developer Services > Application Insights. You'll need to provide a name and then use the arrow selectors to choose Application Type and Resource Group:

azure-portal-new-application-insights-resource

I created the following AI resources, which represent the stages of my pipeline: CU-DEV, CU-DAT, CU-DQA and CU-PRD. What differentiates these resources is their instrumentation keys (often abbreviated to ikey). You'll need to retrieve the ikey for each resource and the way to do that in the new portal is via Browse > Filter By > Application Insights > $ResourceGroup$ > Settings > Properties where you will see the Instrumentation Key selector.

Add the Application Insights SDK to ContosoUniversity

We can now turn our attention to adding the Application Insights SDK to our Contoso University web application:

  1. Right-click your web project (ContosoUniversity.Web) within your Visual Studio solution and choose Add Application Insights Telemetry.
    visual-studio-add-application-insights-telemetry
  2. The Add Application Insights to Project dialog opens and invites you to sign in to Azure:
    visual-studio-add-application-insights-to-project
  3. The first few times I tried to connect to Azure I got errors about not being able to find an endpoint but persistence paid off. I eventually arrived at a dialog that allowed me to choose my MSDN subscription via the Use different account link:
    visual-studio-add-application-insights-to-project-confirm-settings
  4. Having already created my AI resources I used Configure settings to choose the CU-DEV resource:
    visual-studio-add-application-insights-to-project-configure-settings
  5. Back in the Add Application Insights to Project dialog click on the Add Application Insights to Project link to have Visual Studio perform all the necessary configuration.

At this stage we can run the application and click around to generate telemetry. If you are in Debug mode you can see this in the Output window. After a minute or two you should also see the telemetry start to appear in the Azure portal (Browse > Filter By > Application Insights > CU-DEV).

Configure AI in Contoso University for Pipeline Stages

As things stand, deploying Contoso University to other stages of the pipeline will cause telemetry for that stage to be added to the CU-DEV AI resource. To remedy this carry out the following steps:

  1. Add an iKey attribute to the appSettings section of Web.config.
  2. Add a transform to Web.Release.config that replaces the iKey value with a token (__IKEY__) that can be swapped in by Release Management.
  3. In Application_Start in Global.asax.cs, add code that reads the iKey appSetting and assigns it to the active Application Insights telemetry configuration.
  4. As part of the AI installation Views\Shared\_Layout.cshtml is altered with some JavaScript that adds the iKey to each page. This isn't dynamic, and the JavaScript instrumentationKey line needs altering so that it too picks up the iKey from the appSettings value.
  5. Remove or comment out the InstrumentationKey section in ApplicationInsights.config.
  6. In the Release Management client at Configure Apps > Components edit the ContosoUniversity\Deploy Web Site component by adding an IKEY variable to Configuration Variables.
  7. Still in the Release Management client open the Contoso University\DAT>DQA>PRD release template from Configure Apps > Agent-based Release Templates and edit each stage supplying the iKey value for that stage (see above for how to get this) to the newly added IKEY configuration variable.

After completing these steps you should be able to deploy your application to each stage of the pipeline and see that the Web.config of each stage has the correct iKey. Spinning up the website for that stage and clicking around in it should cause telemetry to be sent to the respective AI resource.

Install Status Monitor on the IIS server

The procedure is quite straightforward as follows:

  1. On your IIS server (ALMWEB01 if you are following the blog series) download and run the Status Monitor installation package from here.
  2. With the installation complete you'll need to sign in to your Microsoft Account after which you'll be presented with a configuration panel where the CU-DAT, CU-DQA and CU-PRD websites should have been discovered. The control panel lets you specify a separate AI resource for each website after which you'll need to restart IIS:
    application-insights-status-monitor-configuration
  3. In order to ensure that the domain accounts that the websites are running under have sufficient permissions to collect data make sure that they have been added to the Performance Monitor Users Windows local group.
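
If you have several accounts to add, the group membership part of step 3 can be scripted. For example, assuming the application pool identities are ALM\CU-DAT, ALM\CU-DQA and ALM\CU-PRD (adjust to your own account names):

    # Adds the application pool accounts to the Performance Monitor Users local group
    'ALM\CU-DAT', 'ALM\CU-DQA', 'ALM\CU-PRD' | ForEach-Object {
        net localgroup "Performance Monitor Users" $_ /add
    }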

With this configuration complete you should click around in the websites to confirm that telemetry is being sent to the Azure portal.

Creating Web Tests to monitor HTTP Availability

The configuration for Web Tests takes place in the new Azure portal at https://portal.azure.com. There are two types of test -- URL ping and a more involved Multi-step test. I'm just describing the former here as follows:

  1. In the new portal navigate to the AI resource you want to create tests for and choose the Availability tile:
    azure-portal-web-tests-select
  2. This opens the Web Tests pane where you choose Add web test:
    azure-portal-web-tests-add-new
  3. In the Create test pane supply a name and a URL and then use the arrow on Test Locations to choose locations to test from:
    azure-portal-web-tests-create

After clicking Create you should start to see data being generated within a few seconds.

In Conclusion

AI is clearly a very sophisticated solution for providing rich telemetry about your application and the web server hosting it, and I'm excited about the possibilities it offers. I did encounter a few hurdles in getting it to work though. Initial connection to the Azure portal when trying to integrate the SDK with Contoso University was the first problem and this caused quite a bit of messing around as each failed installation had to be undone. I then found that with AI added to Contoso University the build on my TFS server failed every time. I'm using automatic package restore and I could clearly see what was happening: every AI NuGet package was being restored correctly with the exception of Microsoft.ApplicationInsights and this was quite rightly causing the build to fail. Locally on my development machine the package restore worked flawlessly. The answer turned out to be an outdated nuget.exe on my build server. The fix is to open an Administrative command prompt at C:\Program Files\Microsoft Team Foundation Server 12.0\Tools and run nuget.exe update -self. Instant fix! This isn't AI's fault of course, although it is a mystery why one of the AI NuGets brought this problem to light.

Cheers -- Graham

Getting Started with Application Insights

Posted by Graham Smith on May 4, 2015

If the latest release of your application has a problem, chances are you would prefer to know before your users flood your inbox or start complaining on social media channels. Additionally it is probably a good idea to monitor what your users get up to in your application to help you prioritise future development activities. And so to the world of diagnostics and analytics software. There are offerings from several vendors to consider in this area but as good a place to start as any is Microsoft's Application Insights. Here is a list of resources to help you understand what it can do for you:

At the time of writing this post Application Insights is in public preview -- see here for details. Do bear in mind that it's a chargeable service with full pricing due to take effect in June 2015.

Cheers -- Graham

Continuous Delivery with VSO: Executing Automated Web Tests with Microsoft Test Manager

Posted by Graham Smith on April 9, 2015

In this fourth post in my series on continuous delivery with VSO we take a look at executing automated web tests with Microsoft Test Manager. There are quite a few moving parts involved in getting all this working so it's worth me explaining the overall aim before diving in with the specifics.

Overview

The tests we want to run are automated web tests written using the Selenium framework. I first wrote these tests for my Continuous Delivery with TFS blog post series and you can read about how to create the tests here and how to run them using MTM and TFS here. The goal in this post is to run these tests using MTM and VSO, triggered as part of the DAT stage of the pipeline from RM. The tests are run from a client workstation that is configured with MTM (a requirement at the time of writing) and the Microsoft Test Agent. I've used Selenium's Firefox driver in the test code so Firefox is also required on the client machine.

In terms of what actually happens, firstly RM copies the complete build over to the client workstation and then executes a PowerShell script that runs TCM.exe, a command-line utility that lets you run tests that are part of a test plan. Precisely what happens next is under-the-bonnet stuff, but it's along these lines: the test controller is informed that there is work to be done, and it in turn informs the test agent on the client machine that it needs to run tests. The test agent knows from the test plan which tests to run and in which DLL they live, and has access to the DLLs in the local copy of the build folder. Each test first starts Firefox and then connects to the web server running the deployed Contoso University and performs the automation specified in the test.

In many ways the process of getting all this to work with VSO rather than TFS is very similar and because of that I don't go in to every detail in this post and instead refer back to my TFS blog post.

Configure a Test Controller

VSO doesn't offer a test controller facility so you'll need to configure this yourself. If you have a test controller already in use then it's simplicity itself to repurpose it to point to your VSO account using the Browse button. If you are starting from scratch see here for the details but obviously ensure you connect to VSO rather than TFS. One other difference is that in order to get past some permissions problems I found it necessary to specify credentials for the lab service account -- I used the same as the service logon account.

Although I started off by repurposing an existing controller, because of permissions problems I ended up creating a dedicated build and test server as I wanted to start with a clean sheet. One thing I found was that the Visual Studio Test Controller service wouldn't automatically start after booting the OS from the Stopped (deallocated) state. The application error log was clearly reporting that the test controller wasn't able to connect to VSO. Manually starting the service was fine so presumably there was some sort of timing issue with other OS components not being ready.

Configure Microsoft Test Manager

If MTM isn't already installed on your development workstation then that's the first step. The second step is to connect MTM to your VSO account. I already had MTM installed and when I went to connect it to VSO the website was already listed. If that's not the case you can use the Add server link from the Connect to Your Team Project dialog. Navigating down to your Team Project (ContosoUniversity) enables the Connect now link which then takes you to a screen that allows you to choose between Testing Center and Lab Center. Choose the latter and then configure Lab Center as per the instructions here.

Continue following these instructions to configure Testing Center with a new test plan and test cases. Note that you need to have the Contoso University solution open in order to associate the actual tests with the test cases. You'll also need to ensure that when deployed the tests navigate to the correct URL. In the Contoso University demo application this is hard-coded and you need to make the change in Driver.cs located in the ContosoUniversity.Web.SeFramework project.

Configure a Web Client Test Machine

The client test machine needs to be created in the cloud service that was created for DAT and joined to the domain if you are using one. The required configuration is very similar to that required for TFS as described here, with the exception that the Release Management Deployment Agent isn't required and nor is the RMDEPLOYER account. Getting permissions correctly configured on this machine proved critical and I eventually realised that the Windows account that the tests will run under needs to be configured so that MTM can successfully connect to VSO with the appropriate credentials. To be clear, these are not the test account credentials themselves but rather the normal credentials you use to connect to VSO. To configure all this, once the test account has been added to the Local Administrators group and MTM has been installed and the licence key applied, you will need to log on to Windows as the test account and start MTM. Connect to VSO and supply your VSO credentials in the same way as you did for your development workstation and verify that you can navigate down to the Contoso University team project and open the test plan that was created in the previous section.

Initially I also battled with getting the test agent to register correctly with the test controller. I eventually uninstalled the test agent (which I had installed manually) and let the test controller perform the install followed by the configuration. Whether that was the real solution to the problem I don't know but it got things working for me.

Executing TCM.exe with PowerShell

As mentioned above the code that starts the tests is a PowerShell script that executes TCM.exe. As a starting point I used the script that Microsoft developed for agent-based release templates but had to modify it to make it work with RM-VSO. In particular changes were made to accommodate the way variables are passed in to the script (some implicit such as $TfsUrl or $TeamProject and some explicit such as $PlanId or $SuiteId) and to remove the optional build definition and build number parameters which are not available to the vNext pipeline and caused errors when specified on the TCM.exe command line. The modified script (TcmExecvNext.ps1) and the original Microsoft script for comparison (TcmExec.ps1) are available in a zip here and TcmExecvNext.ps1 should be copied to the Deploy folder in your source control root. One point to note is that for agent-based pipelines the TFS collection URL is passed in as $TfsUrlWithCollection, however in vNext pipelines it is passed in as $TfsUrl.
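
At its heart the script builds and runs a TCM.exe command line. A simplified sketch of that call is below; the full script in the zip does more than this single call, and the path to TCM.exe shown here assumes Visual Studio 2013:

    # Simplified sketch of the TCM.exe call made from the PowerShell script
    $tcm = Join-Path $env:VS120COMNTOOLS '..\IDE\tcm.exe'
    & $tcm run /create "/title:$Title" "/planid:$PlanId" "/suiteid:$SuiteId" "/configid:$ConfigId" `
        "/collection:$TfsUrl" "/teamproject:$TeamProject" "/builddir:$BuildDirectory" `
        "/testenvironment:$TestEnvironment"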

Configure Release Management

Because we are using RM-VSO this part of the configuration is completely different from the instructions for RM-TFS. However before starting any new configuration you'll need to make a change to the component we created in the previous post. This is because TCM.exe doesn't seem to like accepting the name of a build folder if it has a space in it. Some more fiddling with PowerShell might have found a solution but I eventually changed the component's name from Drop Folder to DropFolder. Note that you'll need to visit the existing action and reselect the newly named component. Another issue which cropped up is that TCM.exe choked when the build directory parameter was supplied with a local file path. The answer was to create a share at C:\Windows\DtlDownloads\DropFolder and configure it with appropriate permissions.

The new configuration procedure for RM-VSO is as follows:

  1. From Configure Paths > Environments link the web client test machine to the DAT environment.
  2. From Configure Apps > vNext Release Templates open Contoso University\DAT>DQA.
  3. From the Toolbox drag a Deploy Using PS/DSC action to the deployment sequence to follow Deploy Web and Database and rename the action Run Automated Web Tests.
  4. Open up the properties of Run Automated Web Tests and set the Configuration Variables as follows:
    1. ServerName = choose the name of the web client test machine from the dropdown.
    2. UserName = this is the test domain account (ALM\TFSTEST in my case) that was configured for the web client test machine.
    3. Password = password for the UserName
    4. ComponentName = choose DropFolder from the dropdown.
    5. PSScriptPath = Deploy\TcmExecvNext.ps1
    6. SkipCaCheck = true
  5. Still in the properties of Run Automated Web Tests and set the Custom configuration as follows:
    1. PlanId = 8 (or whatever your Plan ID is as it is likely to be different)
    2. SuiteId = 10 (or whatever your Suite ID is as it is likely to be different)
    3. ConfigId = 1 (or whatever your Configuration ID is as it is likely to be different)
    4. BuildDirectory = \\almclientwin81b\DtlDownloads\DropFolder (your machine name may be different)
    5. TestEnvironment = ALMCLIENTWIN81B (yours may be different)
    6. Title = Automated Web Tests

Bearing in mind that the Deploy Using PS/DSC action doesn't allow itself to be resized to show all configuration values the result should look something like this:

release-management-run-automated-tests

Start a Build

From Visual Studio manually queue a new build from your build definition. If everything is in place the build should succeed and you can open Microsoft Test Manager to check the results. Navigate to Testing Center > Test > Analyze Test Runs. You should see your test run listed and double-clicking it will hopefully show the happy sight of passing tests:

microsoft-test-manager-tests-passed-vso

Testing Times

As I noted in the TFS version of this post there are a lot of moving parts to get configured and working in order to be able to trigger tests to run from RM. Making all this work with VSO took many hours working through all the details and battling with permissions problems and myriad other things that didn't work in the way I was expecting them to. Hopefully I've captured all the details you need to try this in your own environment. If you do encounter difficulties please post in the comments and I'll do what I can to help.

Cheers -- Graham

Continuous Delivery with VSO: Application Deployment with Release Management

Posted by Graham Smith on March 30, 2015

In the previous post in my blog series on implementing continuous delivery with VSO we got as far as configuring Release Management with a release path. In this post we cover the application deployment stage where we'll create the items to actually deploy the Contoso University application. In order to achieve this we'll need to create a component which will orchestrate copying the build to a temporary location on target nodes and then we'll need to create PowerShell scripts to actually install the web files to their proper place on disk and run the DACPAC to deploy any database changes. Note that although RM supports PowerShell DSC I'm not using it here and instead I'm using plain PowerShell. Why is that? It's because for what we're doing here -- just deploying components -- it feels like an unnecessary complication. Just because you can doesn't mean you should...

Sort out Build

The first thing you are going to want to sort out is build. VSO comes with 60 minutes of bundled build which disappears in no time. You can pay for more by linking your VSO account to an Azure subscription that has billing activated or the alternative is to use your own build server. This second option turns out to be ridiculously easy and Anthony Borton has a great post on how to do this starting from scratch here. However if you already have a build server configured it's a moment's work to reconfigure it for VSO. From Team Foundation Server Administration Console choose the Build Configuration node and select the Properties of the build controller. Stop the service and then use the familiar dialogs to connect to your VSO URL. Configure a new controller and agent and that's it!

Deploying PowerShell Scripts

The next piece of the jigsaw is how to get the PowerShell scripts you will write to the nodes where they should run. Several possibilities present themselves amongst which is embedding the scripts in your Visual Studio projects. From a reusability perspective this doesn't feel quite right somehow and instead I've adopted and reproduced the technique described by Colin Dembovsky here with his kind permission. You can implement this as follows:

  1. Create folders called Build and Deploy in the root of your version control for ContosoUniversity and check them in.
  2. Create a PowerShell script in the Build folder called CopyDeployFiles.ps1 and add code along the lines of the sketch shown below this list.
  3. Check CopyDeployFiles.ps1 in to source control.
  4. Modify the process template of the build definition created in a previous post as follows:

2.Build > 5. Advanced > Post-build script arguments = -pathToCopy Deploy
2.Build > 5. Advanced > Post-build script path = Build/CopyDeployFiles.ps1

To explain, Post-build script path specifies that CopyDeployFiles.ps1 created above should be run and Post-build script arguments feeds in the -pathToCopy argument which is the Deploy folder we created above. The net effect of all this is that the Deploy folder and any contents gets created as part of the build.
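
The script itself is short. A minimal sketch, assuming the TF_BUILD_SOURCESDIRECTORY and TF_BUILD_BINARIESDIRECTORY environment variables that TFS 2013 builds expose (the original may differ in the details), looks like this:

    # Copies the folder passed in via -pathToCopy from the sources directory to the
    # binaries directory so that it ends up in the build's drop folder
    param(
        [string]$pathToCopy
    )
    $source = Join-Path $env:TF_BUILD_SOURCESDIRECTORY $pathToCopy
    $destination = Join-Path $env:TF_BUILD_BINARIESDIRECTORY $pathToCopy
    if (-not (Test-Path $destination)) {
        New-Item -ItemType Directory -Path $destination | Out-Null
    }
    Copy-Item -Path "$source\*" -Destination $destination -Recurse -Force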

Create a Component

In a multi-server world we'd create a component in RM from Configure Apps > Components for each server that we need to deploy to since a component is involved in ensuring that the build is copied to the target node. Each component would then be associated with an appropriately named PowerShell script to do the actual work of installing/copying/running tests or whatever is needed for that node. Because we are hosting IIS and SQL Server on the same machine we only actually need one component. We're getting ahead of ourselves a little but a side effect of this is that we will use only one PowerShell script for several tasks which is a bit ugly. (Okay, we could use two components but that would mean two build copy operations which feels equally ugly.)

With that noted create a component called Drop Folder and add a backslash (\) to Source > Builds with application > Path to package. The net effect of this when the deployment has taken place is the existence of a folder called Drop Folder on the target node with the contents of the original drop folder copied over to the remote folder. As long as we don't need to create configuration variables for the component it can be reused in this basic form. It probably needs a better name though.

Create a vNext Release Template

Navigate to Configure Apps > vNext Release Templates and create a new template called Contoso University\DAT>DQA based on the Contoso University\DAT>DQA release path. You'll need to specify the build definition and check Can Trigger a Release from a Build. We now need to create the workflow on the DAT design surface as follows:

  1. Right-click the Components node of the Toolbox and Add the Drop Folder component.
  2. Expand the Actions node of the Toolbox and drag a Deploy Using PS/DSC action to the Deployment Sequence. Click the pen icon to rename to Deploy Web and Database.
  3. Double click the action and set the Configuration Variables as follows:
    1. ServerName = choose the appropriate server from the dropdown.
    2. UserName = the name of an account that has permissions on the target node. I'm using the RMDEPLOYER domain account that was set up for Deployment Agents to use in agent-based deployments.
    3. Password = password for the UserName
    4. ComponentName = choose Drop Folder from the dropdown.
    5. SkipCaCheck = true
  4. The Actions do not display very well so a complete screenshot is not possible but it should look something like this (note SkipCaCheck isn't shown):
    release-management-deploy-using-ps-dsc-action

At this stage we can save the template and trigger a build. If everything is working you should be able to examine the target node and observe a folder called C:\Windows\DtlDownloads\Drop Folder that contains the build.

Deploy the Bits

With the build now existing on the target node the next step is to actually get the web files in place and deploy the database. We'll do this from one PowerShell script called WebAndDatabase.ps1 that you should create in the Deploy folder created above. Every time you edit this and want it to run do make sure you check it in to version control. To actually get it to run we need to edit the Deploy Web and Database action created above. The first step is to add Deploy\WebAndDatabase.ps1 as the parameter to the PSScriptPath configuration variable. We then need to add the following custom configuration variables by clicking on the green plus sign:

  • destinationPath = C:\inetpub\wwwroot\CU-DAT
  • websiteSourcePath = _PublishedWebsites\ContosoUniversity.Web
  • dacpacName = ContosoUniversity.Database.dacpac
  • databaseServer = ALMWEBDB01
  • databaseName = CU-DAT
  • loginOrUser = ALM\CU-DAT

The first section of the script will deploy the web files to C:\inetpub\wwwroot\CU-DAT on the target node, so create this folder if you haven't already. Obviously we could get PowerShell to do this but I'm keeping things simple. I'm using functions in WebAndDatabase.ps1 to keep things neat and tidy and to make debugging a bit easier if I want to only run one function.

The first function deals with getting the web files into place.
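
A sketch of what such a function might look like is below. The parameter names mirror the configuration variables above, but the token names (__DATA_SOURCE__ and __INITIAL_CATALOG__) are purely illustrative, as is the assumption that the script runs with the copied drop folder as its working directory:

    function Update-Website {
        param(
            [string]$destinationPath,
            [string]$websiteSourcePath,
            [string]$databaseServer,
            [string]$databaseName
        )
        Write-Verbose "Clearing out $destinationPath" -Verbose
        Remove-Item "$destinationPath\*" -Recurse -Force
        Write-Verbose "Copying web files from $websiteSourcePath" -Verbose
        Copy-Item "$websiteSourcePath\*" -Destination $destinationPath -Recurse
        # Swap tokens in the copied Web.config only, leaving the originals intact for the DQA stage
        $webConfig = Join-Path $destinationPath 'Web.config'
        (Get-Content $webConfig) -replace '__DATA_SOURCE__', $databaseServer `
                                 -replace '__INITIAL_CATALOG__', $databaseName |
            Set-Content $webConfig
    }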

The code clears out the current set of web files and then copies the new set over. The tokens in Web.config get changed in the copied set so the originals can be used for the DQA stage.  Note how I'm using Write-Verbose statements with the -Verbose switch at the end. This causes the RM Deployment Log grid to display a View Log link in the Command Output column. Very handy for debugging purposes.

The second function deploys the DACPAC.
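
Again this is a sketch rather than the exact code; the path shown is the SQL Server 2014 location for SqlPackage.exe and the parameter names follow the configuration variables above:

    function Update-Database {
        param(
            [string]$dacpacName,
            [string]$databaseServer,
            [string]$databaseName
        )
        $sqlPackage = 'C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe'
        $arguments = "/Action:Publish /SourceFile:$dacpacName " +
                     "/TargetServerName:$databaseServer /TargetDatabaseName:$databaseName"
        Write-Verbose "Running SqlPackage.exe with $arguments" -Verbose
        Start-Process -FilePath $sqlPackage -ArgumentList $arguments -Wait -NoNewWindow
    }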

The code is simply building the command to run sqlpackage.exe -- pretty straightforward. Note that the script is hardcoded to SQL Server 2014 -- more on that below.

The final function deals with the Create login and database user.sql script that lives in the Scripts folder of the ContosoUniversity.Database project. This script ensures that the necessary SQL Server login and database user exists and is tokenised so it can be used in different stages -- see this article for all the details.
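
As before the sketch below is illustrative rather than the exact code: the token names are examples, the use of sqlcmd.exe is an assumption, and the path to the script will depend on where it sits in the copied drop folder:

    function Update-LoginAndUser {
        param(
            [string]$databaseServer,
            [string]$databaseName,
            [string]$loginOrUser
        )
        # Hypothetical path: adjust to wherever the script ends up in the drop folder
        $scriptPath = 'Scripts\Create login and database user.sql'
        $sql = (Get-Content $scriptPath -Raw) -replace '__LOGIN_OR_USER__', $loginOrUser `
                                              -replace '__DATABASE_NAME__', $databaseName
        $tempScript = Join-Path $env:TEMP 'CreateLoginAndUser.sql'
        Set-Content -Path $tempScript -Value $sql
        Write-Verbose "Running $tempScript against $databaseServer" -Verbose
        sqlcmd -S $databaseServer -E -i $tempScript
    }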

The tokens in the SQL script are first swapped for passed-in values and then the code builds a command to run the script. Again, pretty straightforward.

Loose Ends

At this stage you should be able to trigger a build and have all of the components deploy. In order to fully test that everything is working you'll want to create and configure a web application in IIS -- this article has the details.

To create the stated aim of an initial pipeline with both a DAT and DQA stage the final step is to actually configure all of the above for DQA. It's essentially a repeat of DAT so I'm not going to describe it here but do note that you can copy and paste the Deployment Sequence:

release-management-copy-stage

One remaining aspect to cover is the subject of script reusability. With RM-TFS there is an out-of-the-box way to achieve reusability with tools and actions. This isn't available in RM-VSO and instead potential reusability comes via storing scripts outside of the Visual Studio solution. This needs some thought though since the all-in-one script used above (by necessity) only has limited reusability and in a non-demo environment you would want to consider splitting the script and co-ordinating everything from a master script. Some of this would happen anyway if the web and database servers were distinct machines but there is probably more that should be done. For example, tokens that are to be swapped-out are hard-coded in the script above which limits reusability. I've left it like that for readability but this certainly feels like the sort of thing that should be improved upon. In a similar vein the path to sqlpackage.exe is hard coded and thus tied to a specific version of SQL Server and probably needs addressing.

In the next post we'll look at executing automated web tests. Meantime if you have any thoughts on great ways to use PowerShell with RM-VSO please do share in the comments.

Cheers -- Graham

Blogging with WordPress

Posted by Graham Smith on March 21, 2015

If you're the sort of IT professional who prefers to actively manage their career rather than just accept what comes along then chances are that deciding to blog could be one of the best career decisions you ever make. Technical blogging is a great focal point for all your learning efforts, a lasting way to give back to the community, a showcase for your talents to potential employers and a great way to make contact with other people in your field. Undecided? Seems most people initially feel that way. Try reading this, this and this.

If you do decide to make the leap one of the best sources of inspiration that I have found is John Sonmez's Free Blogging Course. It's packed full of tips on how to get started and keep going and since there is nothing to lose I wholeheartedly recommend signing up. This is just the beginning though, and it turns out that this is the start of many practical decisions you will need to make in order to turn out good quality blog posts that reach as wide an audience as possible. I thought I'd document my experience here to give anyone just starting out a flavour of what's involved.

Which Blogging Platform?

Once you have decided to blog the first question is likely to be which blogging platform to go for. The top hits for a best blogging platform Google search invariably recommend WordPress, and since there seemed no point in ignoring the very advice I had sought that's what I chose. There's more to it than that though, since there is a choice between WordPress.com and WordPress.org. The former allows you to host a blog for free at WordPress.com, however there are some restrictions on customisation and complications with domain names. WordPress.org, on the other hand, is a software package that anyone can host and is the fully customisable version. If you are an IT professional you are almost certainly going to want all that flexibility, and despite the modest costs involved WordPress.org was the clear winner for me.

Hosting WordPress

Next up: where are you going to host WordPress? With super fast broadband and technical skills some people might choose to host it themselves at home, but on my rural 1.8 Mbps connection that wasn't an option. I started down the Microsoft Azure route and actually got a working WordPress site up-and-running with my own private Azure account. Whilst I would have enjoyed the flexibility it would have given me, I decided the cost was prohibitive since I wanted a PaaS solution, which means paying for the Azure Website and a hosted MySQL database. It might be cheaper now so don't discount this option if Azure appeals, however I found the costs of a web hosting company to be much more reasonable. There are lots of these and I chose one off the back of a computer magazine review. Obviously you need a company that hosts WordPress at a price you are happy to pay and which offers the level of support you need.

What's in a name?

If you are looking to build a brand and market yourself then your domain name will be pretty important. It could be an amusing twist on your view or niche in the technology world or perhaps your name if it's interesting enough (mine isn't). My core interest is in the technology and processes around deploying software so pleasereleaseme tickled my fancy although I know the reference doesn't make sense to every culture.

As if choosing a second-level domain name weren't hard enough, you also need to choose the top-level domain. Of course to some extent your choice might be limited by what's available, what makes sense or how much you're willing to pay. I chose .net since I have a background as a Microsoft .NET developer. Go figure!

Getting Familiar

I'm assuming that most people reading this blog post are technically minded, so I'm not going to go through the process of getting your WordPress site and domain name up-and-running; in any case it will vary according to who you choose as a host. It's worth noting though that some companies do some unwanted low-level configuration to the 'template' used to create your WordPress site, and if you need to reverse any of that you may need an FTP tool such as FileZilla to assist with the file editing process. In my case I found I couldn't edit themes, and it turned out I needed to make a change to wp-config (which lives in the file system) to turn this on.

With your site now live I recommend taking some time to familiarise yourself with WordPress and go through all the out-of-the-box configuration settings before you start installing any plugins. In particular, Settings > General and Settings > Permalinks are two areas to check before you start writing posts. In the former don't forget to set your Site Title and a catchy Tagline. The Permalinks setting is especially important for search engine friendliness. See here for more details, but the take-home seems to be to use Post name.

One of the bigger decisions you'll need to make is which theme to go for (Appearance > Themes). There's oodles of choice, but do choose carefully as your theme will say a lot about your blog. If you find a suitable free one then great; if you can't, there is a cottage industry in paid-for themes. I'm not at all a flashy type of person so my theme is one of muted tones. It's not perfect but for free who can argue? One day I will probably ask if the author can make some changes for appropriate remuneration. In the meantime I use Appearance > Edit CSS (see Jetpack below) to make a few tweaks to my site after the theme stylesheet has been processed.

Fun with Widgets

In addition to displaying your pages and posts WordPress can also display Widgets which appear to the right of your main content. I change widgets around every so often but as a minimum always have a Text widget with some details about me, widgets from Jetpack (see below) so people can follow me via Twitter, email and RSS, and also the Tag Cloud widget.

Planning Ahead

There's still a little more to do before you begin writing posts, but this is probably a good point to start planning how you will use posts, pages, categories and tags. At the risk of stating the obvious, posts are associated with a publish date whilst pages are not. Consequently pages are great for static content and posts for, er, posts. The purpose of categories and tags is perhaps slightly less obvious. The best explanation I have come across is that categories are akin to the table of contents in a book and tags are akin to the index. The way I put all this together is by having themes for my posts. Some themes are tightly coupled (for example my Continuous Delivery with TFS soup-to-nuts series of posts) and others less so (for example my Getting Started series). I use categories to organise my themes and I also use pages as index pages for each category. It's a bit of extra maintenance but useful to be able to link back to them. Using tags then becomes straightforward: I have a different tag for each technology I write about and posts will often have several tags.

Backing Up

Please do get a backup strategy in place before you sink too much effort into configuring your site and writing posts. It's probably not enough to rely on your web hosting provider's arrangements and I thoroughly recommend implementing your own supplementary backup plan. There are plenty of plugins that will manage backup for you -- some free and some paid. To cut a long research story short I use BackUpWordPress, which is free if all you want to do is back up to your own web space. You don't want to stop there, of course -- you want to copy your backups to an offsite location. For this I use their BackUpWordPress To Google Drive extension, which costs USD 24 per year. It does what it says on the tin and there are other flavours. Please don't skimp on backup and getting your backups to an offsite location!

Beating Spam

If you have comments turned on (and you probably should to get feedback and make connections with people who are interested in your posts) you will get a gazillion spam comments. As far as I can see the Akismet plugin is the way to go here. Don't forget to regularly clean out your spam (Comments > Spam). Your instinct will probably be to want to go through spam manually the first few times you clean it out but Akismet is so good that I don't bother any more and just use the Empty Spam button.

Search Engine Optimisation

This is one of those topics that is huge and makes my head hurt. In short it's all about trying to make sure your posts are ranked highly by the search engines when someone performs a search. For the long story I recommend reading some of the SEO guides that are out there -- I found this guide by Yoast in particular to be very useful. To help cope with the complexities there are several plugins that can manage SEO for you. I ended up choosing the free version of WordPress SEO by Yoast since the plugin and the guide complement each other. There is some initial one-time setup to perform, such as registering with the Google and Bing webmaster tools, verifying your site and submitting an XML sitemap, after which it's a case of making sure each post is as optimised for SEO as it can be. The plugin guides you through everything and there is a paid-for version if you need more.

Tracking Visitors

In order to understand who is visiting your site you will want to sign up for a Google Analytics account. You then need to insert the tracking code into every page on your site and, as you would expect, a plugin can do this for you. There are a few to choose from and I went for Google Analytics Dashboard for WP.

Install Jetpack

Jetpack is a monster pack of 'stuff' from WordPress.com that can help you with all sorts of things big and small. Some items are enabled in your site as soon as you install Jetpack, such as the Edit CSS feature mentioned above, and others become available when you link it to a WordPress.com account. There is far too much to cover here and it's a case of trawling through and working out what suits your needs. To give you a flavour though:

  • Enhanced Distribution -- shares published content with third party services such as search engines
  • Extra Sidebar Widgets -- give you extra sidebar widgets
  • Monitor -- checks your site for downtime and emails you if there is a problem
  • Photon -- loads your post images from WordPress.com's content delivery network
  • Protect -- guards against brute force attacks
  • Publicize -- allows you to connect your blog to popular social networking sites and automatically share new posts
  • Shortcode Embeds -- allow you to embed media from other sites such as YouTube
  • WP.me Shortlinks -- gives you a Get Shortlink button in the post editor
  • WordPress.com Stats -- collects statistics about your site visitors similar to Google Analytics

These are just a few of the options I've enabled -- there are many, many more. Could keep you busy for hours...

Image Matters

Although blogging is about words, a picture can apparently paint a thousand of them, and as a technical blogger you are undoubtedly going to want to include screenshots of applications in your posts. I spent quite some time researching this but ultimately decided to go with one of Scott Hanselman's recommendations and chose WinSnap. It's a paid-for offering but you can use it on as many machines as you need and it's very feature-rich. I do most of my work via remote desktop connections and pretty much all of the time use WinSnap from an instance installed on my host PC. I do have it installed on the main machine I remote to, but making it capture menu fly-outs and so on directly on the remote machine is a work in progress. Whichever tool you choose please do try to take quality screenshots -- Scott has a guide here. I frequently find that my mouse pointer is in the screenshot or that I've picked up some background at the edge of a dialog. I always discard these and start again. Don't forget to protect any personal data, licence keys and the like.

Code Quality

If you are blogging about a technology that involves programming code you'll soon realise that the built-in WordPress feature for displaying code is lacking and that you need something better. As always there are several plugins that can come to the rescue -- here is just one review. I tried a couple and decided that Crayon Syntax Highlighter was the one for me. Whichever plugin you choose, do take some time to understand all the options and do some experimenting to ensure your readers get the best experience.

Writing Quality Posts

Your blog says a lot about you and for that reason you probably want to pay close attention to the quality of your writing. I don't mean that you should get stressed over this and never publish anything, after all one of your reasons for blogging might be to improve your writing skills. Rather, pay attention to the basics so that they are a core feature of every post and then you'll have solid foundations to improve on. Here's a list of some of the things to consider:

  • Try not to let spelling mistakes slip through. Browsers have spell checkers and highlighting these days -- do use them.
  • Proofread your posts before publishing -- and after. I'm forever writing form when I mean from and spell checkers don't catch this. WordPress has a preview feature and Jetpack has a Proofreading module -- why not try it out?
  • Try to adopt a consistent formatting style that uses white space and headings. Watch out for any extra white space that might creep in between paragraphs. Use the Text pane of the editor to check the HTML if necessary.
  • Watch out for extra spaces at the start of paragraphs (they creep in somehow) and also for double spaces.
  • If you are unsure about a word, phrase or piece of grammar Google for it to find out how it should be used or how others have used it, but only trust a reputable source or common consensus. If I have any nagging doubt I never assume I am right and will always check. I've been surprised more than a few times to find out that what I thought was correct usage was wrong.
  • Technical writing can be quite difficult because you need to refer to elements of an application as you describe how to do something, and you need to distinguish these from the other words in your sentences. I use bold to flag up the actions a reader needs to take if they are 'following along at home', and usually also at the first mention of a core technology component and the like. Have a look at some of my posts to see what I mean.
  • In my first career as a research scientist my writing (for academic journals, for example) was strictly in the third person and quite formal. That doesn't work at all well for blogs, where you want to write directly to the reader. For most 'how-to' blogs second person is probably best, with a bit of first person thrown in on occasions, and that's the style I use. Have a look here for more explanation.

If you are serious about improving your written work there are plenty of books and web pages you can read. Many years ago when I was an undergraduate at the University of Wales, Bangor, there was an amazing guide to writing called The Style and Presentation of Written Work by Agricultural and Forest Sciences lecturer Colin Price. I read it over and over again and it still stands me in good stead today. My paper copy has long since vanished but I was thrilled to find it available here. The focus is on academic writing but it's packed full of useful tips for everyone and well worth reading.

Marketing your Blog

So, your blog is up-and-running and you are putting great effort into writing quality posts that you hope will be of use to others in your area. Initially you might be happy with the trickle of users finding your site, but then you'll write a post that takes over 10 hours of research and writing and you'll wish you had more traffic for your efforts.

Say hello to the mysterious world of marketing your blog. I was -- and to some extent still am -- uneasy about all this however there is something satisfying about seeing your Google Analytics statistics go up. So how does it work? I'm still learning but here are some of the techniques I'm using and which you might want to try:

  • Answering questions on MSDN forums and StackOverflow. Create and maintain your profiles on these forums and when answering questions link to one of your blog posts if it's genuinely helpful. Answering questions is also a great way to understand where others are having problems and where a timely blog post might help.
  • Comment on other bloggers' posts. I follow about 70 blogs and several times a week there might be an opportunity to comment and link back to a post you have written.
  • Link to the CodeProject if you are writing a blog that fits that site -- instructions on how to link here. Once connected the quick way to have a post consumed is to have a WordPress category called CodeProject and use it in the post.
  • Use Jetpack's Publicize module to automatically post your blogs to Twitter, Facebook, LinkedIn and Google+. I'm still working out the value of doing this -- my family are bemused -- but it's automatic so what the heck? In all seriousness, if you are looking to promote your blog in a big way then social media is probably going to be a big thing for you.

That's pretty much where I have got to on my blogging with WordPress journey to date. Looking back, getting all this configured and sorted out has been hugely enjoyable. Concepts that were once very hazy are now a little less so and I have learned a huge amount. I'm sure I've missed lots of important bits out and maybe you have your own thoughts as to which plugins are must-haves. Do share through the comments!

Cheers -- Graham

Getting Started with PowerShell DSC

Posted by Graham Smith on March 17, 2015No Comments (click here to comment)

Whenever I explain to people the common failure points for the deployment of an application I'll often draw a triangle. One point is for application code, another for application configuration and the third for server configuration. (Of course there are plenty of other ways for a deployment to fail, but if it's because the power to your server room has failed you have a different class of problem.) Minimising the chances of application code being the culprit starts with good coding practices such as appropriate use of design patterns, test driven development or similar -- the list goes on and everyone will have their view. This continues with practising continuous integration and deploying code to a delivery pipeline using a tool such as Release Management for Visual Studio that can manage an application's configuration between environments. But how to manage server configuration? In many organisations initial server configuration is typically done by hand -- possibly using a build list. Over time tweaks are made by different technicians until eventually the server becomes a work of art: a one-off that nobody could reliably reproduce.

The answer to all this is tooling that implements configuration as code. Typically this means declaring in a code file what you want a server's configuration to look like and then leaving some other component to figure out how to achieve that -- and to correct any deviations that might occur. This is in contrast to an imperative code build script where you would prescribe what would happen but where you would have to take care of error handling and other factors that could cause issues.
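
To give a flavour of what the declarative style looks like, here's a minimal DSC configuration (the node name and paths are illustrative, not taken from my environment) that simply states that IIS should be present and a website folder should exist, leaving the Local Configuration Manager to work out how to get there and to correct any drift:

Configuration WebServerConfig
{
    Node 'ALMWEB01'   # illustrative node name
    {
        # Declare that the Web Server (IIS) role should be installed
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        # Declare that the website folder should exist
        File WebSiteFolder
        {
            Ensure          = 'Present'
            Type            = 'Directory'
            DestinationPath = 'C:\inetpub\ContosoUniversity'
            DependsOn       = '[WindowsFeature]IIS'
        }
    }
}

# Compile the configuration to a MOF file and push it to the node
WebServerConfig -OutputPath 'C:\DscConfigs'
Start-DscConfiguration -Path 'C:\DscConfigs' -Wait -Verbose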

In the non-Windows world tools such as Puppet and Chef are commonly used to automate the configuration of servers. And whilst they do have something to offer the Windows folks it's not a completely happy story because both tools require a Linux machine as the master server. For a while there wasn't a ‘native' solution to the configuration as code problem for the Windows platform however all that changed with PowerShell 4 and the release of PowerShell DSC (Desired State Configuration). If you don't already have a configuration as code solution and you are a Windows shop then PowerShell DSC is almost certainly the route of choice. There is now a wealth of options for learning PowerShell DSC and my pick of some of the best places to start is as follows:

Although I haven't had chance to watch much of them yet, Getting Started with PowerShell Desired State Configuration (DSC) and Advanced PowerShell Desired State Configuration (DSC) and Custom Resources are undoubtedly going to turn out to be unmissable. As I mention in my Getting Started with Windows PowerShell blog post, the double act that is Jason Helmick and PowerShell inventor Jeffrey Snover is an enormously informative but at the same time hugely entertaining combination. I chuckled and chortled all the way through their two PowerShell JumpStart series of videos and I'm expecting more of the same with these latest ones. Having fun whilst learning? What could be better?

Cheers -- Graham

Continuous Delivery with VSO: Configuring Release Management

Posted by Graham Smith on March 15, 20154 Comments (click here to comment)

In this post in my blog series on continuous delivery with VSO we look at configuring Release Management for Visual Studio. RM is part of the TFS ecosystem and is used to deploy our code to the different environments that constitute the delivery pipeline. It was originally built to work with TFS however the 2013.4 version released in November 2014 now works with VSO. Inevitably of course I'm going to be comparing how RM with VSO stacks up against RM with TFS.

Setting the Scene

From now on in this series of blog posts I'm going to assume that you are working in Azure and have a setup that resembles the one I created for my Continuous Delivery with TFS series of posts. If you are starting from scratch and need to catch up then these are the posts that can help:

One of the big advantages of RM-VSO is that there is no need to run a TFS instance. Additionally there is no need to run an RM server instance or Deployment Agents on target nodes since this is all taken care of, either behind the scenes in the case of the RM server or by using a different technique in the case of deploying to target nodes. Whilst the RM-VSO offering reduces the number of moving parts (which is good) it also imposes restrictions. As an example, RM-TFS allows us to reuse deployment VMs in different environments. In contrast RM-VSO doesn't allow this and consequently a multi-tenant model (eg one IIS machine hosting multiple websites) isn't possible, at least not without a substantial amount of jiggery-pokery. Does this matter? It depends... For a demo environment fewer VMs is preferable if you need to preserve your Azure credits, but in vivo you probably want separate VMs anyway. There is an easy -- if inelegant -- workaround for those that want to preserve Azure credits and I describe this below.

Configuring Azure to Work with RM

Our initial pipeline will consist of two environments: DAT (Development Automated Test) and DQA (Development Quality Assurance). Our Contoso University sample application has a web component and a database component so we'll need the services of IIS and SQL Server. With RM-TFS these can be dedicated web and database VMs that host multiple websites and databases but as mentioned above out of the box this isn't possible with RM-VSO. An additional requirement is a one-to-one mapping between RM-VSO environments and Azure cloud services. To work around all this we'll use VMs that host both IIS and SQL Server. A bit hacky for a demo setup but what to do? The procedure for setting all this up is as follows:

  • In the Azure portal create two new cloud services to host VMs for each RM-VSO environment. I called mine datcloudservice.cloudapp.net and dqacloudservice.cloudapp.net -- you'll need to choose unique names for your services.
  • Now create two new VMs -- one in each cloud service. I called mine ALMWEBDB01 and ALMWEBDB02. The good news is that despite being in different cloud services these servers can be in the same virtual network, affinity group and storage account. This keeps everything neat and tidy and also means the servers can be part of your domain if you have set one up. (If you prefer PowerShell to the portal, there's a rough sketch of these first two steps after this list.)
  • Both of these servers need to have IIS and SQL Server installed. This is fairly standard stuff so I won't be covering this here. One note of caution is that to preserve Azure credits be sure to install SQL Server from scratch rather than use an image from the gallery with SQL Server pre-installed as the latter technique is much more costly.
  • These servers also need an account adding to the local administrators group that will be used in the deployment process. I used the RMDEPLOYER domain account that was set up for Deployment Agents to use in agent based deployments. In addition RMDEPLOYER will need a login for SQL Server and appropriate permissions. The easy path in a demo environment is to grant sysadmin but clearly that may be unwise in production.
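
As promised above, here's roughly what creating the cloud services and VMs looks like using the classic (service management) Azure PowerShell cmdlets rather than the portal. I set mine up through the portal, so treat the image, instance size, credentials and location below as illustrative assumptions -- and remember that cloud service names must be globally unique:

# Sketch only -- classic (ASM) Azure PowerShell module; values are illustrative.
# Pick a Windows Server 2012 R2 image from the gallery (no SQL Server pre-installed)
$image = (Get-AzureVMImage |
    Where-Object { $_.Label -like 'Windows Server 2012 R2*' } |
    Select-Object -First 1).ImageName

# One cloud service per RM-VSO environment, each hosting a single web + database VM
New-AzureService -ServiceName 'datcloudservice' -Location 'West Europe'
New-AzureQuickVM -Windows -ServiceName 'datcloudservice' -Name 'ALMWEBDB01' `
    -ImageName $image -InstanceSize 'Basic_A4' -AdminUsername 'localadmin' -Password 'P@ssw0rd123'

New-AzureService -ServiceName 'dqacloudservice' -Location 'West Europe'
New-AzureQuickVM -Windows -ServiceName 'dqacloudservice' -Name 'ALMWEBDB02' `
    -ImageName $image -InstanceSize 'Basic_A4' -AdminUsername 'localadmin' -Password 'P@ssw0rd123'

# Joining the virtual network, affinity group and domain is left out of this sketch.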

The other VM which is core to all this is your developer workstation running Visual Studio, Release Management and Microsoft Test Manager. See above for the link to getting this machine configured if necessary.

Connect Release Management to VSO

I'm making the assumption here that you already have the RM client connected to TFS and want to connect it to VSO. If you have a new install of the RM client the steps will be similar. You'll need to start an already configured RM client with your TFS instance up-and-running, otherwise it just chokes. To switch over from TFS to VSO navigate to Administration > Settings > System Settings and click on the Edit link at the end of the Release Management Server URL setting:

release-management-client-change-server-url

In the Configure Services dialog that appears add in the URL of your VSO account, ie https://myaccount.visualstudio.com. You'll probably be prompted to enter credentials, after which you'll be prompted to allow the client to restart. When it does you have an instance of the client 're-branded' for VSO, by which I mean there are some changes to the user interface to reflect the difference between the features supported by TFS and VSO. One immediately obvious difference is that there is no place to specify SMTP settings, as VSO handles all that.

Connect Release Management to Azure and Configure an Environment

One key difference between VSO and TFS is that VSO can only deploy to Azure VMs. In order to allow this you must configure RM with your Azure subscription:

  • Download a text file containing your Azure subscription settings from here. (A PowerShell route to the same file is sketched after this list.)
  • From Administration > Manage Azure click on New and fill in the Name, Subscription ID and Management Certificate Key from the text file. Pay particular attention if you have more than one Azure subscription. For the Management Certificate Key you want everything between the quotes. Get the appropriate Storage Account Name from here. Consider deleting the Azure subscription settings file when you are finished with it for security purposes.
  • Create DAT and DQA stages from Administration > Manage Pick Lists. See here for my TFS equivalent post.
  • From Configure Paths > Environments click on New vNext: Azure to create a new environment and click Link Azure Environment to bring up the Azure Environments dialog. Select your Azure subscription and then use the Link button to link the DAT cloud service.
  • With the environment created click on Link Azure Servers to link the VM hosted in the DAT cloud service:
    release-management-client-link-azure-servers
  • Note that you can't change the name of the environment -- it is fixed as the name of the cloud service.
  • Now repeat the process for the DQA cloud service, after which you should have two environments:
    release-management-azure-environments
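
As promised above, a possible PowerShell alternative for grabbing the subscription settings file (classic Azure module; the file path below is an illustrative assumption):

# Opens a browser and downloads the .publishsettings file for your subscription(s);
# the Management Certificate Key that RM asks for is the ManagementCertificate value inside it.
Get-AzurePublishSettingsFile

# Optionally make the subscription available to PowerShell itself as well
Import-AzurePublishSettingsFile -PublishSettingsFile 'C:\Temp\MySubscription.publishsettings'
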
Configure a vNext Release Path

With the environments created we can create a release path. Navigate to Configure Paths > vNext Release Paths and create a new path called Contoso University\DAT>DQA. Add two stages to it (one for DAT and another for DQA) and configure with the respective environments. You will need to add yourself or another user to the approvals workflow as the concept of groups isn't available in RM-VSO. Additionally the DAT workflow should be automated. You should end up with something similar to this:

release-management-vnext-release-path

Again there are differences between the VSO version and the TFS version, since for some reason the toggle email notification icons are missing from the VSO version. Other than that, creating a release path with RM-VSO is very similar to RM-TFS.

Until Next Time

That's as far as we are going in this post. Next time we'll configure the actual release template and get to grips with using PowerShell scripts to deploy our components.

Cheers -- Graham

Continuous Delivery with VSO: Configuring the Basics

Posted by Graham Smith on March 12, 2015No Comments (click here to comment)

In this first post in my series on implementing continuous delivery with Visual Studio Online we look at configuring the basics, including setting up an account and linking in to Visual Studio. As usual I assume a degree of familiarity with the tooling, and if you need to get up to speed with VSO I have a getting started post here. I also assume that you already have a Microsoft account, and I'll be writing the series from the perspective of someone with an MSDN subscription who has access to Microsoft software and Azure credits. If that's not you then all is not lost, since much of the tooling is available for free or as trial versions.

Create a VSO Account and Configure a Project

Our journey begins by creating a new VSO account. Head over to this page and sign in with your Microsoft account. Under the Accounts list there is a Create a free account now link which allows you to create a new account using a unique URL ending in visualstudio.com. A fairly recent addition is the ability to have the account hosted in West Europe by clicking Change options. Once created you should see your account listed with any other accounts that you have created or have been invited to join.

visual-studio-online-create-account

The first time you visit your account (analogous to a Team Project Collection in on-premise TFS) you will need to Create your first project which is analogous to a Team Project. I created a project called ContosoUniversity based on the Microsoft Visual Studio Scrum 2013.4 process template and using Team Foundation Version Control.

Link the VSO Account to Visual Studio 2013

Once your new project is created the next step is to hook it up to Visual Studio 2013. You can do this from the Overview page of your new project if you have the account open in a browser running on your development machine or you can do as I did and manually connect in Visual Studio via Team Explorer -- Connect. I added a new server using https://pleasereleaseme.visualstudio.com and that was all that was required for Visual Studio to prompt me for credentials.

With the account added the next step is to map a workspace. I'd previously mapped ContosoUniversity to the TFS version of the project and the filepath was already in use so I added a VSO folder before the project name to keep everything tidy and avoid a ContosoUniversity2 folder. Next up is to add the ContosoUniversity source code to version control under a Main folder that is configured as a branch -- see this post for fuller details. If you have your own version of ContosoUniversity from my TFS blog post series that you want to use then go ahead (see here for a utility to unbind the solution from version control prior to copying it over) or you can download a zip of the code from here. At this point you should be able to publish the database to LocalDb and run the application.

Create and Run a Build

As a final step to getting the basics configured we'll create and run a build. Although there is a Build area within VSO you can't actually create a build there -- you need to do that from within Visual Studio. From Team Explorer choose Builds and then New Build Definition. The process is very similar to the one for the full-blown TFS I describe here. The main differences are that in Build Defaults I left Staging Location set to Copy build output to server, in Process I chose the TfvcTemplate.12.xaml build process template, and in Automated Tests I changed test to unittests to stop the automated web tests from running. I also set 5. Advanced > MSBuild arguments to /p:UseWPP_CopyWebApplication=true /p:PipelineDependsOnBuild=false to ensure that the web.config transform that gives the tokenised version takes place.
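
For context, those two MSBuild properties are the same ones you would pass when building locally to make the web.config transform run as part of copying the web application output. A rough local equivalent (the solution path is an assumption) from a PowerShell prompt where msbuild.exe is on the path would be:

# Illustrative only -- the solution path is an assumption.
msbuild 'C:\Source\VSO\ContosoUniversity\Main\ContosoUniversity.sln' `
    /p:Configuration=Release `
    /p:UseWPP_CopyWebApplication=true `
    /p:PipelineDependsOnBuild=false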

With the build running successfully I did notice one immediate difference compared with TFS: it can take substantially longer for the build to wait in the queue. I can't find a reference but I'm pretty sure I've read or heard that a build from cold is going to take longer because VSO has to stand up the infrastructure for your build. I've also found that the first build from cold fails with missing assembly reference errors -- presumably package download not working. Inexplicably, subsequent builds work fine. I still need to verify this with more testing, but if you're finding this do let me know via the comments. On the plus side, once your build is created you can queue it from the build section of VSO:

visual-studio-online-queue-build

To finish off, the initial impression of VSO is that it's very slick and extremely well integrated with Visual Studio. It's certainly orders of magnitude easier to set up than TFS. Does it have all the flexibility of TFS when it comes to continuous delivery? We'll start to find out over the next few posts.

Cheers -- Graham