Post Deployment Configuration with the PowerShell DSC Extension for Azure Resource Manager Templates

Posted by Graham Smith on April 28, 2016

As part of a forthcoming blog post I'm writing for my series about Continuous Delivery with TFS / VSTS I want to be able to deploy PowerShell DSC scripts to Windows Server target nodes that both configure servers and deploy my application components. Separately, I want to automate the creation of target nodes so I can easily destroy and recreate them -- great for testing. In this previous post I explained how to do this with Azure Resource Manager templates; however, the journey didn't end there since I also wanted to join the nodes to a domain and install Windows Management Framework 5.0 in order to get the latest version of PowerShell DSC. Even then the journey wasn't over, because my server configuration and application deployment technique with PowerShell DSC uses WinRM, which requires target nodes to have their firewalls configured to allow WinRM traffic.

The solution to this problem lies with harnessing the true intended functionality of the PowerShell DSC Extension. Although you can just use it to install WMF, its real purpose is to run DSC configurations after the VM has been deployed. The configuration I used was as follows:
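A minimal sketch of such a configuration (the configuration name PostDeploymentConfig and the choice of the built-in Script resource to disable the domain firewall profile are assumptions, not the original listing) might look like this:

```powershell
# PostDeploymentConfig.ps1 -- a minimal sketch, not the original configuration.
Configuration PostDeploymentConfig
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node localhost
    {
        # Turn off the domain firewall profile so WinRM traffic is not blocked
        Script DisableDomainFirewall
        {
            GetScript  = { @{ Result = [string](Get-NetFirewallProfile -Profile Domain).Enabled } }
            TestScript = { (Get-NetFirewallProfile -Profile Domain).Enabled -eq 'False' }
            SetScript  = { Set-NetFirewallProfile -Profile Domain -Enabled False }
        }
    }
}
```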

As you can see, rather than create any firewall rules I chose simply to turn the domain firewall off. The main reason is simplicity: creating firewall rules with DSC needs a custom resource, which adds another layer of complexity to the problem. Another option is to use netsh commands to create the firewall rules, but in my case I have no issue with turning the firewall off.

The next step is to package this config into a zip file and make it available on a publicly accessible URL. GitHub is one possible location that can be used to host the zip but I chose Azure blob storage. The Publish-AzureVMDscConfiguration cmdlet exists to help here, and can create the zip locally for onward transfer to GitHub (for example) or it can push it straight to Azure blob storage. I was using the latter route of course, although I found that I couldn't get the cmdlet to work with premium storage and ended up creating a standard storage account. The code is as follows:
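A sketch of the approach (the storage account name and key are placeholders, and the exact cmdlet parameters used in the original script are an assumption):

```powershell
# A sketch only -- the storage account name and key are placeholders
$storageAccountName = 'mystandardstorageaccount'
$storageAccountKey  = '<storage account key copied from the Azure Portal>'

# Build a storage context from the account name and key
$storageContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey

# Zip up PostDeploymentConfig.ps1 and push it straight to blob storage
# (the default container name is windows-powershell-dsc)
Publish-AzureVMDscConfiguration -ConfigurationPath '.\PostDeploymentConfig.ps1' -StorageContext $storageContext -Force
```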

The storage account key is copied from the Azure Portal via Storage account > $StorageAccount$ > Settings > Access keys. Don't try using mine as I've invalidated it. I should point out that I couldn't get this command to work consistently and it would sometimes error. I did get it to work eventually but I didn't manage to pin down the problem. The net effect of successfully running this code is a file called PostDeploymentConfig.ps1.zip in blob storage. As things stand though this file isn't accessible and its container (windows-powershell-dsc is created by default) needs to have its access policy changed from Private to Blob.

With that done it's time to amend the JSON template. The dscExtension resource that was added in this post should now look as follows:
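A hedged sketch of what the amended extension resource might look like -- the variable names, apiVersion, typeHandlerVersion and blob URL are assumptions and need adjusting to match your own template and storage account:

```json
{
  "name": "[concat(variables('vmName'), '/dscExtension')]",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "location": "[resourceGroup().location]",
  "apiVersion": "2015-06-15",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.19",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "WmfVersion": "5.0",
      "ModulesUrl": "https://mystandardstorageaccount.blob.core.windows.net/windows-powershell-dsc/PostDeploymentConfig.ps1.zip",
      "ConfigurationFunction": "PostDeploymentConfig.ps1\\PostDeploymentConfig"
    }
  }
}
```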

I've chosen to hard code the ModulesUrl and ConfigurationFunction settings because I won't need to change them but they can of course be parameterised. That's all there is to it, and the result is a VM that is completely ready to have its internals configured by PowerShell DSC scripts over WinRM. If you want to download the code that accompanies this post it's on my GitHub site as a release here.

Cheers -- Graham

The 2015/2016 Simple-Talk Awards – I Won my Category!

Posted by Graham Smith on April 10, 2016

Huge thanks to everyone who voted for my Continuous Delivery with TFS / VSTS – Configuring a Basic CI Build with Team Foundation Build 2015 blog post which was nominated in The 2015/2016 Simple-Talk Awards in The most useful technical article published category.

I'm thrilled to say that I won my category! Once again many thanks to everyone who voted for me and congratulations to all the other winners and nominees.

Cheers -- Graham

 

Install Windows Management Framework 5.0 with Azure Resource Manager Templates

Posted by Graham Smith on April 9, 2016

In a recent post on my blog series about Continuous Delivery with TFS / VSTS I mentioned that I was having to manually install Windows Management Framework 5.0 after creating a Windows server via ARM templates as it was a necessary precursor to running my PowerShell DSC configuration. I also mentioned that automating the install was on my to-do list. But no more!

It turns out that the PowerShell DSC extension for ARM templates will perform the installation, and that there's no need to actually run a DSC configuration if you don't need to -- just specify "WmfVersion": "5.0" in the settings section. The JSON to add to your ARM template should look similar to this:
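A hedged sketch (the variable names, apiVersion and typeHandlerVersion are assumptions and will differ in your template):

```json
{
  "name": "[concat(variables('vmName'), '/dscExtension')]",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "location": "[resourceGroup().location]",
  "apiVersion": "2015-06-15",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.19",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "WmfVersion": "5.0"
    }
  }
}
```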

I say similar because the code is configured to use the variables in my template; you can see the full template for context in my GitHub Infrastructure repo here.

Many thanks to Zach Alexander and the PowerShell Team for pointing me in the right direction!

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Server Configuration as Code with PowerShell DSC

Posted by Graham Smith on April 7, 2016

I suspect I'm on reasonably safe ground when I venture to suggest that most software engineers developing applications for Windows servers (and the organisations they work for) have yet to make the leap from just writing the application code to writing both the application code and the code that will configure the servers the application will run on. Why do I suggest this? It's partly from experience in that I've never come across anyone developing for the Windows platform who is doing this (or at least they haven't mentioned it to me) and partly because up until fairly recently Microsoft hadn't provided any tooling for implementing configuration as code (as this engineering practice is sometimes referred to). There are products from other vendors of course but they tend to have their roots in the Linux world and use languages such as Ruby (or DSLs based on Ruby), which is probably going to seriously muddy the waters for organisations trying to get everyone up to speed with PowerShell.

This has all changed relatively recently with the introduction of PowerShell DSC, Microsoft's solution for implementing configuration as code on Windows (and other platforms as it happens). With PowerShell DSC (and related technologies) the configuration of servers is expressed as programming code that can be versioned in source control. When a change is required to a server the code is updated and the new configuration is then applied to the server. This process is usually idempotent, ie the configuration can be applied repeatedly and will always give the same result. It also won't generate errors if the configuration is already in the desired state. Through version control we can audit how a configuration changes over time and being code it can be applied as required to ensure server roles in different environments, or multiple instances of the same server role in the same environment, have a consistent configuration.

So ostensibly Windows server developers now have no excuse not to start implementing configuration as code. But if we've managed so far without this engineering practice why all the fuss now? What benefit is it going to bring to the table? The key benefit is that it's a cure for that age-old problem of servers that might start life from a build script, but over the months (and possibly years) different technicians make necessary tweaks here and there until one day the server becomes a unique work of art that nobody could ever hope to reproduce. Server backups become critical and everyone dreads the day that the server will need to be upgraded or replaced.

If your application is very simple you might just get away with this state of affairs -- not that it makes it right or a best practice. However if your application is constantly evolving with concomitant configuration changes and / or you are going down the microservices route then you absolutely can't afford to hand-crank the configuration of your servers. Not only is the manual approach very error prone, it's also hugely time-consuming, and has no place in a world of continuous delivery where shortening lead times and increasing reliability and repeatability is the name of the game.

So if there's no longer an excuse not to implement configuration as code on the Windows platform why isn't there a mad rush to adopt it? In my view, for most mid-size IT departments working with existing budgets and staffing levels and an existing landscape of hand-cranked servers it's going to be a real slog to switch the configuration of a live estate to being managed by code. Once you start thinking about the complexities of analysing existing servers (some of which might have been around for years and which might have all sorts of bespoke applications running on them) combined with devising a system of managing scores or even hundreds of servers it's clear that a task of this nature is almost certainly going to require a dedicated team. And despite the potential benefits that configuration as code promises most mid-size IT departments are likely to struggle to stand up such a team.

So if it's going to be hard how does an organisation get started with configuration as code and PowerShell DSC? Although I don't have anywhere near all of the answers it is already clear to me that if your organisation is in the business of writing applications for Windows servers then you need to approach the problem from both ends of the server spectrum. At the far end of the spectrum is the live estate where server ‘drift' needs to be controlled using PowerShell DSC's ‘pull' mode. This is where servers periodically reach out to a central repository to pull their ‘true' configuration and make any adjustments accordingly. At the near end of the spectrum are the servers that form the continuous delivery pipeline which need to have configuration changes applied to them just before a new version of the application gets deployed to them. Happily PowerShell DSC has a ‘push' mode which will work nicely for this purpose. There is also the live deployment situation. Here, live servers will need to have configuration changes pushed to them before application deployment takes place and then will need to switch over to pull mode to keep them true.

The way I see things at the moment is that PowerShell DSC pull mode is going to be hard to implement at scale because of the lack of tooling to manage it. Whilst you could probably manage a handful of servers in pull mode using PowerShell DSC script files, any more than a handful is going to cause serious pain without some kind of management framework such as the one that is available for Chef. The good news though is that getting started with PowerShell DSC push mode for configuring servers that comprise the deployment pipeline as part of application development activities is a much more realistic prospect.

Big Picture Time

I'm not going to be able to cover everything about making PowerShell DSC push mode work in one blog post so it's probably worth a few words about the bigger picture. One key concept to establish early on is that the code that will configure the server(s) that an application will reside on has to live and change alongside the application code. At the very least the server configuration code needs to be in the same version control branch as the application code and frequently it will make sense for it to be part of the same Visual Studio solution. I won't be presenting that approach in this blog post and instead will concentrate on the mechanics of getting PowerShell DSC push mode working and writing the configuration code that enables the Contoso University sample application (which requires IIS and SQL Server) to run. In a future post I'll have the code in the same Visual Studio solution as the Contoso University sample application and will explain how to build an artefact that is then deployed by the release management tooling in TFS / VSTS prior to deploying the application.

For anyone who has come across this post by chance it is part of my ongoing series about Continuous Delivery with TFS / VSTS, and you may find it helpful to refer to some of the previous posts to understand the full context of what I'm trying to achieve. I should also mention that this post isn't intended to be a PowerShell DSC tutorial and if you are new to the technology I have a Getting Started post here with a link collection of useful learning resources. With all that out of the way let's get going!

Getting Started

Taking the Infrastructure solution from this blog post as a starting point (available as a code release at my Infrastructure repo on GitHub, final version of this post's code here) add a new PowerShell Script Project called ConfigurationScripts. To this new project add a new PowerShell Script file called ContosoUniversity.ps1 and add a hash table and empty Configuration block called WebAndDatabase as follows:
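A sketch of the skeleton (the configuration data layout is an assumption, keyed on the PRM-DAT-AIO node used throughout this post):

```powershell
# Configuration data -- the node name is the only setting needed at this stage
$configData = @{
    AllNodes = @(
        @{
            NodeName = 'PRM-DAT-AIO'
        }
    )
}

Configuration WebAndDatabase
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $AllNodes.NodeName
    {
        # Resource blocks will be added here
    }
}
```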

We're going to need an environment to deploy in to so using the techniques described in previous posts (here and here) create a PRM-DAT-AIO server that is joined to the domain. This server will need to have Windows Management Framework 5.0 installed -- a manual process as far as this particular post is concerned but something that is likely to need automating in the future.

To test a basic working configuration we'll create a folder on PRM-DAT-AIO to act as the IIS physical path to the ContosoUniversity web files. Add the following lines of code to the beginning of the configuration block:
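A sketch of the resource block (resource name is my own choice):

```powershell
# Create the folder that will become the IIS physical path for Contoso University
File ContosoUniversityFolder
{
    Ensure          = 'Present'
    Type            = 'Directory'
    DestinationPath = 'C:\inetpub\ContosoUniversity'
}
```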

To complete the skeleton code add the following lines of code to the end of ContosoUniversity.ps1:
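A sketch of those closing lines:

```powershell
# Generate the MOF file(s) to C:\Dsc\Mof and push the configuration to the node(s)
WebAndDatabase -ConfigurationData $configData -OutputPath 'C:\Dsc\Mof'
Start-DscConfiguration -Path 'C:\Dsc\Mof' -Wait -Verbose
```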

The code contained in ContosoUniversity.ps1 should now be as follows:
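Putting the pieces above together, the skeleton might look like this (a sketch rather than the original listing):

```powershell
$configData = @{
    AllNodes = @(
        @{
            NodeName = 'PRM-DAT-AIO'
        }
    )
}

Configuration WebAndDatabase
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $AllNodes.NodeName
    {
        File ContosoUniversityFolder
        {
            Ensure          = 'Present'
            Type            = 'Directory'
            DestinationPath = 'C:\inetpub\ContosoUniversity'
        }
    }
}

WebAndDatabase -ConfigurationData $configData -OutputPath 'C:\Dsc\Mof'
Start-DscConfiguration -Path 'C:\Dsc\Mof' -Wait -Verbose
```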

Although you can create this code from any developer workstation you need to ensure that you can run it from a workstation that is joined to the same domain as PRM-DAT-AIO and has a folder called C:\Dsc\Mof. In order to keep authentication simple I'm also assuming that you are logged on to your developer workstation with domain credentials that allow you to perform DSC operations on PRM-DAT-AIO. Running this code creates a PRM-DAT-AIO.mof file in C:\Dsc\Mof which is then pushed to PRM-DAT-AIO to create the folder. Magic!

Installing Resource Modules Locally

To do anything much more sophisticated than create a folder we'll need to import resources to our local workstation from the PowerShell Gallery. We'll be working with xWebAdministration and xSQLServer and they can be installed locally as follows:
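A sketch of the install commands:

```powershell
# Install (or update) the DSC resource modules from the PowerShell Gallery
Install-Module -Name xWebAdministration -Force
Install-Module -Name xSQLServer -Force
```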

These same commands will also install the latest version of the resources if a previous version exists. Referencing these resources in our configuration script seems to have changed with the release of DSC 5.0 and versioning information is a requirement. Consequently, these resources are referenced in the configuration as follows:
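A sketch of the versioned references, inside the Configuration block (the module versions shown assume xWebAdministration 1.10 and xSQLServer 1.5 as discussed later in this post):

```powershell
# Adjust the versions to match what you actually installed
Import-DscResource -ModuleName @{ModuleName = 'xWebAdministration'; ModuleVersion = '1.10.0.0'}
Import-DscResource -ModuleName @{ModuleName = 'xSQLServer'; ModuleVersion = '1.5.0.0'}
```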

Obviously change the above code to reference the version of the module that you actually install. The resources are continually being updated with new versions and this requires a strategy to upgrade on a periodic basis.

Making Resource Modules Available Remotely

Whilst the additions in the previous section allow us to create advanced configurations on our developer workstation, these configurations are not going to run against target nodes since as things stand the target nodes don't know anything about custom resources (as opposed to resources such as PSDesiredStateConfiguration which ship with the Windows Management Framework). We can fix this by telling the Local Configuration Manager (LCM) of target nodes where to get the custom resources from. The procedure (which I've adapted from Nana Lakshmanan's blog post) is as follows:

  • Choose a server in the domain to host a fileshare. I'm using my domain controller (PRM-CORE-DC) as it's always guaranteed to be available under normal conditions. Create a folder called C:\Dsc\DscResources (Dsc purposefully repeated) and share it as Read/Write for Everyone as \\PRM-CORE-DC\DscResources.
  • Custom resources need to be zipped in the format required by the DSC pull protocol. The PowerShell to do this for version 1.10 of xWebAdministration and 1.5 of xSQLServer (using a local C:\Dsc\Resources folder) is shown in the first sketch after this list.

    Of course, depending on how frequently you have to do this to cope with updates and the number of resources you end up working with, you'll probably want to wrap all this up into some sort of reusable package.
  • With the packages now in the right format in the fileshare we need to tell the LCM of target nodes where to look. We do this by creating a new configuration decorated with the [DscLocalConfigurationManager()] attribute (see the second sketch after this list):

    The Settings block is used to set various properties of the LCM which are required in order for configurations we'll be writing to run. The ResourceRepositoryShare block obviously specifies the location of the zipped resource packages.
  • The final requirement is to add the line of code (Set-DscLocalConfigurationManager -Path C:\Dsc\Mof -Verbose) to apply the LCM settings.
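Two hedged sketches follow. The first packages the resources in the ModuleName_Version.zip format and copies them, with checksums, to the fileshare; the paths and the use of Compress-Archive straight from the local module folder are assumptions:

```powershell
# First sketch: package the resources for the DSC pull protocol
$resources = @(
    @{ Name = 'xWebAdministration'; Version = '1.10.0.0' },
    @{ Name = 'xSQLServer';         Version = '1.5.0.0'  }
)

foreach ($resource in $resources)
{
    $source      = "C:\Program Files\WindowsPowerShell\Modules\$($resource.Name)"
    $destination = "\\PRM-CORE-DC\DscResources\$($resource.Name)_$($resource.Version).zip"
    Compress-Archive -Path "$source\*" -DestinationPath $destination -Force
}

# Generate the checksum files the pull protocol requires
New-DscChecksum -Path '\\PRM-CORE-DC\DscResources' -Force
```

The second sketch is the LCM meta-configuration; the Settings values are assumptions and should be adjusted to your own requirements:

```powershell
# Second sketch: point the LCM of each target node at the fileshare
[DscLocalConfigurationManager()]
Configuration LcmSettings
{
    Node $AllNodes.NodeName
    {
        Settings
        {
            RefreshMode        = 'Push'
            ConfigurationMode  = 'ApplyAndAutoCorrect'
            RebootNodeIfNeeded = $true
        }

        ResourceRepositoryShare DscResources
        {
            SourcePath = '\\PRM-CORE-DC\DscResources'
        }
    }
}

LcmSettings -ConfigurationData $configData -OutputPath 'C:\Dsc\Mof'
```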

The revised version of ContosoUniversity.ps1 should now be as follows:

At this stage we now have our complete working framework in place and we can begin writing the configuration blocks that collectively will leave us with a server that is capable of running our Contoso University application.

Writing Configurations for the Web Role

Configuring for the web role requires consideration of the following factors:

  • The server features that are required to run your application. For Contoso University that's IIS, .NET Framework 4.5 Core and ASP.NET 4.5.
  • The mandatory IIS configurations for your application. For Contoso University that's a web site with a custom physical path.
  • The optional IIS configurations for your application. I like things done in a certain way so I want to see an application pool called ContosoUniversity and the Contoso University web site configured to use it.
  • Any tidying-up that you want to do to free resources and start thinking like you are configuring NanoServer. For me this means removing the default web site and default application pools.

Although you'll know if your configurations have generated errors how will you know if they've generated the desired result? The following ‘debugging' options can help:

  • I know that the home page of Contoso University will load without a connection to a database, so I copied a build of the website to C:\inetpub\ContosoUniversity on PRM-DAT-AIO so I could test the site with a browser. You can download a zip of the build from here although be aware that AV software might mistakenly regard it as malware.
  • The IIS management tools can be installed on target nodes whilst you are in configuration mode so you can see graphically what's happening. The following configuration does the trick (see the sketch after this list):
  • If you are testing with a local version of Internet Explorer make sure you turn off Compatibility View or your site may render with odd results. From the IE toolbar choose Tools > Compatibility View Settings and uncheck Display intranet sites in Compatibility View.
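A sketch of that configuration block (the resource name is my own choice):

```powershell
# Install the IIS management console so IIS can be inspected graphically on the node
WindowsFeature IISManagementConsole
{
    Ensure = 'Present'
    Name   = 'Web-Mgmt-Console'
}
```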

Whilst you are in configuration mode the following resources will be of assistance:

  • The xWebAdministration documentation on GitHub: https://github.com/PowerShell/xWebAdministration.
  • The example files that ship with xWebAdministration: C:\Program Files\WindowsPowerShell\Modules\xWebAdministration\n.n.n.n\Examples.
  • A Google search for xWebAdministration.

The configuration settings required to meet my requirements stated above are as follows:
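A hedged sketch of those blocks -- the resource and property names are based on xWebAdministration 1.10 and the requirements listed above, and should be treated as a starting point rather than the original code:

```powershell
# Server features: IIS, .NET Framework 4.5 Core and ASP.NET 4.5
WindowsFeature IIS
{
    Ensure = 'Present'
    Name   = 'Web-Server'
}

WindowsFeature NetFramework45Core
{
    Ensure = 'Present'
    Name   = 'NET-Framework-45-Core'
}

WindowsFeature AspNet45
{
    Ensure = 'Present'
    Name   = 'Web-Asp-Net45'
}

# Optional: a dedicated application pool for Contoso University
xWebAppPool ContosoUniversityAppPool
{
    Ensure    = 'Present'
    Name      = 'ContosoUniversity'
    State     = 'Started'
    DependsOn = '[WindowsFeature]IIS'
}

# Mandatory: the Contoso University web site with a custom physical path
xWebsite ContosoUniversityWebsite
{
    Ensure          = 'Present'
    Name            = 'ContosoUniversity'
    PhysicalPath    = 'C:\inetpub\ContosoUniversity'
    ApplicationPool = 'ContosoUniversity'
    State           = 'Started'
    DependsOn       = '[xWebAppPool]ContosoUniversityAppPool'
}

# Tidying up: remove the default web site and application pool
xWebsite RemoveDefaultWebsite
{
    Ensure       = 'Absent'
    Name         = 'Default Web Site'
    PhysicalPath = 'C:\inetpub\wwwroot'
    DependsOn    = '[WindowsFeature]IIS'
}

xWebAppPool RemoveDefaultAppPool
{
    Ensure    = 'Absent'
    Name      = 'DefaultAppPool'
    DependsOn = '[xWebsite]RemoveDefaultWebsite'
}
```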

There is one more piece of the jigsaw to finish the configuration and that's amending the application pool to use a domain account that has permissions to talk to SQL Server. That's a more advanced topic so I'm dealing with it later.

Writing Configurations for the Database Role

Configuring for the SQL Server database role is slightly different from the web role since we need to install SQL Server which is a separate application. The installation files need to be made available as follows:

  • Choose a server in the domain to host a fileshare. As above I'm using my domain controller. Create a folder called C:\Dsc\DscInstallationMedia and share it as Read/Write for Everyone as \\PRM-CORE-DC\DscInstallationMedia.
  • Download a suitable SQL Server ISO image to the server hosting the fileshare -- I used en_sql_server_2014_enterprise_edition_with_service_pack_1_x64_dvd_6669618.iso from MSDN Subscriber Downloads.
  • Mount the ISO and copy the contents of its drive to a folder called SqlServer2014 created under C:\Dsc\DscInstallationMedia.

In contrast to configuring for the web role there are fewer configurations required for the database role. There is a requirement to supply a credential though and for this I'm using the Key Vault technique described in this post. This gives rise to new code within and preceding the configuration hash table as follows:
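A sketch of the idea -- the Key Vault secret name and the account are hypothetical, and the plain-text-password settings in the configuration data are an assumption (certificate-based MOF encryption is the more robust option):

```powershell
# Preceding the configuration data: retrieve the SQL Server setup credential from Key Vault
$sqlSetupPassword   = (Get-AzureKeyVaultSecret -VaultName 'prmkeyvault' -Name 'SqlSetupAccountPassword').SecretValue
$sqlSetupCredential = New-Object System.Management.Automation.PSCredential ('PRM\Graham', $sqlSetupPassword)

# Within the configuration data: allow the credential to be compiled into the MOF
$configData = @{
    AllNodes = @(
        @{
            NodeName                    = 'PRM-DAT-AIO'
            PSDscAllowPlainTextPassword = $true
            PSDscAllowDomainUser        = $true
        }
    )
}
```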

For a server such as the one we are configuring where the database is on the same machine as the web server and only the database engine is required there are just two configuration blocks needed to install SQL Server. For more complicated scenarios the following resources will be of assistance:

  • The xSQLServer documentation on GitHub: https://github.com/PowerShell/xSQLServer.
  • The example files that ship with xSQLServer: C:\Program Files\WindowsPowerShell\Modules\xSQLServer\n.n.n.n\Examples.
  • A Google search for xSQLServer.

The configuration settings required for the single server scenario are as follows:
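A hedged sketch of the two blocks -- the property names are based on xSQLServer 1.5 and may need adjusting, and the sysadmin account is hypothetical; the media path matches the fileshare created above:

```powershell
# .NET Framework 3.5 is a prerequisite for SQL Server 2014 setup
WindowsFeature NetFramework35Core
{
    Ensure = 'Present'
    Name   = 'NET-Framework-Core'
}

# Install the database engine plus the management tools (SSMS) for 'debugging'
xSQLServerSetup InstallSqlServer
{
    SourcePath          = '\\PRM-CORE-DC\DscInstallationMedia'
    SourceFolder        = 'SqlServer2014'
    SetupCredential     = $sqlSetupCredential
    InstanceName        = 'MSSQLSERVER'
    Features            = 'SQLENGINE,SSMS,ADV_SSMS'
    SQLSysAdminAccounts = @('PRM\Graham')            # hypothetical admin account
    DependsOn           = '[WindowsFeature]NetFramework35Core'
}
```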

In order to assist with ‘debugging' activities I've included the installation of the SQL Server management tools but this can be omitted when the configuration has been tested and deemed fit for purpose. Later in this post we'll manually install the remaining parts of the Contoso University application to prove that the installation worked but for the time being you can run SQL Server Management Studio to see the database engine running in all its glory!

Amending the Application Pool Identity

The Contoso University website is granted access to the database via a domain account that firstly gets configured as the Identity for the website's application pool and then gets configured as a SQL Server login associated with a user which has the appropriate permissions to the database. The SQL Server configuration is taken care of by a permissions script that we'll come to shortly, and the immediate task is concerned with amending the Identity property of the ContosoUniversity application pool so that it references a domain account.

Initially this looked like it was going to be painful since xWebAdministration doesn't currently have the ability to configure the inner workings of application pools. Whilst investigating the possibilities I had the good fortune to come across a fork of xWebAdministration on the PowerShell.org GitHub site where those guys have created a module which does what we want. I need to introduce a slight element of caution here since the fork doesn't look like it's under active development. On the other hand maybe there are no major issues that need fixing. And if there are and they aren't going to get fixed at least the code is there to be forked. Because this fork isn't in the PowerShell Gallery getting it to work locally is a manual process:

  • Download the code to C:\Dsc\Resources and unblock and extract it. Change the folder name from cWebAdministration-master to cWebAdministration and copy to C:\Program Files\WindowsPowerShell\Modules.
  • In the configuration block reference the module as Import-DscResource -ModuleName @{ModuleName="cWebAdministration";ModuleVersion="2.0.1"}.

The configuration required to make the resource available to target nodes has an extra manual step:

  • In the root of C:\DSC\Resources\cWebAdministration create a folder named 2.0.1 and copy the contents of C:\DSC\Resources\cWebAdministration to this folder.
  • The following code can now be used to package the resource and copy it to the fileshare:
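A sketch of that packaging step (paths are assumptions):

```powershell
# Zip the fork in the ModuleName_Version.zip format, copy it to the fileshare
# and regenerate the checksums
Compress-Archive -Path 'C:\Dsc\Resources\cWebAdministration\*' `
                 -DestinationPath '\\PRM-CORE-DC\DscResources\cWebAdministration_2.0.1.zip' -Force
New-DscChecksum -Path '\\PRM-CORE-DC\DscResources' -Force
```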

I tend towards using a different domain account for the Identity properties of the website application pools in the different environments that make up the deployment pipeline. In doing so it protects the pipeline from a complete failure if something happens to that domain account -- it gets locked-out for example. To support this scenario the configuration block to configure the application pool identity needs to support dynamic configuration and takes the following form:
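A sketch only -- the cAppPool resource and property names come from my reading of the PowerShell.Org cWebAdministration fork and may differ, and the configuration data key (AppPoolUserName) and credential variable are hypothetical:

```powershell
# Amend the ContosoUniversity application pool to run as a domain account;
# the account name comes from configuration data so it can vary per environment
cAppPool ContosoUniversityAppPoolIdentity
{
    Name         = 'ContosoUniversity'
    Ensure       = 'Present'
    IdentityType = 'SpecificUser'          # assumption: property names from the fork
    UserName     = $Node.AppPoolUserName
    Password     = $appPoolCredential      # credential retrieved from Key Vault (not shown)
}
```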

The dynamic configuration is supported by Key Vault code to retrieve the password of the domain account used to configure the application pool (not shown) and the following additions to the configuration hash table:
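A sketch of the node entry with those additions (the AppPoolUserName key is hypothetical):

```powershell
# Additions to the node entry in the configuration data hash table
@{
    NodeName                    = 'PRM-DAT-AIO'
    AppPoolUserName             = 'PRM\CU-DAT'   # a different account per environment
    PSDscAllowPlainTextPassword = $true
    PSDscAllowDomainUser        = $true
}
```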

The code does of course rely on the existence of the PRM\CU-DAT domain account (set so the password doesn't expire). This is the last piece of configuration, and you can view the final result on GitHub here.

The Moment of Truth

After all that configuration, is it enough to make the Contoso University application work? To find out:

  • If you haven't already, download, unblock and unzip the ContosoUniversityConfigAsCode package from here, although as mentioned previously be aware that AV software might mistakenly regard it as malware.
  • The contents of the Website folder should be copied (if not already) to C:\inetpub\ContosoUniversity on the target node.
  • Edit the SchoolContext connection string in Web.config if required -- the download has the server set to localhost and the database to ContosoUniversity.
  • On the target node run SQL Server Management Studio and install the database as follows:
    • In Object Explorer right-click the Databases node and choose Deploy Data-tier Application.
    • Navigate through the wizard, and at Select Package choose ContosoUniversity.Database.dacpac from the database folder of the ContosoUniversityConfigAsCode download.
    • Move to the next page of the wizard (Update Configuration) and change the Name to ContosoUniversity.
    • Navigate past the Summary page and the DACPAC will be deployed:
      [Screenshot: the Deploy Data-tier Application wizard in SSMS]
  • Still in SSMS, apply the permissions script as follows:
    • Open Create login and database user.sql from the Database\Scripts folder in the ContosoUniversityConfigAsCode download.
    • If the pre-configured login/user (PRM\CU-DAT) is different from the one you are using update accordingly, then execute the script.

You can now navigate to http://prm-dat-aio (or whatever your server is called) and if all is well make a mental note to pour a well-deserved beverage of your choosing.

Looking Forward

Although getting this far is certainly an important milestone it's by no means the end of the journey for the configuration as code story. Our configuration code now needs to be integrated into the Contoso University Visual Studio solution so that it can be built as an artefact alongside the website and database artefacts. We then need to be able to deploy the configuration before deploying the application -- all automated through the new release management tooling that has just shipped with TFS 2015 Update 2 or through VSTS if you are using that. Until next time...

Cheers -- Graham

The 2015/2016 Simple-Talk Awards – I’ve Been Nominated!

Posted by Graham Smith on March 31, 2016

Much to my jaw-dropping amazement one of my blog posts has been nominated for a Simple-Talk award in The most useful technical article published category. If you like my post more than the others I'd be very grateful if you would consider voting for me -- closing date is April 6. Many thanks!

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Join a VM to a Domain with Azure Resource Manager Templates

Posted by Graham Smith on March 20, 2016

In the previous post in my blog post series on Continuous Delivery with TFS / VSTS we learned how to provision a Windows Server virtual machine using Azure Resource Manager templates. The next major step in this quest to automate the creation and configuration of the infrastructure to which we'll deploy our application is to configure server internals, starting with joining a VM to the domain. My initial thinking was that this would need to be some kind of PowerShell command, and whilst this is an option I was very pleased to find that there is an ARM template resource to do this. The resource in question goes by the name of JsonADDomainExtension; it's a VM extension and you can read about it (and the PowerShell commands to do the same thing) in this blog post.

I have to confess that I struggled to get the extension to work at first. I spent a whole afternoon fiddling with the settings and getting nowhere, and spent quite a bit of time reading forum posts from others who were having similar difficulties (mostly with the PowerShell commands though). I gave up in frustration, only to come back to it a few days later to try again to find it was all working! I describe the steps I took below -- please be aware that it's very much a direct continuation of this post so please do check that out first if you haven't done so already.

Adding the JsonADDomainExtension to the JSON Template

Getting starting with the extension is very easy, as it's just a case of dropping the JSON in to the resources part of the template. The code I initially used to make the extension work was as follows:
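A hedged sketch of that JSON -- the domain name, user account, apiVersion and typeHandlerVersion are assumptions, and the password is a placeholder (the hard-coding problem is discussed next):

```json
{
  "name": "[concat(variables('vmName'), '/JoinDomain')]",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "location": "[resourceGroup().location]",
  "apiVersion": "2015-06-15",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "JsonADDomainExtension",
    "typeHandlerVersion": "1.3",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "Name": "prm.local",
      "OUPath": "",
      "User": "prm.local\\domainjoinaccount",
      "Restart": "true",
      "Options": "3"
    },
    "protectedSettings": {
      "Password": "<initially hard-coded>"
    }
  }
}
```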

I added this code to the WindowsServer2012R2Datacenter.json file which has variables defined for use where the VM name is required. Note that OUPath can be an empty string, that the (domain) User needs the escaped backslash, and that Options uses the magic number 3 (just go with it or see here for the details).

Whilst this (eventually) worked fine for me the big issue was how to hide the password for the account that will join the VM to the domain. I hard coded it in to the template to get the extension working but even when refactored as a parameter the password is still in plain view -- now just in the PowerShell calling script.

Say Hello to Azure Key Vault

As luck would have it around the time I was initially getting JsonADDomainExtension to work I watched Cloud Cover Episode 200: Azure Resource Manager Tooling with Brian Moore where Brian mentioned the forthcoming ability to use Azure Key Vault to supply secret values such as passwords to ARM templates. Following a very helpful email exchange Brian pointed me towards this page which is a partial answer to the solution I wanted to get working.

At the time of writing there was no portal interface for configuring Azure Key Vault so it's over to PowerShell (no bad thing) to create a new vault:
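A sketch of the vault creation (the resource group name is an assumption; -EnabledForTemplateDeployment is included on the assumption the vault will also be referenced from templates later):

```powershell
New-AzureRmKeyVault -VaultName 'prmkeyvault' `
                    -ResourceGroupName 'PRM-CORE' `
                    -Location 'West Europe' `
                    -EnabledForTemplateDeployment
```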

The code above creates a vault named prmkeyvault. Next we need to add our password as a secret:
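A sketch of adding the secret (the password value is a placeholder):

```powershell
# Store the domain admin password as a secret called DomainAdminPassword
$securePassword = ConvertTo-SecureString -String '<the password>' -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName 'prmkeyvault' -Name 'DomainAdminPassword' -SecretValue $securePassword
```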

This creates a new secret called DomainAdminPassword. Of course, the objects that have just been created can be examined with Azure Resource Explorer:

[Screenshot: the new vault and secret viewed in Azure Resource Explorer]

Use the Secret in the JSON Template

The Microsoft guidance for passing secrets to templates is based on the use of an ARM parameters file. This wasn't quite what I wanted as I'm using a PowerShell script to supply my parameters. The way to access secrets using PowerShell is along the following lines:
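A sketch of the idea -- the template parameter name (domainAdminPassword) and the template file path are assumptions:

```powershell
# Read the secret as a SecureString and pass it to the deployment as a securestring parameter
$domainAdminPassword = (Get-AzureKeyVaultSecret -VaultName 'prmkeyvault' -Name 'DomainAdminPassword').SecretValue

New-AzureRmResourceGroupDeployment -ResourceGroupName 'PRM-DAT' `
                                   -TemplateFile '..\DeploymentTemplates\WindowsServer2012R2Datacenter.json' `
                                   -domainAdminPassword $domainAdminPassword
```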

You can see how I integrated the code above into my PowerShell script by examining Create PRM-DAT.ps1 in the code release that accompanies this post on my Infrastructure repository on GitHub. It's not quite the full solution at the moment though because, despite having a mechanism in place for automatically authenticating to Azure PowerShell, the use of Azure Key Vault cmdlets in the script causes the authentication dialog to pop up. I'm still working on how to stop that -- if you know please leave a message in the comments!

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Infrastructure as Code with Azure Resource Manager Templates

Posted by Graham Smith on February 25, 2016

So far in this blog post series on Continuous Delivery with TFS / VSTS we have gradually worked our way to the position of having a build of our application which is almost ready to be deployed to target servers (or nodes if you prefer) in order to conduct further testing before finally making its way to production. This brings us to the question of how these nodes should be provisioned and configured. In my previous series on continuous delivery, deployment was to nodes that had been created and configured manually. However with the wealth of automation tools available to us we can -- and should -- improve on that. This post explains how to achieve the first of those tasks -- provisioning a Windows Server virtual machine using Azure Resource Manager templates. A future post will deal with the configuration side of things using PowerShell DSC.

Before going further I should point out that this post is a bit different from my other posts in the sense that it is very specific to Azure. If you are attempting to implement continuous delivery in an on premises situation chances are that the specifics of what I cover here are not directly usable. Consequently, I'm writing this post in the spirit of getting you to think about this topic with a view to investigating what's possible for your situation. Additionally, if you are not in the continuous delivery space and have stumbled across this post through serendipity I do hope you will be able to follow along with my workflow for creating templates. Once you get past the Big Picture section it's reasonably generic and you can find the code that accompanies this post at my GitHub repository here.

The Infrastructure Big Picture

In order to understand where I am going with this post it's probably helpful to understand the big picture as it relates to this blog series on continuous delivery. Our final continuous delivery pipeline is going to consist of three environments:

  • DAT -- development automated test where automated UI testing takes place. This will be an ‘all in one' VM hosting both SQL Server and IIS. Why have an all-in-one VM? It's because the purpose of this environment is to run automated tests, and if those tests fail we want a high degree of certainty that it was because of code and not any other factors such as network problems or a database timeout. To achieve that state of certainty we need to eliminate as many influencing variables as possible, and the simplest way of achieving that is to have everything running on the same VM. It breaks the rule about early environments reflecting production, but if you are in an on premises situation where your VMs are on hand-me-down infrastructure and your network is busy at night (when your tests are likely running) backing up VMs and doing goodness knows what else, then you might come to appreciate the need for an all-in-one VM for automated testing.
  • DQA -- development quality assurance where high-value manual testing takes place. This really does need to reflect production so it will consist of a database VM and a web server VM.
  • PRD -- production for the live code. It will consist of a database VM and a web server VM.

These environments map out to the following infrastructure I'll be creating in Azure:

  • PRM-DAT -- resource group to hold everything for the DAT environment
    • PRM-DAT-AIO -- all in one VM for the DAT environment
  • PRM-DQA -- resource group to hold everything for the DQA environment
    • PRM-DQA-SQL -- database VM for the DQA environment
    • PRM-DQA-IIS -- web server VM for the DQA environment
  • PRM-PRD -- resource group to hold everything for the PRD environment
    • PRM-PRD-SQL -- database VM for the PRD environment
    • PRM-PRD-IIS -- web server VM for the PRD environment

The advantage of using resource groups as containers is that an environment can be torn down very easily. This makes more sense when you realise that it's not just the VM that needs tearing down but also storage accounts, network security groups, network interfaces and public IP addresses.

Overview of the ARM Template Development Workflow

We're going to be creating our infrastructure using ARM templates which is a declarative approach, ie we declare what we want and some other system ‘makes it so'. This is in contrast to an imperative approach where we specify exactly what should happen and in what order. (We can use an imperative approach with ARM using PowerShell but we don't get any parallelisation benefits.) If you need to get up to speed with ARM templates I have a Getting Started blog post with a collection of useful links here. The problem -- for me at least -- is that although Microsoft provide example templates for creating a Windows Server VM (for instance) they are heavily parametrised and designed to work as standalone VMs, and it's not immediately obvious how they can fit in to an existing network. There's also the issue that at first glance all that JSON can look quite intimidating! Fear not though, as I have figured out what I hope is a great workflow for creating ARM templates which is both instructive and productive. It brings together a number of tools and technologies and I make the assumption that you are familiar with these. If not I've blogged about most of them before. A summary of the workflow steps with prerequisites and assumptions is as follows:

  • Create a Model VM in Azure Portal. The ARM templates that Microsoft provide tend to result in infrastructure that have different internal names compared with the same infrastructure created through the Azure Portal. I like how the portal names things and in order to help replicate that naming convention for VMs I find it useful to create a model VM in the portal whose components I can examine via the Azure Resource Explorer.
  • Create a Visual Studio Solution. Probably the easiest way to work with ARM templates is in Visual Studio. You'll need the Azure SDK installed to see the Azure Resource Group project template -- see here for more details. We'll also be using Visual Studio to deploy the templates using PowerShell and for that you'll need the PowerShell Tools for Visual Studio extension. If you are new to this I have a Getting Started blog post here. We'll be using Git in either TFS or VSTS for version control but if you are following this series we've already covered that.
  • Perform an Initial Deployment. There's nothing worse than spending hours coding only to find that what you're hoping to do doesn't work and that the problem is hard to trace. The answer of course is to deploy early and that's the purpose of this step.
  • Build the Deployment Template Resource by Resource Using Hard-coded Values. The Microsoft templates really go to town when it comes to implementing variables and parameters. That level of detail isn't required here but it's hard to see just how much is required until the template is complete. My workflow involves using hard-coded values initially so the focus can remain on getting the template working and then refactoring later.
  • Refactor the Template with Parameters, Variables and Functions. For me refactoring to remove the hard-coded values is one of the most fun and rewarding parts of the process. There's a wealth of programming functionality available in ARM templates -- see here for all the details.
  • Use the Template to Create Multiple VMs. We've proved the template can create a single VM -- what about multiple VMs? This section explores the options.

That's enough overview -- time to get stuck in!

Create a Model VM in Azure Portal

As above, the first VM we'll create using an ARM template is going to be called PRM-DAT-AIO in a resource group called PRM-DAT. In order to help build the template we'll create a model VM called PRM-DAT-AAA in a resource group called PRM-DAT via the Azure Portal. The procedure is as follows:

  • Create a resource group called PRM-DAT in your preferred location -- in my case West Europe.
  • Create a standard (Standard-LRS) storage account in the new resource group -- I named mine prmdataaastorageaccount. Don't enable diagnostics.
  • Create a Windows Server 2012 R2 Datacenter VM (size right now doesn't matter much -- I chose Standard DS1 to keep costs down) called PRM-DAT-AAA based on the PRM-DAT resource group, the prmdataaastorageaccount storage account and the prmvirtualnetwork that was created at the beginning of this blog series as the common virtual network for all VMs. Don't enable monitoring.
  • In Public IP addresses locate PRM-DAT-AAA and under configuration set the DNS name label to prm-dat-aaa.
  • In Network security groups locate PRM-DAT-AAA and add the following tag: displayName : NetworkSecurityGroup.
  • In Network interfaces locate PRM-DAT-AAAnnn (where nnn represents any number) and add the following tag: displayName : NetworkInterface.
  • In Public IP addresses locate PRM-DAT-AAA and add the following tag: displayName : PublicIPAddress.
  • In Storage accounts locate prmdataaastorageaccount and add the following tag: displayName : StorageAccount.
  • In Virtual machines locate PRM-DAT-AAA and add the following tag: displayName : VirtualMachine.

You can now explore all the different parts of this VM in the Azure Resource Explorer. For example, the public IP address should look similar to:

[Screenshot: the public IP address viewed in Azure Resource Explorer]

Create a Visual Studio Solution

We'll be building and running our ARM template in Visual Studio. You may want to refer to previous posts (here and here) as a reminder for some of the configuration steps which are as follows:

  • In the Web Portal navigate to your team project and add a new Git repository called Infrastructure.
  • In Visual Studio clone the new repository to a folder called Infrastructure at your preferred location on disk.
  • Create a new Visual Studio Solution (not project!) called Infrastructure one level higher than the Infrastructure folder. This effectively stops Visual Studio from creating an unwanted folder.
  • Add .gitignore and .gitattributes files and perform a commit.
  • Add a new Visual Studio Project to the solution of type Azure Resource Group called DeploymentTemplates. When asked to select a template choose anything.
  • Delete the Scripts, Templates and Tools folders from the project.
  • Add a new project to the solution of type PowerShell Script Project called DeploymentScripts.
  • Delete Script.ps1 from the project.
  • In the DeploymentTemplates project add a new Azure Resource Manager Deployment Project item called WindowsServer2012R2Datacenter.json (spaces not allowed).
  • In the DeploymentScripts project add a new PowerShell Script item for the PowerShell that will create the PRM-DAT resource group with a PRM-DAT-AIO server -- I called my file Create PRM-DAT.ps1.
  • Perform a commit and sync to get everything safely under version control.

With all that configuration you should have a Visual Studio solution looking something like this:

[Screenshot: the Infrastructure solution in Visual Studio]

Perform an Initial Deployment

It's now time to write just enough code in Create PRM-DAT.ps1 to prove that we can initiate a deployment from PowerShell. First up is the code to authenticate to Azure PowerShell. I have the authentication code which was the output of this post wrapped in a function called Set-AzureRmAuthenticationForMsdnEnterprise which in turn is contained in a PowerShell module file called Authentication.psm1. This file in turn is deployed to C:\Users\Graham\Documents\WindowsPowerShell\Modules\Authentication which then allows me to call Set-AzureRmAuthenticationForMsdnEnterprise from anywhere on my development machine. (Although this function could clearly be made more generic with the use of some parameters I've consciously chosen not to so I can check my code in to GitHub without worrying about exposing any authentication details.) The initial contents of Create PRM-DAT.ps1 should end up looking as follows:
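A sketch of that initial script (the deployment name and template file path are assumptions):

```powershell
# Authenticate to Azure PowerShell using the module described above
Set-AzureRmAuthenticationForMsdnEnterprise

# Create (or update) the resource group, then run the (currently empty) template
New-AzureRmResourceGroup -Name 'PRM-DAT' -Location 'West Europe' -Force

New-AzureRmResourceGroupDeployment -Name 'PRM-DAT-Deployment' `
                                   -ResourceGroupName 'PRM-DAT' `
                                   -TemplateFile '..\DeploymentTemplates\WindowsServer2012R2Datacenter.json' `
                                   -Verbose
```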

Running this code in Visual Studio should result in a successful outcome, although admittedly not much has happened because the resource group already existed and the deployment template is empty. Nonetheless, it's progress!

Build the Deployment Template Resource by Resource Using Hard-coded Values

The first resource we'll code is a storage account. In the DeploymentTemplates project open WindowsServer2012R2Datacenter.json which as things stand just contains some boilerplate JSON for the different sections of the template that we'll be completing. What you should notice is the JSON Outline window is now available to assist with editing the template. Right-click resources and choose Add New Resource:

[Screenshot: the JSON Outline window's Add New Resource option]

In the Add Resource window find Storage Account and add it with the name (actually the display name) of StorageAccount:

[Screenshot: adding a Storage Account resource in the Add Resource window]

This results in boilerplate JSON being added to the template along with a variable for actual storage account name and a parameter for account type. We'll use a variable later but for now delete the variable and parameter that was added -- you can either use the JSON Outline window or manually edit the template.

We now need to edit the properties of the resource with actual values that can create (or update) the resource. In order to understand what to add we can use the Azure Resource Explorer to navigate down to the storageAccounts node of the MSDN subscription where we created prmdataaastorageaccount:

[Screenshot: the storageAccounts node for prmdataaastorageaccount in Azure Resource Explorer]

In the right-hand pane of the explorer we can see the JSON that represents this concrete resource, and although the properties names don't always match exactly it should be fairly easy to see how the ‘live' values can be used as a guide to populating the ones in the deployment template:

[Screenshot: the JSON for prmdataaastorageaccount in Azure Resource Explorer]

So, back to the deployment template the following unassigned properties can be assigned the following values:

  • "name": "prmdataiostorageaccount"
  • "location": "West Europe"
  • "accountType": "Standard_LRS"

The resulting JSON should be similar to:
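A sketch using the hard-coded values listed above (the apiVersion shown is indicative for this era of the Storage resource provider):

```json
{
  "name": "prmdataiostorageaccount",
  "type": "Microsoft.Storage/storageAccounts",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "StorageAccount"
  },
  "properties": {
    "accountType": "Standard_LRS"
  }
}
```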

Save the template and switch to Create PRM-DAT.ps1 to run the deployment script which should create the storage account. You can verify this either via the portal or the explorer.

The next resource we'll create is a NetworkSecurityGroup, which has an extra twist in that at the time of writing adding it to the template isn't supported by the JSON Outline window. There's a couple of ways to go here -- either type the JSON by hand or use the Create function in the Azure Resource Explorer to generate some boilerplate JSON. This latter technique actually generates more JSON than is needed so in this case is something of a hindrance. I just typed the JSON directly and made use of the IntelliSense options in conjunction with the PRM-DAT-AAA network security group values via the Azure Resource Explorer. The JSON that needs adding is as follows:
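A hedged sketch of that resource, assuming a single RDP rule like the one the portal creates by default:

```json
{
  "name": "PRM-DAT-AIO",
  "type": "Microsoft.Network/networkSecurityGroups",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "NetworkSecurityGroup"
  },
  "properties": {
    "securityRules": [
      {
        "name": "default-allow-rdp",
        "properties": {
          "priority": 1000,
          "protocol": "Tcp",
          "access": "Allow",
          "direction": "Inbound",
          "sourceAddressPrefix": "*",
          "sourcePortRange": "*",
          "destinationAddressPrefix": "*",
          "destinationPortRange": "3389"
        }
      }
    ]
  }
}
```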

Note that you'll need to separate this resource from the storage account resource with a comma to ensure the syntax is valid. Save the template, run the deployment and refresh the Azure Resource Explorer. You can now compare the new PRM-DAT-AIO and PRM-DAT-AAA network security groups in the explorer to validate the JSON that creates PRM-DAT-AIO. Note that by zooming out in your browser you can toggle between the two resources and see that it is pretty much just the etag values that are different.

The next resource to add is a public IP address. This can be added from the JSON Outline window using PublicIPAddress as the name but it also wants to add a reference to itself to a network interface which in turn wants to reference a virtual network. We are going to use an existing virtual network but we do need a network interface, so give the new network interface a name of NetworkInterface and the new virtual network can be any temporary name. As soon as the new JSON components have been added delete the virtual network and all of the variables and parameters that were added. All this makes sense when you do it -- trust me!

Once edited with the appropriate values the JSON for the public IP address should be as follows:
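A sketch of the public IP address resource with the hard-coded values used in this post:

```json
{
  "name": "PRM-DAT-AIO",
  "type": "Microsoft.Network/publicIPAddresses",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "PublicIPAddress"
  },
  "properties": {
    "publicIPAllocationMethod": "Dynamic",
    "dnsSettings": {
      "domainNameLabel": "prm-dat-aio"
    }
  }
}
```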

The edited JSON for the network interface should look similar to the code that follows, but note I've replaced my MSDN subscription GUID with an ellipsis.
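A sketch of the network interface resource; the subnet's resource group and subnet name are assumptions, and the subscription GUID is shown as an ellipsis as in the original:

```json
{
  "name": "prm-dat-aio001",
  "type": "Microsoft.Network/networkInterfaces",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "NetworkInterface"
  },
  "dependsOn": [
    "Microsoft.Network/publicIPAddresses/PRM-DAT-AIO",
    "Microsoft.Network/networkSecurityGroups/PRM-DAT-AIO"
  ],
  "properties": {
    "networkSecurityGroup": {
      "id": "[resourceId('Microsoft.Network/networkSecurityGroups', 'PRM-DAT-AIO')]"
    },
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Dynamic",
          "publicIPAddress": {
            "id": "[resourceId('Microsoft.Network/publicIPAddresses', 'PRM-DAT-AIO')]"
          },
          "subnet": {
            "id": "/subscriptions/.../resourceGroups/PRM-CORE/providers/Microsoft.Network/virtualNetworks/prmvirtualnetwork/subnets/default"
          }
        }
      }
    ]
  }
}
```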

It's worth remembering at this stage that we're hard-coding references to other resources. We'll fix that up later on, but for the moment note that the network interface needs to know what virtual network subnet it's on (created in an earlier post), and which public IP address and network security group it's using. Also note the dependsOn section which ensures that these resources exist before the network interface is created. At this point you should be able to run the deployment and confirm that the new resources get created.

Finally we can add a Windows virtual machine resource. This is supported from the JSON Outline window, however this resource wants to reference a storage account and virtual network. The storage account exists and that should be selected, but once again we'll need to use a temporary name for the virtual network and delete it and the variables and parameters. Name the virtual machine resource VirtualMachine. Edit the JSON with appropriate hard-coded values which should end up looking as follows:
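A sketch of the virtual machine resource with hard-coded values; the admin username and password are placeholders and the apiVersion is indicative:

```json
{
  "name": "PRM-DAT-AIO",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "VirtualMachine"
  },
  "dependsOn": [
    "Microsoft.Storage/storageAccounts/prmdataiostorageaccount",
    "Microsoft.Network/networkInterfaces/prm-dat-aio001"
  ],
  "properties": {
    "hardwareProfile": {
      "vmSize": "Standard_DS1"
    },
    "osProfile": {
      "computerName": "PRM-DAT-AIO",
      "adminUsername": "<admin username>",
      "adminPassword": "<admin password>"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "2012-R2-Datacenter",
        "version": "latest"
      },
      "osDisk": {
        "name": "PRM-DAT-AIO-osdisk",
        "vhd": {
          "uri": "https://prmdataiostorageaccount.blob.core.windows.net/vhds/PRM-DAT-AIO-osdisk.vhd"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      }
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', 'prm-dat-aio001')]"
        }
      ]
    }
  }
}
```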

Running the deployment now should result in a complete working VM which you can remote in to.

The final step before going any further is to tear-down the PRM-DAT resource group and check that a fully-working PRM-DAT-AIO VM is created. I added a Destroy PRM-DAT.ps1 file to my DeploymentScripts project with the following code:
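A sketch of Destroy PRM-DAT.ps1:

```powershell
# Authenticate, then tear down the whole resource group and everything in it
Set-AzureRmAuthenticationForMsdnEnterprise
Remove-AzureRmResourceGroup -Name 'PRM-DAT' -Force
```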

Refactor the Template with Parameters, Variables and Functions

It's now time to make the template reusable by refactoring all the hard-coded values. Each situation is likely to vary but in this case my specific requirements are:

  • The template will always create a Windows Server 2012 R2 Datacenter VM, but obviously the name of the VM needs to be specified.
  • I want to restrict my VMs to small sizes to keep costs down.
  • I'm happy for the VM username to always be the same so this can be hard-coded in the template, whilst I want to pass the password in as a parameter.
  • I'm adding my VMs to an existing virtual network in a different resource group and I'm making a conscious decision to hard-code these details in.
  • I want the names of all the different resources to be generated using the VM name as the base.

These requirements gave rise to the following parameters, variables and a resource function (sketched in JSON after the list):

  • nodeName parameter -- this is used via variable conversions throughout the template to provide consistent naming of objects. My node names tend to be of the format used in this post and that's the only format I've tested. Beware if your node names are different as there are naming rules in force.
  • nodeNameToUpper variable -- used where I want to ensure upper case for my own naming convention preferences.
  • nodeNameToLower variable -- used where lower case is a requirement of ARM eg where nodeName forms part of a DNS entry.
  • vmSize parameter -- restricts the template to creating VMs that are not going to burn Azure credits too quickly and which use standard storage.
  • storageAccountName variable -- creates a name for the storage account that is based on a lower case nodeName.
  • networkInterfaceName variable -- creates a name for the network interface based on a lower case nodeName with a number suffix.
  • virtualNetworkSubnetName variable -- used to create the virtual network subnet which exists in a different resource group and requires a bit of construction work.
  • vmAdminUsername variable -- creates a username for the VM based on the nodeName. You'll probably want to change this.
  • vmAdminPassword parameter -- the password for the VM passed-in as a secure string.
  • resourceGroup().location resource function -- neat way to avoid hard-coding the location in to the template.
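A hedged sketch of how these might appear in the template -- the allowed VM sizes, naming suffixes and the virtual network's resource group are assumptions, and resourceGroup().location is used inline wherever a resource needs a location:

```json
"parameters": {
  "nodeName": {
    "type": "string"
  },
  "vmSize": {
    "type": "string",
    "defaultValue": "Standard_D1",
    "allowedValues": [ "Standard_A1", "Standard_A2", "Standard_D1", "Standard_D2" ]
  },
  "vmAdminPassword": {
    "type": "securestring"
  }
},
"variables": {
  "nodeNameToUpper": "[toUpper(parameters('nodeName'))]",
  "nodeNameToLower": "[toLower(parameters('nodeName'))]",
  "storageAccountName": "[concat(replace(variables('nodeNameToLower'), '-', ''), 'storageaccount')]",
  "networkInterfaceName": "[concat(variables('nodeNameToLower'), '001')]",
  "vmAdminUsername": "[concat(variables('nodeNameToLower'), 'admin')]",
  "virtualNetworkSubnetName": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/<vnet resource group>/providers/Microsoft.Network/virtualNetworks/prmvirtualnetwork/subnets/default')]"
}
```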

Of course, these refactorings shouldn't affect the functioning of the template, and tearing down the PRM-DAT resource group and recreating it should result in the same resources being created.

What about Environments where Multiple VMs are Required?

The work so far has been aimed at creating just one VM, but what if two or more VMs are needed? It's a very good question and there are at least two answers. The first involves using the template as-is and calling New-AzureRmResourceGroupDeployment in a PowerShell Foreach loop. I've illustrated this technique in Create PRM-DQA.ps1 in the DeploymentScripts project. Whilst this works very nicely the VMs are created in series rather than in parallel and, well, who wants to wait? My first thought at creating VMs in parallel was to extend the Foreach loop idea with the -parallel switch in a PowerShell workflow. The code which I was hoping would work looks something like this:
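A sketch of that hoped-for workflow (which, as noted below, turned out to be a dud); the workflow and parameter names are assumptions:

```powershell
Workflow New-EnvironmentVms
{
    param
    (
        [string[]]$NodeNames,
        [string]$ResourceGroupName
    )

    # Deploy one VM per node name, in parallel
    foreach -parallel ($nodeName in $NodeNames)
    {
        New-AzureRmResourceGroupDeployment -Name "$nodeName-Deployment" `
                                           -ResourceGroupName $ResourceGroupName `
                                           -TemplateFile 'WindowsServer2012R2Datacenter.json' `
                                           -nodeName $nodeName
    }
}
```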

Unfortunately it seems like this idea is a dud -- see here for the details. Instead the technique appears to be to use the copy, copyindex and length features of ARM templates as documented here. This necessitates a minor re-write of the template to pass in and use an array of node names; however, there are complications where I've used variables to construct resource names. At the time of publishing this post I'm working through these details -- keep an eye on my GitHub repository for progress.

Wrap-Up

Before actually wrapping-up I'll make a quick mention of the template's outputs node. A handy use for this is debugging, for example where you are trying to construct a complicated variable and want to check its value. I've left an example in the template to illustrate.
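A minimal sketch of an outputs node used for this kind of debugging (the output name is my own choice):

```json
"outputs": {
  "storageAccountNameDebug": {
    "type": "string",
    "value": "[variables('storageAccountName')]"
  }
}
```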

I'll finish this post with a question that I've been pondering as I've been writing this post, which is whether just because we can create and configure VMs at the push of a button does that mean we should create and configure new VMs every time we deploy our application? My thinking at the moment is probably not because of the time it will add but as always it depends. If you want a clean start every time you deploy then you certainly have that option, but my mind is already thinking ahead to the additional amount of time it's going to take to actually configure these VMs with IIS and SQL Server. Never say never though, as who knows what's in store for the future? As Azure (presumably) gets faster and VMs become more lightweight with the arrival of Nano Server perhaps creating and configuring VMs from scratch as part of the deployment pipeline will be so fast that there would be no reason not to. Or maybe we'll all be using containers by then...

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Enhancing a CI Build to Help Bake Quality In

Posted by Graham Smith on February 16, 2016

In the previous instalment of this blog post series on Continuous Delivery with TFS / VSTS we created a basic CI build. In this post we enhance the CI build with further configurations that can help bake quality in to the application. Just a reminder that I’m using TFS to create my CI build as it’s the lowest common denominator. If you are using VSTS you can obviously follow along but do note that screenshots might vary slightly.

Set Branch Policies

Although it's only marginally related to build this is probably a good point to set branch policies for the master branch of the ContosoUniversity repository. In the Web Portal for the team project click on the cog icon at the far right of the blue banner bar:

[Screenshot: the cog icon in the Web Portal banner bar]

This will open up the Control panel at the team project administration page. Navigate to the Version Control tab and in the Repositories pane navigate down to master. In the right pane select Branch Policies:

[Screenshot: the Branch Policies settings in the Control panel]

The branch policies window contains configuration settings that block poor code from polluting the code base. The big change is that the workflow moves from being able to commit to the master branch directly to having to use pull requests to make commits. This is a great way of enforcing code reviews and I have more detail on the workflow here. In the screenshot above I've selected all the options, including selecting the ContosoUniversity.CI build to be run when a pull request is created. This blocks any pull requests that would cause the build to fail. The other options are self-explanatory, although enforcing a linked work item can be a nuisance when you are testing. If you are testing on your own make sure you Allow users to approve their own changes otherwise this will cause you a problem.

Testing Times

The Contoso University sample application contains MSTest unit tests and we want these to be run after the build to provide early feedback on any failing tests. This is achieved by adding a new build step. On the Build tab in the Web Portal navigate to the ContosoUniversity.CI build and place it in edit mode. Click on Add build step and from the Add Tasks window filter on Test and choose Visual Studio Test.

For our simple scenario there are only three settings that need addressing:

  • Test Assembly -- we only want unit tests to run and ContosoUniversity contains other tests so changing the default setting to **\*UnitTests*.dll;-:**\obj\** fixes this.
  • Platform -- here we use the $(BuildPlatform) variable defined in the build task.
  • Configuration -- here we use the $(BuildConfiguration) variable defined in the build task.

web-portal-visual-studio-test-unit-test-configuration

With the changes saved, queue the build and observe the build report advising that the tests were run and passed:

web-portal-build-build-succeeded-with-unit-tests-passing

Code Coverage

In the above screenshot you'll notice that there is no code coverage data available. This can be fixed by going back to the Visual Studio Test task and checking the Code Coverage Enabled box. Queueing a new build now gives us that data:

web-portal-build-build-succeeded-with-code-coverage-enabled

Of slight concern is that the code coverage reported from the build (2.92%) was marginally higher than that reported by analysing code coverage in Visual Studio (2.89%). Whilst the values are the same for all practical purposes, the results suggest that there is something odd going on here that warrants further investigation.

Code Analysis

A further feedback item which is desirable to have in the build report is the results of code analysis. (As a reminder, we configured this in Visual Studio in this post so that the results are available after building locally.) Displaying code analysis results in the build report is straightforward for XAML builds as this is an out-of-the-box setting -- see here. I haven't found this to be quite so easy with the new build system. There's no equivalent setting as with XAML builds, but that shouldn't be a problem since it's just an MSBuild argument supplied via the MSBuild Arguments setting of the build task. It feels like the correct argument should be /p:RunCodeAnalysis=Always (as this shouldn't care how code analysis is configured in Visual Studio), however in my testing I couldn't get this to work with any combination of the Visual Studio Build / MSBuild task and release / debug configurations. The next argument I tried was /p:RunCodeAnalysis=True. This worked with either the Visual Studio Build or MSBuild task, but to get it to work in a release configuration you will need to ensure that code analysis has been enabled for the release configuration in Visual Studio (and the change has been committed!). The biggest issue though was that I never managed to get more than 10 code analysis rules displayed in the build report when there were 85 reported in the build output. Perhaps I'm missing something here -- if you can shed any light on this please let me know!

Don't Ignore the Feedback!

Finally, it may sound obvious but there's little point in configuring the build report to contain feedback on the quality of the build if nobody is looking at the reports and doing something to remedy problems. However you do it, this needs to be part of your team's daily routine!

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Configuring a Basic CI Build with Team Foundation Build 2015

Posted by Graham Smith on February 4, 2016

In this instalment of my blog post series on Continuous Delivery with TFS / VSTS we configure a continuous integration (CI) build for our Contoso University sample application using Team Foundation Build 2015. Although the focus of this post is on explaining how to configure a build in TFS or VSTS it is worth a few words on the bigger picture as far as builds are concerned.

One important aspect to grasp is that for a given application you are likely to need several different builds for different parts of the delivery pipeline. The focus in this post is a CI build, where the main aim is the early detection of problems along with additional configurations that help bake quality in. The output from the build is really just information, i.e. feedback. We're not going to do anything with the build itself so there is no need to capture the compiled output of the build. This is just as well, since the build might run very frequently and consequently needs to have a low drain on build server resources.

In addition to a CI build a continuous delivery pipeline is likely to need other types of build. These might include one to support technical debt management (we'll be looking at using SonarQube for this in a later post but look here if you want a sneak preview of what's in store) and one or more that capture the compiled output of the build and kick off a release to the pipeline.

Before we get going it's worth remembering that as far as new features are concerned VSTS is always a few months ahead of TFS. Consequently I'm using TFS to create my CI build as it's the lowest common denominator. If you are using VSTS you can obviously follow along but do note that screenshots might vary slightly. It's also worth pointing out that starting with TFS 2015 there is a brand new build system that is completely different from the (still supported) XAML system that has been around for the past few years. The new system is recommended for any new implementations and that's what we're using here. If you want to learn more I have a Getting Started blog post here.

Building Blocks

The new build system (which Microsoft abbreviates as TFBuild) is configured from the Web Portal rather than Visual Studio, so head over there and navigate to your team project and then to the Build tab. Click on the green plus icon in the left-hand pane which brings up the Definition Templates window. There are a couple of ways to go from here, but for demonstration purposes select the Empty option:

web-portal-build-definition-templates-empty

This creates a new empty definition, which does involve a bit of extra work but is worth it the first time to help understand what's going on. Before proceeding click on Save and provide a name (I chose ContosoUniversity.CI) and optionally a comment for the version control history. Next click the green plus icon next to Add build step to display the Add Tasks window. Take a minute to marvel at the array of possibilities before choosing Visual Studio Build. This gives us a skeleton build which needs configuring by working through the different tabs:

web-portal-build-skeleton-for-configuring

There are many items that can be configured across the different tabs but I'm restricting my explanation to the ones that are not pre-populated and which are required. You can find out more about the Visual Studio Build task here.

On the Build tab:

  • Platform relates to whether the build should be x86, x64 or any cpu. Whilst you could specify a value here the recommendation is to use a build variable (defined under Variables -- see below) as Platform is a setting used in other build tasks. An additional advantage is that the value of the variable can be changed when the build is queued. As per the documentation I specified $(BuildPlatform) as the variable.
  • Configuration is the Visual Studio Solution Configuration you want to build -- typically debug or release. This is another setting that is used in other build tasks and warrants a variable; I again followed the documentation and used $(BuildConfiguration).
  • Clean forces the code to be refreshed on every build and is recommended to avoid possible between-build discrepancies.

web-portal-visual-studio-build-build-tab

On the Repository tab:

  • Clean here appears to be the same as Clean on the Build tab. I'm not sure why there is duplication, or why it is a check box on the Build tab and a dropdown on this tab, but set it to true.

web-portal-visual-studio-build-repository-tab

On the Variables tab:

  • Add a variable named BuildPlatform, specify a value of any cpu and check Allow at Queue Time.
  • Add a variable named BuildConfiguration, specify a value of release and check Allow at Queue Time.
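
As an aside, the Allow at Queue Time checkbox simply flags the variable as one whose value can be changed when the build is queued. If you were to fetch the finished definition as JSON via the TFS 2015 Build REST API, its variables section would look roughly like the sketch below; the property names are my assumption based on that API rather than anything exported from this particular build:

    "variables": {
      "BuildPlatform": {
        "value": "any cpu",
        "allowOverride": true
      },
      "BuildConfiguration": {
        "value": "release",
        "allowOverride": true
      }
    }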

On the Triggers tab:

  • Continuous Integration should be checked.

web-portal-visual-studio-build-triggers-tab

That should be enough configuration to get the build to go green. Perform a save and click on Queue build (next to the save button). You will see the output of the build process which should result in the build succeeding:

web-portal-build-build-succeeded

It's All in the Name

At the moment our build doesn't have a name, so to fix that first head over to the Variables tab and add MajorVersion and MinorVersion variables and give them values of 1 and 0 respectively. Also check the Allow at Queue Time boxes. Now on the General tab enter $(BuildDefinitionName)_$(MajorVersion).$(MinorVersion).$(Year:yyyy)$(DayOfYear)$(Rev:.r) in the Build number format text box. Save the definition and queue a new build. The name should be something like ContosoUniversity.CI_1.0.2016019.2. One nice touch is that the revision number is reset on a daily basis, providing an easy way to keep track of the builds on a given day.

At this point we have got the basics of a CI build configured and working nicely. In the next post we look at further configurations focussed on helping to bake quality in to the application.

Cheers -- Graham

Getting Started with Team Foundation Build 2015

Posted by Graham Smith on February 4, 2016

Hopefully by now everyone who works with TFS and / or VSTS knows that there is a new build system. There's no immediate panic as the XAML builds we've all been working with are still supported, but for any new implementations using TFS 2015 or VSTS, TFBuild 2015 is the way forward. If you haven't yet had a chance to investigate the new build system then I encourage you to check out my pick of the best links for getting started:

If you are interested in buying a book which covers TFBuild 2015 then I can recommend Continuous Delivery with Visual Studio ALM 2015. I've read it and it's excellent.

Cheers -- Graham