Continuous Delivery with TFS / VSTS – Server Configuration and Application Deployment with Release Management

Posted by Graham Smith on May 2, 2016

At this point in my blog series on Continuous Delivery with TFS / VSTS we have finally reached the stage where we are ready to start using the new web-based release management capabilities of VSTS and TFS. The functionality has been in VSTS for a little while now but only came to TFS with Update 2 of TFS 2015 which was released at the end of March 2016.

Don't tell my wife but I'm having a torrid love affair with the new TFS / VSTS Release Management. It's flippin' brilliant! Compared to the previous WPF desktop client it's a breath of fresh air: easy to understand, quick to set up and a joy to use. Sure there are some improvements that could be made (and these will come in time) but for the moment, for a relatively new product, I'm finding the experience extremely agreeable. So let's crack on!

Setting the Scene

The previous posts in this series set the scene for this post but I'll briefly summarise here. We'll be deploying the Contoso University sample application, which consists of an ASP.NET MVC website and a SQL Server database that I've converted to a SQL Server Database Project so deployment is by DACPAC. We'll be deploying to three environments (DAT, DQA and PRD) as I explain here, and as well as deploying the application we'll first be making sure the environments are correctly configured with PowerShell DSC, using an adaptation of the procedure I describe here.

My demo environment in Azure is configured as a Windows domain and includes an instance of TFS 2015 Update 2 which I'll be using for this post as it's the lowest common denominator, although I will point out any VSTS specifics where needed. We'll be deploying to newly minted Windows Server 2012 R2 VMs which have been joined to the domain, configured with WMF 5.0 and had their domain firewall turned off -- see here for details. (Note that if you are using versions of Windows Server earlier than 2012 that don't have remote management turned on you have a bit of extra work to do.) My TFS instance is hosting the build agent and as such the agent can ‘see' all the machines in the domain. I'm using Integrated Security to allow the website to talk to the database, and I use three different domain accounts (CU-DAT, CU-DQA and CU-PRD) to illustrate passing different credentials to different environments. I assume you have these set up in advance.

As far as development tools are concerned I'm using Visual Studio 2015 Update 2 with PowerShell Tools installed and Git for version control within a TFS / VSTS team project. It goes without saying that for each release I'm building the application only once and as part of the build any environment-specific configuration is replaced with tokens. These tokens are replaced with the correct values for that environment as that same tokenised build moves through the deployment pipeline.

Writing Server Configuration Code Alongside Application Code

A key concept I am promoting in this blog post series is that configuring the servers that your application will run on should not be an afterthought and neither should it be a manual click-through-GUI process. Rather, you should be configuring your servers through code and that code should be written at the same time as you write your application code. Furthermore the server configuration code should live with your application code. To start then we need to configure Contoso University for this way of working. If you are following along you can get the starting point code from here.

  1. Open the ContosoUniversity solution in Visual Studio and add new folders called Deploy to the ContosoUniversity.Database and ContosoUniversity.Web projects.
  2. In ContosoUniversity.Database\Deploy create two new files: Database.ps1 and DbDscResources.ps1. (Note that SQL Server Database Projects are a bit fussy about what can be created in Visual Studio so you might need to create these files in Windows Explorer and add them in as new items.)
  3. Database.ps1 should contain the following code:
  4. DbDscResources.ps1 should contain the following code:
  5. In ContosoUniversity.Web\Deploy create two new files: Website.ps1 and WebDscResources.ps1.
  6. Website.ps1 should contain the following code:
  7. WebDscResources.ps1 should contain the following code:
  8. In ContosoUniversity.Database\Scripts move Create login and database user.sql to the Deploy folder and remove the Scripts folder.
  9. Make sure all these files have their Copy to Output Directory property set to Copy always. For the files in ContosoUniversity.Database\Deploy the Build Action property should be set to None.

The Database.ps1 and Website.ps1 scripts contain the PowerShell DSC to both configure servers for either IIS or SQL Server and then to deploy the actual component. See my Server Configuration as Code with PowerShell DSC post for more details. (At the risk of jumping ahead to the deployment part of this post, the bits to be deployed are copied to temp folders on the target nodes -- hence references in the scripts to C:\temp\$whatever$.)

In the case of the database component I'm using the xDatabase custom DSC resource to deploy the DACPAC. I came across a problem with this resource where it wouldn't install the DACPAC using domain credentials, despite the credentials having the correct permissions in SQL Server. I ended up having to install SQL Server using Mixed Mode authentication and installing the DACPAC using the sa login. I know, I know!

My preferred technique for deploying website files is plain xcopy. For me the requirement is to clear the old files down and replace them with the new ones. After some experimentation I ended up with code to stop IIS, remove the web folder, copy the new web folder from its temp location and then restart IIS.
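To give a flavour of the shape of these scripts (the originals aren't reproduced here), a heavily simplified sketch of Website.ps1 might look something like the following. The parameter names match the Script Arguments described later in this post; the folder paths and everything else are illustrative, and the real script also imports the custom resources, creates the website and application pool and performs the token replacement.

    # Simplified sketch of Website.ps1 -- illustrative only
    param(
        [string]$domainUserForIntegratedSecurityLogin,
        [string]$domainUserForIntegratedSecurityPassword,
        [string]$sqlServerName
    )

    Configuration ContosoUniversityWebsite
    {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node localhost
        {
            WindowsFeature IIS
            {
                Ensure = 'Present'
                Name   = 'Web-Server'
            }

            # Clear down the old website files and copy in the new ones from the temp folder
            Script DeployWebFiles
            {
                GetScript  = { @{ Result = (Test-Path 'C:\inetpub\ContosoUniversity') } }
                TestScript = { $false }   # always run -- fine for a sketch
                SetScript  = {
                    iisreset /stop
                    Remove-Item 'C:\inetpub\ContosoUniversity' -Recurse -Force -ErrorAction SilentlyContinue
                    Copy-Item 'C:\temp\Website' 'C:\inetpub\ContosoUniversity' -Recurse
                    iisreset /start
                }
                DependsOn  = '[WindowsFeature]IIS'
            }
        }
    }

    # The script runs on the target node itself, so compile and apply locally
    ContosoUniversityWebsite -OutputPath "$env:temp\ContosoUniversityWebsite"
    Start-DscConfiguration -Path "$env:temp\ContosoUniversityWebsite" -Wait -Verbose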

Both the database and website have files with configuration tokens that need replacing as part of the deployment. I'm using the xReleaseManagement custom DSC resource, which takes a hash table of tokens (in the __TOKEN_NAME__ format) to replace.

In order to use custom resources on target nodes, the custom resources need to be in place before attempting to run a configuration. I had hoped to use a push server technique for this but it was not to be, since for this post at least I'm running the DSC configurations on the actual target nodes, and the push server technique only works if the MOF files are created on a staging machine that has the custom resources installed. Instead I'm copying the custom resources to the target nodes just prior to running the DSC configurations, and this is the purpose of the DbDscResources.ps1 and WebDscResources.ps1 files. The custom resources live on a UNC share that is available to the target nodes, and they get there by simply copying them from a machine where they have been installed (C:\Program Files\WindowsPowerShell\Modules is the location) to the UNC.
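Again for illustration rather than as the exact code, WebDscResources.ps1 need do little more than this -- the share name is an assumption based on the share I set up in the Server Configuration as Code post, and DbDscResources.ps1 is the same idea for the database modules (xSQLServer, xDatabase and xReleaseManagement):

    # Illustrative sketch of WebDscResources.ps1 -- copies the custom DSC resources needed
    # by Website.ps1 from a UNC share to the local module path on the target node
    $moduleShare = '\\PRM-CORE-DC\DscResources'   # assumed share name
    $modulePath  = 'C:\Program Files\WindowsPowerShell\Modules'

    foreach ($module in 'xWebAdministration', 'cWebAdministration', 'xReleaseManagement')
    {
        Copy-Item -Path (Join-Path $moduleShare $module) -Destination $modulePath -Recurse -Force
    }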

Create a Release Build

With the Visual Studio solution now configured (don't forget to commit the changes) we now need to create a build that runs the initial code quality checks and, if they pass, publishes the database and website components ready for deployment. Create a new build definition called ContosoUniversity.Rel and follow this post to configure the basics and this post to create a task to run unit tests. Note that for the Visual Studio Build task the MSBuild Arguments setting is /p:OutDir=$(build.stagingDirectory) /p:UseWPP_CopyWebApplication=True /p:PipelineDependsOnBuild=False /p:RunCodeAnalysis=True. This gives us a _PublishedWebsites\ContosoUniversity.Web folder (that contains all the web files that need to be deployed) and also runs the transformation to tokenise Web.config. Additionally, since we are outputting to $(build.stagingDirectory) the Test Assembly setting of the Visual Studio Test task needs to be $(build.stagingDirectory)\**\*UnitTests*.dll;-:**\obj\**. At some point we'll want to version our assemblies but I'll return to that in another post.

One important step that has changed since my earlier posts is that the Restore NuGet Packages option in the Visual Studio Build task has been deprecated. The new way of doing this is to add a NuGet Installer task as the very first item and then in the Visual Studio Build task (in the Advanced section in VSTS) uncheck Restore NuGet Packages.

To publish the database and website as components -- or Artifacts (I'm using the TFS spelling) as they are known -- we use the Copy and Publish Build Artifacts tasks. The database task should be configured as follows:

  • Copy Root = $(build.stagingDirectory)
  • Contents =
    • ContosoUniversity.Database.d*
    • Deploy\Database.ps1
    • Deploy\DbDscResources.ps1
    • Deploy\Create login and database user.sql
  • Artifact Name = Database
  • Artifact Type = Server

Note that the Contents setting can take multiple entries on separate lines and we use this to be explicit about what the database artifact should contain. The website task should be configured as follows:

  • Copy Root = $(build.stagingDirectory)\_PublishedWebsites
  • Contents = **\*
  • Artifact Name = Website
  • Artifact Type = Server

Because we are specifying a published folder of website files that already has the Deploy folder present there's no need to be explicit about our requirements. With all this done the build should look similar to this:

[Screenshot: web portal -- ContosoUniversity.Rel build definition]

In order to test the successful creation of the artifacts, queue a build and then -- assuming the build was successful -- navigate to the build and click on the Artifacts link. You should see the Database and Website artifact folders and you can examine the contents using the Explore link:

[Screenshot: web portal -- ContosoUniversity.Rel build artifacts]

Create a Basic Release

With the artifacts created we can now turn our attention to creating a basic release to get them copied on to a target node and then perform a deployment. Switch to the Release hub in the web portal and use the green cross icon to create a new release definition. The Deployment Templates window is presented and you should choose to start with an Empty template. There are four immediate actions to complete:

  1. Provide a Definition name -- ContosoUniversity for example.
  2. Change the name of the environment that has been added so that it reads DAT.
    [Screenshot: web portal -- ContosoUniversity release definition, initial tasks]
  3. Click on Link to a build definition to link the release to the ContosoUniversity.Rel build definition.
    [Screenshot: web portal -- ContosoUniversity release definition, linking to the build definition]
  4. Save the definition.

Next up we need to add two Windows Machine File Copy tasks to copy each artifact to one node called PRM-DAT-AIO. (As a reminder the DAT environment as I define it is just one server which hosts both the website and the database and where automated testing takes place.) Although it's possible to use just one task here the result of selecting artifacts differs according to the selected node in the artifact tree. At the node root, folders are created for each artifact but go one node lower and they aren't. I want a procedure that works for all environments which is as follows:

  1. Click on Add tasks to bring up the Add Tasks window. Use the Deploy link to filter the list of tasks and Add two Windows Machine File Copy tasks:
    [Screenshot: web portal -- ContosoUniversity release definition, Add Tasks window]
  2. Configure the properties of the tasks as follows:
    1. Edit the names (use the pencil icon) to read Copy Database files and Copy Website files respectively.
    2. Source = $(System.DefaultWorkingDirectory)/ContosoUniversity.Rel/Database or $(System.DefaultWorkingDirectory)/ContosoUniversity.Rel/Website accordingly (use the ellipsis to select)
    3. Machines = PRM-DAT-AIO.prm.local
    4. Admin login = Supply a domain account login that has admin privileges for PRM-DAT-AIO.prm.local
    5. Password = Password for the above domain account
    6. Destination folder = C:\temp\Database or C:\temp\Website accordingly
    7. Advanced Options > Clean Target = checked
  3. Click the ellipsis in the DAT environment and choose Deployment conditions.
    [Screenshot: web portal -- ContosoUniversity release definition, environment deployment conditions]
  4. Change the Trigger to After release creation and click OK to accept.
  5. Save the changes and trigger a release using the green cross next to Release. You'll be prompted to select a build as part of the process:
    [Screenshot: web portal -- ContosoUniversity release definition, creating a release]
  6. If the release succeeds a C:\temp folder containing the artifact folders will have been created on PRM-DAT-AIO.
  7. If the release fails switch to the Logs tab to troubleshoot. Permissions and whether the firewall has been configured to allow WinRM are the likely culprits. To preserve my sanity I do everything as domain admin and I have the domain firewall turned off. The usual warnings about these not necessarily being best practices in non-test environments apply!

Whilst you are checking the C:\temp folder on the target node have a look inside the artifact folders. They should both contain a Deploy folder that contains the PowerShell scripts that will be executed remotely using the PowerShell on Target Machines task. You'll need to configure two of these tasks, one for each artifact, as follows:

  1. Add two PowerShell on Target Machines tasks to alternately follow the Windows Machine File Copy tasks.
  2. Edit the names (use the pencil icon) to read Configure Database and Configure Website respectively.
  3. Configure the properties of the tasks as follows:
    1. Machines = PRM-DAT-AIO.prm.local
    2. Admin login = Supply a domain account that has admin privileges for PRM-DAT-AIO.prm.local
    3. Password = Password for the above domain account
    4. Protocol = HTTP
    5. Deployment > PowerShell Script = C:\temp\Database\Deploy\Database.ps1 or C:\temp\Website\Deploy\Website.ps1 accordingly
    6. Deployment > Initialization Script = C:\temp\Database\Deploy\DbDscResources.ps1 or C:\temp\Website\Deploy\WebDscResources.ps1 accordingly
  4. With reference to the parameters required by C:\temp\Database\Deploy\Database.ps1 configure Deployment > Script Arguments for the Database task as follows:
    1. $domainSqlServerSetupLogin = Supply a domain login that has privileges to install SQL Server on PRM-DAT-AIO.prm.local
    2. $domainSqlServerSetupPassword = Password for the above domain login
    3. $sqlServerSaPassword = Password you want to use for the SQL Server sa account
    4. $domainUserForIntegratedSecurityLogin = Supply a domain login to use for integrated security (PRM\CU-DAT in my case for the DAT environment)
    5. The finished result will be similar to: ‘PRM\Graham' ‘YourSecurePassword' ‘YourSecurePassword' ‘PRM\CU-DAT'
  5. With reference to the parameters required by C:\temp\Website\Deploy\Website.ps1 configure Deployment > Script Arguments for the Website task as follows:
    1. $domainUserForIntegratedSecurityLogin = Supply a domain login to use for integrated security (PRM\CU-DAT in my case for the DAT environment)
    2. $domainUserForIntegratedSecurityPassword = Password for the above domain account
    3. $sqlServerName = machine name for the SQL Server instance (PRM-DAT-AIO in my case for the DAT environment)
    4. The finished result will be similar to: ‘PRM\CU-DAT' ‘YourSecurePassword' ‘PRM-DAT-AIO'

At this point you should be able to save everything and the release should look similar to this:

[Screenshot: web portal -- ContosoUniversity release definition with all tasks added]

Go ahead and trigger a new release. This should result in the PowerShell scripts being executed on the target node and IIS and SQL Server being installed, as well as the Contoso University application. You should be able to browse the application at http://prm-dat-aio. Result!

Variable Quality

Although we now have a working release for the DAT environment it will hopefully be obvious that there are serious shortcomings with the way we've configured the release. Passwords in plain view is one issue and repeated values is another. The latter issue is doubly of concern when we start creating further environments.

The answer to this problem is to create custom variables at both a ‘release' level and at the ‘environment' level. Pretty much every text box seems to take a variable so you can really go to town here. It's also possible to create compound values based on multiple variables -- I used this to separate the location of the C:\temp folder from the rest of the script location details. It's worth having a bit of a think about your variable names in advance of using them because if you change your mind you'll need to edit every place they were used. In particular, if you edit the declaration of secret variables you will need to click the padlock to clear the value and re-enter it. This tripped me up until I added Write-Verbose statements to output the parameters in my DSC scripts and realised that passwords were not being passed through (they are asterisked so there is no security concern). (You do get the scriptArguments as output to the console but I find having them each on a separate line easier.)

Release-level variables are created in the Configuration section and if they are passwords can be secured as secrets by clicking the padlock icon. The release-level variables I created are as follows:

[Screenshot: web portal -- ContosoUniversity release definition, release-level variables]

Environment-level variables are created by clicking the ellipsis in the environment and choosing Configure Variables. I created the following:

[Screenshot: web portal -- ContosoUniversity release definition, environment-level variables]

The variables can then be used to reconfigure the release as per this screen shot which shows the PowerShell on Target Machines Configure Database task:

[Screenshot: web portal -- ContosoUniversity release definition, tasks configured with variables]

The other tasks are obviously configured in a similar way, and notice how some fields use more than one variable. Nothing has actually changed by replacing hard-coded values with variables so triggering another release should be successful.

Environments Matter

With a successful deployment to the DAT environment we can now turn our attention to the other stages of the deployment pipeline -- DQA and PRD. The good news here is that all the work we did for DAT can be easily cloned for DQA which can then be cloned for PRD. Here's the procedure for DQA which don't forget is a two-node deployment:

  1. In the Configuration section create two new release level variables:
    1. TargetNode-DQA-SQL = PRM-DQA-SQL.prm.local
    2. TargetNode-DQA-IIS = PRM-DQA-IIS.prm.local
  2. In the DAT environment click on the ellipsis and select Clone environment and name it DQA.
  3. Change the two database tasks so the Machines property is $(TargetNode-DQA-SQL).
  4. Change the two website tasks so the Machines property is $(TargetNode-DQA-IIS).
  5. In the DQA environment click on the ellipsis and select Configure variables and make the following edits:
    1. Change DomainUserForIntegratedSecurityLogin to PRM\CU-DQA
    2. Click on the padlock icon for the DomainUserForIntegratedSecurityPassword variable to clear it then re-enter the password and click the padlock icon again to make it a secret. Don't miss this!
    3. Change SqlServerName to PRM-DQA-SQL
  6. In the DQA environment click on the ellipsis and select Deployment conditions and set Trigger to No automated deployment.

With everything saved, and assuming the PRM-DQA-SQL and PRM-DQA-IIS nodes are running, the release can now be triggered. Assuming the deployment to DAT was successful the release will wait for DQA to be manually deployed (almost certainly what is required as manual testing could be going on here):

[Screenshot: web portal -- ContosoUniversity release definition, manual deployment of DQA]

To keep things simple I didn't assign any approvals for this release (ie they were all automatic) but do bear in mind there is some rich and flexible functionality available around this. If all is well you should be able to browse Contoso University on http://prm-dqa-iis. I won't describe cloning DQA to create PRD as it's very similar to the process above. Just don't forget to re-enter cloned password values! Do note that in the Environment Variables view of the Configuration section you can view and edit (but not create) the environment-level variables for all environments:

[Screenshot: web portal -- ContosoUniversity release definition, Environment Variables view for all environments]

This is a great way to check that variables are the correct values for the different environments.

And Finally...

There's plenty more functionality in Release Management that I haven't described but that's as far as I'm going in this post. One message I do want to get across is that the procedure I describe in this post is not meant to be a statement on the definitive way of using Release Management. Rather, it's designed to show what's possible and to get you thinking about your own situation and some of the factors that you might need to consider. As just one example, if you only have one application then the Visual Studio solution for the application is probably fine for the DSC code that installs IIS and SQL Server. However if you have multiple similar applications then almost certainly you don't want all that code repeated in every solution. Moving this code to the point at which the nodes are created could be an option here -- or perhaps there is a better way!

That's it for the moment but rest assured there's lots more to be covered in this series. If you want the final code that accompanies this post I've created a release here on my GitHub site.

Cheers -- Graham

Post Deployment Configuration with the PowerShell DSC Extension for Azure Resource Manager Templates

Posted by Graham Smith on April 28, 2016

As part of a forthcoming blog post I'm writing for my series about Continuous Delivery with TFS / VSTS I want to be able to deploy PowerShell DSC scripts to Windows Server target nodes that both configure servers and deploy my application components. Separately, I want to automate the creation of target nodes so I can easily destroy and recreate them -- great for testing. In this previous post I explained how to do this with Azure Resource Manager templates, however the journey didn't end there since I also wanted to join the nodes to a domain and also install Windows Management Framework 5.0 in order to get the latest version of PowerShell DSC installed. Despite all that the journey still wasn't over because my server configuration and application deployment technique with PowerShell DSC uses WinRM which requires target nodes to have their firewalls configured to allow WinRM.

The solution to this problem lies with harnessing the true intended functionality of the PowerShell DSC Extension. Although you can just use it to install WMF, its real purpose is to run DSC configurations after the VM has been deployed. The configuration I used was as follows:
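(In outline at least -- this is a sketch that does the same job rather than the exact original, and the configuration name is illustrative.)

    # Sketch of a configuration for the PowerShell DSC extension that turns the domain
    # firewall profile off -- names are illustrative
    Configuration PostDeploymentConfig
    {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node localhost
        {
            Script TurnDomainFirewallOff
            {
                GetScript  = { @{ Result = (Get-NetFirewallProfile -Profile Domain).Enabled } }
                TestScript = { (Get-NetFirewallProfile -Profile Domain).Enabled -eq 'False' }
                SetScript  = { Set-NetFirewallProfile -Profile Domain -Enabled False }
            }
        }
    }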

As you can see, rather than create any firewall rules I chose to simply turn the domain firewall off. The main reason is simplicity: creating firewall rules with DSC needs a custom resource which adds another layer of complexity to the problem. Although another option is to use netsh commands to create firewall rules, in my case I have no issues with turning the firewall off.

The next step is to package this config into a zip file and make it available on a publicly available URL. GitHub is one possible location that can be used to host the zip but I chose Azure blob storage. The Publish-AzureVMDscConfiguration cmdlet exists to help here, and can create the zip locally for onward transfer to GitHub (for example) or it can push it straight to Azure blob storage. I was using the latter route of course, although I found that I couldn't get the cmdlet to work with premium storage and ended up creating a standard storage account. The code is as follows:
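(Again in outline -- the storage account name and key below are placeholders and the file path is an assumption.)

    # Sketch only -- storage account details are placeholders
    $storageContext = New-AzureStorageContext `
        -StorageAccountName 'mystandardstorageaccount' `
        -StorageAccountKey '<key copied from the Azure portal>'

    # Packages PostDeploymentConfig.ps1 as PostDeploymentConfig.ps1.zip and pushes it to
    # blob storage -- the windows-powershell-dsc container is created by default
    Publish-AzureVMDscConfiguration -ConfigurationPath 'C:\Dsc\PostDeploymentConfig.ps1' `
        -StorageContext $storageContext -Force -Verbose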

The storage account key is copied from the Azure Portal via Storage account > $StorageAccount$ > Settings > Access keys. Don't try using mine as I've invalidated it. I should point out that I couldn't get this command to work consistently and it would sometimes error. I did get it to work eventually but I didn't manage to pin down the problem. The net effect of successfully running this code is a file called PostDeploymentConfig.ps1.zip in blob storage. As things stand, though, this file isn't accessible and its container (windows-powershell-dsc is created as a default) needs to have its access policy changed from Private to Blob.

With that done it's time to amend the JSON template. The dscExtension resource that was added in this post should now look as follows:

I've chosen to hard code the ModulesUrl and ConfigurationFunction settings because I won't need to change them but they can of course be parameterised. That's all there is to it, and the result is a VM that is completely ready to have its internals configured by PowerShell DSC scripts over WinRM. If you want to download the code that accompanies this post it's on my GitHub site as a release here.

Cheers -- Graham

Install Windows Management Framework 5.0 with Azure Resource Manager Templates

Posted by Graham Smith on April 9, 2016

In a recent post on my blog series about Continuous Delivery with TFS / VSTS I mentioned that I was having to manually install Windows Management Framework 5.0 after creating a Windows server via ARM templates as it was a necessary precursor to running my PowerShell DSC configuration. I also mentioned that automating the install was on my to-do list. But no more!

It turns out that the PowerShell DSC extension for ARM templates will perform the installation, and that there's no need to actually run a DSC configuration if you don't need to -- just specify "WmfVersion": "5.0" in the settings section. The JSON to add to your ARM template should look similar to this:

I say similar because the code is configured to use the variables in my template, however you can see the full template to get the context on my GitHub Infrastructure repo here.

Many thanks to Zach Alexander and the PowerShell Team for pointing me in the right direction!

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Server Configuration as Code with PowerShell DSC

Posted by Graham Smith on April 7, 2016

I suspect I'm on reasonably safe ground when I venture to suggest that most software engineers developing applications for Windows servers (and the organisations they work for) have yet to make the leap from just writing the application code to writing both the application code and the code that will configure the servers the application will run on. Why do I suggest this? It's partly from experience in that I've never come across anyone developing for the Windows platform who is doing this (or at least they haven't mentioned it to me) and partly because up until fairly recently Microsoft haven't provided any tooling for implementing configuration as code (as this engineering practice is sometimes referred to). There are products from other vendors of course but they tend to have their roots in the Linux world and use languages such as Ruby (or DSLs based on Ruby) which is probably going to seriously muddy the waters for organisations trying to get everyone up to speed with PowerShell.

This has all changed relatively recently with the introduction of PowerShell DSC, Microsoft's solution for implementing configuration as code on Windows (and other platforms as it happens). With PowerShell DSC (and related technologies) the configuration of servers is expressed as programming code that can be versioned in source control. When a change is required to a server the code is updated and the new configuration is then applied to the server. This process is usually idempotent, ie the configuration can be applied repeatedly and will always give the same result. It also won't generate errors if the configuration is already in the desired state. Through version control we can audit how a configuration changes over time and being code it can be applied as required to ensure server roles in different environments, or multiple instances of the same server role in the same environment, have a consistent configuration.

So ostensibly Windows server developers now have no excuse not to start implementing configuration as code. But if we've managed so far without this engineering practice why all the fuss now? What benefit is it going to bring to the table? The key benefit is that it's a cure for that age-old problem of servers that might start life from a build script, but over the months (and possibly years) different technicians make necessary tweaks here and there until one day the server becomes a unique work of art that nobody could ever hope to reproduce. Server backups become critical and everyone dreads the day that the server will need to be upgraded or replaced.

If your application is very simple you might just get away with this state of affairs -- not that it makes it right or a best practice. However if your application is constantly evolving with concomitant configuration changes and / or you are going down the microservices route then you absolutely can't afford to hand-crank the configuration of your servers. Not only is the manual approach very error prone it's also hugely time-consuming, and has no place in a world of continuous delivery where shortening lead times and increasing reliability and repeatability is the name of the game.

So if there's no longer an excuse to implement configuration as code on the Windows platform why isn't there a mad rush to adopt it? In my view, for most mid-size IT departments working with existing budgets and staffing levels and an existing landscape of hand-cranked servers it's going to be a real slog to switch the configuration of a live estate to being managed by code. Once you start thinking about the complexities of analysing existing servers (some of which might have been around for years and which might have all sorts of bespoke applications running on them) combined with devising a system of managing scores or even hundreds of servers it's clear that a task of this nature is almost certainly going to require a dedicated team. And despite the potential benefits that configuration as code promises most mid-size IT departments are likely to struggle to stand up such a team.

So if it's going to be hard how does an organisation get started with configuration as code and PowerShell DSC? Although I don't have anywhere near all of the answers it is already clear to me that if your organisation is in the business of writing applications for Windows servers then you need to approach the problem from both ends of the server spectrum. At the far end of the spectrum is the live estate where server ‘drift' needs to be controlled using PowerShell DSC's ‘pull' mode. This is where servers periodically reach out to a central repository to pull their ‘true' configuration and make any adjustments accordingly. At the near end of the spectrum are the servers that form the continuous delivery pipeline which need to have configuration changes applied to them just before a new version of the application gets deployed to them. Happily PowerShell has a ‘push' mode which will work nicely for this purpose. There is also the live deployment situation. Here, live servers will need to have configuration changes pushed to them before application deployment takes place and then will need to switch over to pull mode to keep them true.

The way I see things at the moment is that PowerShell DSC pull mode is going to be hard to implement at scale because of the lack of tooling to manage it. Whilst you could probably manage a handful of servers in pull mode using PowerShell DSC script files, any more than a handful is going to cause serious pain without some kind of management framework such as the one that is available for Chef. The good news though is that getting started with PowerShell DSC push mode for configuring servers that comprise the deployment pipeline as part of application development activities is a much more realistic prospect.

Big Picture Time

I'm not going to be able to cover everything about making PowerShell DSC push mode work in one blog post so it's probably worth a few words about the bigger picture. One key concept to establish early on is that the code that will configure the server(s) that an application will reside on has to live and change alongside the application code. At the very least the server configuration code needs to be in the same version control branch as the application code and frequently it will make sense for it to be part of the same Visual Studio solution. I won't be presenting that approach in this blog post and instead will concentrate on the mechanics of getting PowerShell DSC push mode working and writing the configuration code that enables the Contoso University sample application (which requires IIS and SQL Server) to run. In a future post I'll have the code in the same Visual Studio solution as the Contoso University sample application and will explain how to build an artefact that is then deployed by the release management tooling in TFS / VSTS prior to deploying the application.

For anyone who has come across this post by chance it is part of my ongoing series about Continuous Delivery with TFS / VSTS, and you may find it helpful to refer to some of the previous posts to understand the full context of what I'm trying to achieve. I should also mention that this post isn't intended to be a PowerShell DSC tutorial and if you are new to the technology I have a Getting Started post here with a link collection of useful learning resources. With all that out of the way let's get going!

Getting Started

Taking the Infrastructure solution from this blog post as a starting point (available as a code release at my Infrastructure repo on GitHub, final version of this post's code here) add a new PowerShell Script Project called ConfigurationScripts. To this new project add a new PowerShell Script file called ContosoUniversity.ps1 and add a hash table and empty Configuration block called WebAndDatabase as follows:

We're going to need an environment to deploy in to so using the techniques described in previous posts (here and here) create a PRM-DAT-AIO server that is joined to the domain. This server will need to have Windows Management Framework 5.0 installed -- a manual process as far as this particular post is concerned but something that is likely to need automating in the future.

To test a basic working configuration we'll create a folder on PRM-DAT-AIO to act as the IIS physical path to the ContosoUniversity web files. Add the following lines of code to the beginning of the configuration block:

To complete the skeleton code add the following lines of code to the end of ContosoUniversity.ps1:

The code contained in ContosoUniversity.ps1 should now be as follows:
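(In outline -- a sketch of the shape rather than the exact listing.)

    # Sketch of ContosoUniversity.ps1 at this stage
    $configData = @{
        AllNodes = @(
            @{
                NodeName = 'PRM-DAT-AIO'
            }
        )
    }

    Configuration WebAndDatabase
    {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node $AllNodes.NodeName
        {
            # Folder that will act as the IIS physical path for the Contoso University website
            File ContosoUniversityFolder
            {
                Ensure          = 'Present'
                Type            = 'Directory'
                DestinationPath = 'C:\inetpub\ContosoUniversity'
            }
        }
    }

    # Compile the configuration to a MOF file and push it to the target node
    WebAndDatabase -ConfigurationData $configData -OutputPath C:\Dsc\Mof
    Start-DscConfiguration -Path C:\Dsc\Mof -Wait -Verbose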

Although you can create this code from any developer workstation you need to ensure that you can run it from a workstation that is joined to the same domain as PRM-DAT-AIO and has a folder called C:\Dsc\Mof. In order to keep authentication simple I'm also assuming that you are logged on to your developer workstation with domain credentials that allow you to perform DSC operations on PRM-DAT-AIO. Running this code will create a PRM-DAT-AIO.mof file in C:\Dsc\Mof which will deploy to PRM-DAT-AIO and create the folder. Magic!

Installing Resource Modules Locally

To do anything much more sophisticated than create a folder we'll need to import resources to our local workstation from the PowerShell Gallery. We'll be working with xWebAdministration and xSQLServer and they can be installed locally as follows:
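(From an elevated PowerShell prompt on the workstation; PowerShellGet ships with WMF 5.0.)

    # Install (or update to) the latest versions from the PowerShell Gallery;
    # -Force allows a newer version to be installed alongside an existing one
    Install-Module -Name xWebAdministration -Force
    Install-Module -Name xSQLServer -Force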

These same commands will also install the latest version of the resources if a previous version exists. Referencing these resources in our configuration script seems to have changed with the release of DSC 5.0 and versioning information is a requirement. Consequently, these resources are referenced in the configuration as follows:
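(With the versions that were current when I wrote this -- 1.10 of xWebAdministration and 1.5 of xSQLServer -- that looks something like the following; the exact version strings will depend on what you install.)

    Configuration WebAndDatabase
    {
        # Version strings must match the module versions actually installed locally
        Import-DscResource -ModuleName @{ModuleName="xWebAdministration";ModuleVersion="1.10.0.0"}
        Import-DscResource -ModuleName @{ModuleName="xSQLServer";ModuleVersion="1.5.0.0"}

        # ...node and resource blocks as before...
    }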

Obviously change the above code to reference the version of the module that you actually install. The resources are continually being updated with new versions and this requires a strategy to upgrade on a periodic basis.

Making Resource Modules Available Remotely

Whilst the additions in the previous section allow us to create advanced configurations on our developer workstation, these configurations are not going to run against target nodes since, as things stand, the target nodes don't know anything about custom resources (as opposed to resources such as PSDesiredStateConfiguration which ship with the Windows Management Framework). We can fix this by telling the Local Configuration Manager (LCM) of target nodes where to get the custom resources from. The procedure (which I've adapted from Nana Lakshmanan's blog post) is as follows:

  • Choose a server in the domain to host a fileshare. I'm using my domain controller (PRM-CORE-DC) as it's always guaranteed to be available under normal conditions. Create a folder called C:\Dsc\DscResources (Dsc purposefully repeated) and share it as Read/Write for Everyone as \\PRM-CORE-DC\DscResources.
  • Custom resources need to be zipped in the format required by the DSC pull protocol. The PowerShell to do this for version 1.10 of xWebAdministration and 1.5 of xSQLServer (using a local C:\Dsc\Resources folder) is sketched after this list.

    Of course, depending on how often you have to do this to cope with updates, and on the number of resources you end up working with, you'll probably want to wrap all this up into some sort of reusable package.
  • With the packages now in the right format in the fileshare we need to tell the LCM of target nodes where to look. We do this by creating a new configuration decorated with the [DscLocalConfigurationManager()] attribute:

    The Settings block is used to set various properties of the LCM which are required in order for the configurations we'll be writing to run, and the ResourceRepositoryShare block specifies the location of the zipped resource packages -- both steps are shown in the sketch after this list.
  • The final requirement is to add the line of code (Set-DscLocalConfigurationManager -Path C:\Dsc\Mof -Verbose) to apply the LCM settings.
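Neither piece of code is reproduced exactly above, but the sketch below shows the general shape of both steps: packaging each locally installed resource as ModuleName_Version.zip with an accompanying checksum on the share, and then the LCM meta-configuration itself (which reuses the $configData hash table from earlier). Treat the details -- particularly the exact zip layout the pull protocol expects -- as illustrative.

    # 1. Package the custom resources for the fileshare (illustrative -- check the zip
    #    layout your WMF version expects before relying on this)
    $resources = @(
        @{ Name = 'xWebAdministration'; Version = '1.10.0.0' },
        @{ Name = 'xSQLServer'; Version = '1.5.0.0' }
    )
    foreach ($resource in $resources)
    {
        $source = "C:\Program Files\WindowsPowerShell\Modules\$($resource.Name)\$($resource.Version)\*"
        $zip    = "C:\Dsc\Resources\$($resource.Name)_$($resource.Version).zip"
        Compress-Archive -Path $source -DestinationPath $zip -Force
        New-DscChecksum -Path $zip -Force
        Copy-Item -Path $zip, "$zip.checksum" -Destination '\\PRM-CORE-DC\DscResources' -Force
    }

    # 2. Meta-configuration that points the LCM of target nodes at the fileshare
    [DscLocalConfigurationManager()]
    Configuration ConfigureLcm
    {
        Node $AllNodes.NodeName
        {
            Settings
            {
                RebootNodeIfNeeded = $true
            }
            ResourceRepositoryShare DscResourcesShare
            {
                SourcePath = '\\PRM-CORE-DC\DscResources'
            }
        }
    }

    ConfigureLcm -ConfigurationData $configData -OutputPath C:\Dsc\Mof
    Set-DscLocalConfigurationManager -Path C:\Dsc\Mof -Verbose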

The revised version of ContosoUniversity.ps1 should now be as follows:

At this stage we now have our complete working framework in place and we can begin writing the configuration blocks that collectively will leave us with a server that is capable of running our Contoso University application.

Writing Configurations for the Web Role

Configuring for the web role requires consideration of the following factors:

  • The server features that are required to run your application. For Contoso University that's IIS, .NET Framework 4.5 Core and ASP.NET 4.5.
  • The mandatory IIS configurations for your application. For Contoso University that's a web site with a custom physical path.
  • The optional IIS configurations for your application. I like things done in a certain way so I want to see an application pool called ContosoUniversity and the Contoso University web site configured to use it.
  • Any tidying-up that you want to do to free resources and start thinking like you are configuring NanoServer. For me this means removing the default web site and default application pools.

Although you'll know if your configurations have generated errors, how will you know if they've generated the desired result? The following ‘debugging' options can help:

  • I know that the home page of Contoso University will load without a connection to a database, so I copied a build of the website to C:\inetpub\ContosoUniversity on PRM-DAT-AIO so I could test the site with a browser. You can download a zip of the build from here although be aware that AV software might mistakenly regard it as malware.
  • The IIS management tools can be installed on target nodes whilst you are in configuration mode so you can see graphically what's happening. A configuration block that does the trick is sketched after this list.
  • If you are testing with a local version of Internet Explorer make sure you turn off Compatibility View or your site may render with odd results. From the IE toolbar choose Tools > Compatibility View Settings and uncheck Display intranet sites in Compatibility View.
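For the management tools, a block like this inside the Node block is the sort of thing that does the job:

    # Installs the IIS management console and tools -- handy while developing the
    # configuration and easily removed later
    WindowsFeature IISManagementTools
    {
        Ensure = 'Present'
        Name   = 'Web-Mgmt-Tools'
    }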

Whilst you are in configuration mode the following resources will be of assistance:

  • The xWebAdministration documentation on GitHub: https://github.com/PowerShell/xWebAdministration.
  • The example files that ship with xWebAdministration: C:\Program Files\WindowsPowerShell\Modules\xWebAdministration\n.n.n.n\Examples.
  • A Google search for xWebAdministration.

The configuration settings required to meet my requirements stated above are as follows:
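(In outline at least -- I'm sketching rather than reproducing the original verbatim. These blocks live inside the Node block, and the property names are per xWebAdministration 1.10, so check them against the version you install.)

    # Server features needed by Contoso University
    WindowsFeature IIS
    {
        Ensure = 'Present'
        Name   = 'Web-Server'
    }
    WindowsFeature NetFramework45Core
    {
        Ensure = 'Present'
        Name   = 'NET-Framework-45-Core'
    }
    WindowsFeature AspNet45
    {
        Ensure = 'Present'
        Name   = 'Web-Asp-Net45'
    }

    # Application pool and website for Contoso University
    xWebAppPool ContosoUniversityAppPool
    {
        Ensure    = 'Present'
        Name      = 'ContosoUniversity'
        State     = 'Started'
        DependsOn = '[WindowsFeature]IIS'
    }
    xWebsite ContosoUniversityWebsite
    {
        Ensure          = 'Present'
        Name            = 'ContosoUniversity'
        State           = 'Started'
        PhysicalPath    = 'C:\inetpub\ContosoUniversity'
        ApplicationPool = 'ContosoUniversity'
        DependsOn       = '[xWebAppPool]ContosoUniversityAppPool'
    }

    # Tidy up the defaults (repeat the app pool block for the other built-in pools)
    xWebsite RemoveDefaultWebsite
    {
        Ensure       = 'Absent'
        Name         = 'Default Web Site'
        PhysicalPath = 'C:\inetpub\wwwroot'
        DependsOn    = '[WindowsFeature]IIS'
    }
    xWebAppPool RemoveDefaultAppPool
    {
        Ensure    = 'Absent'
        Name      = 'DefaultAppPool'
        DependsOn = '[WindowsFeature]IIS'
    }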

There is one more piece of the jigsaw to finish the configuration and that's amending the application pool to use a domain account that has permissions to talk to SQL Server. That's a more advanced topic so I'm dealing with it later.

Writing Configurations for the Database Role

Configuring for the SQL Server database role is slightly different from the web role since we need to install SQL Server which is a separate application. The installation files need to be made available as follows:

  • Choose a server in the domain to host a fileshare. As above I'm using my domain controller. Create a folder called C:\Dsc\DscInstallationMedia and share it as Read/Write for Everyone as \\PRM-CORE-DC\DscInstallationMedia.
  • Download a suitable SQL Server ISO image to the server hosting the fileshare -- I used en_sql_server_2014_enterprise_edition_with_service_pack_1_x64_dvd_6669618.iso from MSDN Subscriber Downloads.
  • Mount the ISO and copy the contents of its drive to a folder called SqlServer2014 created under C:\Dsc\DscInstallationMedia.

In contrast to configuring for the web role there are fewer configurations required for the database role. There is a requirement to supply a credential though and for this I'm using the Key Vault technique described in this post. This gives rise to new code within and preceding the configuration hash table as follows:
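(A sketch of the gist rather than the actual code -- pull the relevant password out of Key Vault, build a PSCredential from it and tell DSC that it's acceptable to put the credential in the MOF. The vault, secret and account names are hypothetical.)

    # Hypothetical vault, secret and account names -- substitute your own
    $setupPassword   = (Get-AzureKeyVaultSecret -VaultName 'prmkeyvault' -Name 'SqlSetupPassword').SecretValue
    $setupCredential = New-Object System.Management.Automation.PSCredential('PRM\Graham', $setupPassword)

    $configData = @{
        AllNodes = @(
            @{
                NodeName                    = 'PRM-DAT-AIO'
                # Lab-friendly shortcut: allows the credential to be serialised into the MOF
                # in plain text; certificate-based encryption is the better option
                PSDscAllowPlainTextPassword = $true
                PSDscAllowDomainUser        = $true
            }
        )
    }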

For a server such as the one we are configuring where the database is on the same machine as the web server and only the database engine is required there are just two configuration blocks needed to install SQL Server. For more complicated scenarios the following resources will be of assistance:

  • The xSQLServer documentation on GitHub: https://github.com/PowerShell/xSQLServer.
  • The example files that ship with xSQLServer: C:\Program Files\WindowsPowerShell\Modules\xSQLServer\n.n.n.n\Examples.
  • A Google search for xSQLServer.

The configuration settings required for the single server scenario are as follows:
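(Sketched rather than reproduced exactly -- it's an assumption on my part that the two blocks are a .NET 3.5 prerequisite plus the setup resource itself, and xSQLServer property names changed between versions, so check them against 1.5.)

    # .NET 3.5 is a prerequisite for SQL Server 2014
    WindowsFeature NetFramework35
    {
        Ensure = 'Present'
        Name   = 'NET-Framework-Core'
    }

    # Install the database engine plus the management tools (SSMS / ADV_SSMS); drop the
    # tools from Features once the configuration has been tested. The sysadmin account is
    # illustrative, and $setupCredential is the PSCredential retrieved from Key Vault above.
    xSQLServerSetup InstallSqlServer
    {
        InstanceName        = 'MSSQLSERVER'
        SourcePath          = '\\PRM-CORE-DC\DscInstallationMedia'
        SourceFolder        = 'SqlServer2014'
        Features            = 'SQLENGINE,SSMS,ADV_SSMS'
        SetupCredential     = $setupCredential
        SQLSysAdminAccounts = 'PRM\Graham'
        DependsOn           = '[WindowsFeature]NetFramework35'
    }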

In order to assist with ‘debugging' activities I've included the installation of the SQL Server management tools but this can be omitted when the configuration has been tested and deemed fit for purpose. Later in this post we'll manually install the remaining parts of the Contoso University application to prove that the installation worked but for the time being you can run SQL Server Management Studio to see the database engine running in all its glory!

Amending the Application Pool Identity

The Contoso University website is granted access to the database via a domain account that firstly gets configured as the Identity for the website's application pool and then gets configured as a SQL Server login associated with a user which has the appropriate permissions to the database. The SQL Server configuration is taken care of by a permissions script that we'll come to shortly, and the immediate task is concerned with amending the Identity property of the ContosoUniversity application pool so that it references a domain account.

Initially this looked like it was going to be painful since xWebAdministration doesn't currently have the ability to configure the inner workings of application pools. Whilst investigating the possibilities I had the good fortune to come across a fork of xWebAdministration on the PowerShell.org GitHub site where those guys have created a module which does what we want. I need to introduce a slight element of caution here since the fork doesn't look like it's under active development. On the other hand maybe there are no major issues that need fixing. And if there are and they aren't going to get fixed at least the code is there to be forked. Because this fork isn't in the PowerShell Gallery getting it to work locally is a manual process:

  • Download the code to C:\Dsc\Resources and unblock and extract it. Change the folder name from cWebAdministration-master to cWebAdministration and copy to C:\Program Files\WindowsPowerShell\Modules.
  • In the configuration block reference the module as Import-DscResource -ModuleName @{ModuleName="cWebAdministration";ModuleVersion="2.0.1"}.

The configuration required to make the resource available to target nodes has an extra manual step:

  • In the root of C:\DSC\Resources\cWebAdministration create a folder named 2.0.1 and copy the contents of C:\DSC\Resources\cWebAdministration to this folder.
  • The following code can now be used to package the resource and copy it to the fileshare:

I tend towards using a different domain account for the Identity properties of the website application pools in the different environments that make up the deployment pipeline. In doing so it protects the pipeline from a complete failure if something happens to that domain account -- it gets locked out for example. To support this scenario the configuration block to configure the application pool identity needs to support dynamic configuration and takes the following form:

The dynamic configuration is supported by Key Vault code to retrieve the password of the domain account used to configure the application pool (not shown) and the following additions to the configuration hash table:

The code does of course rely on the existence of the PRM\CU-DAT domain account (set so the password doesn't expire). This is the last piece of configuration, and you can view the final result on GitHub here.

The Moment of Truth

After all that configuration, is it enough to make the Contoso University application work? To find out:

  • If you haven't already, download, unblock and unzip the ContosoUniversityConfigAsCode package from here, although as mentioned previously be aware that AV software might mistakenly regard it as malware.
  • The contents of the Website folder should be copied (if not already) to C:\inetpub\ContosoUniversity on the target node.
  • Edit the SchoolContext connection string in Web.config if required -- the download has the server set to localhost and the database to ContosoUniversity.
  • On the target node run SQL Server Management Studio and install the database as follows:
    • In Object Explorer right-click the Databases node and choose Deploy Data-tier Application.
    • Navigate through the wizard, and at Select Package choose ContosoUniversity.Database.dacpac from the database folder of the ContosoUniversityConfigAsCode download.
    • Move to the next page of the wizard (Update Configuration) and change the Name to ContosoUniversity.
    • Navigate past the Summary page and the DACPAC will be deployed:
      [Screenshot: SSMS -- Deploy Data-tier Application wizard]
  • Still in SSMS, apply the permissions script as follows:
    • Open Create login and database user.sql from the Database\Scripts folder in the ContosoUniversityConfigAsCode download.
    • If the pre-configured login/user (PRM\CU-DAT) is different from the one you are using update accordingly, then execute the script.

You can now navigate to http://prm-dat-aio (or whatever your server is called) and if all is well make a mental note to pour a well-deserved beverage of your choosing.

Looking Forward

Although getting this far is certainly an important milestone it's by no means the end of the journey for the configuration as code story. Our configuration code now needs to be integrated in to the Contoso University Visual Studio solution so that it can be built as an artefact alongside the website and database artefacts. We then need to be able to deploy the configuration before deploying the application -- all automated through the new release management tooling that has just shipped with TFS 2015 Update 2 or through VSTS if you are using that. Until next time...

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Start of a New Journey

Posted by Graham Smith on November 4, 2015

[Please note: Just a couple of weeks after publishing this post Microsoft changed the name of Visual Studio Online (VSO) to Visual Studio Team Services (VSTS). I've updated the title and URL of this post for consistency with future posts but the text below remains unchanged.]

I first started investigating how to implement continuous delivery with TFS -- working almost exclusively in Microsoft Azure -- nearly two years ago. Out of these investigations (and backed-up by practical experience where I work) came my original 24-post series on implementing continuous delivery with TFS and a shorter series covering continuous delivery with VSO.

Although the concepts that I covered in my original series haven't really changed the tooling certainly has -- only what you would expect in this fast-moving industry of ours of course. In particular there have been fundamental changes to the way Microsoft Azure works and we also have a brand new web-based implementation of Release Management coming our way. Additionally, there are aspects of continuous delivery that my original series didn't cover because the tooling I wanted to use simply wasn't in place or mature enough. Consequently it feels like the right time to start a brand new blog post series, and it is my intention in this post to set the scene for what's in store.

Aims of the new Series
  • Hopefully by now most people realise that despite its name VSO (Visual Studio Online) is Microsoft's cloud version of TFS. My original continuous delivery series focussed on TFS since the Release Management tooling didn't originally work with VSO. Although that eventually changed the story is now completely different and the original WPF-based Release Management has a brand new web-based successor. As with most new ALM features coming out of Microsoft this will initially be available in VSO. TFS 2015 will get the new release management tooling sometime later -- see here to keep track of when this might be. Despite the possible complications of different release timeframes I'm planning to make this new series of posts applicable to both TFS and VSO. This will hopefully avoid unnecessary repetition and allow anyone working through the series to pick either VSO or TFS and be confident that they can follow along without finding I have been focussing on one of the implementations to the detriment of the other.
  • Of all the things that can cause software to fail other than actual defects, application configuration is probably the one that is most troublesome. That's my experience anyway. However there is another factor that can cause problems which is the actual configuration of the server(s) the application is installed on. The big question here is how can we be sure that the configuration of the servers we tested on is the same in production, because if there are differences it could spell disaster? Commonly known as configuration as code I'm planning to address this issue in this new series of posts using Microsoft's PowerShell DSC technology.
  • So we've got a process for managing the configuration of our server internals, but what about for actually creating the servers I hear you ask? It's an important point, since who doesn't want to be able to create infrastructure in an automated and repeatable way? I'll be addressing this requirement using the technologies provided by Azure Resource Manager, namely what I think are going to turn out to be idempotent PowerShell cmdlets and (as a different approach) JSON templates. For sure, you are unlikely to be using these technologies in an on premises situation however for me the important thing is to get hands-on experience of an infrastructure as code technology that helps me think strategically about this problem space.
  • I'm a huge advocate for IT people using cloud technologies to help them with their continuous learning activities and if you have an MSDN subscription you could have up to £95 worth of Microsoft Azure credits to use each month. Being able to create servers in Azure and take advantage of the many other services it offers opens up a whole world of possibilities that just a few years ago were out of reach for most of us. However, as well as being a useful learning tool I also feel strongly that most IT people should be learning cloud technologies as they will surely have an effect on most of our jobs at some point. Maybe not today, maybe not tomorrow but soon etc. Consequently, I use Azure both because it is a great place to build sandbox environments but also because I'm confident that learning Azure will help my future career. I hope you will feel the same way about cloud technologies, whether it's Azure or another offering.
  • Lastly, I'm planning to make each blog post shorter and to have a more specific theme. Something like the single responsibility principle for blogging. My hope is that shorter posts will make it easier for those ‘trying this at home' to follow along and will also make it easier to find where I've written about a specific piece of technology. Shorter posts will also help me as it will hopefully be an end to the nightmare blog post that takes several weeks to research, debug and explain in a coherent way.
Who is the new Series Aimed at?

Clearly I hope my blog posts will help as many people as possible. However I have purposefully chosen to work with a specific set of technologies and if this happens to be your chosen set then you are likely to get more direct mileage out of my posts than someone who uses different tools. If you do use different tools though I hope that you will still gain some benefit because many concepts are very similar. Using Chef or Puppet rather than PowerShell DSC? No problem -- go ahead and use those great tools. Your organisation has chosen Octopus Deploy as your release management tooling? My hope is that you should have little problem following along, using Octopus as a direct replacement for Microsoft's offering. As with my previous series I do assume a reasonable level of experience with the underlying technologies and for those for whom this is lacking I'll continue to publish Getting Started posts with link collections to help get up to speed with a topic.

I carry out my research activities with the benefit of an MSDN Enterprise subscription as this gives me access to all of Microsoft's tooling and also monthly Azure credits. If you don't have an MSDN subscription there are still ways you can follow along though. Anyone can sign up for a free VSO account and there is also a free Express version of TFS. Similarly there is a free Community version of Visual Studio and a free Express version of SQL Server. All this, combined with a 180-day evaluation of Windows Server which you could run using Hyper-V on a workstation with sufficient memory should allow you to get very close to the sort of setup that's possible with an MSDN account.

Looking to the Future

It might seem odd to be looking at the future at the beginning of a new blog post series however I can already see a time when this series is out of date and needs updating with a series that includes container technologies. However I'm purposefully resisting blogging about containers for the time being -- it feels just a bit too new and shiny for me at the moment and in any case there is no shortage of other people blogging in this space.

Happy learning!

Cheers -- Graham

Getting Started with PowerShell DSC

Posted by Graham Smith on March 17, 2015

Whenever I explain to people the common failure points for the deployment of an application I'll often draw a triangle. One point is for application code, another for application configuration and the other for server configuration. (Of course there are plenty of other ways for a deployment to fail but if it's because the power to your server room has failed you have a different class of problem.) Minimising the chances of application code being the culprit starts with good coding practices such as appropriate use of design patterns, test driven development or similar -- the list goes on and everyone will have their view. This continues with practising continuous integration and deploying code to a delivery pipeline using a tool such as Release Management for Visual Studio that can manage an application's configuration between environments. But how to manage server configuration? In many organisations initial server configuration is typically done by hand -- possibly using a build list. Over time tweaks are made by different technicians until eventually the server becomes a work of art: a one-off that nobody could reliably reproduce.

The answer to all this is tooling that implements configuration as code. Typically this means declaring in a code file what you want a server's configuration to look like and then leaving some other component to figure out how to achieve that -- and to correct any deviations that might occur. This is in contrast to an imperative code build script where you would prescribe what would happen but where you would have to take care of error handling and other factors that could cause issues.

In the non-Windows world tools such as Puppet and Chef are commonly used to automate the configuration of servers. And whilst they do have something to offer the Windows folks it's not a completely happy story because both tools require a Linux machine as the master server. For a while there wasn't a ‘native' solution to the configuration as code problem for the Windows platform however all that changed with PowerShell 4 and the release of PowerShell DSC (Desired State Configuration). If you don't already have a configuration as code solution and you are a Windows shop then PowerShell DSC is almost certainly the route of choice. There is now a wealth of options for learning PowerShell DSC and my pick of some of the best places to start is as follows:

Although I haven't had a chance to watch much of them yet Getting Started with PowerShell Desired State Configuration (DSC) and Advanced PowerShell Desired State Configuration (DSC) and Custom Resources are undoubtedly going to turn out to be unmissable. As I mention in my Getting Started with Windows PowerShell blog post the double act that is Jason Helmick and PowerShell inventor Jeffrey Snover is an enormously informative but at the same time hugely entertaining combination. I chuckled and chortled all the way through their two PowerShell JumpStart series of videos and I'm expecting more of the same with these latest ones. Having fun whilst learning? What could be better?

Cheers -- Graham

Continuous Delivery with TFS: Making Sense of the DSC Feature in Release Management

Posted by Graham Smith on February 8, 2015

When I first started listing the draft titles of blog posts for my series on implementing continuous delivery with TFS naturally the vNext / Agent-less / PowerShell DSC feature of Release Management that shipped with 2013.3 was on my list. And why not? Surely this was the successor to the agent-based way of doing things? Out with the old and in with the new...

Naturally I'd looked into PowerShell DSC and knew that it was touted as a make-it-so technology for configuring Windows servers: rather than using an imperative script to install and configure components one uses a declarative approach that describes what a server should look like and PowerShell DSC goes off and does one's bidding, so to speak. It wasn't immediately obvious how the new vNext features of Release Management would relate to the delivery pipeline I was building in Azure but I trusted that time would tell. Well time has now told and as far as my research is concerned I can't see that the vNext features have any part to play. Deploying a DACPAC? Running automated tests via Microsoft Test Manager? The vNext features appear to be irrelevant.

Interestingly I'm not the only one who has come to the conclusion that vNext is not a must-do replacement for agent-based deployments. Both Colin Dembovsky and Donovan Brown have recently blogged on similar lines. So what is the point of the vNext features in Release Management? Clearly if you want to ensure that your environment is configured correctly before you deploy your components then a vNext release template might be the way to go. But most organisations are probably (or should be) thinking about automating the configuration of their servers at a higher, more global level, not just when it comes to triggering an actual deployment. Certainly at the time of writing this post I think I'm right in saying that if you want to use Release Management with Visual Studio Online you have to use a vNext release template, but this just feels like Microsoft haven't implemented using agent-based release templates yet.

Although I'm planning to cover PowerShell DSC in a different blog post series as far as this series is concerned I'm not going to complicate things by covering the vNext way of implementing releases as it feels like it won't add much value and will be entering a world of unnecessary rework and pain. Disagree? Sound off in the comments...

Cheers -- Graham