Continuous Delivery with TFS / VSTS – Server Configuration and Application Deployment with Release Management

Posted by Graham Smith on May 2, 2016

At this point in my blog series on Continuous Delivery with TFS / VSTS we have finally reached the stage where we are ready to start using the new web-based release management capabilities of VSTS and TFS. The functionality has been in VSTS for a little while now but only came to TFS with Update 2 of TFS 2015 which was released at the end of March 2016.

Don't tell my wife but I'm having a torrid love affair with the new TFS / VSTS Release Management. It's flippin' brilliant! Compared to the previous WPF desktop client it's a breath of fresh air: easy to understand, quick to set up and a joy to use. Sure there are some improvements that could be made (and these will come in time) but for the moment, for a relatively new product, I'm finding the experience extremely agreeable. So let's crack on!

Setting the Scene

The previous posts in this series set the scene for this post but I'll briefly summarise here. We'll be deploying the Contoso University sample application which consists of an ASP.NET MVC website and a SQL Server database which I've converted to a SQL Server Database Project so deployment is by DACPAC. We'll be deploying to three environments (DAT, DQA and PRD) as I explain here and not only will we be deploying the application we'll first be making sure the environments are correctly configured with PowerShell DSC using an adaptation of the procedure I describe here.

My demo environment in Azure is configured as a Windows domain and includes an instance of TFS 2015 Update 2 which I'll be using for this post as it's the lowest common denominator, although I will point out any VSTS specifics where needed. We'll be deploying to newly minted Windows Server 2012 R2 VMs which have been joined to the domain, configured with WMF 5.0 and had their domain firewall turned off -- see here for details. (Note that if you are using versions of Windows Server earlier than 2012 that don't have remote management turned on you have a bit of extra work to do.) My TFS instance is hosting the build agent and as such the agent can ‘see' all the machines in the domain. I'm using Integrated Security to allow the website to talk to the database, and I use three different domain accounts (CU-DAT, CU-DQA and CU-PRD) to illustrate passing different credentials to different environments. I assume you have these set up in advance.

As far as development tools are concerned I'm using Visual Studio 2015 Update 2 with PowerShell Tools installed and Git for version control within a TFS / VSTS team project. It goes without saying that for each release I'm building the application only once and as part of the build any environment-specific configuration is replaced with tokens. These tokens are replaced with the correct values for that environment as that same tokenised build moves through the deployment pipeline.

Writing Server Configuration Code Alongside Application Code

A key concept I am promoting in this blog post series is that configuring the servers that your application will run on should not be an afterthought and neither should it be a manual click-through-GUI process. Rather, you should be configuring your servers through code and that code should be written at the same time as you write your application code. Furthermore the server configuration code should live with your application code. To start then we need to configure Contoso University for this way of working. If you are following along you can get the starting point code from here.

  1. Open the ContosoUniversity solution in Visual Studio and add a new folder called Deploy to each of the ContosoUniversity.Database and ContosoUniversity.Web projects.
  2. In ContosoUniversity.Database\Deploy create two new files: Database.ps1 and DbDscResources.ps1. (Note that SQL Server Database Projects are a bit fussy about what can be created in Visual Studio so you might need to create these files in Windows Explorer and add them in as new items.)
  3. Database.ps1 should contain the following code:
  4. DbDscResources.ps1 should contain the following code:
  5. In ContosoUniversity.Web\Deploy create two new files: Website.ps1 and WebDscResources.ps1.
  6. Website.ps1 should contain the following code:
  7. WebDscResources.ps1 should contain the following code:
  8. In ContosoUniversity.Database\Scripts move Create login and database user.sql to the Deploy folder and remove the Scripts folder.
  9. Make sure all these files have their Copy to Output Directory property set to Copy always. For the files in ContosoUniversity.Database\Deploy the Build Action property should be set to None.

The Database.ps1 and Website.ps1 scripts contain the PowerShell DSC to both configure servers for either IIS or SQL Server and then to deploy the actual component. See my Server Configuration as Code with PowerShell DSC post for more details. (At the risk of jumping ahead to the deployment part of this post, the bits to be deployed are copied to temp folders on the target nodes -- hence references in the scripts to C:\temp\$whatever$.)
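To give a flavour of the overall shape, here is a minimal sketch of Database.ps1 under the assumptions described in this post; the parameter names match the script arguments used later on, while the resource blocks are only indicated as comments rather than reproduced in full.

```powershell
# Database.ps1 (sketch) -- runs on the target node via the PowerShell on Target Machines task
param(
    [Parameter(Mandatory)][string]$domainSqlServerSetupLogin,
    [Parameter(Mandatory)][string]$domainSqlServerSetupPassword,
    [Parameter(Mandatory)][string]$sqlServerSaPassword,
    [Parameter(Mandatory)][string]$domainUserForIntegratedSecurityLogin
)

Configuration ContosoUniversityDatabase
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    # The xSQLServer, xDatabase and xReleaseManagement resources are also imported here;
    # DbDscResources.ps1 copies them on to the node before this script runs.

    Node localhost
    {
        # Resource blocks (not shown) would:
        #  - install the SQL Server engine
        #  - deploy ContosoUniversity.Database.dacpac from C:\temp\Database
        #  - replace __TOKEN__ placeholders with environment-specific values
    }
}

# Build a credential from the supplied login and password; the real resource blocks consume this
$securePassword  = ConvertTo-SecureString $domainSqlServerSetupPassword -AsPlainText -Force
$setupCredential = New-Object System.Management.Automation.PSCredential ($domainSqlServerSetupLogin, $securePassword)

# Compile the configuration and apply it locally on the node
ContosoUniversityDatabase -OutputPath C:\temp\Database\Mof
Start-DscConfiguration -Path C:\temp\Database\Mof -Wait -Verbose -Force
```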

In the case of the database component I'm using the xDatabase custom DSC resource to deploy the DACPAC. I came across a problem with this resource where it wouldn't install the DACPAC using domain credentials, despite the credentials having the correct permissions in SQL Server. I ended up having to install SQL Server using Mixed Mode authentication and installing the DACPAC using the sa login. I know, I know!

My preferred technique for deploying website files is plain xcopy. For me the requirement is to clear the old files down and replace them with the new ones. After some experimentation I ended up with code to stop IIS, remove the web folder, copy the new web folder from its temp location and then restart IIS.
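Expressed as plain PowerShell (a sketch of the logic rather than the exact Script resource in Website.ps1), the clear-down-and-copy looks like this:

```powershell
# Replace the deployed website files with the freshly copied build
$sourcePath      = 'C:\temp\Website'               # staging folder populated by the copy task
$destinationPath = 'C:\inetpub\ContosoUniversity'  # IIS physical path for the site

& iisreset /stop                                   # stop IIS so no files are locked

if (Test-Path $destinationPath) {
    Remove-Item -Path $destinationPath -Recurse -Force   # clear the old files down
}

Copy-Item -Path $sourcePath -Destination $destinationPath -Recurse -Force

& iisreset /start                                  # bring IIS back up on the new files
```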

Both the database and website have files with configuration tokens that needed replacing as part of the deployment. I'm using the xReleaseManagement custom DSC resource which takes a hash table of tokens (in the __TOKEN_NAME__ format) to replace.
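Conceptually the replacement amounts to something like the following; this is a simplified sketch of the idea rather than the resource's actual implementation, and the token names shown are hypothetical.

```powershell
# Hash table of token/value pairs for the current environment
$tokens = @{
    '__DOMAIN_USER_FOR_INTEGRATED_SECURITY__' = 'PRM\CU-DAT'
    '__SQL_SERVER_NAME__'                     = 'PRM-DAT-AIO'
}

# Replace each token in the tokenised config file with its environment-specific value
$configFile = 'C:\inetpub\ContosoUniversity\Web.config'
$content    = Get-Content -Path $configFile -Raw

foreach ($token in $tokens.Keys) {
    $content = $content -replace [regex]::Escape($token), $tokens[$token]
}

Set-Content -Path $configFile -Value $content
```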

In order to use custom resources on target nodes the custom resources need to be in place before attempting to run a configuration. I had hoped to use a push server technique for this but it was not to be since for this post at least I'm running the DSC configurations on the actual target nodes and the push server technique only works if the MOF files are created on a staging machine that has the custom resources installed. Instead I'm copying the custom resources to the target nodes just prior to running the DSC configurations and this is the purpose of the DbDscResources.ps1 and WebDscResources.ps1 files. The custom resources live on a UNC that is available to target nodes and get there by simply copying them from a machine where they have been installed (C:\Program Files\WindowsPowerShell\Modules is the location) to the UNC.
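As a sketch, DbDscResources.ps1 (and its WebDscResources.ps1 counterpart) need do little more than the following; the share path and the module list are assumptions based on my setup.

```powershell
# Copy the custom DSC resources from the UNC share to the node's module path
# so they are available before the configuration in Database.ps1 runs
$resourceShare = '\\PRM-CORE-DC\DscResources'
$modulePath    = 'C:\Program Files\WindowsPowerShell\Modules'

foreach ($module in 'xSQLServer', 'xDatabase', 'xReleaseManagement') {
    Copy-Item -Path (Join-Path $resourceShare $module) -Destination $modulePath -Recurse -Force
}
```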

Create a Release Build

With Visual Studio now configured (don't forget to commit the changes) we need to create a build to check that initial code quality checks have passed and if so to publish the database and website components ready for deployment. Create a new build definition called ContosoUniversity.Rel and follow this post to configure the basics and this post to create a task to run unit tests. Note that for the Visual Studio Build task the MSBuild Arguments setting is /p:OutDir=$(build.stagingDirectory) /p:UseWPP_CopyWebApplication=True /p:PipelineDependsOnBuild=False /p:RunCodeAnalysis=True. This gives us a _PublishedWebsites\ContosoUniversity.Web folder (that contains all the web files that need to be deployed) and also runs the transformation to tokenise Web.config. Additionally, since we are outputting to $(build.stagingDirectory) the Test Assembly setting of the Visual Studio Test task needs to be $(build.stagingDirectory)\**\*UnitTests*.dll;-:**\obj\**. At some point we'll want to version our assemblies but I'll return to that in another post.

One important step that has changed since my earlier posts is that the Restore NuGet Packages option in the Visual Studio Build task has been deprecated. The new way of doing this is to add a NuGet Installer task as the very first item and then in the Visual Studio Build task (in the Advanced section in VSTS) uncheck Restore NuGet Packages.

To publish the database and website as components -- or Artifacts (I'm using the TFS spelling) as they are known -- we use the Copy and Publish Build Artifacts tasks. The database task should be configured as follows:

  • Copy Root = $(build.stagingDirectory)
  • Contents =
    • ContosoUniversity.Database.d*
    • Deploy\Database.ps1
    • Deploy\DbDscResources.ps1
    • Deploy\Create login and database user.sql
  • Artifact Name = Database
  • Artifact Type = Server

Note that the Contents setting can take multiple entries on separate lines and we use this to be explicit about what the database artifact should contain. The website task should be configured as follows:

  • Copy Root = $(build.stagingDirectory)\_PublishedWebsites
  • Contents = **\*
  • Artifact Name = Website
  • Artifact Type = Server

Because we are specifying a published folder of website files that already has the Deploy folder present there's no need to be explicit about our requirements. With all this done the build should look similar to this:

web-portal-contosouniversity-rel-build

In order to test the successful creation of the artifacts, queue a build and then -- assuming the build was successful -- navigate to the build and click on the Artifacts link. You should see the Database and Website artifact folders and you can examine the contents using the Explore link:

web-portal-contosouniversity-rel-build-artifacts

Create a Basic Release

With the artifacts created we can now turn our attention to creating a basic release to get them copied on to a target node and then perform a deployment. Switch to the Release hub in the web portal and use the green cross icon to create a new release definition. The Deployment Templates window is presented and you should choose to start with an Empty template. There are four immediate actions to complete:

  1. Provide a Definition name -- ContosoUniversity for example.
  2. Change the name of the environment that has been added to DAT.
    web-portal-contosouniversity-release-definition-initial-tasks
  3. Click on Link to a build definition to link the release to the ContosoUniversity.Rel build definition.
    web-portal-contosouniversity-release-definition-link-to-build-definition
  4. Save the definition.

Next up we need to add two Windows Machine File Copy tasks to copy each artifact to one node called PRM-DAT-AIO. (As a reminder the DAT environment as I define it is just one server which hosts both the website and the database and where automated testing takes place.) Although it's possible to use just one task here the result of selecting artifacts differs according to the selected node in the artifact tree. At the node root, folders are created for each artifact but go one node lower and they aren't. I want a procedure that works for all environments which is as follows:

  1. Click on Add tasks to bring up the Add Tasks window. Use the Deploy link to filter the list of tasks and Add two Windows Machine File Copy tasks:
    web-portal-contosouniversity-release-definition-add-task
  2. Configure the properties of the tasks as follows:
    1. Edit the names (use the pencil icon) to read Copy Database files and Copy Website files respectively.
    2. Source = $(System.DefaultWorkingDirectory)/ContosoUniversity.Rel/Database or $(System.DefaultWorkingDirectory)/ContosoUniversity.Rel/Website accordingly (use the ellipsis to select)
    3. Machines = PRM-DAT-AIO.prm.local
    4. Admin login = Supply a domain account login that has admin privileges for PRM-DAT-AIO.prm.local
    5. Password = Password for the above domain account
    6. Destination folder = C:\temp\Database or C:\temp\Website accordingly
    7. Advanced Options > Clean Target = checked
  3. Click the ellipsis in the DAT environment and choose Deployment conditions.
    web-portal-contosouniversity-release-definition-environment-settings-deployment-conditions
  4. Change the Trigger to After release creation and click OK to accept.
  5. Save the changes and trigger a release using the green cross next to Release. You'll be prompted to select a build as part of the process:
    web-portal-contosouniversity-release-definition-environment-create-release
  6. If the release succeeds a C:\temp folder containing the artifact folders will have been created on PRM-DAT-AIO.
  7. If the release fails switch to the Logs tab to troubleshoot. Permissions and whether the firewall has been configured to allow WinRM are the likely culprits. To preserve my sanity I do everything as domain admin and I have the domain firewall turned off. The usual warnings about these not necessarily being best practices in non-test environments apply!

Whilst you are checking the C:\temp folder on the target node have a look inside the artifact folders. They should both contain a Deploy folder that contains the PowerShell scripts that will be executed remotely using the PowerShell on Target Machines task. You'll need two of these tasks, one for each artifact, configured as follows:

  1. Add two PowerShell on Target Machines tasks to follow the two Windows Machine File Copy tasks.
  2. Edit the names (use the pencil icon) to read Configure Database and Configure Website respectively.
  3. Configure the properties of the tasks as follows:
    1. Machines = PRM-DAT-AIO.prm.local
    2. Admin login = Supply a domain account that has admin privileges for PRM-DAT-AIO.prm.local
    3. Password = Password for the above domain account
    4. Protocol = HTTP
    5. Deployment > PowerShell Script = C:\temp\Database\Deploy\Database.ps1 or C:\temp\Website\Deploy\Website.ps1 accordingly
    6. Deployment > Initialization Script = C:\temp\Database\Deploy\DbDscResources.ps1 or C:\temp\Website\Deploy\WebDscResources.ps1 accordingly
  4. With reference to the parameters required by C:\temp\Database\Deploy\Database.ps1 configure Deployment > Script Arguments for the Database task as follows:
    1. $domainSqlServerSetupLogin = Supply a domain login that has privileges to install SQL Server on PRM-DAT-AIO.prm.local
    2. $domainSqlServerSetupPassword = Password for the above domain login
    3. $sqlServerSaPassword = Password you want to use for the SQL Server sa account
    4. $domainUserForIntegratedSecurityLogin = Supply a domain login to use for integrated security (PRM\CU-DAT in my case for the DAT environment)
    5. The finished result will be similar to: ‘PRM\Graham' ‘YourSecurePassword' ‘YourSecurePassword' ‘PRM\CU-DAT'
  5. With reference to the parameters required by C:\temp\Website\Deploy\Website.ps1 configure Deployment > Script Arguments for the Website task as follows:
    1. $domainUserForIntegratedSecurityLogin = Supply a domain login to use for integrated security (PRM\CU-DAT in my case for the DAT environment)
    2. $domainUserForIntegratedSecurityPassword = Password for the above domain account
    3. $sqlServerName = machine name for the SQL Server instance (PRM-DAT-AIO in my case for the DAT environment)
    4. The finished result will be similar to: ‘PRM\CU-DAT' ‘YourSecurePassword' ‘PRM-DAT-AIO'

At this point you should be able to save everything and the release should look similar to this:

web-portal-contosouniversity-release-definition-environment-create-release-all-tasks-added

Go ahead and trigger a new release. This should result in the PowerShell scripts being executed on the target node and IIS and SQL Server being installed, as well as the Contoso University application. You should be able to browse the application at http://prm-dat-aio. Result!

Variable Quality

Although we now have a working release for the DAT environment it will hopefully be obvious that there are serious shortcomings with the way we've configured the release. Passwords in plain view is one issue and repeated values is another. The latter issue is doubly concerning when we start creating further environments.

The answer to this problem is to create custom variables at both the ‘release' level and the ‘environment' level. Pretty much every text box seems to take a variable so you can really go to town here. It's also possible to create compound values based on multiple variables -- I used this to separate the location of the C:\temp folder from the rest of the script location details. It's worth having a bit of a think about your variable names in advance of using them because if you change your mind you'll need to edit every place they were used. In particular, if you edit the declaration of secret variables you will need to click the padlock to clear the value and re-enter it. This tripped me up until I added Write-Verbose statements to output the parameters in my DSC scripts and realised that passwords were not being passed through (they are asterisked so there is no security concern). (You do get the scriptArguments as output to the console but I find having each one on a separate line easier.)

Release-level variables are created in the Configuration section and if they are passwords can be secured as secrets by clicking the padlock icon. The release-level variables I created are as follows:

web-portal-contosouniversity-release-definition-release-variables

Environment-level variables are created by clicking the ellipsis in the environment and choosing Configure Variables. I created the following:

web-portal-contosouniversity-release-definition-environment-variables

The variables can then be used to reconfigure the release as per this screen shot which shows the PowerShell on Target Machines Configure Database task:

web-portal-contosouniversity-release-definition-tasks-using-variables

The other tasks are obviously configured in a similar way, and notice how some fields use more than one variable. Nothing has actually changed by replacing hard-coded values with variables so triggering another release should be successful.

Environments Matter

With a successful deployment to the DAT environment we can now turn our attention to the other stages of the deployment pipeline -- DQA and PRD. The good news here is that all the work we did for DAT can be easily cloned for DQA which can then be cloned for PRD. Here's the procedure for DQA which, don't forget, is a two-node deployment:

  1. In the Configuration section create two new release level variables:
    1. TargetNode-DQA-SQL = PRM-DQA-SQL.prm.local
    2. TargetNode-DQA-IIS = PRM-DQA-IIS.prm.local
  2. In the DAT environment click on the ellipsis and select Clone environment and name it DQA.
  3. Change the two database tasks so the Machines property is $(TargetNode-DQA-SQL).
  4. Change the two website tasks so the Machines property is $(TargetNode-DQA-IIS).
  5. In the DQA environment click on the ellipsis and select Configure variables and make the following edits:
    1. Change DomainUserForIntegratedSecurityLogin to PRM\CU-DQA
    2. Click on the padlock icon for the DomainUserForIntegratedSecurityPassword variable to clear it then re-enter the password and click the padlock icon again to make it a secret. Don't miss this!
    3. Change SqlServerName to PRM-DQA-SQL
  6. In the DQA environment click on the ellipsis and select Deployment conditions and set Trigger to No automated deployment.

With everything saved and assuming the PRM-DQA-SQL and PRM-DQA-IIS nodes are running the release can now be triggered. Assuming the deployment to DAT was successful the release will wait for DQA to be manually deployed (almost certainly what is required as manual testing could be going on here):

web-portal-contosouniversity-release-definition-manual-deploy-of-DQA

To keep things simple I didn't assign any approvals for this release (ie they were all automatic) but do bear in mind there is some rich and flexible functionality available around this. If all is well you should be able to browse Contoso University on http://prm-dqa-iis. I won't describe cloning DQA to create PRD as it's very similar to the process above. Just don't forget to re-enter cloned password values! Do note that in the Environment Variables view of the Configuration section you can view and edit (but not create) the environment-level variables for all environments:

web-portal-contosouniversity-release-definition-all-environment-variables

This is a great way to check that variables are the correct values for the different environments.

And Finally...

There's plenty more functionality in Release Management that I haven't described but that's as far as I'm going in this post. One message I do want to get across is that the procedure I describe in this post is not meant to be a statement on the definitive way of using Release Management. Rather, it's designed to show what's possible and to get you thinking about your own situation and some of the factors that you might need to consider. As just one example, if you only have one application then the Visual Studio solution for the application is probably fine for the DSC code that installs IIS and SQL Server. However if you have multiple similar applications then almost certainly you don't want all that code repeated in every solution. Moving this code to the point at which the nodes are created could be an option here -- or perhaps there is a better way!

That's it for the moment but rest assured there's lots more to be covered in this series. If you want the final code that accompanies this post I've created a release here on my GitHub site.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Server Configuration as Code with PowerShell DSC

Posted by Graham Smith on April 7, 2016

I suspect I'm on reasonably safe ground when I venture to suggest that most software engineers developing applications for Windows servers (and the organisations they work for) have yet to make the leap from just writing the application code to writing both the application code and the code that will configure the servers the application will run on. Why do I suggest this? It's partly from experience in that I've never come across anyone developing for the Windows platform who is doing this (or at least they haven't mentioned it to me) and partly because up until fairly recently Microsoft haven't provided any tooling for implementing configuration as code (as this engineering practice is sometimes referred to). There are products from other vendors of course but they tend to have their roots in the Linux world and use languages such as Ruby (or DSLs based on Ruby) which is probably going to seriously muddy the waters for organisations trying to get everyone up to speed with PowerShell.

This has all changed relatively recently with the introduction of PowerShell DSC, Microsoft's solution for implementing configuration as code on Windows (and other platforms as it happens). With PowerShell DSC (and related technologies) the configuration of servers is expressed as programming code that can be versioned in source control. When a change is required to a server the code is updated and the new configuration is then applied to the server. This process is usually idempotent, ie the configuration can be applied repeatedly and will always give the same result. It also won't generate errors if the configuration is already in the desired state. Through version control we can audit how a configuration changes over time and being code it can be applied as required to ensure server roles in different environments, or multiple instances of the same server role in the same environment, have a consistent configuration.

So ostensibly Windows server developers now have no excuse not to start implementing configuration as code. But if we've managed so far without this engineering practice why all the fuss now? What benefit is it going to bring to the table? The key benefit is that it's a cure for that age-old problem of servers that might start life from a build script, but over the months (and possibly years) different technicians make necessary tweaks here and there until one day the server becomes a unique work of art that nobody could ever hope to reproduce. Server backups become critical and everyone dreads the day that the server will need to be upgraded or replaced.

If your application is very simple you might just get away with this state of affairs -- not that it makes it right or a best practice. However if your application is constantly evolving with concomitant configuration changes and / or you are going down the microservices route then you absolutely can't afford to hand-crank the configuration of your servers. Not only is the manual approach very error prone it's also hugely time-consuming, and has no place in a world of continuous delivery where shortening lead times and increasing reliability and repeatability is the name of the game.

So if there's no longer an excuse to implement configuration as code on the Windows platform why isn't there a mad rush to adopt it? In my view, for most mid-size IT departments working with existing budgets and staffing levels and an existing landscape of hand-cranked servers it's going to be a real slog to switch the configuration of a live estate to being managed by code. Once you start thinking about the complexities of analysing existing servers (some of which might have been around for years and which might have all sorts of bespoke applications running on them) combined with devising a system of managing scores or even hundreds of servers it's clear that a task of this nature is almost certainly going to require a dedicated team. And despite the potential benefits that configuration as code promises most mid-size IT departments are likely to struggle to stand-up such a team.

So if it's going to be hard how does an organisation get started with configuration as code and PowerShell DSC? Although I don't have anywhere near all of the answers it is already clear to me that if your organisation is in the business of writing applications for Windows servers then you need to approach the problem from both ends of the server spectrum. At the far end of the spectrum is the live estate where server ‘drift' needs to be controlled using PowerShell DSC's ‘pull' mode. This is where servers periodically reach out to a central repository to pull their ‘true' configuration and make any adjustments accordingly. At the near end of the spectrum are the servers that form the continuous delivery pipeline which need to have configuration changes applied to them just before a new version of the application gets deployed to them. Happily PowerShell DSC has a ‘push' mode which will work nicely for this purpose. There is also the live deployment situation. Here, live servers will need to have configuration changes pushed to them before application deployment takes place and then will need to switch over to pull mode to keep them true.

The way I see things at the moment is that PowerShell DSC pull mode is going to be hard to implement at scale because of the lack of tooling to manage it. Whilst you could probably manage a handful of servers in pull mode using PowerShell DSC script files, any more than a handful is going to cause serious pain without some kind of management framework such as the one that is available for Chef. The good news though is that getting started with PowerShell DSC push mode for configuring servers that comprise the deployment pipeline as part of application development activities is a much more realistic prospect.

Big Picture Time

I'm not going to be able to cover everything about making PowerShell DSC push mode work in one blog post so it's probably worth a few words about the bigger picture. One key concept to establish early on is that the code that will configure the server(s) that an application will reside on has to live and change alongside the application code. At the very least the server configuration code needs to be in the same version control branch as the application code and frequently it will make sense for it to be part of the same Visual Studio solution. I won't be presenting that approach in this blog post and instead will concentrate on the mechanics of getting PowerShell DSC push mode working and writing the configuration code that enables the Contoso University sample application (which requires IIS and SQL Server) to run. In a future post I'll have the code in the same Visual Studio solution as the Contoso University sample application and will explain how to build an artefact that is then deployed by the release management tooling in TFS / VSTS prior to deploying the application.

For anyone who has come across this post by chance it is part of my ongoing series about Continuous Delivery with TFS / VSTS, and you may find it helpful to refer to some of the previous posts to understand the full context of what I'm trying to achieve. I should also mention that this post isn't intended to be a PowerShell DSC tutorial and if you are new to the technology I have a Getting Started post here with a link collection of useful learning resources. With all that out of the way let's get going!

Getting Started

Taking the Infrastructure solution from this blog post as a starting point (available as a code release at my Infrastructure repo on GitHub, final version of this post's code here) add a new PowerShell Script Project called ConfigurationScripts. To this new project add a new PowerShell Script file called ContosoUniversity.ps1 and add a hash table and empty Configuration block called WebAndDatabase as follows:
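A minimal sketch of that starting point looks like this; the node data is kept deliberately simple for now.

```powershell
# Configuration data -- the node the configuration will be pushed to
$configData = @{
    AllNodes = @(
        @{
            NodeName = 'PRM-DAT-AIO'
        }
    )
}

# Empty configuration block, to be fleshed out as the post progresses
Configuration WebAndDatabase
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $AllNodes.NodeName
    {
    }
}
```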

We're going to need an environment to deploy in to so using the techniques described in previous posts (here and here) create a PRM-DAT-AIO server that is joined to the domain. This server will need to have Windows Management Framework 5.0 installed -- a manual process as far as this particular post is concerned but something that is likely to need automating in the future.

To test a basic working configuration we'll create a folder on PRM-DAT-AIO to act as the IIS physical path to the ContosoUniversity web files. Add the following lines of code to the beginning of the configuration block:
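A File resource along these lines does the job; the path matches the IIS physical path used later in the post.

```powershell
# Inside the Node block of the WebAndDatabase configuration:
# create the folder that will become the IIS physical path for Contoso University
File ContosoUniversityWebFolder
{
    Ensure          = 'Present'
    Type            = 'Directory'
    DestinationPath = 'C:\inetpub\ContosoUniversity'
}
```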

To complete the skeleton code add the following lines of code to the end of ContosoUniversity.ps1:
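These closing lines compile the configuration to a MOF file and push it to the node; a sketch:

```powershell
# Generate the MOF from the configuration and push it to PRM-DAT-AIO
WebAndDatabase -ConfigurationData $configData -OutputPath C:\Dsc\Mof
Start-DscConfiguration -Path C:\Dsc\Mof -Wait -Verbose -Force
```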

ContosoUniversity.ps1 now contains the configuration data hash table, the WebAndDatabase configuration block with the File resource, and the closing lines that generate and apply the MOF.

Although you can create this code from any developer workstation you need to ensure that you can run it from a workstation that is joined to the same domain as PRM-DAT-AIO and has a folder called C:\Dsc\Mof. In order to keep authentication simple I'm also assuming that you are logged on to your developer workstation with domain credentials that allow you to perform DSC operations on PRM-DAT-AIO. Running this code will create a PRM-DAT-AIO.mof file in C:\Dsc\Mof which is then pushed to PRM-DAT-AIO to create the folder. Magic!

Installing Resource Modules Locally

To do anything much more sophisticated than create a folder we'll need to import resources to our local workstation from the PowerShell Gallery. We'll be working with xWebAdministration and xSQLServer and they can be installed locally as follows:
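With WMF 5.0 on the workstation this is a one-liner per module via PowerShellGet:

```powershell
# Install (or update to) the latest versions of the custom resources from the PowerShell Gallery
Install-Module -Name xWebAdministration -Force
Install-Module -Name xSQLServer -Force
```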

These same commands will also install the latest version of the resources if a previous version exists. Referencing these resources in our configuration script seems to have changed with the release of DSC 5.0 and versioning information is a requirement. Consequently, these resources are referenced in the configuration as follows:
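The references take this general form; the version numbers shown here are assumptions, so use whatever Install-Module actually gave you.

```powershell
# Inside the WebAndDatabase configuration block: version-pinned references to the custom resources
Import-DscResource -ModuleName @{ModuleName = 'xWebAdministration'; ModuleVersion = '1.10.0.0'}
Import-DscResource -ModuleName @{ModuleName = 'xSQLServer'; ModuleVersion = '1.5.0.0'}
```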

Obviously change the above code to reference the version of the module that you actually install. The resources are continually being updated with new versions and this requires a strategy to upgrade on a periodic basis.

Making Resource Modules Available Remotely

Whilst the additions in the previous section allow us to create advanced configurations on our developer workstation these configurations are not going to run against target nodes since as things stand the target nodes don't know anything about custom resources (as opposed to resources such as PSDesiredStateConfiguration which ship with the Windows Management Framework). We can fix this by telling the Local Configuration Manager (LCM) of target nodes where to get the custom resources from. The procedure (which I've adapted from Nana Lakshmanan's blog post) is as follows:

  • Choose a server in the domain to host a fileshare. I'm using my domain controller (PRM-CORE-DC) as it's always guaranteed to be available under normal conditions. Create a folder called C:\Dsc\DscResources (Dsc purposefully repeated) and share it as Read/Write for Everyone as \\PRM-CORE-DC\DscResources.
  • Custom resources need to be zipped in the format required by the DSC pull protocol. The PowerShell to do this for version 1.10 of xWebAdministration and 1.5 of xSQLServer (using a local C:\Dsc\Resources folder) is along the lines of the sketch shown after this list.

    Of course depending on how frequently you have to do this to cope with updates, and the number of resources you end up working with, you'll probably want to wrap all this up into some sort of reusable package.
  • With the packages now in the right format in the fileshare we need to tell the LCM of target nodes where to look. We do this by creating a new configuration decorated with the [DscLocalConfigurationManager()] attribute:

    The Settings block is used to set various properties of the LCM which are required in order for configurations we'll be writing to run. The ResourceRepositoryShare block obviously specifies the location of the zipped resource packages.
  • The final requirement is to add the line of code (Set-DscLocalConfigurationManager -Path C:\Dsc\Mof -Verbose) to apply the LCM settings.
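Packaging the custom resources (referenced in the first bullet above) boils down to zipping each module as ModuleName_Version.zip and generating a checksum alongside it. A sketch follows; the exact layout inside the zip can be fussy, so check the result against the DSC documentation for your WMF version.

```powershell
# Package version-specific copies of the custom resources for the fileshare,
# then generate the checksum files the DSC pull protocol expects
$localResources = 'C:\Dsc\Resources'
$resourceShare  = '\\PRM-CORE-DC\DscResources'

$modules = @(
    @{ Name = 'xWebAdministration'; Version = '1.10.0.0' }
    @{ Name = 'xSQLServer';         Version = '1.5.0.0'  }
)

foreach ($module in $modules) {
    $zipPath = Join-Path $resourceShare "$($module.Name)_$($module.Version).zip"
    Compress-Archive -Path "$localResources\$($module.Name)\*" -DestinationPath $zipPath -Force
    New-DscChecksum -Path $zipPath -Force
}
```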

The key revision to ContosoUniversity.ps1 is the addition of the LCM meta-configuration and the line that applies it.
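In outline the addition looks like this; the configuration name and the Settings values are my assumptions, so adjust to suit.

```powershell
# Meta-configuration that tells each node's LCM where to find the custom resources
[DscLocalConfigurationManager()]
Configuration LcmSettings
{
    Node $AllNodes.NodeName
    {
        Settings
        {
            RefreshMode        = 'Push'
            RebootNodeIfNeeded = $true
        }

        ResourceRepositoryShare DscResources
        {
            SourcePath = '\\PRM-CORE-DC\DscResources'
        }
    }
}

# Compile and apply the LCM settings, then compile and push the main configuration as before
LcmSettings -ConfigurationData $configData -OutputPath C:\Dsc\Mof
Set-DscLocalConfigurationManager -Path C:\Dsc\Mof -Verbose

WebAndDatabase -ConfigurationData $configData -OutputPath C:\Dsc\Mof
Start-DscConfiguration -Path C:\Dsc\Mof -Wait -Verbose -Force
```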

At this stage we now have our complete working framework in place and we can begin writing the configuration blocks that collectively will leave us with a server that is capable of running our Contoso University application.

Writing Configurations for the Web Role

Configuring for the web role requires consideration of the following factors:

  • The server features that are required to run your application. For Contoso University that's IIS, .NET Framework 4.5 Core and ASP.NET 4.5.
  • The mandatory IIS configurations for your application. For Contoso University that's a web site with a custom physical path.
  • The optional IIS configurations for your application. I like things done in a certain way so I want to see an application pool called ContosoUniversity and the Contoso University web site configured to use it.
  • Any tidying-up that you want to do to free resources and start thinking like you are configuring NanoServer. For me this means removing the default web site and default application pools.

Although you'll know if your configurations have generated errors how will you know if they've generated the desired result? The following ‘debugging' options can help:

  • I know that the home page of Contoso University will load without a connection to a database, so I copied a build of the website to C:\inetpub\ContosoUniversity on PRM-DAT-AIO so I could test the site with a browser. You can download a zip of the build from here although be aware that AV software might mistakenly regard it as malware.
  • The IIS management tools can be installed on target nodes whilst you are in configuration mode so you can see graphically what's happening. A simple WindowsFeature configuration does the trick -- see the sketch after this list.
  • If you are testing with a local version of Internet Explorer make sure you turn off Compatibility View or your site may render with odd results. From the IE toolbar choose Tools > Compatibility View Settings and uncheck Display intranet sites in Compatibility View.
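The management tools configuration mentioned in the list above is just a standard WindowsFeature resource; remove it again once you are happy with the configuration.

```powershell
# Inside the Node block: temporarily install the IIS management console for 'debugging'
WindowsFeature IISManagementConsole
{
    Ensure = 'Present'
    Name   = 'Web-Mgmt-Console'
}
```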

Whilst you are in configuration mode the following resources will be of assistance:

  • The xWebAdministration documentation on GitHub: https://github.com/PowerShell/xWebAdministration.
  • The example files that ship with xWebAdministration: C:\Program Files\WindowsPowerShell\Modules\xWebAdministration\n.n.n.n\Examples.
  • A Google search for xWebAdministration.

The configuration settings required to meet my requirements stated above are as follows:
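A sketch of that web role configuration is shown below. The feature and resource names are the standard ones from Windows and xWebAdministration, but treat the detail as indicative rather than the exact listing from the accompanying code.

```powershell
# Server features required by Contoso University
WindowsFeature IIS
{
    Ensure = 'Present'
    Name   = 'Web-Server'
}

WindowsFeature AspNet45
{
    Ensure = 'Present'
    Name   = 'Web-Asp-Net45'
}

# Remove the default web site to keep the server lean
xWebsite DefaultSite
{
    Ensure       = 'Absent'
    Name         = 'Default Web Site'
    PhysicalPath = 'C:\inetpub\wwwroot'
}

# Application pool and web site for Contoso University
xWebAppPool ContosoUniversityAppPool
{
    Ensure = 'Present'
    Name   = 'ContosoUniversity'
    State  = 'Started'
}

xWebsite ContosoUniversitySite
{
    Ensure          = 'Present'
    Name            = 'ContosoUniversity'
    State           = 'Started'
    PhysicalPath    = 'C:\inetpub\ContosoUniversity'
    ApplicationPool = 'ContosoUniversity'
    DependsOn       = '[xWebAppPool]ContosoUniversityAppPool'
}
```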

There is one more piece of the jigsaw to finish the configuration and that's amending the application pool to use a domain account that has permissions to talk to SQL Server. That's a more advanced topic so I'm dealing with it later.

Writing Configurations for the Database Role

Configuring for the SQL Server database role is slightly different from the web role since we need to install SQL Server which is a separate application. The installation files need to be made available as follows:

  • Choose a server in the domain to host a fileshare. As above I'm using my domain controller. Create a folder called C:\Dsc\DscInstallationMedia and share it as Read/Write for Everyone as \\PRM-CORE-DC\DscInstallationMedia.
  • Download a suitable SQL Server ISO image to the server hosting the fileshare -- I used en_sql_server_2014_enterprise_edition_with_service_pack_1_x64_dvd_6669618.iso from MSDN Subscriber Downloads.
  • Mount the ISO and copy the contents of its drive to a folder called SqlServer2014 created under C:\Dsc\DscInstallationMedia.

In contrast to configuring for the web role there are fewer configurations required for the database role. There is a requirement to supply a credential though and for this I'm using the Key Vault technique described in this post. This gives rise to new code within and preceding the configuration hash table as follows:
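In outline it looks like the following. The vault and secret names are placeholders of mine, and allowing plain text passwords in the MOF is a demo-environment convenience rather than a recommendation (certificates are the better option).

```powershell
# Retrieve the SQL Server setup account password from Azure Key Vault and build a credential
$secret             = Get-AzureKeyVaultSecret -VaultName 'prmkeyvault' -Name 'SqlServerSetupPassword'
$sqlSetupCredential = New-Object System.Management.Automation.PSCredential ('PRM\Graham', $secret.SecretValue)

# The node data needs to allow the credential to be compiled into the MOF
$configData = @{
    AllNodes = @(
        @{
            NodeName                    = 'PRM-DAT-AIO'
            PSDscAllowPlainTextPassword = $true
        }
    )
}
```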

For a server such as the one we are configuring where the database is on the same machine as the web server and only the database engine is required there are just two configuration blocks needed to install SQL Server. For more complicated scenarios the following resources will be of assistance:

  • The xSQLServer documentation on GitHub: https://github.com/PowerShell/xSQLServer.
  • The example files that ship with xSQLServer: C:\Program Files\WindowsPowerShell\Modules\xSQLServer\n.n.n.n\Examples.
  • A Google search for xSQLServer.

The configuration settings required for the single server scenario are as follows:
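The shape of it is roughly as follows. xSQLServer property names changed between versions, so treat this as a sketch and check the examples that ship with the version you installed; the accounts and paths are the ones used elsewhere in this post.

```powershell
# Inside the Node block: SQL Server 2014 setup needs .NET 3.5
WindowsFeature NetFramework35
{
    Ensure = 'Present'
    Name   = 'NET-Framework-Core'
}

# Install the database engine (plus, for now, the management tools) from the fileshare media
xSQLServerSetup SqlServer
{
    InstanceName        = 'MSSQLSERVER'
    SourcePath          = '\\PRM-CORE-DC\DscInstallationMedia'
    SourceFolder        = 'SqlServer2014'
    SetupCredential     = $sqlSetupCredential
    Features            = 'SQLENGINE,SSMS,ADV_SSMS'
    SQLSysAdminAccounts = 'PRM\Graham'
    DependsOn           = '[WindowsFeature]NetFramework35'
}
```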

In order to assist with ‘debugging' activities I've included the installation of the SQL Server management tools but this can be omitted when the configuration has been tested and deemed fit for purpose. Later in this post we'll manually install the remaining parts of the Contoso University application to prove that the installation worked but for the time being you can run SQL Server Management Studio to see the database engine running in all its glory!

Amending the Application Pool Identity

The Contoso University website is granted access to the database via a domain account that firstly gets configured as the Identity for the website's application pool and then gets configured as a SQL Server login associated with a user which has the appropriate permissions to the database. The SQL Server configuration is taken care of by a permissions script that we'll come to shortly, and the immediate task is concerned with amending the Identity property of the ContosoUniversity application pool so that it references a domain account.

Initially this looked like it was going to be painful since xWebAdministration doesn't currently have the ability to configure the inner workings of application pools. Whilst investigating the possibilities I had the good fortune to come across a fork of xWebAdministration on the PowerShell.org GitHub site where those guys have created a module which does what we want. I need to introduce a slight element of caution here since the fork doesn't look like it's under active development. On the other hand maybe there are no major issues that need fixing. And if there are and they aren't going to get fixed at least the code is there to be forked. Because this fork isn't in the PowerShell Gallery getting it to work locally is a manual process:

  • Download the code to C:\Dsc\Resources and unblock and extract it. Change the folder name from cWebAdministration-master to cWebAdministration and copy to C:\Program Files\WindowsPowerShell\Modules.
  • In the configuration block reference the module as Import-DscResource -ModuleName @{ModuleName="cWebAdministration";ModuleVersion="2.0.1"}.

The configuration required to make the resource available to target nodes has an extra manual step:

  • In the root of C:\DSC\Resources\cWebAdministration create a folder named 2.0.1 and copy the contents of C:\DSC\Resources\cWebAdministration to this folder.
  • The following code can now be used to package the resource and copy it to the fileshare:

I tend towards using a different domain account for the Identity property of the website application pool in each of the environments that make up the deployment pipeline. Doing so protects the pipeline from a complete failure if something happens to that domain account -- it gets locked out, for example. To support this scenario the configuration block to configure the application pool identity needs to support dynamic configuration and takes the following form:

The dynamic configuration is supported by Key Vault code to retrieve the password of the domain account used to configure the application pool (not shown) and the following additions to the configuration hash table:

The code does of course rely on the existence of the PRM\CU-DAT domain account (set so the password doesn't expire). This is the last piece of configuration, and you can view the final result on GitHub here.

The Moment of Truth

After all that configuration, is it enough to make the Contoso University application work? To find out:

  • If you haven't already, download, unblock and unzip the ContosoUniversityConfigAsCode package from here, although as mentioned previously be aware that AV software might mistakenly regard it as malware.
  • The contents of the Website folder should be copied (if not already) to C:\inetpub\ContosoUniversity on the target node.
  • Edit the SchoolContext connection string in Web.config if required -- the download has the server set to localhost and the database to ContosoUniversity.
  • On the target node run SQL Server Management Studio and install the database as follows:
    • In Object Explorer right-click the Databases node and choose Deploy Data-tier Application.
    • Navigate through the wizard, and at Select Package choose ContosoUniversity.Database.dacpac from the database folder of the ContosoUniversityConfigAsCode download.
    • Move to the next page of the wizard (Update Configuration) and change the Name to ContosoUniversity.
    • Navigate past the Summary page and the DACPAC will be deployed:
      ssms-deploy-dacpac
  • Still in SSMS, apply the permissions script as follows:
    • Open Create login and database user.sql from the Database\Scripts folder in the ContosoUniversityConfigAsCode download.
    • If the pre-configured login/user (PRM\CU-DAT) is different from the one you are using update accordingly, then execute the script.

You can now navigate to http://prm-dat-aio (or whatever your server is called) and if all is well make a mental note to pour a well-deserved beverage of your choosing.

Looking Forward

Although getting this far is certainly an important milestone it's by no means the end of the journey for the configuration as code story. Our configuration code now needs to be integrated in to the Contoso University Visual Studio solution so that it can be built as an artefact alongside the website and database artefacts. We then need to be able to deploy the configuration before deploying the application -- all automated through the new release management tooling that has just shipped with TFS 2015 Update 2 or through VSTS if you are using that. Until next time...

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Join a VM to a Domain with Azure Resource Manager Templates

Posted by Graham Smith on March 20, 2016

In the previous post in my blog post series on Continuous Delivery with TFS / VSTS we learned how to provision a Windows Server virtual machine using Azure Resource Manager templates. The next major step in this quest to automate the creation and configuration of the infrastructure to which we'll deploy our application is to configure server internals, starting with joining a VM to the domain. My initial thinking was that this would need to be some kind of PowerShell command, and whilst this is an option I was very pleased to find that there is an ARM template resource to do this. The resource in question goes by the name of JsonADDomainExtension; it's a VM extension and you can read about it (and the PowerShell commands to do the same thing) in this blog post.

I have to confess that I struggled to get the extension to work at first. I spent a whole afternoon fiddling with the settings and getting nowhere, and spent quite a bit of time reading forum posts from others who were having similar difficulties (mostly with the PowerShell commands though). I gave up in frustration, only to come back to it a few days later to try again to find it was all working! I describe the steps I took below -- please be aware that it's very much a direct continuation of this post so please do check that out first if you haven't done so already.

Adding the JsonADDomainExtension to the JSON Template

Getting started with the extension is very easy, as it's just a case of dropping the JSON into the resources part of the template. The code I initially used to make the extension work was as follows:

I added this code to the WindowsServer2012R2Datacenter.json file which has variables defined for use where the VM name is required. Note that OUPath can be an empty string, that the (domain) User requires an escaped backslash, and that Options uses the magic number 3 (just go with it or see here for the details).

Whilst this (eventually) worked fine for me the big issue was how to hide the password for the account that will join the VM to the domain. I hard coded it in to the template to get the extension working but even when refactored as a parameter the password is still in plain view -- now just in the PowerShell calling script.

Say Hello to Azure Key Vault

As luck would have it around the time I was initially getting JsonADDomainExtension to work I watched Cloud Cover Episode 200: Azure Resource Manager Tooling with Brian Moore where Brian mentioned the forthcoming ability to use Azure Key Vault to supply secret values such as passwords to ARM templates. Following a very helpful email exchange Brian pointed me towards this page which is a partial answer to the solution I wanted to get working.

At the time of writing there was no portal interface for configuring Azure Key Vault so it's over to PowerShell (no bad thing) to create a new vault:
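The gist of it with the AzureRM cmdlets of the time is shown below; the resource group name and location are my choices, and -EnabledForTemplateDeployment is what allows ARM deployments to read secrets from the vault.

```powershell
# Create a resource group to hold the vault, then the vault itself
New-AzureRmResourceGroup -Name 'PRM-KEYVAULT' -Location 'West Europe' -Force
New-AzureRmKeyVault -VaultName 'prmkeyvault' -ResourceGroupName 'PRM-KEYVAULT' `
    -Location 'West Europe' -EnabledForTemplateDeployment
```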

The code above creates a vault named prmkeyvault. Next we need to add our password as a secret:
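Along these lines (obviously substitute a real password rather than hard-coding one in a script you commit):

```powershell
# Store the domain admin password in the vault as a secret named DomainAdminPassword
$secretValue = ConvertTo-SecureString 'YourSecurePassword' -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName 'prmkeyvault' -Name 'DomainAdminPassword' -SecretValue $secretValue
```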

This creates a new secret called DomainAdminPassword. Of course, the objects that have just been created can be examined with Azure Resource Explorer:

azure-resource-explorer-key-vault

Use the Secret in the JSON Template

The Microsoft guidance for passing secrets to templates is based on the use of an ARM parameters file. This wasn't quite what I wanted as I'm using a PowerShell script to supply my parameters. The way to access secrets using PowerShell is along the following lines:
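Something like the following, assuming the template exposes a securestring parameter named domainAdminPassword:

```powershell
# Retrieve the secret and pass it to the template as a SecureString parameter
$domainAdminPassword = (Get-AzureKeyVaultSecret -VaultName 'prmkeyvault' -Name 'DomainAdminPassword').SecretValue

New-AzureRmResourceGroupDeployment -ResourceGroupName 'PRM-DAT' `
    -TemplateFile 'WindowsServer2012R2Datacenter.json' `
    -domainAdminPassword $domainAdminPassword
```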

You can see how I integrated the code above in to my PowerShell script by examining Create PRM-DAT.ps1 in the code release that accompanies this post on my Infrastructure repository on GitHub. It's not quite the full solution at the moment though because despite having a mechanism in place for automatically authenticating to Azure PowerShell the use of Azure Key Vault cmdlets in the script causes the authentication dialog to pop-up. I'm still working on how to stop that -- if you know please leave a message in the comments!

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Infrastructure as Code with Azure Resource Manager Templates

Posted by Graham Smith on February 25, 2016

So far in this blog post series on Continuous Delivery with TFS / VSTS we have gradually worked our way to the position of having a build of our application which is almost ready to be deployed to target servers (or nodes if you prefer) in order to conduct further testing before finally making its way to production. This brings us to the question of how these nodes should be provisioned and configured. In my previous series on continuous delivery deployment was to nodes that had been created and configured manually. However with the wealth of automation tools available to us we can -- and should -- improve on that.  This post explains how to achieve the first of those -- provisioning a Windows Server virtual machine using Azure Resource Manager templates. A future post will deal with the configuration side of things using PowerShell DSC.

Before going further I should point out that this post is a bit different from my other posts in the sense that it is very specific to Azure. If you are attempting to implement continuous delivery in an on premises situation chances are that the specifics of what I cover here are not directly usable. Consequently, I'm writing this post in the spirit of getting you to think about this topic with a view to investigating what's possible for your situation. Additionally, if you are not in the continuous delivery space and have stumbled across this post through serendipity I do hope you will be able to follow along with my workflow for creating templates. Once you get past the Big Picture section it's reasonably generic and you can find the code that accompanies this post at my GitHub repository here.

The Infrastructure Big Picture

In order to understand where I am going with this post it's probably helpful to understand the big picture as it relates to this blog series on continuous delivery. Our final continuous delivery pipeline is going to consist of three environments:

  • DAT -- development automated test where automated UI testing takes place. This will be an ‘all in one' VM hosting both SQL Server and IIS. Why have an all-in-one VM? It's because the purpose of this environment is to run automated tests, and if those tests fail we want a high degree of certainty that it was because of code and not any other factors such as network problems or a database timeout. To achieve that state of certainty we need to eliminate as many influencing variables as possible, and the simplest way of achieving that is to have everything running on the same VM. It breaks the rule about early environments reflecting production but if you are in an on premises situation and your VMs are on hand-me-down infrastructure and your network is busy at night (when your tests are likely running) backing up VMs and goodness knows what else then you might come to appreciate the need for an all-in-one VM for automated testing.
  • DQA -- development quality assurance where high-value manual testing takes place. This really does need to reflect production so it will consist of a database VM and a web server VM.
  • PRD -- production for the live code. It will consist of a database VM and a web server VM.

These environments map out to the following infrastructure I'll be creating in Azure:

  • PRM-DAT -- resource group to hold everything for the DAT environment
    • PRM-DAT-AIO -- all in one VM for the DAT environment
  • PRM-DQA -- resource group to hold everything for the DQA environment
    • PRM-DQA-SQL -- database VM for the DQA environment
    • PRM-DQA-IIS -- web server VM for the DQA environment
  • PRM-PRD -- resource group to hold everything for the PRD environment
    • PRM-PRD-SQL -- database VM for the PRD environment
    • PRM-PRD-IIS -- web server VM for the PRD environment

The advantage of using resource groups as containers is that an environment can be torn down very easily. This makes more sense when you realise that it's not just the VM that needs tearing down but also storage accounts, network security groups, network interfaces and public IP addresses.

Overview of the ARM Template Development Workflow

We're going to be creating our infrastructure using ARM templates which is a declarative approach, ie we declare what we want and some other system ‘makes it so'. This is in contrast to an imperative approach where we specify exactly what should happen and in what order. (We can use an imperative approach with ARM using PowerShell but we don't get any parallelisation benefits.) If you need to get up to speed with ARM templates I have a Getting Started blog post with a collection of useful links here. The problem -- for me at least -- is that although Microsoft provide example templates for creating a Windows Server VM (for instance) they are heavily parametrised and designed to work as standalone VMs, and it's not immediately obvious how they can fit in to an existing network. There's also the issue that at first glance all that JSON can look quite intimidating! Fear not though, as I have figured out what I hope is a great workflow for creating ARM templates which is both instructive and productive. It brings together a number of tools and technologies and I make the assumption that you are familiar with these. If not I've blogged about most of them before. A summary of the workflow steps with prerequisites and assumptions is as follows:

  • Create a Model VM in Azure Portal. The ARM templates that Microsoft provide tend to result in infrastructure that have different internal names compared with the same infrastructure created through the Azure Portal. I like how the portal names things and in order to help replicate that naming convention for VMs I find it useful to create a model VM in the portal whose components I can examine via the Azure Resource Explorer.
  • Create a Visual Studio Solution. Probably the easiest way to work with ARM templates is in Visual Studio. You'll need the Azure SDK installed to see the Azure Resource Group project template -- see here for more details. We'll also be using Visual Studio to deploy the templates using PowerShell and for that you'll need the PowerShell Tools for Visual Studio extension. If you are new to this I have a Getting Started blog post here. We'll be using Git in either TFS or VSTS for version control but if you are following this series we've already covered that.
  • Perform an Initial Deployment. There's nothing worse than spending hours coding only to find that what you're hoping to do doesn't work and that the problem is hard to trace. The answer of course is to deploy early and that's the purpose of this step.
  • Build the Deployment Template Resource by Resource Using Hard-coded Values. The Microsoft templates really go to town when it comes to implementing variables and parameters. That level of detail isn't required here but it's hard to see just how much is required until the template is complete. My workflow involves using hard-coded values initially so the focus can remain on getting the template working and then refactoring later.
  • Refactor the Template with Parameters, Variables and Functions. For me refactoring to remove the hard-coded values is one of the most fun and rewarding parts of the process. There's a wealth of programming functionality available in ARM templates -- see here for all the details.
  • Use the Template to Create Multiple VMs. We've proved the template can create a single VM -- what about multiple VMs? This section explores the options.

That's enough overview -- time to get stuck in!

Create a Model VM in Azure Portal

As above, the first VM we'll create using an ARM template is going to be called PRM-DAT-AIO in a resource group called PRM-DAT. In order to help build the template we'll create a model VM called PRM-DAT-AAA in a resource group called PRM-DAT via the Azure Portal. The procedure is as follows:

  • Create a resource group called PRM-DAT in your preferred location -- in my case West Europe.
  • Create a standard (Standard-LRS) storage account in the new resource group -- I named mine prmdataaastorageaccount. Don't enable diagnostics.
  • Create a Windows Server 2012 R2 Datacenter VM (size right now doesn't matter much -- I chose Standard DS1 to keep costs down) called PRM-DAT-AAA based on the PRM-DAT resource group, the prmdataaastorageaccount storage account and the prmvirtualnetwork that was created at the beginning of this blog series as the common virtual network for all VMs. Don't enable monitoring.
  • In Public IP addresses locate PRM-DAT-AAA and under configuration set the DNS name label to prm-dat-aaa.
  • In Network security groups locate PRM-DAT-AAA and add the following tag: displayName : NetworkSecurityGroup.
  • In Network interfaces locate PRM-DAT-AAAnnn (where nnn represents any number) and add the following tag: displayName : NetworkInterface.
  • In Public IP addresses locate PRM-DAT-AAA and add the following tag: displayName : PublicIPAddress.
  • In Storage accounts locate prmdataaastorageaccount and add the following tag: displayName : StorageAccount.
  • In Virtual machines locate PRM-DAT-AAA and add the following tag: displayName : VirtualMachine.

You can now explore all the different parts of this VM in the Azure Resource Explorer. For example, the public IP address should look similar to:

azure-resource-explorer-public-ip-address

Create a Visual Studio Solution

We'll be building and running our ARM template in Visual Studio. You may want to refer to previous posts (here and here) as a reminder for some of the configuration steps which are as follows:

  • In the Web Portal navigate to your team project and add a new Git repository called Infrastructure.
  • In Visual Studio clone the new repository to a folder called Infrastructure at your preferred location on disk.
  • Create a new Visual Studio Solution (not project!) called Infrastructure one level higher than the Infrastructure folder. This effectively stops Visual Studio from creating an unwanted folder.
  • Add .gitignore and .gitattributes files and perform a commit.
  • Add a new Visual Studio Project to the solution of type Azure Resource Group called DeploymentTemplates. When asked to select a template choose anything.
  • Delete the Scripts, Templates and Tools folders from the project.
  • Add a new project to the solution of type PowerShell Script Project called DeploymentScripts.
  • Delete Script.ps1 from the project.
  • In the DeploymentTemplates project add a new Azure Resource Manager Deployment Project item called WindowsServer2012R2Datacenter.json (spaces not allowed).
  • In the DeploymentScripts project add a new PowerShell Script item for the PowerShell that will create the PRM-DAT resource group with a PRM-DAT-AIO server -- I called my file Create PRM-DAT.ps1.
  • Perform a commit and sync to get everything safely under version control.

With all that configuration you should have a Visual Studio solution looking something like this:

visual-studio-infrastructure-solution

Perform an Initial Deployment

It's now time to write just enough code in Create PRM-DAT.ps1 to prove that we can initiate a deployment from PowerShell. First up is the code to authenticate to Azure PowerShell. I have the authentication code which was the output of this post wrapped in a function called Set-AzureRmAuthenticationForMsdnEnterprise which in turn is contained in a PowerShell module file called Authentication.psm1. This file in turn is deployed to C:\Users\Graham\Documents\WindowsPowerShell\Modules\Authentication which then allows me to call Set-AzureRmAuthenticationForMsdnEnterprise from anywhere on my development machine. (Although this function could clearly be made more generic with the use of some parameters I've consciously chosen not to so I can check my code in to GitHub without worrying about exposing any authentication details.) The initial contents of Create PRM-DAT.ps1 should end up looking as follows:
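In sketch form (the relative template path and the module import are assumptions based on the project layout described above) that's just:

# Authenticate to Azure -- Set-AzureRmAuthenticationForMsdnEnterprise lives in my Authentication module
Import-Module Authentication
Set-AzureRmAuthenticationForMsdnEnterprise

$resourceGroupName = "PRM-DAT"
$location = "West Europe"
$templateFile = "$PSScriptRoot\..\DeploymentTemplates\WindowsServer2012R2Datacenter.json"

# Ensure the resource group exists and then run the (currently empty) deployment template
New-AzureRmResourceGroup -Name $resourceGroupName -Location $location -Force
New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFile -Verbose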

Running this code in Visual Studio should result in a successful outcome, although admittedly not much has happened because the resource group already existed and the deployment template is empty. Nonetheless, it's progress!

Build the Deployment Template Resource by Resource Using Hard-coded Values

The first resource we'll code is a storage account. In the DeploymentTemplates project open WindowsServer2012R2Datacenter.json which as things stand just contains some boilerplate JSON for the different sections of the template that we'll be completing. What you should notice is that the JSON Outline window is now available to assist with editing the template. Right-click resources and choose Add New Resource:

visual-studio-json-outline-add-new-resource

In the Add Resource window find Storage Account and add it with the name (actually the display name) of  StorageAccount:

visual-studio-json-outline-add-new-resource-storage-account

This results in boilerplate JSON being added to the template along with a variable for the actual storage account name and a parameter for the account type. We'll use a variable later but for now delete the variable and parameter that were added -- you can either use the JSON Outline window or manually edit the template.

We now need to edit the properties of the resource with actual values that can create (or update) the resource. In order to understand what to add we can use the Azure Resource Explorer to navigate down to the storageAccounts node of the MSDN subscription where we created prmdataaastorageaccount:

azure-resource-explorer-storage-accounts-prmdataaastorageaccount

In the right-hand pane of the explorer we can see the JSON that represents this concrete resource, and although the property names don't always match exactly it should be fairly easy to see how the ‘live' values can be used as a guide to populating the ones in the deployment template:

azure-resource-explorer-storage-accounts-prmdataaastorageaccount-json

So, back in the deployment template, the following unassigned properties can be given the following values:

  • "name": "prmdataiostorageaccount"
  • "location": "West Europe"
  • "accountType": "Standard_LRS"

The resulting JSON should be similar to:
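Something like this, give or take the apiVersion that the tooling inserted:

{
  "name": "prmdataiostorageaccount",
  "type": "Microsoft.Storage/storageAccounts",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "StorageAccount"
  },
  "properties": {
    "accountType": "Standard_LRS"
  }
}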

Save the template and switch to Create PRM-DAT.ps1 to run the deployment script which should create the storage account. You can verify this either via the portal or the explorer.

The next resource we'll create is a NetworkSecurityGroup, which has an extra twist in that at the time of writing adding it to the template isn't supported by the JSON Outline window. There are a couple of ways to go here -- either type the JSON by hand or use the Create function in the Azure Resource Explorer to generate some boilerplate JSON. This latter technique actually generates more JSON than is needed, so in this case it's something of a hindrance. I just typed the JSON directly and made use of the IntelliSense options in conjunction with the PRM-DAT-AAA network security group values via the Azure Resource Explorer. The JSON that needs adding is as follows:
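Something like the following is the general shape -- the single RDP rule shown here is just the portal default, so adjust the rules to suit:

{
  "name": "PRM-DAT-AIO",
  "type": "Microsoft.Network/networkSecurityGroups",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "NetworkSecurityGroup"
  },
  "properties": {
    "securityRules": [
      {
        "name": "default-allow-rdp",
        "properties": {
          "priority": 1000,
          "protocol": "Tcp",
          "access": "Allow",
          "direction": "Inbound",
          "sourceAddressPrefix": "*",
          "sourcePortRange": "*",
          "destinationAddressPrefix": "*",
          "destinationPortRange": "3389"
        }
      }
    ]
  }
}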

Note that you'll need to separate this resource from the storage account resource with a comma to ensure the syntax is valid. Save the template, run the deployment and refresh the Azure Resource Explorer. You can now compare the new PRM-DAT-AIO and PRM-DAT-AAA network security groups in the explorer to validate the JSON that creates PRM-DAT-AIO. Note that by zooming out in your browser you can toggle between the two resources and see that it is pretty much just the etag values that are different.

The next resource to add is a public IP address. This can be added from the JSON Outline window using PublicIPAddress as the name but it also wants to add a reference to itself to a network interface which in turn wants to reference a virtual network. We are going to use an existing virtual network but we do need a network interface, so give the new network interface a name of NetworkInterface and the new virtual network can be any temporary name. As soon as the new JSON components have been added delete the virtual network and all of the variables and parameters that were added. All this makes sense when you do it -- trust me!

Once edited with the appropriate values the JSON for the public IP address should be as follows:
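Broadly speaking, something like this, with the DNS name label following the same convention as the model VM:

{
  "name": "PRM-DAT-AIO",
  "type": "Microsoft.Network/publicIPAddresses",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "PublicIPAddress"
  },
  "properties": {
    "publicIPAllocationMethod": "Dynamic",
    "dnsSettings": {
      "domainNameLabel": "prm-dat-aio"
    }
  }
}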

The edited JSON for the network interface should look similar to the code that follows, but note I've replaced my MSDN subscription GUID with an ellipsis.
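In sketch form it has this shape -- the network interface name, the resource group containing the virtual network and the subnet name are placeholders to swap for your own values, and the subscription GUID stays as an ellipsis:

{
  "name": "prm-dat-aio-nic",
  "type": "Microsoft.Network/networkInterfaces",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "NetworkInterface"
  },
  "dependsOn": [
    "Microsoft.Network/publicIPAddresses/PRM-DAT-AIO",
    "Microsoft.Network/networkSecurityGroups/PRM-DAT-AIO"
  ],
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Dynamic",
          "subnet": {
            "id": "/subscriptions/.../resourceGroups/YourNetworkResourceGroup/providers/Microsoft.Network/virtualNetworks/prmvirtualnetwork/subnets/YourSubnet"
          },
          "publicIPAddress": {
            "id": "/subscriptions/.../resourceGroups/PRM-DAT/providers/Microsoft.Network/publicIPAddresses/PRM-DAT-AIO"
          }
        }
      }
    ],
    "networkSecurityGroup": {
      "id": "/subscriptions/.../resourceGroups/PRM-DAT/providers/Microsoft.Network/networkSecurityGroups/PRM-DAT-AIO"
    }
  }
}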

It's worth remembering at this stage that we're hard-coding references to other resources. We'll fix that up later on, but for the moment note that the network interface needs to know what virtual network subnet it's on (created in an earlier post), and which public IP address and network security group it's using. Also note the dependsOn section which ensures that these resources exist before the network interface is created. At this point you should be able to run the deployment and confirm that the new resources get created.

Finally we can add a Windows virtual machine resource. This is supported from the JSON Outline window, however this resource wants to reference a storage account and virtual network. The storage account exists and that should be selected, but once again we'll need to use a temporary name for the virtual network and delete it and the variables and parameters. Name the virtual machine resource VirtualMachine. Edit the JSON with appropriate hard-coded values which should end up looking as follows:
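Something along these lines -- the VM size mirrors the model VM, the network interface name matches the placeholder used in the previous step, and the admin credentials are obvious placeholders that get refactored to a parameter later:

{
  "name": "PRM-DAT-AIO",
  "type": "Microsoft.Compute/virtualMachines",
  "location": "West Europe",
  "apiVersion": "2015-06-15",
  "tags": {
    "displayName": "VirtualMachine"
  },
  "dependsOn": [
    "Microsoft.Storage/storageAccounts/prmdataiostorageaccount",
    "Microsoft.Network/networkInterfaces/prm-dat-aio-nic"
  ],
  "properties": {
    "hardwareProfile": {
      "vmSize": "Standard_DS1"
    },
    "osProfile": {
      "computerName": "PRM-DAT-AIO",
      "adminUsername": "prm-dat-aio-admin",
      "adminPassword": "ReplaceWithARealPassword1"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "2012-R2-Datacenter",
        "version": "latest"
      },
      "osDisk": {
        "name": "PRM-DAT-AIO-osdisk",
        "vhd": {
          "uri": "https://prmdataiostorageaccount.blob.core.windows.net/vhds/PRM-DAT-AIO-osdisk.vhd"
        },
        "caching": "ReadWrite",
        "createOption": "FromImage"
      }
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "/subscriptions/.../resourceGroups/PRM-DAT/providers/Microsoft.Network/networkInterfaces/prm-dat-aio-nic"
        }
      ]
    }
  }
}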

Running the deployment now should result in a complete working VM which you can remote in to.

The final step before going any further is to tear down the PRM-DAT resource group and re-run the deployment to check that a fully-working PRM-DAT-AIO VM is created from scratch. I added a Destroy PRM-DAT.ps1 file to my DeploymentScripts project with the following code:
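It only needs a few lines -- something like:

# Tear down the whole resource group so the template can recreate everything from scratch
Import-Module Authentication
Set-AzureRmAuthenticationForMsdnEnterprise

Remove-AzureRmResourceGroup -Name "PRM-DAT" -Force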

Refactor the Template with Parameters, Variables and Functions

It's now time to make the template reusable by refactoring all the hard-coded values. Each situation is likely to vary but in this case my specific requirements are:

  • The template will always create a Windows Server 2012 R2 Datacenter VM, but obviously the name of the VM needs to be specified.
  • I want to restrict my VMs to small sizes to keep costs down.
  • I'm happy for the VM username to always be the same so this can be hard-coded in the template, whilst I want to pass the password in as a parameter.
  • I'm adding my VMs to an existing virtual network in a different resource group and I'm making a conscious decision to hard-code these details in.
  • I want the names of all the different resources to be generated using the VM name as the base.

These requirements gave rise to the following parameters, variables and a resource function, sketched in JSON after the list:

  • nodeName parameter -- this is used via variable conversions throughout the template to provide consistent naming of objects. My node names tend to be of the format used in this post and that's the only format I've tested. Beware if your node names are different as there are naming rules in force.
  • nodeNameToUpper variable -- used where I want to ensure upper case for my own naming convention preferences.
  • nodeNameToLower variable -- used where lower case is a requirement of ARM eg where nodeName forms part of a DNS entry.
  • vmSize parameter -- restricts the template to creating VMs that are not going to burn Azure credits too quickly and which use standard storage.
  • storageAccountName variable -- creates a name for the storage account that is based on a lower case nodeName.
  • networkInterfaceName variable -- creates a name for the network interface based on a lower case nodeName with a number suffix.
  • virtualNetworkSubnetName variable -- used to create the virtual network subnet which exists in a different resource group and requires a bit of construction work.
  • vmAdminUsername variable -- creates a username for the VM based on the nodeName. You'll probably want to change this.
  • vmAdminPassword parameter -- the password for the VM passed-in as a secure string.
  • resourceGroup().location resource function -- neat way to avoid hard-coding the location in to the template.
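By way of illustration, the parameters and variables sections might end up looking broadly like this (the allowed VM sizes and the naming suffixes are my own assumptions rather than a definitive implementation, and I've left out the subnet construction):

"parameters": {
  "nodeName": {
    "type": "string"
  },
  "vmSize": {
    "type": "string",
    "defaultValue": "Standard_A1",
    "allowedValues": [ "Standard_A1", "Standard_A2" ]
  },
  "vmAdminPassword": {
    "type": "securestring"
  }
},
"variables": {
  "nodeNameToUpper": "[toUpper(parameters('nodeName'))]",
  "nodeNameToLower": "[toLower(parameters('nodeName'))]",
  "storageAccountName": "[concat(replace(variables('nodeNameToLower'), '-', ''), 'storageaccount')]",
  "networkInterfaceName": "[concat(variables('nodeNameToLower'), '-nic-1')]",
  "vmAdminUsername": "[concat(variables('nodeNameToLower'), '-admin')]"
}

Each resource's location property then simply becomes "[resourceGroup().location]".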

Of course, these refactorings shouldn't affect the functioning of the template, and tearing down the PRM-DAT resource group and recreating it should result in the same resources being created.

What about Environments where Multiple VMs are Required?

The work so far has been aimed at creating just one VM, but what if two or more VMs are needed? It's a very good question and there are at least two answers. The first involves using the template as-is and calling New-AzureRmResourceGroupDeployment in a PowerShell Foreach loop. I've illustrated this technique in Create PRM-DQA.ps1 in the DeploymentScripts project. Whilst this works very nicely the VMs are created in series rather than in parallel and, well, who wants to wait? My first thought at creating VMs in parallel was to extend the Foreach loop idea with the -parallel switch in a PowerShell workflow. The code which I was hoping would work looks something like this:
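In sketch form (node names, template path and parameter plumbing are illustrative) it was something like:

workflow New-EnvironmentVms
{
    param(
        [string[]]$nodeNames,
        [string]$resourceGroupName,
        [string]$templateFile,
        [string]$vmAdminPassword
    )

    # The hope was that each deployment would run in its own parallel branch
    foreach -parallel ($nodeName in $nodeNames)
    {
        New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName `
            -TemplateFile $templateFile `
            -TemplateParameterObject @{
                nodeName = $nodeName
                vmAdminPassword = (ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force)
            }
    }
}

New-EnvironmentVms -nodeNames "PRM-DQA-WEB", "PRM-DQA-SQL" -resourceGroupName "PRM-DQA" `
    -templateFile "WindowsServer2012R2Datacenter.json" -vmAdminPassword "NotARealPassword1"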

Unfortunately it seems like this idea is a dud -- see here for the details. Instead the technique appears to be to use the copy, copyindex and length features of ARM templates as documented here. This necessitates a minor re-write of the template to pass in and use an array of node names, however there are complications where I've used variables to construct resource names. At the time of publishing this post I'm working through these details -- keep an eye on my GitHub repository for progress.

Wrap-Up

Before actually wrapping-up I'll make a quick mention of the template's outputs node. A handy use for this is debugging, for example where you are trying to construct a complicated variable and want to check its value. I've left an example in the template to illustrate.
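As a flavour, an output that echoes back a constructed variable is as simple as this (the names are just examples):

"outputs": {
  "debugStorageAccountName": {
    "type": "string",
    "value": "[variables('storageAccountName')]"
  }
}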

I'll finish with a question I've been pondering as I've been writing this post: just because we can create and configure VMs at the push of a button, does that mean we should create and configure new VMs every time we deploy our application? My thinking at the moment is probably not, because of the time it will add, but as always it depends. If you want a clean start every time you deploy then you certainly have that option, but my mind is already thinking ahead to the additional amount of time it's going to take to actually configure these VMs with IIS and SQL Server. Never say never though, as who knows what's in store for the future? As Azure (presumably) gets faster and VMs become more lightweight with the arrival of Nano Server perhaps creating and configuring VMs from scratch as part of the deployment pipeline will be so fast that there would be no reason not to. Or maybe we'll all be using containers by then...

Cheers -- Graham

Version Control PowerShell Scripts with Visual Studio and Visual Studio Team Services

Posted by Graham Smith on January 6, 2016 | 3 Comments

It's a new year and whilst I'm not a big fan of New Year's resolutions I do try and use this time of year to have a bit of a tidy-up of my working environments and adopt some better ways of working. Despite being a developer who's used version control for years to manage application source code, one thing I've been guilty of for some time now is not version controlling my PowerShell scripts. Shock, horror I know -- but I'm pretty sure I'm not alone. In this post I'll be sharing how I solved this, but first let's take a quick look at the problem.

The Problem

Unless you are a heavyweight PowerShell user and have adopted a specialist editing tool, chances are that you are using the PowerShell ISE (Integrated Scripting Environment) that comes with Windows for editing and running your scripts. Whilst it's a reasonably capable editor it doesn't have integration points to any version control technologies. Consequently, if you want version control and you want to continue using the ISE you'll need to manage version control from outside the ISE -- which in my book isn't the seamless experience I'm used to. No matter, if you can live without a seamless experience the next question in this scenario is what version control technologies can be used? Probably quite a few, but since Git is the hot topic of the day how about GitHub -- the hosted version of Git that's free for public repositories? Ideal since it's hosted for you and there's the rather nice GitHub Desktop to make things slightly more seamless. Hang on though -- if you are like me you probably have all sorts of stuff in your PowerShell scripts that you don't want being publicly available on GitHub. Not passwords or anything like that, just inner workings that I'd rather keep private. Okay, so not GitHub. How about running your own Git server? Nah...

A Solution

If you are a Visual Studio developer then tools you are already likely using offer one solution to this problem. And if you aren't a Visual Studio developer then the same tools can still be used -- very possibly for free. As you've probably already guessed from the blog title, the tools I'm suggesting are Visual Studio (2015, for the script editing experience) and Visual Studio Team Services (VSTS, for version control). Whoa -- Visual Studio supports PowerShell as a language? Since when did that happen? Since Adam Driscoll created the PowerShell Tools for Visual Studio extension is since when.

The aim of this post is to explain how to use Visual Studio and VSTS to version control PowerShell scripts rather than understand how to start using those tools, so if you need a primer then good starting points are here for Visual Studio and here for VSTS. The great thing is that both these tools are free for small teams. If you want to learn about PowerShell Tools for Visual Studio I have a Getting Started blog post with a collection of useful links here. In my implementation below I'm using Git as the version control technology, so please amend accordingly if you are using TFVC.

Implementing the Solution

Now we know that our PowerShell scripts are going to be version controlled by VSTS the next thing to decide is where in VSTS you want them to reside. VSTS is based around team projects, and the key decision is whether you want your scripts located together in one team project or whether you want scripts to live in different team projects -- perhaps because they naturally belong there. It's horses for courses so I'll show both ways.

If you want your scripts to live in associated team projects then you'll want to create a dedicated Git repository to hold the Visual Studio solution. Navigate to the team project in VSTS and then to the Code tab. Click on the down arrow next to the currently selected repository and in the popup that appears click on New repository:

vsts-create-new-git-repo

A Create a new repository dialogue will appear -- I created a new Git repository called PowerShellScripts. You can ignore the Add some code! call to action as we'll address this from Visual Studio.

Alternatively, if you want to go down the route of having all your scripts in one team project then you can simply create a new team project based on Git -- called PowerShellScripts for example. The newly created project will contain a repository of the same name putting you in the same position as above.

The next step is to switch to Visual Studio and ensure you have the PowerShell Tools for Visual Studio 2015 extension installed. It's possible you do since it can be installed as part of the Visual Studio installation routine, although I can't remember whether it's selected by default. To check if it's installed navigate to Tools > Extensions and Updates > Installed > All and scroll down:

visual-studio-extensions-and-updates-powershell-tools

If you don't see the extension you can install it from Online > Visual Studio Gallery.

With that done it's time to connect to the team project. Still within Visual Studio, from Team Explorer choose the green Plug icon on the menu bar and then Manage Connections, and then Connect to Team Project:

visual-studio-manage-connections

This brings up the Connect to Team Foundation Server dialog which (via the Servers button) allows you to register your VSTS subscription as a ‘server' (the format is https://yoursubscriptionname.visualstudio.com). Once connected you will be able to select your Team Project.

Next up is cloning the repository that will hold the Visual Studio solution. If you are using a dedicated team project with just one Git repository you can just click the Home icon on the Team Explorer menu bar to get the cloning link on the Home panel:

visual-studio-clone-this-repository

If you have created an additional repository in an existing team project you will need to expand the list of repositories and double-click the one you previously created:

visual-studio-select-repository

This will take you directly to the cloning link on the Home panel -- no need to click the Home icon. Whichever way you get there, clicking the link opens up the settings to clone the repository to your local machine. If you are happy with the settings click Clone and you're done.

Solutions, Projects and Files

At the moment we are connected to a blank local repository, and the almost final push is to get our PowerShell scripts added. These will be contained in Visual Studio Projects that in turn are contained in a Visual Studio Solution. I'm a bit fussy about how I organise my projects and solutions -- I'll show you my way but feel free to do whatever makes you happy.

At the bottom of the Home tab click the New link, which brings up the New Project dialog. Navigate to Installed > Templates > Other Project Types > Visual Studio Solutions. I want to create a Blank Solution that has the same name as the repository, but I don't want a folder of the same name to be created, which Visual Studio gives me no choice about. A sneaky trick is to provide the Name but delete the folder (of the same name) from the Location text box:

visual-studio-create-blank-solution

Take that Visual Studio! PowerShellScripts.sln now appears in the Solutions list of the Home tab and I can double-click it to open it, although you will need to manually switch to the Solution Explorer window to see the opened solution:

visual-studio-solution-explorer

The solution has no projects so right-click it and choose Add > New Project from the popup menu. This is the same dialog as above and you need to navigate to Installed > Templates > Other Languages > PowerShell and select PowerShell Script Project. At this point it's worth having a think about how you want to organise things. You could have all your scripts in one project, but since a solution can contain many projects you'll probably want to group related scripts in to their own project. I have a few scripts that deal with authorisation to Azure so I gave my first project the name Authorisation.Azure. Additional projects I might need are things like DSC.Azure and ARM.Azure. It's up to you and it can all be changed later of course.

The new project is created with a blank Script.ps1 file -- I usually delete this. There are several ways to get your scripts in -- probably the easiest is to move your existing ps1 scripts in to the project's folder in Windows Explorer, make sure they have the file names you want and then back in Visual Studio right-click the project and choose Add > Existing Item. You should see your script files and be able to select them all for inclusion in the project.

Don't Forget about Version Control!

We're now at the point where we can start to version control our PowerShell scripts. This is a whole topic in itself however you can get much of what you need to know from my Git with Visual Studio 2015 and TFS 2015 blog post and if you want to know more about Git I have a Getting Started post here. For now though the next steps are as follows:

  • In Team Explorer click on the home button then click Changes. Everything we added should be listed under Included Changes, plus a couple of Git helper files.
  • Add a commit comment and then from the Commit dropdown choose Commit and Sync:
    visual-studio-team-explorer-changes
  • This has the effect of committing your changes to the local repository and then syncing them with VSTS. You can confirm that from VSTS by navigating to the Code tab and selecting the repository. You should see the newly added files!

Broadly speaking the previous steps are the ones you'll use to check in any new changes that you make, either newly added files or amendments to scripts. Of course the beauty of Git is that if for whatever reason you don't have access to VSTS you can continue to work locally, committing your changes just to the local repository as frequently as makes sense. When you next have access to VSTS you can sync all the changes as a batch.

Finally, don't lose sight of the fact that as well as providing version control capabilities Visual Studio allows you to run and debug your scripts courtesy of the PowerShell Tools for Visual Studio 2015 extension. Do be sure to check out my blog post that contains links to help you get working with this great tool.

Cheers -- Graham

Getting Started with PowerShell Tools for Visual Studio

Posted by Graham Smith on January 5, 2016 | No Comments

If you are still using the PowerShell ISE to edit your PowerShell scripts then there may be a better way, particularly if you are already a Visual Studio user. Adam Driscoll's PowerShell Tools for Visual Studio extension has been around for some time now and is even better in Visual Studio 2015. Here is my pick of the best links to help you get started with this great tool:

Don't forget, if you don't already have access to Visual Studio you can download the community edition from here.

Cheers -- Graham

Continuous Delivery with VSO: Executing Automated Web Tests with Microsoft Test Manager

Posted by Graham Smith on April 9, 2015 | 4 Comments

In this fourth post in my series on continuous delivery with VSO we take a look at executing automated web tests with Microsoft Test Manager. There are quite a few moving parts involved in getting all this working so it's worth me explaining the overall aim before diving in with the specifics.

Overview

The tests we want to run are automated web tests written using the Selenium framework. I first wrote these tests for my Continuous Delivery with TFS blog post series and you can read about how to create the tests here and how to run them using MTM and TFS here. The goal in this post is to run these tests using MTM and VSO, triggered as part of the DAT stage of the pipeline from RM. The tests are run from a client workstation that is configured with MTM (a requirement at the time of writing) and the Microsoft Test Agent. I've used Selenium's Firefox driver in the test code so Firefox is also required on the client machine.

In terms of what actually happens, firstly RM copies the complete build over to the client workstation and then executes a PowerShell script that runs TCM.exe, which is a command-line utility that lets you run tests that are part of a test plan. Precisely what happens next is under-the-bonnet stuff, but it's along these lines: the test controller is informed that there is work to be done, and it in turn informs the test agent on the client machine that it needs to run tests. The test agent knows from the test plan which tests to run and in which DLL they live, and has access to the DLLs in the local copy of the build folder. Each test first starts Firefox and then connects to the web server running the deployed Contoso University and performs the automation specified in the test.

In many ways the process of getting all this to work with VSO rather than TFS is very similar and because of that I don't go in to every detail in this post and instead refer back to my TFS blog post.

Configure a Test Controller

VSO doesn't offer a test controller facility so you'll need to configure this yourself. If you have a test controller already in use then it's simplicity itself to repurpose it to point to your VSO account using the Browse button. If you are starting from scratch see here for the details but obviously ensure you connect to VSO rather than TFS. One other difference is that in order to get past some permissions problems I found it necessary to specify credentials for the lab service account -- I used the same as the service logon account.

Although I started off by repurposing an existing controller, because of permissions problems I ended up creating a dedicated build and test server as I wanted to start with a clean sheet. One thing I found was that the Visual Studio Test Controller service wouldn't automatically start after booting the OS from the Stopped (deallocated) state. The application error log was clearly reporting that the test controller wasn't able to connect to VSO. Manually starting the service was fine so presumably there was some sort of timing issue with other OS components not being ready.

Configure Microsoft Test Manager

If MTM isn't already installed on your development workstation then that's the first step. The second step is to connect MTM to your VSO account. I already had MTM installed and when I went to connect it to VSO the website was already listed. If that's not the case you can use the Add server link from the Connect to Your Team Project dialog. Navigating down to your Team Project (ContosoUniversity) enables the Connect now link which then takes you to a screen that allows you to choose between Testing Center and Lab Center. Choose the latter and then configure Lab Center as per the instructions here.

Continue following these instructions to configure Testing Center with a new test plan and test cases. Note that you need to have the Contoso University solution open in order to associate the actual tests with the test cases. You'll also need to ensure that when deployed the tests navigate to the correct URL. In the Contoso University demo application this is hard-coded and you need to make the change in Driver.cs located in the ContosoUniversity.Web.SeFramework project.

Configure a Web Client Test Machine

The client test machine needs to be created in the cloud service that was created for DAT and joined to the domain if you are using one. The required configuration is very similar to that required for TFS as described here, with the exception that the Release Management Deployment Agent isn't required and nor is the RMDEPLOYER account. Getting permissions correctly configured on this machine proved critical and I eventually realised that the Windows account that the tests will run under needs to be configured so that MTM can successfully connect to VSO with the appropriate credentials. To be clear, these are not the test account credentials themselves but rather the normal credentials you use to connect to VSO. To configure all this, once the test account has been added to the Local Administrators group and MTM has been installed and the licence key applied, you will need to log on to Windows as the test account and start MTM. Connect to VSO and supply your VSO credentials in the same way as you did for your development workstation and verify that you can navigate down to the Contoso University team project and open the test plan that was created in the previous section.

Initially I also battled with getting the test agent to register correctly with the test controller. I eventually uninstalled the test agent (which I had installed manually) and let the test controller perform the install followed by the configuration. Whether that was the real solution to the problem I don't know but it got things working for me.

Executing TCM.exe with PowerShell

As mentioned above the code that starts the tests is a PowerShell script that executes TCM.exe. As a starting point I used the script that Microsoft developed for agent-based release templates but had to modify it to make it work with RM-VSO. In particular changes were made to accommodate the way variables are passed in to the script (some implicit such as $TfsUrl or $TeamProject and some explicit such as $PlanId or $SuiteId) and to remove the optional build definition and build number parameters which are not available to the vNext pipeline and caused errors when specified on the TCM.exe command line. The modified script (TcmExecvNext.ps1) and the original Microsoft script for comparison (TcmExec.ps1) are available in a zip here and TcmExecvNext.ps1 should be copied to the Deploy folder in your source control root. One point to note is that for agent-based pipelines the TFS Collection URL is passed as $TfsUrlWithCollection, however in vNext pipelines it is passed in as $TfsUrl.

Configure Release Management

Because we are using RM-VSO this part of the configuration is completely different from the instructions for RM-TFS. However before starting any new configuration you'll need to make a change to the component we created in the previous post. This is because TCM.exe doesn't seem to like accepting the name of a build folder if it has a space in it. Some more fiddling with PowerShell might have found a solution but I eventually changed the component's name from Drop Folder to DropFolder. Note that you'll need to visit the existing action and reselect the newly named component. Another issue which cropped up is that TCM.exe choked when the build directory parameter was supplied with a local file path. The answer was to create a share at C:\Windows\DtlDownloads\DropFolder and configure it with appropriate permissions.

The new configuration procedure for RM-VSO is as follows:

  1. From Configure Paths > Environments link the web client test machine to the DAT environment.
  2. From Configure Apps > vNext Release Templates open Contoso University\DAT>DQA.
  3. From the Toolbox drag a Deploy Using PS/DSC action to the deployment sequence to follow Deploy Web and Database and rename the action Run Automated Web Tests.
  4. Open up the properties of Run Automated Web Tests and set the Configuration Variables as follows:
    1. ServerName = choose the name of the web client test machine from the dropdown.
    2. UserName = this is the test domain account (ALM\TFSTEST in my case) that was configured for the web client test machine.
    3. Password = password for the UserName
    4. ComponentName = choose DropFolder from the dropdown.
    5. PSScriptPath = Deploy\TcmExecvNext.ps1
    6. SkipCaCheck = true
  5. Still in the properties of Run Automated Web Tests and set the Custom configuration as follows:
    1. PlanId = 8 (or whatever your Plan ID is as it is likely to be different)
    2. SuiteId = 10 (or whatever your Suite ID is as it is likely to be different)
    3. ConfigId = 1 (or whatever your Configuration ID is as it is likely to be different)
    4. BuildDirectory = \\almclientwin81b\DtlDownloads\DropFolder (your machine name may be different)
    5. TestEnvironment = ALMCLIENTWIN81B (yours may be different)
    6. Title = Automated Web Tests

Bearing in mind that the Deploy Using PS/DSC action doesn't allow itself to be resized to show all configuration values the result should look something like this:

release-management-run-automated-tests

Start a Build

From Visual Studio manually queue a new build from your build definition. If everything is in place the build should succeed and you can open Microsoft Test Manager to check the results. Navigate to Testing Center > Test > Analyze Test Runs. You should see your test run listed and double-clicking it will hopefully show the happy sight of passing tests:

microsoft-test-manager-tests-passed-vso

Testing Times

As I noted in the TFS version of this post there are a lot of moving parts to get configured and working in order to be able to trigger tests to run from RM. Making all this work with VSO took many hours working through all the details and battling with permissions problems and myriad other things that didn't work in the way I was expecting them to. Hopefully I've captured all the details you need to try this in your own environment. If you do encounter difficulties please post in the comments and I'll do what I can to help.

Cheers -- Graham

Continuous Delivery with VSO: Application Deployment with Release Management

Posted by Graham Smith on March 30, 2015 | 5 Comments

In the previous post in my blog series on implementing continuous delivery with VSO we got as far as configuring Release Management with a release path. In this post we cover the application deployment stage where we'll create the items to actually deploy the Contoso University application. In order to achieve this we'll need to create a component which will orchestrate copying the build to a temporary location on target nodes and then we'll need to create PowerShell scripts to actually install the web files to their proper place on disk and run the DACPAC to deploy any database changes. Note that although RM supports PowerShell DSC I'm not using it here and instead I'm using plain PowerShell. Why is that? It's because for what we're doing here -- just deploying components -- it feels like an unnecessary complication. Just because you can doesn't mean you should...

Sort out Build

The first thing you are going to want to sort out is build. VSO comes with 60 minutes of bundled build which disappears in no time. You can pay for more by linking your VSO account to an Azure subscription that has billing activated or the alternative is to use your own build server. This second option turns out to be ridiculously easy and Anthony Borton has a great post on how to do this starting from scratch here. However if you already have a build server configured it's a moment's work to reconfigure it for VSO. From Team Foundation Server Administration Console choose the Build Configuration node and select the Properties of the build controller. Stop the service and then use the familiar dialogs to connect to your VSO URL. Configure a new controller and agent and that's it!

Deploying PowerShell Scripts

The next piece of the jigsaw is how to get the PowerShell scripts you will write to the nodes where they should run. Several possibilities present themselves amongst which is embedding the scripts in your Visual Studio projects. From a reusability perspective this doesn't feel quite right somehow and instead I've adopted and reproduced the technique described by Colin Dembovsky here with his kind permission. You can implement this as follows:

  1. Create folders called Build and Deploy in the root of your version control for ContosoUniversity and check them in.
  2. Create a PowerShell script in the Build folder called CopyDeployFiles.ps1 and add the following code (a sketch of the code follows below):
  3. Check CopyDeployFiles.ps1 in to source control.
  4. Modify the process template of the build definition created in a previous post as follows:

2.Build > 5. Advanced > Post-build script arguments = -pathToCopy Deploy
2.Build > 5. Advanced > Post-build script path = Build/CopyDeployFiles.ps1

To explain, Post-build script path specifies that CopyDeployFiles.ps1 created above should be run and Post-build script arguments feeds in the -pathToCopy argument which is the Deploy folder we created above. The net effect of all this is that the Deploy folder and any contents gets created as part of the build.
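I won't reproduce Colin's script verbatim here, but the gist is a small sketch along these lines, using the TF_BUILD environment variables that XAML builds make available:

param(
    [string]$pathToCopy
)

# Source folder in the workspace and the matching folder in the build output
$sourcePath = Join-Path $Env:TF_BUILD_SOURCESDIRECTORY $pathToCopy
$destinationPath = Join-Path $Env:TF_BUILD_BINARIESDIRECTORY $pathToCopy

if (-not (Test-Path $destinationPath))
{
    New-Item -ItemType Directory -Path $destinationPath | Out-Null
}

# Copy the folder contents so they end up in the drop alongside the compiled output
Copy-Item "$sourcePath\*" -Destination $destinationPath -Recurse -Force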

Create a Component

In a multi-server world we'd create a component in RM from Configure Apps > Components for each server that we need to deploy to since a component is involved in ensuring that the build is copied to the target node. Each component would then be associated with an appropriately named PowerShell script to do the actual work of installing/copying/running tests or whatever is needed for that node. Because we are hosting IIS and SQL Server on the same machine we only actually need one component. We're getting ahead of ourselves a little but a side effect of this is that we will use only one PowerShell script for several tasks which is a bit ugly. (Okay, we could use two components but that would mean two build copy operations which feels equally ugly.)

With that noted create a component called Drop Folder and add a backslash (\) to Source > Builds with application > Path to package. The net effect of this when the deployment has taken place is the existence of a folder called Drop Folder on the target node with the contents of the original drop folder copied over to the remote folder. As long as we don't need to create configuration variables for the component it can be reused in this basic form. It probably needs a better name though.

Create a vNext Release Template

Navigate to Configure Apps > vNext Release Templates and create a new template called Contoso University\DAT>DQA based on the Contoso University\DAT>DQA release path. You'll need to specify the build definition and check Can Trigger a Release from a Build. We now need to create the workflow on the DAT design surface as follows:

  1. Right-click the Components node of the Toolbox and Add the Drop Folder component.
  2. Expand the Actions node of the Toolbox and drag a Deploy Using PS/DSC action to the Deployment Sequence. Click the pen icon to rename to Deploy Web and Database.
  3. Double click the action and set the Configuration Variables as follows:
    1. ServerName = choose the appropriate server from the dropdown.
    2. UserName = the name of an account that has permissions on the target node. I'm using the RMDEPLOYER domain account that was set up for Deployment Agents to use in agent based deployments.
    3. Password = password for the UserName
    4. ComponentName = choose Drop Folder from the dropdown.
    5. SkipCaCheck = true
  4. The Actions do not display very well so a complete screenshot is not possible but it should look something like this (note SkipCaCheck isn't shown):
    release-management-deploy-using-ps-dsc-action

At this stage we can save the template and trigger a build. If everything is working you should be able to examine the target node and observe a folder called C:\Windows\DtlDownloads\Drop Folder that contains the build.

Deploy the Bits

With the build now existing on the target node the next step is to actually get the web files in place and deploy the database. We'll do this from one PowerShell script called WebAndDatabase.ps1 that you should create in the Deploy folder created above. Every time you edit this and want it to run do make sure you check it in to version control. To actually get it to run we need to edit the Deploy Web and Database action created above. The first step is to add Deploy\WebAndDatabase.ps1 as the parameter to the PSScriptPath configuration variable. We then need to add the following custom configuration variables by clicking on the green plus sign:

  • destinationPath = C:\inetpub\wwwroot\CU-DAT
  • websiteSourcePath = _PublishedWebsites\ContosoUniversity.Web
  • dacpacName = ContosoUniversity.Database.dacpac
  • databaseServer = ALMWEBDB01
  • databaseName = CU-DAT
  • loginOrUser = ALM\CU-DAT

The first section of the script will deploy the web files to C:\inetpub\wwwroot\CU-DAT on the target node, so create this folder if you haven't already. Obviously we could get PowerShell to do this but I'm keeping things simple. I'm using functions in WebAndDatabase.ps1 to keep things neat and tidy and to make debugging a bit easier if I want to only run one function.

The first function is as follows:
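In sketch form it looks something like this -- I'm assuming RM passes the path of the copied component in as $applicationPath, and the Web.config token names are purely illustrative:

function Install-WebFiles
{
    # Location of the web files within the copied build, e.g. _PublishedWebsites\ContosoUniversity.Web
    $webSourcePath = Join-Path $applicationPath $websiteSourcePath

    # Clear out the current set of web files and copy the new set over
    Remove-Item "$destinationPath\*" -Recurse -Force -ErrorAction SilentlyContinue
    Copy-Item "$webSourcePath\*" -Destination $destinationPath -Recurse -Force

    # Swap the tokens in the copied Web.config so the originals stay tokenised for later stages
    $webConfigPath = Join-Path $destinationPath "Web.config"
    (Get-Content $webConfigPath) -replace "__DataSource__", $databaseServer `
                                 -replace "__InitialCatalog__", $databaseName |
        Set-Content $webConfigPath

    Write-Verbose "Web files deployed to $destinationPath" -Verbose
}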

The code clears out the current set of web files and then copies the new set over. The tokens in Web.config get changed in the copied set so the originals can be used for the DQA stage.  Note how I'm using Write-Verbose statements with the -Verbose switch at the end. This causes the RM Deployment Log grid to display a View Log link in the Command Output column. Very handy for debugging purposes.

The second function deploys the DACPAC:
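Again in sketch form, assuming the DACPAC sits in the root of the copied build folder:

function Publish-Database
{
    # Hard-coded to the SQL Server 2014 DAC tooling -- see the note below
    $sqlPackagePath = "C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe"
    $dacpacPath = Join-Path $applicationPath $dacpacName

    # Build up the argument list and run sqlpackage.exe
    $arguments = @(
        "/Action:Publish",
        "/SourceFile:$dacpacPath",
        "/TargetServerName:$databaseServer",
        "/TargetDatabaseName:$databaseName"
    )
    Write-Verbose "Running SqlPackage.exe $arguments" -Verbose
    & $sqlPackagePath $arguments
}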

The code is simply building the command to run sqlpackage.exe -- pretty straightforward. Note that the script is hardcoded to SQL Server 2014 -- more on that below.

The final function deals with the Create login and database user.sql script that lives in the Scripts folder of the ContosoUniversity.Database project. This script ensures that the necessary SQL Server login and database user exists and is tokenised so it can be used in different stages -- see this article for all the details.
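The shape of it is something like the sketch below -- the location of the .sql file within the copied build and the token name are assumptions you'll need to adjust:

function Add-LoginAndDatabaseUser
{
    $scriptPath = Join-Path $applicationPath "Scripts\Create login and database user.sql"
    $tempScriptPath = Join-Path $env:TEMP "Create login and database user.sql"

    # Swap the token for the passed-in login and write a temporary copy of the script
    (Get-Content $scriptPath) -replace "__LoginOrUser__", $loginOrUser |
        Set-Content $tempScriptPath

    # Build and run the sqlcmd.exe command against the target database using integrated security
    Write-Verbose "Running Create login and database user.sql against $databaseServer" -Verbose
    & sqlcmd.exe -S $databaseServer -d $databaseName -E -i $tempScriptPath
}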

The tokens in the SQL script are first swapped for passed-in values and then the code builds a command to run the script. Again, pretty straightforward.

Loose Ends

At this stage you should be able to trigger a build and have all of the components deploy. In order to fully test that everything is working you'll want to create and configure a web application in IIS -- this article has the details.

To create the stated aim of an initial pipeline with both a DAT and DQA stage the final step is to actually configure all of the above for DQA. It's essentially a repeat of DAT so I'm not going to describe it here but do note that you can copy and paste the Deployment Sequence:

release-management-copy-stage

One remaining aspect to cover is the subject of script reusability. With RM-TFS there is an out-of-the-box way to achieve reusability with tools and actions. This isn't available in RM-VSO and instead potential reusability comes via storing scripts outside of the Visual Studio solution. This needs some thought though since the all-in-one script used above (by necessity) only has limited reusability and in a non-demo environment you would want to consider splitting the script and co-ordinating everything from a master script. Some of this would happen anyway if the web and database servers were distinct machines but there is probably more that should be done. For example, tokens that are to be swapped-out are hard-coded in the script above which limits reusability. I've left it like that for readability but this certainly feels like the sort of thing that should be improved upon. In a similar vein the path to sqlpackage.exe is hard coded and thus tied to a specific version of SQL Server and probably needs addressing.

In the next post we'll look at executing automated web tests. Meantime if you have any thoughts on great ways to use PowerShell with RM-VSO please do share in the comments.

Cheers -- Graham

Getting Started with Windows PowerShell

Posted by Graham Smith on February 8, 2015 | No Comments

If you are just getting started with Windows PowerShell or haven't done much with it yet you may be thinking that it is just another scripting language. Nothing could be further from the truth because although PowerShell is a scripting language it's also a huge amount more than that. A Wikipedia page here has a nice overview of the history of PowerShell and of the different features that became available with each version, and gives the reader a good idea about the breadth of functionality. A key concept to understand is that PowerShell is involved in almost every area of automation on the Windows and Azure platforms and knowing, learning and using PowerShell is increasingly going to be essential for anyone working with Windows or Azure. Here are my top learning resources for getting started with PowerShell:

PowerShell is huge and in terms of resources this is just the tip of the iceberg. In my view the two Jump Start series of videos on the Microsoft Virtual Academy are unmissable. What's great about them is that Jason Helmick is a superb presenter and an extremely funny guy, and Jeffrey Snover, the inventor of PowerShell, is an excellent presenter too. This all adds up to an immensely enjoyable series of videos where you learn about the history of PowerShell as well as how to use it. Also well worth watching are the two videos from TechEd North America 2014 -- lots of value for the time it takes to watch them. I've listed two courses from Pluralsight that are useful if you are just getting going with PowerShell but there are plenty more for anyone wanting to dig deeper.

Cheers -- Graham