Post Deployment Configuration with the PowerShell DSC Extension for Azure Resource Manager Templates

Posted by Graham Smith on April 28, 2016

As part of a forthcoming blog post I'm writing for my series about Continuous Delivery with TFS / VSTS I want to be able to deploy PowerShell DSC scripts to Windows Server target nodes that both configure servers and deploy my application components. Separately, I want to automate the creation of target nodes so I can easily destroy and recreate them -- great for testing. In this previous post I explained how to do this with Azure Resource Manager templates. The journey didn't end there, however, since I also wanted to join the nodes to a domain and install Windows Management Framework 5.0 in order to get the latest version of PowerShell DSC. Even then the journey wasn't over, because my server configuration and application deployment technique with PowerShell DSC uses WinRM, which requires target nodes to have their firewalls configured to allow WinRM.

The solution to this problem lies with harnessing the true intended functionality of the PowerShell DSC Extension. Although you can just use it to install WMF, its real purpose is to run DSC configurations after the VM has been deployed. The configuration I used was as follows:
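A minimal sketch of such a configuration looks something like this -- the PostDeploymentConfig name matches the zip file discussed below, and the built-in Script resource is used here purely for illustration rather than a custom firewall resource:

    Configuration PostDeploymentConfig
    {
        Node localhost
        {
            # Turn the domain firewall profile off rather than create individual WinRM rules
            Script DisableDomainFirewall
            {
                GetScript  = { @{ Result = [string](Get-NetFirewallProfile -Profile Domain).Enabled } }
                TestScript = { -not [bool](Get-NetFirewallProfile -Profile Domain).Enabled }
                SetScript  = { Set-NetFirewallProfile -Profile Domain -Enabled False }
            }
        }
    }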

As you can see, rather than create any firewall rules I chose to simply turn the domain firewall off. The main reason is simplicity: creating firewall rules with DSC needs a custom resource, which adds another layer of complexity to the problem. Another option is to use netsh commands to create firewall rules, but in my case I have no issues with turning the firewall off.

The next step is to package this config into a zip file and make it available at a publicly accessible URL. GitHub is one possible location that can be used to host the zip but I chose Azure blob storage. The Publish-AzureVMDscConfiguration cmdlet exists to help here, and can create the zip locally for onward transfer to GitHub (for example) or it can push it straight to Azure blob storage. I was using the latter route of course, although I found that I couldn't get the cmdlet to work with premium storage and ended up creating a standard storage account. The code is as follows:
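A sketch of the publish step, assuming a standard storage account named prmstandardstorageaccount (the account name and key are placeholders) and the cmdlet's -StorageContext parameter:

    # Storage account details -- the key comes from the Azure Portal (see below)
    $storageAccountName = 'prmstandardstorageaccount'
    $storageAccountKey  = '<storage account key copied from the portal>'

    # Build a storage context and push the config straight to blob storage as a zip;
    # the container defaults to windows-powershell-dsc
    $storageContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
    Publish-AzureVMDscConfiguration -ConfigurationPath '.\PostDeploymentConfig.ps1' -StorageContext $storageContext -Force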

The storage account key is copied from the Azure Portal via Storage account > $StorageAccount$ > Settings > Access keys. Don't try using mine as I've invalidated it. I should point out that I couldn't get this command to work consistently and it would sometimes error. I did get it to work eventually but I didn't manage to pin down the problem. The net effect of successfully running this code is a file called PostDeploymentConfig.ps1.zip in blob storage. As things stand though this file isn't accessible, and its container (windows-powershell-dsc is created by default) needs to have its access policy changed from Private to Blob.

With that done it's time to amend the JSON template. The dscExtension resource that was added in this post should now look as follows:
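In outline it's along these lines -- a sketch, with the blob URL, typeHandlerVersion and the VM name parameter as placeholders to adapt to your own template:

    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(parameters('nodeName'), '/dscExtension')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', parameters('nodeName'))]"
      ],
      "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "2.17",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "WmfVersion": "5.0",
          "ModulesUrl": "https://prmstandardstorageaccount.blob.core.windows.net/windows-powershell-dsc/PostDeploymentConfig.ps1.zip",
          "ConfigurationFunction": "PostDeploymentConfig.ps1\\PostDeploymentConfig"
        }
      }
    }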

I've chosen to hard-code the ModulesUrl and ConfigurationFunction settings because I won't need to change them, but they can of course be parameterised. That's all there is to it, and the result is a VM that is completely ready to have its internals configured by PowerShell DSC scripts over WinRM. If you want to download the code that accompanies this post it's on my GitHub site as a release here.

Cheers -- Graham

Install Windows Management Framework 5.0 with Azure Resource Manager Templates

Posted by Graham Smith on April 9, 2016

In a recent post on my blog series about Continuous Delivery with TFS / VSTS I mentioned that I was having to manually install Windows Management Framework 5.0 after creating a Windows server via ARM templates, as it was a necessary precursor to running my PowerShell DSC configuration. I also mentioned that automating the install was on my to-do list. But no more!

It turns out that the PowerShell DSC extension for ARM templates will perform the installation, and that there's no need to actually run a DSC configuration if you don't want to -- just specify "WmfVersion": "5.0" in the settings section. The JSON to add to your ARM template should look similar to this:
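A sketch of the sort of resource to add (the VM name variable and the typeHandlerVersion are placeholders):

    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmName'), '/dscExtension')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
      ],
      "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "2.17",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "WmfVersion": "5.0"
        }
      }
    }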

I say similar because the code is configured to use the variables in my template; however, you can see the full template for context in my GitHub Infrastructure repo here.

Many thanks to Zach Alexander and the PowerShell Team for pointing me in the right direction!

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Join a VM to a Domain with Azure Resource Manager Templates

Posted by Graham Smith on March 20, 2016

In the previous post in my blog post series on Continuous Delivery with TFS / VSTS we learned how to provision a Windows Server virtual machine using Azure Resource Manager templates. The next major step in this quest to automate the creation and configuration of the infrastructure to which we'll deploy our application is to configure server internals, starting with joining a VM to the domain. My initial thinking was that this would need to be some kind of PowerShell command, and whilst this is an option I was very pleased to find that there is an ARM template resource to do this. The resource in question goes by the name of JsonADDomainExtension; it's a VM extension and you can read about it (and the PowerShell commands to do the same thing) in this blog post.

I have to confess that I struggled to get the extension to work at first. I spent a whole afternoon fiddling with the settings and getting nowhere, and spent quite a bit of time reading forum posts from others who were having similar difficulties (mostly with the PowerShell commands though). I gave up in frustration, only to come back to it a few days later, try again and find it was all working! I describe the steps I took below -- please be aware that it's very much a direct continuation of this post, so do check that out first if you haven't done so already.

Adding the JsonADDomainExtension to the JSON Template

Getting started with the extension is very easy, as it's just a case of dropping the JSON into the resources part of the template. The code I initially used to make the extension work was as follows:
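A sketch of what this looks like -- the domain name, the user account and the typeHandlerVersion are placeholders, and the VM name comes from a variable in my template:

    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(variables('vmName'), '/joindomain')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
      ],
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "JsonADDomainExtension",
        "typeHandlerVersion": "1.3",
        "settings": {
          "Name": "prm.local",
          "OUPath": "",
          "User": "prm.local\\domainjoinuser",
          "Restart": "true",
          "Options": "3"
        },
        "protectedSettings": {
          "Password": "<password -- hard-coded to start with, see below>"
        }
      }
    }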

I added this code to the WindowsServer2012R2Datacenter.json file, which has variables defined for use where the VM name is required. Note that OUPath can be an empty string, that the (domain) User requires an escaped backslash, and that Options uses the magic number 3 (just go with it or see here for the details).

Whilst this (eventually) worked fine for me the big issue was how to hide the password for the account that will join the VM to the domain. I hard coded it in to the template to get the extension working but even when refactored as a parameter the password is still in plain view -- now just in the PowerShell calling script.

Say Hello to Azure Key Vault

As luck would have it, around the time I was initially getting JsonADDomainExtension to work I watched Cloud Cover Episode 200: Azure Resource Manager Tooling with Brian Moore, where Brian mentioned the forthcoming ability to use Azure Key Vault to supply secret values such as passwords to ARM templates. Following a very helpful email exchange Brian pointed me towards this page, which is a partial answer to the problem I wanted to solve.

At the time of writing there was no portal interface for configuring Azure Key Vault so it's over to PowerShell (no bad thing) to create a new vault:
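Something along these lines does the job -- the vault and resource group names are examples, and the -EnabledForTemplateDeployment switch is there so ARM deployments can reference secrets in the vault should you go down the parameters file route:

    # Create a resource group to hold the vault and then create the vault itself
    New-AzureRmResourceGroup -Name 'PRM-COMMON' -Location 'West Europe'
    New-AzureRmKeyVault -VaultName 'prmkeyvault' -ResourceGroupName 'PRM-COMMON' -Location 'West Europe' -EnabledForTemplateDeployment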

The code above creates a vault named prmkeyvault. Next we need to add our password as a secret:
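A sketch of the secret creation (reading the password from a prompt keeps it out of the script):

    # Convert the password to a secure string and store it in the vault
    $secretValue = Read-Host -AsSecureString -Prompt 'Enter the domain admin password'
    Set-AzureKeyVaultSecret -VaultName 'prmkeyvault' -Name 'DomainAdminPassword' -SecretValue $secretValue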

This creates a new secret called DomainAdminPassword. Of course, the objects that have just been created can be examined with Azure Resource Explorer:

[Screenshot: Azure Resource Explorer -- key vault and secret]

Use the Secret in the JSON Template

The Microsoft guidance for passing secrets to templates is based on the use of an ARM parameters file. This wasn't quite what I wanted as I'm using a PowerShell script to supply my parameters. The way to access secrets using PowerShell is along the following lines:
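A sketch of the idea -- retrieve the secret as a secure string and hand it to New-AzureRmResourceGroupDeployment (the template parameter name is illustrative and any other parameters the template needs are omitted):

    # SecretValue comes back as a SecureString, which is exactly what a securestring template parameter wants
    $domainAdminPassword = (Get-AzureKeyVaultSecret -VaultName 'prmkeyvault' -Name 'DomainAdminPassword').SecretValue

    New-AzureRmResourceGroupDeployment -ResourceGroupName 'PRM-DAT' `
        -TemplateFile '..\DeploymentTemplates\WindowsServer2012R2Datacenter.json' `
        -domainAdminPassword $domainAdminPassword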

You can see how I integrated the code above into my PowerShell script by examining Create PRM-DAT.ps1 in the code release that accompanies this post on my Infrastructure repository on GitHub. It's not quite the full solution at the moment though because, despite having a mechanism in place for automatically authenticating to Azure PowerShell, the use of Azure Key Vault cmdlets in the script causes the authentication dialog to pop up. I'm still working on how to stop that -- if you know please leave a message in the comments!

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Infrastructure as Code with Azure Resource Manager Templates

Posted by Graham Smith on February 25, 2016

So far in this blog post series on Continuous Delivery with TFS / VSTS we have gradually worked our way to the position of having a build of our application which is almost ready to be deployed to target servers (or nodes if you prefer) in order to conduct further testing before finally making its way to production. This brings us to the question of how these nodes should be provisioned and configured. In my previous series on continuous delivery, deployment was to nodes that had been created and configured manually. However, with the wealth of automation tools available to us we can -- and should -- improve on that. This post explains how to achieve the first of those tasks -- provisioning a Windows Server virtual machine using Azure Resource Manager templates. A future post will deal with the configuration side of things using PowerShell DSC.

Before going further I should point out that this post is a bit different from my other posts in the sense that it is very specific to Azure. If you are attempting to implement continuous delivery in an on premises situation chances are that the specifics of what I cover here are not directly usable. Consequently, I'm writing this post in the spirit of getting you to think about this topic with a view to investigating what's possible for your situation. Additionally, if you are not in the continuous delivery space and have stumbled across this post through serendipity I do hope you will be able to follow along with my workflow for creating templates. Once you get past the Big Picture section it's reasonably generic and you can find the code that accompanies this post at my GitHub repository here.

The Infrastructure Big Picture

In order to understand where I am going with this post it's probably helpful to understand the big picture as it relates to this blog series on continuous delivery. Our final continuous delivery pipeline is going to consist of three environments:

  • DAT -- development automated test where automated UI testing takes place. This will be an ‘all in one' VM hosting both SQL Server and IIS. Why have an all-in-one VM? It's because the purpose of this environment is to run automated tests, and if those tests fail we want a high degree of certainty that it was because of code and not any other factors such as network problems or a database timeout. To achieve that state of certainty we need to eliminate as many influencing variables as possible, and the simplest way of achieving that is to have everything running on the same VM. It breaks the rule about early environments reflecting production but if you are in an on premises situation and your VMs are on hand-me-down infrastructure and your network is busy at night (when your tests are likely running) backing up VMs and goodness knows what else then you might come to appreciate the need for an all-in-one VM for automated testing.
  • DQA -- development quality assurance where high-value manual testing takes place. This really does need to reflect production so it will consist of a database VM and a web server VM.
  • PRD -- production for the live code. It will consist of a database VM and a web server VM.

These environments map out to the following infrastructure I'll be creating in Azure:

  • PRM-DAT -- resource group to hold everything for the DAT environment
    • PRM-DAT-AIO -- all in one VM for the DAT environment
  • PRM-DQA -- resource group to hold everything for the DQA environment
    • PRM-DQA-SQL -- database VM for the DQA environment
    • PRM-DQA-IIS -- web server VM for the DQA environment
  • PRM-PRD -- resource group to hold everything for the PRD environment
    • PRM-PRD-SQL -- database VM for the PRD environment
    • PRM-PRD-IIS -- web server VM for the PRD environment

The advantage of using resource groups as containers is that an environment can be torn down very easily. This makes more sense when you realise that it's not just the VM that needs tearing down but also storage accounts, network security groups, network interfaces and public IP addresses.

Overview of the ARM Template Development Workflow

We're going to be creating our infrastructure using ARM templates, which is a declarative approach, i.e. we declare what we want and some other system ‘makes it so'. This is in contrast to an imperative approach where we specify exactly what should happen and in what order. (We can use an imperative approach with ARM using PowerShell but we don't get any parallelisation benefits.) If you need to get up to speed with ARM templates I have a Getting Started blog post with a collection of useful links here. The problem -- for me at least -- is that although Microsoft provide example templates for creating a Windows Server VM (for instance) they are heavily parametrised and designed to work as standalone VMs, and it's not immediately obvious how they can fit into an existing network. There's also the issue that at first glance all that JSON can look quite intimidating! Fear not though, as I have figured out what I hope is a great workflow for creating ARM templates which is both instructive and productive. It brings together a number of tools and technologies and I make the assumption that you are familiar with these. If not I've blogged about most of them before. A summary of the workflow steps with prerequisites and assumptions is as follows:

  • Create a Model VM in Azure Portal. The ARM templates that Microsoft provide tend to result in infrastructure that has different internal names compared with the same infrastructure created through the Azure Portal. I like how the portal names things and in order to help replicate that naming convention for VMs I find it useful to create a model VM in the portal whose components I can examine via the Azure Resource Explorer.
  • Create a Visual Studio Solution. Probably the easiest way to work with ARM templates is in Visual Studio. You'll need the Azure SDK installed to see the Azure Resource Group project template -- see here for more details. We'll also be using Visual Studio to deploy the templates using PowerShell and for that you'll need the PowerShell Tools for Visual Studio extension. If you are new to this I have a Getting Started blog post here. We'll be using Git in either TFS or VSTS for version control but if you are following this series we've already covered that.
  • Perform an Initial Deployment. There's nothing worse than spending hours coding only to find that what you're hoping to do doesn't work and that the problem is hard to trace. The answer of course is to deploy early and that's the purpose of this step.
  • Build the Deployment Template Resource by Resource Using Hard-coded Values. The Microsoft templates really go to town when it comes to implementing variables and parameters. That level of detail isn't required here but it's hard to see just how much is required until the template is complete. My workflow involves using hard-coded values initially so the focus can remain on getting the template working and then refactoring later.
  • Refactor the Template with Parameters, Variables and Functions. For me refactoring to remove the hard-coded values is one of the most fun and rewarding parts of the process. There's a wealth of programming functionality available in ARM templates -- see here for all the details.
  • Use the Template to Create Multiple VMs. We've proved the template can create a single VM -- what about multiple VMs? This section explores the options.

That's enough overview -- time to get stuck in!

Create a Model VM in Azure Portal

As above, the first VM we'll create using an ARM template is going to be called PRM-DAT-AIO in a resource group called PRM-DAT. In order to help build the template we'll create a model VM called PRM-DAT-AAA in a resource group called PRM-DAT via the Azure Portal. The procedure is as follows:

  • Create a resource group called PRM-DAT in your preferred location -- in my case West Europe.
  • Create a standard (Standard-LRS) storage account in the new resource group -- I named mine prmdataaastorageaccount. Don't enable diagnostics.
  • Create a Windows Server 2012 R2 Datacenter VM (size right now doesn't matter much -- I chose Standard DS1 to keep costs down) called PRM-DAT-AAA based on the PRM-DAT resource group, the prmdataaastorageaccount storage account and the prmvirtualnetwork that was created at the beginning of this blog series as the common virtual network for all VMs. Don't enable monitoring.
  • In Public IP addresses locate PRM-DAT-AAA and under configuration set the DNS name label to prm-dat-aaa.
  • In Network security groups locate PRM-DAT-AAA and add the following tag: displayName : NetworkSecurityGroup.
  • In Network interfaces locate PRM-DAT-AAAnnn (where nnn represents any number) and add the following tag: displayName : NetworkInterface.
  • In Public IP addresses locate PRM-DAT-AAA and add the following tag: displayName : PublicIPAddress.
  • In Storage accounts locate prmdataaastorageaccount and add the following tag: displayName : StorageAccount.
  • In Virtual machines locate PRM-DAT-AAA and add the following tag: displayName : VirtualMachine.

You can now explore all the different parts of this VM in the Azure Resource Explorer. For example, the public IP address should look similar to:

[Screenshot: Azure Resource Explorer -- public IP address]

Create a Visual Studio Solution

We'll be building and running our ARM template in Visual Studio. You may want to refer to previous posts (here and here) as a reminder for some of the configuration steps which are as follows:

  • In the Web Portal navigate to your team project and add a new Git repository called Infrastructure.
  • In Visual Studio clone the new repository to a folder called Infrastructure at your preferred location on disk.
  • Create a new Visual Studio Solution (not project!) called Infrastructure one level higher than the Infrastructure folder. This effectively stops Visual Studio from creating an unwanted folder.
  • Add .gitignore and .gitattributes files and perform a commit.
  • Add a new Visual Studio Project to the solution of type Azure Resource Group called DeploymentTemplates. When asked to select a template choose anything.
  • Delete the Scripts, Templates and Tools folders from the project.
  • Add a new project to the solution of type PowerShell Script Project called DeploymentScripts.
  • Delete Script.ps1 from the project.
  • In the DeploymentTemplates project add a new Azure Resource Manager Deployment Project item called WindowsServer2012R2Datacenter.json (spaces not allowed).
  • In the DeploymentScripts project add a new PowerShell Script item for the PowerShell that will create the PRM-DAT resource group with a PRM-DAT-AIO server -- I called my file Create PRM-DAT.ps1.
  • Perform a commit and sync to get everything safely under version control.

With all that configuration you should have a Visual Studio solution looking something like this:

[Screenshot: the Infrastructure solution in Visual Studio]

Perform an Initial Deployment

It's now time to write just enough code in Create PRM-DAT.ps1 to prove that we can initiate a deployment from PowerShell. First up is the code to authenticate to Azure PowerShell. I have the authentication code which was the output of this post wrapped in a function called Set-AzureRmAuthenticationForMsdnEnterprise which in turn is contained in a PowerShell module file called Authentication.psm1. This file in turn is deployed to C:\Users\Graham\Documents\WindowsPowerShell\Modules\Authentication which then allows me to call Set-AzureRmAuthenticationForMsdnEnterprise from anywhere on my development machine. (Although this function could clearly be made more generic with the use of some parameters I've consciously chosen not to so I can check my code in to GitHub without worrying about exposing any authentication details.) The initial contents of Create PRM-DAT.ps1 should end up looking as follows:
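My starting point was something like this -- a sketch, with the relative path to the template as an assumption to adjust to suit how you run the script:

    # Authenticate to Azure PowerShell using the wrapper function described above
    Import-Module Authentication
    Set-AzureRmAuthenticationForMsdnEnterprise

    # Ensure the resource group exists and then run a deployment against the (currently empty) template
    New-AzureRmResourceGroup -Name 'PRM-DAT' -Location 'West Europe' -Force
    New-AzureRmResourceGroupDeployment -ResourceGroupName 'PRM-DAT' `
        -TemplateFile '..\DeploymentTemplates\WindowsServer2012R2Datacenter.json' -Verbose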

Running this code in Visual Studio should result in a successful outcome, although admittedly not much has happened because the resource group already existed and the deployment template is empty. Nonetheless, it's progress!

Build the Deployment Template Resource by Resource Using Hard-coded Values

The first resource we'll code is a storage account. In the DeploymentTemplates project open WindowsServer2012R2Datacenter.json which as things stand just contains some boilerplate JSON for the different sections of the template that we'll be completing. What you should notice is the JSON Outline window is now available to assist with editing the template. Right-click resources and choose Add New Resource:

[Screenshot: Visual Studio JSON Outline window -- Add New Resource]

In the Add Resource window find Storage Account and add it with the name (actually the display name) of  StorageAccount:

[Screenshot: Add Resource window -- Storage Account]

This results in boilerplate JSON being added to the template, along with a variable for the actual storage account name and a parameter for the account type. We'll use a variable later but for now delete the variable and parameter that were added -- you can either use the JSON Outline window or manually edit the template.

We now need to edit the properties of the resource with actual values that can create (or update) the resource. In order to understand what to add we can use the Azure Resource Explorer to navigate down to the storageAccounts node of the MSDN subscription where we created prmdataaastorageaccount:

[Screenshot: Azure Resource Explorer -- storageAccounts node showing prmdataaastorageaccount]

In the right-hand pane of the explorer we can see the JSON that represents this concrete resource, and although the property names don't always match exactly it should be fairly easy to see how the ‘live' values can be used as a guide to populating the ones in the deployment template:

[Screenshot: Azure Resource Explorer -- prmdataaastorageaccount JSON]

So, back to the deployment template the following unassigned properties can be assigned the following values:

  • "name": "prmdataiostorageaccount"
  • "location": "West Europe"
  • "accountType": "Standard_LRS"

The resulting JSON should be similar to:
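A sketch of the storage account resource at this hard-coded stage (the apiVersion will depend on the tooling that generated your boilerplate):

    {
      "name": "prmdataiostorageaccount",
      "type": "Microsoft.Storage/storageAccounts",
      "location": "West Europe",
      "apiVersion": "2015-06-15",
      "dependsOn": [ ],
      "tags": {
        "displayName": "StorageAccount"
      },
      "properties": {
        "accountType": "Standard_LRS"
      }
    }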

Save the template and switch to Create PRM-DAT.ps1 to run the deployment script which should create the storage account. You can verify this either via the portal or the explorer.

The next resource we'll create is a NetworkSecurityGroup, which has an extra twist in that at the time of writing adding it to the template isn't supported by the JSON Outline window. There are a couple of ways to go here -- either type the JSON by hand or use the Create function in the Azure Resource Explorer to generate some boilerplate JSON. The latter technique actually generates more JSON than is needed, so in this case it's something of a hindrance. I just typed the JSON directly and made use of the IntelliSense options in conjunction with the PRM-DAT-AAA network security group values via the Azure Resource Explorer. The JSON that needs adding is as follows:
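A sketch of the resource -- the single inbound rule shown is the sort of default-allow-rdp rule the portal creates for a Windows VM:

    {
      "name": "PRM-DAT-AIO",
      "type": "Microsoft.Network/networkSecurityGroups",
      "location": "West Europe",
      "apiVersion": "2015-06-15",
      "tags": {
        "displayName": "NetworkSecurityGroup"
      },
      "properties": {
        "securityRules": [
          {
            "name": "default-allow-rdp",
            "properties": {
              "priority": 1000,
              "protocol": "Tcp",
              "access": "Allow",
              "direction": "Inbound",
              "sourceAddressPrefix": "*",
              "sourcePortRange": "*",
              "destinationAddressPrefix": "*",
              "destinationPortRange": "3389"
            }
          }
        ]
      }
    }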

Note that you'll need to separate this resource from the storage account resource with a comma to ensure the syntax is valid. Save the template, run the deployment and refresh the Azure Resource Explorer. You can now compare the new PRM-DAT-AIO and PRM-DAT-AAA network security groups in the explorer to validate the JSON that creates PRM-DAT-AIO. Note that by zooming out in your browser you can toggle between the two resources and see that it is pretty much just the etag values that are different.

The next resource to add is a public IP address. This can be added from the JSON Outline window using PublicIPAddress as the name but it also wants to add a reference to itself to a network interface which in turn wants to reference a virtual network. We are going to use an existing virtual network but we do need a network interface, so give the new network interface a name of NetworkInterface and the new virtual network can be any temporary name. As soon as the new JSON components have been added delete the virtual network and all of the variables and parameters that were added. All this makes sense when you do it -- trust me!

Once edited with the appropriate values the JSON for the public IP address should be as follows:
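A sketch with hard-coded values -- note the DNS name label follows the same convention as the model VM:

    {
      "name": "PRM-DAT-AIO",
      "type": "Microsoft.Network/publicIPAddresses",
      "location": "West Europe",
      "apiVersion": "2015-06-15",
      "tags": {
        "displayName": "PublicIPAddress"
      },
      "properties": {
        "publicIPAllocationMethod": "Dynamic",
        "dnsSettings": {
          "domainNameLabel": "prm-dat-aio"
        }
      }
    }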

The edited JSON for the network interface should look similar to the code that follows, but note I've replaced my MSDN subscription GUID with an ellipsis.
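A sketch of the shape of it -- the subnet id is hard-coded with the subscription GUID replaced by an ellipsis, the resource group and subnet names within that id are placeholders for wherever your virtual network lives, and the numeric suffix on the name simply follows the portal's convention:

    {
      "name": "prm-dat-aio123",
      "type": "Microsoft.Network/networkInterfaces",
      "location": "West Europe",
      "apiVersion": "2015-06-15",
      "tags": {
        "displayName": "NetworkInterface"
      },
      "dependsOn": [
        "Microsoft.Network/publicIPAddresses/PRM-DAT-AIO",
        "Microsoft.Network/networkSecurityGroups/PRM-DAT-AIO"
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses', 'PRM-DAT-AIO')]"
              },
              "subnet": {
                "id": "/subscriptions/.../resourceGroups/<virtual network resource group>/providers/Microsoft.Network/virtualNetworks/prmvirtualnetwork/subnets/<subnet name>"
              }
            }
          }
        ],
        "networkSecurityGroup": {
          "id": "[resourceId('Microsoft.Network/networkSecurityGroups', 'PRM-DAT-AIO')]"
        }
      }
    }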

It's worth remembering at this stage that we're hard-coding references to other resources. We'll fix that up later on, but for the moment note that the network interface needs to know what virtual network subnet it's on (created in an earlier post), and which public IP address and network security group it's using. Also note the dependsOn section which ensures that these resources exist before the network interface is created. At this point you should be able to run the deployment and confirm that the new resources get created.

Finally we can add a Windows virtual machine resource. This is supported from the JSON Outline window, however this resource wants to reference a storage account and virtual network. The storage account exists and that should be selected, but once again we'll need to use a temporary name for the virtual network and delete it and the variables and parameters. Name the virtual machine resource VirtualMachine. Edit the JSON with appropriate hard-coded values which should end up looking as follows:
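A sketch with hard-coded values -- the username, password and vhd URI are placeholders:

    {
      "name": "PRM-DAT-AIO",
      "type": "Microsoft.Compute/virtualMachines",
      "location": "West Europe",
      "apiVersion": "2015-06-15",
      "tags": {
        "displayName": "VirtualMachine"
      },
      "dependsOn": [
        "Microsoft.Storage/storageAccounts/prmdataiostorageaccount",
        "Microsoft.Network/networkInterfaces/prm-dat-aio123"
      ],
      "properties": {
        "hardwareProfile": {
          "vmSize": "Standard_DS1"
        },
        "osProfile": {
          "computerName": "PRM-DAT-AIO",
          "adminUsername": "<admin username>",
          "adminPassword": "<admin password>"
        },
        "storageProfile": {
          "imageReference": {
            "publisher": "MicrosoftWindowsServer",
            "offer": "WindowsServer",
            "sku": "2012-R2-Datacenter",
            "version": "latest"
          },
          "osDisk": {
            "name": "PRM-DAT-AIO-osdisk",
            "vhd": {
              "uri": "https://prmdataiostorageaccount.blob.core.windows.net/vhds/PRM-DAT-AIO-osdisk.vhd"
            },
            "caching": "ReadWrite",
            "createOption": "FromImage"
          }
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces', 'prm-dat-aio123')]"
            }
          ]
        }
      }
    }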

Running the deployment now should result in a complete working VM which you can remote in to.

The final step before going any further is to tear down the PRM-DAT resource group and check that a fully working PRM-DAT-AIO VM is created from scratch. I added a Destroy PRM-DAT.ps1 file to my DeploymentScripts project with the following code:
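The contents amount to little more than this (a sketch):

    # Authenticate and then remove the resource group and everything in it
    Import-Module Authentication
    Set-AzureRmAuthenticationForMsdnEnterprise
    Remove-AzureRmResourceGroup -Name 'PRM-DAT' -Force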

Refactor the Template with Parameters, Variables and Functions

It's now time to make the template reusable by refactoring all the hard-coded values. Each situation is likely to vary but in this case my specific requirements are:

  • The template will always create a Windows Server 2012 R2 Datacenter VM, but obviously the name of the VM needs to be specified.
  • I want to restrict my VMs to small sizes to keep costs down.
  • I'm happy for the VM username to always be the same so this can be hard-coded in the template, whilst I want to pass the password in as a parameter.
  • I'm adding my VMs to an existing virtual network in a different resource group and I'm making a conscious decision to hard-code these details in.
  • I want the names of all the different resources to be generated using the VM name as the base.

These requirements gave rise to the following parameters, variables and a resource function (sketched out after the list):

  • nodeName parameter -- this is used via variable conversions throughout the template to provide consistent naming of objects. My node names tend to be of the format used in this post and that's the only format I've tested. Beware if your node names are different as there are naming rules in force.
  • nodeNameToUpper variable -- used where I want to ensure upper case for my own naming convention preferences.
  • nodeNameToLower variable -- used where lower case is a requirement of ARM eg where nodeName forms part of a DNS entry.
  • vmSize parameter -- restricts the template to creating VMs that are not going to burn Azure credits too quickly and which use standard storage.
  • storageAccountName variable -- creates a name for the storage account that is based on a lower case nodeName.
  • networkInterfaceName variable -- creates a name for the network interface based on a lower case nodeName with a number suffix.
  • virtualNetworkSubnetName variable -- used to create the virtual network subnet which exists in a different resource group and requires a bit of construction work.
  • vmAdminUsername variable -- creates a username for the VM based on the nodeName. You'll probably want to change this.
  • vmAdminPassword parameter -- the password for the VM passed-in as a secure string.
  • resourceGroup().location resource function -- a neat way to avoid hard-coding the location into the template.
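To give a flavour of how these fit together, the parameters and variables sections end up along the following lines -- a sketch, with the allowed VM sizes, name suffixes and the subnet id components as placeholders to adapt:

    "parameters": {
      "nodeName": {
        "type": "string"
      },
      "vmSize": {
        "type": "string",
        "defaultValue": "Standard_D1",
        "allowedValues": [ "Standard_A1", "Standard_A2", "Standard_D1", "Standard_D2" ]
      },
      "vmAdminPassword": {
        "type": "securestring"
      }
    },
    "variables": {
      "nodeNameToUpper": "[toUpper(parameters('nodeName'))]",
      "nodeNameToLower": "[toLower(parameters('nodeName'))]",
      "storageAccountName": "[concat(replace(variables('nodeNameToLower'), '-', ''), 'storageaccount')]",
      "networkInterfaceName": "[concat(variables('nodeNameToLower'), '001')]",
      "vmAdminUsername": "[concat(variables('nodeNameToLower'), 'admin')]",
      "virtualNetworkSubnetName": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/<virtual network resource group>/providers/Microsoft.Network/virtualNetworks/prmvirtualnetwork/subnets/<subnet name>')]"
    }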

Of course, these refactorings shouldn't affect the functioning of the template, and tearing down the PRM-DAT resource group and recreating it should result in the same resources being created.

What about Environments where Multiple VMs are Required?

The work so far has been aimed at creating just one VM, but what if two or more VMs are needed? It's a very good question and there are at least two answers. The first involves using the template as-is and calling New-AzureRmResourceGroupDeployment in a PowerShell Foreach loop. I've illustrated this technique in Create PRM-DQA.ps1 in the DeploymentScripts project. Whilst this works very nicely the VMs are created in series rather than in parallel and, well, who wants to wait? My first thought at creating VMs in parallel was to extend the Foreach loop idea with the -parallel switch in a PowerShell workflow. The code which I was hoping would work looks something like this:
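A sketch of the sort of thing I had in mind (other deployment parameters omitted for brevity):

    Workflow New-EnvironmentVms
    {
        param([string]$ResourceGroupName, [string[]]$NodeNames)

        foreach -parallel ($nodeName in $NodeNames)
        {
            # Kick off a deployment per node name, in parallel
            New-AzureRmResourceGroupDeployment -ResourceGroupName $ResourceGroupName `
                -TemplateFile '..\DeploymentTemplates\WindowsServer2012R2Datacenter.json' `
                -nodeName $nodeName
        }
    }

    New-EnvironmentVms -ResourceGroupName 'PRM-DQA' -NodeNames 'PRM-DQA-SQL', 'PRM-DQA-IIS'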

Unfortunately it seems like this idea is a dud -- see here for the details. Instead the technique appears to be to use the copy, copyindex and length features of ARM templates as documented here. This necessitates a minor rewrite of the template to pass in and use an array of node names; however there are complications where I've used variables to construct resource names. At the time of publishing this post I'm working through these details -- keep an eye on my GitHub repository for progress.

Wrap-Up

Before actually wrapping up I'll make a quick mention of the template's outputs node. A handy use for this is debugging, for example where you are trying to construct a complicated variable and want to check its value. I've left an example in the template to illustrate.

I'll finish with a question I've been pondering as I've been writing this post: just because we can create and configure VMs at the push of a button, does that mean we should create and configure new VMs every time we deploy our application? My thinking at the moment is probably not, because of the time it will add, but as always it depends. If you want a clean start every time you deploy then you certainly have that option, but my mind is already thinking ahead to the additional amount of time it's going to take to actually configure these VMs with IIS and SQL Server. Never say never though, as who knows what's in store for the future? As Azure (presumably) gets faster and VMs become more lightweight with the arrival of Nano Server, perhaps creating and configuring VMs from scratch as part of the deployment pipeline will be so fast that there would be no reason not to. Or maybe we'll all be using containers by then...

Cheers -- Graham

Continuous Delivery with TFS / VSTS – A New Way of Working with Azure Resource Manager

Posted by Graham Smith on November 12, 2015

In this second post in my Continuous Delivery with TFS / VSTS series it's time to make a fresh start in Microsoft Azure. Whaddaya mean a fresh start? Well for a little while now there has been a new way to interact with Azure, namely through a feature known as Azure Resource Manager or ARM. This is in contrast to the ‘old' way of doing things which is now referred to as Azure Service Management or ASM. As I mention in a previous blog post ARM is the way of the future, and this new series of blog posts is going to be entirely based on ARM, using the new portal (codename Ibiza) where portal interaction is necessary. I have lots of VMs created in ASM but the plan is to clear those down and start again.

However, I'm a big fan of using PowerShell to work with Azure at every possible opportunity and a further reason for making a fresh start is that there is a new set of Azure PowerShell cmdlets for working with ARM (to avoid naming clashes with ASM cmdlets). To me it makes sense to start a new series of posts based on this new functionality.

As usual, the aim of this post isn't to teach you foundational concepts and if you need an introduction to ARM I have a Getting Started blog post with a collection of links to help you get going. Be aware that most of the resources pre-date the arrival of the new Azure PowerShell cmdlets. In the rest of this post we'll look at how to get up-and-running with the new cmdlets.

Install the new Azure PowerShell Cmdlets

First things first you'll need to install the new-style Azure PowerShell cmdlets. At the time of writing these were in preview, and the important thing to note is that the new version (1.0 or later) introduces breaking changes so do consider which machine you are installing them on. In the fullness of time we will be able to perform the installation from Web Platform Installer but initially at least it's a manual process. Details are available from the announcement blog here and Petri has a nice set of instructions as well.

Logging in has Completely Changed

If you have been used to logging in to Azure using the publish settings file method then you need to be aware that this method simply will not work with ARM since certificate-based authentication isn't supported. Instead you use the Login-AzureRmAccount cmdlet, which causes the Sign in to Microsoft Azure PowerShell dialog to display. What happens next depends on the type of account you attempt to log in with. If you use a Microsoft Account (typically this is the account your MSDN subscription is associated with) the dialog will recognise this and redirect you to a Sign in to your Microsoft account dialog. If you log in with an Azure AD account you are logged straight in -- assuming authentication is successful of course.

After a successful login you'll see the current ‘settings' for the login session. If you only have one Azure subscription associated with your login you are good to go. If you have more than one you may need to change to the subscription you want to use. The Get-AzureRMSubscription cmdlet will list your subscriptions and then there are a couple of options for changing subscription:
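The options are along these lines (the subscription name and GUID are placeholders):

    # Option 1 -- select by subscription name
    Select-AzureRmSubscription -SubscriptionName 'MSDN Enterprise'

    # Option 2 -- select by subscription id
    Select-AzureRmSubscription -SubscriptionId 'c99f75f5-33ab-4a5a-91d2-1558e77a5b6f'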

If using the second version obviously replace with your GUID. In case you were wondering the one above is made up...

The ASM equivalent of this cmdlet (Select-AzureSubscription) takes a -Default parameter to set the default subscription, but this seems to be missing in the ARM version -- hopefully only a temporary thing.

But I Don't Want to Type my Password Every Time I use Azure

When you log in using Login-AzureRmAccount it seems a token is set which expires after a period of time -- about 12 hours according to this source. This means that you are going to be logging in manually quite frequently which can get to be a chore and in any case is of little use in automated scripts. There is an answer although it doesn't feel as elegant as the publish settings file method.

The technique involves firstly saving your password to disk in encrypted format (a one-time operation) and then using your login and the encrypted password to create a pscredential object that can be used with the -Credential parameter of Login-AzureRmAccount. All the details you need are explained here; however, do note that this technique only works with an Azure AD account, and be aware that the PowerShell there pre-dates the new-style cmdlets. The resulting new-style code ends up something like this:
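A sketch of the approach -- the account name, file path and subscription GUID are placeholders:

    # One-time operation: save the password to disk as an encrypted standard string
    Read-Host -AsSecureString -Prompt 'Enter password' |
        ConvertFrom-SecureString |
        Out-File 'C:\Credentials\azure-ad-password.txt'

    # In scripts: rebuild the credential and log in without being prompted
    $username = 'automation@mytenant.onmicrosoft.com'
    $password = Get-Content 'C:\Credentials\azure-ad-password.txt' | ConvertTo-SecureString
    $credential = New-Object System.Management.Automation.PSCredential($username, $password)

    Login-AzureRmAccount -Credential $credential
    Select-AzureRmSubscription -SubscriptionId 'c99f75f5-33ab-4a5a-91d2-1558e77a5b6f'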

If you only have one Azure subscription you can of course simplify the above snippet by removing the subscription details. Is it a good idea to store even an encrypted password on disk? It doesn't feel good to me but it seems for the moment that this is what we need to use. The smart money is probably on using an Azure AD account with very limited privileges and then adding permissions to the account as required. Do let me know in the comments if a better technique emerges!

Cheers -- Graham

Getting Started with Azure Resource Manager

Posted by Graham Smith on November 11, 2015

Whether you have been working with Microsoft Azure for some time or are new to it there is one BIG thing you need to know about: there are now both ‘classic' and new ways of doing Azure. Classic is referred to as Azure Service Management (ASM) and new is known as Azure Resource Manager (ARM). Going forward ARM is definitely the way of the future so it makes sense to understand what it's all about and what it can offer. The link collection below is my pick of the best resources to help you get up to speed. If your time is limited then don't miss Trevor Sullivan's MTUG Norway video -- it's a gem.

One thing to keep firmly in mind as you work your way through the resources above is that just recently a new set of Azure PowerShell cmdlets for ARM was released in preview. These cmdlets represent a breaking change from the old cmdlets, so any code in the above resources is effectively soon going to be out of date. Having said that, on the surface the differences are not huge (mostly naming differences); however, under the covers I think things have changed, as I have come across an odd bug or two. If you are just starting out with ARM it's probably worth using the new cmdlets -- just beware they are in preview for a reason.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Start of a New Journey

Posted by Graham Smith on November 4, 2015

[Please note: Just a couple of weeks after publishing this post Microsoft changed the name of Visual Studio Online (VSO) to Visual Studio Team Services (VSTS). I've updated the title and URL of this post for consistency with future posts but the text below remains unchanged.]

I first started investigating how to implement continuous delivery with TFS -- working almost exclusively in Microsoft Azure -- nearly two years ago. Out of these investigations (backed up by practical experience where I work) came my original 24-post series on implementing continuous delivery with TFS and a shorter series covering continuous delivery with VSO.

Although the concepts that I covered in my original series haven't really changed the tooling certainly has -- only what you would expect in this fast-moving industry of ours of course. In particular there have been fundamental changes to the way Microsoft Azure works and we also have a brand new web-based implementation of Release Management coming our way. Additionally, there are aspects of continuous delivery that my original series didn't cover because the tooling I wanted to use simply wasn't in place or mature enough. Consequently it feels like the right time to start a brand new blog post series, and it is my intention in this post to set the scene for what's in store.

Aims of the new Series
  • Hopefully by now most people realise that despite its name VSO (Visual Studio Online) is Microsoft's cloud version of TFS. My original continuous delivery series focussed on TFS since the Release Management tooling didn't originally work with VSO. Although that eventually changed the story is now completely different and the original WPF-based Release Management has a brand new web-based successor. As with most new ALM features coming out of Microsoft this will initially be available in VSO. TFS 2015 will get the new release management tooling sometime later -- see here to keep track of when this might be. Despite the possible complications of different release timeframes I'm planning to make this new series of posts applicable to both TFS and VSO. This will hopefully avoid unnecessary repetition and allow anyone working through the series to pick either VSO or TFS and be confident that they can follow along without finding I have been focussing on one of the implementations to the detriment of the other.
  • Of all the things that can cause software to fail other than actual defects, application configuration is probably the one that is most troublesome. That's my experience anyway. However there is another factor that can cause problems which is the actual configuration of the server(s) the application is installed on. The big question here is how we can be sure that the configuration of the servers we tested on is the same in production, because if there are differences it could spell disaster. The answer -- commonly known as configuration as code -- is something I'm planning to address in this new series of posts using Microsoft's PowerShell DSC technology.
  • So we've got a process for managing the configuration of our server internals, but what about for actually creating the servers I hear you ask? It's an important point, since who doesn't want to be able to create infrastructure in an automated and repeatable way? I'll be addressing this requirement using the technologies provided by Azure Resource Manager, namely what I think are going to turn out to be idempotent PowerShell cmdlets and (as a different approach) JSON templates. For sure, you are unlikely to be using these technologies in an on premises situation however for me the important thing is to get hands-on experience of an infrastructure as code technology that helps me think strategically about this problem space.
  • I'm a huge advocate for IT people using cloud technologies to help them with their continuous learning activities and if you have an MSDN subscription you could have up to £95 worth of Microsoft Azure credits to use each month. Being able to create servers in Azure and take advantage of the many other services it offers opens up a whole world of possibilities that just a few years ago were out of reach for most of us. However, as well as being a useful learning tool I also feel strongly that most IT people should be learning cloud technologies as they will surely have an effect on most of our jobs at some point. Maybe not today, maybe not tomorrow but soon etc. Consequently, I use Azure both because it is a great place to build sandbox environments but also because I'm confident that learning Azure will help my future career. I hope you will feel the same way about cloud technologies, whether it's Azure or another offering.
  • Lastly, I'm planning to make each blog post shorter and to have a more specific theme. Something like the single responsibility principle for blogging. My hope is that shorter posts will make it easier for those ‘trying this at home' to follow along and will also make it easier to find where I've written about a specific piece of technology. Shorter posts will also help me as it will hopefully be an end to the nightmare blog post that takes several weeks to research, debug and explain in a coherent way.

Who is the new Series Aimed at?

Clearly I hope my blog posts will help as many people as possible. However I have purposefully chosen to work with a specific set of technologies and if this happens to be your chosen set then you are likely to get more direct mileage out of my posts than someone who uses different tools. If you do use different tools though I hope that you will still gain some benefit because many concepts are very similar. Using Chef or Puppet rather than PowerShell DSC? No problem -- go ahead and use those great tools. Your organisation has chosen Octopus Deploy as your release management tooling? My hope is that you should have little problem following along, using Octopus as a direct replacement for Microsoft's offering. As with my previous series I do assume a reasonable level of experience with the underlying technologies and for those for whom this is lacking I'll continue to publish Getting Started posts with link collections to help get up to speed with a topic.

I carry out my research activities with the benefit of an MSDN Enterprise subscription as this gives me access to all of Microsoft's tooling and also monthly Azure credits. If you don't have an MSDN subscription there are still ways you can follow along though. Anyone can sign up for a free VSO account and there is also a free Express version of TFS. Similarly there is a free Community version of Visual Studio and a free Express version of SQL Server. All this, combined with a 180-day evaluation of Windows Server which you could run using Hyper-V on a workstation with sufficient memory should allow you to get very close to the sort of setup that's possible with an MSDN account.

Looking to the Future

It might seem odd to be looking at the future at the beginning of a new blog post series however I can already see a time when this series is out of date and needs updating with a series that includes container technologies. However I'm purposefully resisting blogging about containers for the time being -- it feels just a bit too new and shiny for me at the moment and in any case there is no shortage of other people blogging in this space.

Happy learning!

Cheers -- Graham

Remote Desktop Connections to New-Style Azure VMs – Where Has The DNS Name Gone?

Posted by Graham Smith on September 17, 2015

If there's one thing that's certain about Microsoft Azure it's that it's constantly changing. If your interaction with Azure is through the old portal or through PowerShell this might not be too obvious, however if you've used the new portal to any extent then it's hard to miss. One of the obvious changes is the appearance of ‘classic' versions of resources such as Virtual machines (classic). For most people -- me included -- this will immediately pose the question "does classic mean there is a new way of doing things that we should all be using going forward?".

The short answer is ‘yes'. The slightly longer answer is that there are now two different ways to interact with Azure: Azure Service Management (ASM) and Azure Resource Manager (ARM). ASM is the classic stuff and ARM is the new world where resources you create live in Resource Groups. The recommendation is to use ARM if you can -- see this episode of Tuesdays with Corey for details.

I've been learning about ARM in recent weeks and one of the first things I did was to create a new VM in the new portal. It's not a whole lot different from the old portal once you get used to the new ‘blade' feature of the new portal, however one key difference between ASM and ARM VMs is that ARM VMs are no longer created under a cloud service. I didn't think much of it until I downloaded the RDP file for my new ARM VM only to find the computer name field was populated with an IP address and a port number. This is in contrast to ASM VMs where the computer name field is populated with the DNS name of the cloud service and a Remote Desktop port number for that VM.

So what's the problem? It's that MSDN users of Azure who need to make sure their credits don't disappear like to deallocate their VMs at the end of a session so they aren't costing anything. But when a VM is deallocated it loses its IP address, only to be allocated a new (almost certainly different) one when the VM is next started. This is a major pain if you are relying on the IP address to form part of the connection details in an RDP file or a Remote Desktop Connection Manager profile. This isn't a problem with ASM where the DNS name of the cloud service and Remote Desktop port number don't change even if the IP address of the cloud service changes (which it will if all VMs get deallocated).

So what to do? Initial investigations seemed to point to the need for a load balancer, and so (rather reluctantly it has to be said) I started to delve in to the details. However it quickly became clear that creating a load balancer and all its associated gubbins (which is quite a lot) was going to be something of a pain for a developer type of guy like me. And then...a breakthrough! Whilst looking at the settings of my VM's Public IP address in the portal I noticed a DNS name label (optional) field with a tooltip that gave the impression that filling this field in would give a DNS name for my VM.

[Screenshot: Azure portal -- Public IP address DNS name label setting]

So I gave my VM a DNS name label (I used the same name as the VM which fortunately resulted in a unique DNS name) and then changed the computer name of my RDP file to tst-core-dc.westeurope.cloudapp.azure.com. Result -- a successful login! This feels like problem solved for me at the moment and if you face this issue I hope it helps you out.

Cheers -- Graham