Continuous Delivery with TFS / VSTS – Installing a Domain Controller

Posted by Graham Smith on November 25, 2015 | 3 Comments

[Please note that I've edited this post since first publishing it to reflect new information and / or better ways of doing things. See revision list at the end of the post.]

This fourth blog post in my series on Continuous Delivery with TFS / VSTS picks up from the previous post, where we created some common infrastructure, and moves on to installing a domain controller. If that seems a little over the top, bear in mind that one of the aims of this series of blog posts is to help organisations with a traditional on-premises TFS installation implement continuous delivery. Typically a domain controller running Active Directory is going to be part of the mix.

Create the PRM-CORE Resource Group

I'm planning to create my enduring servers in a resource group called PRM-CORE so the first step is to create the group:
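Something along these lines should do it (a sketch only -- I'm assuming the same West Europe location used elsewhere in this series, so substitute your own region if different):

```powershell
# Create the resource group that will hold the enduring servers
New-AzureRmResourceGroup -Name "PRM-CORE" -Location "West Europe"
```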

Create the Domain Controller

My plan to create the domain controller in PowerShell came to an abrupt halt when the code bombed with an error that I couldn't fix. When I first started writing this post there was a bug in the new-style Azure PowerShell cmdlets that stopped a VM from being created where the storage account already exists in a different resource group from the one the VM will be created in. With a newer version of the cmdlets this has now changed to a warning: WARNING: Storage account, prmstorageaccount, is not found. The OS disk may be in a different storage group. As far as I can tell, despite the message VMs are now created correctly. Anyway, all this was too late for me as I had already created the VM (called PRM-CORE-DC) via the portal.

If you go down this route do make sure you set the DNS name label of the public IP address to the name of the VM (see my post here for more details about why you should do this). Other than that gotcha, creating a VM in the portal is pretty straightforward, but don't forget to specify the already-created premium storage account (if you have decided to go down the premium route as I have), the virtual network and the resource group created above. I created my DC as a Standard DS2 (since I'm planning for it to be doing quite a lot of work) running Windows Server 2012 R2 Datacenter.

Previously my DC would have been configured as a Standard A0 and I would leave it turned on (which costs pennies per day), but the DS2 burns through credits much faster so I'll be turning it off. This will all be scripted so the DC can be shut down last (and started up first), and I'll also be showing how to automate this in case a lapse of memory leaves your VMs turned on.

Preparing for the Domain Controller Role

Probably the first thing to know about creating a domain controller in Azure is that it always needs to have the same internal IP address. If you never turn it off then that will work, but the recommendation is to set a static internal IP address -- the first available one for the virtual network we are using is 10.0.0.4. You can do this with the following PowerShell, assuming the VM is turned off and the target IP address isn't already in use:
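Here's a sketch of the sort of thing required -- the network interface name is an assumption on my part (check what the portal created for you), but the pattern is to get the NIC, change its IP configuration and push it back:

```powershell
# Get the DC's network interface (the NIC name is an assumption -- use your own)
$nic = Get-AzureRmNetworkInterface -Name "prm-core-dc-nic" -ResourceGroupName "PRM-CORE"

# Switch the first IP configuration to a static address of 10.0.0.4
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.0.0.4"

# Push the change back to Azure
Set-AzureRmNetworkInterface -NetworkInterface $nic
```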

You can also do this via the new portal. With your VM shut down, navigate to its network interface and then to Settings > IP addresses. From there you can make the IP address static and set it to 10.0.0.4:

[Screenshot: Azure portal -- setting a static IP address]

The second thing to know about setting up a domain controller in Azure is that if the AD DS database, logs, and SYSVOL are not stored on a non-OS disk there is a risk of losing data. For a lightly used POC environment I'm happy to take the risk, but if you are not you'll need to add a data disk to your VM and specify this as the location for the AD DS database, logs, and SYSVOL during the installation of the AD role.

Installing Active Directory

This isn't a whole lot different in Azure from an on-premises installation, although there is one crucial step (see below) particular to Azure to ensure success. Of course if you are a developer you may not be installing AD on a regular basis so my previous statement may be less than helpful. Fear not as you can find a complete rundown of most of what you need to know here. In essence though the steps are as follows:

  • Before installing AD in Azure you need to temporarily set the virtual network to a Custom DNS of 127.0.0.1 for the Primary DNS server setting. See this post for more details. It's crucial to do this before you start installing AD.
  • Install the Active Directory Domain Services role via Server Manager > Add roles and features (if you prefer PowerShell, there's a sketch after this list).
  • The wizard steps are mostly straightforward but if you are unfamiliar with the process it may not be obvious that since we are starting from scratch you need to select Add a new forest on the Deployment Configuration step of the wizard.
  • You'll also need to specify a Root domain name. I chose prm.local.
  • With your VM restarted, make sure you complete the Reset the DNS server for the Azure virtual network instructions in the documentation. Essentially this is replacing the temporary 127.0.0.1 primary DNS server setting with the one for the DC, i.e. 10.0.0.4.
  • With AD up-and-running you'll probably want to navigate to Server Manager > Tools > Active Directory Users and Computers and create a user which you'll use to log on to servers when they have been added to the domain. It's not a best practice but you might find it useful if this user was in the Domain Admins group.
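If you prefer PowerShell to the Server Manager wizard, the role installation and forest creation can be scripted along these lines (a sketch only -- you'll still be prompted for the Safe Mode / DSRM password, and the wizard route described above works just as well):

```powershell
# Install the AD DS role and its management tools
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Create the new forest with the root domain name chosen above
Install-ADDSForest -DomainName "prm.local" -InstallDns
```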

That's it for this time. Use your Domain Admin powers wisely!

Cheers -- Graham


Revisions:

12/12/2015 -- Replaced adding a DNS forwarder pointing to Google's DNS server with setting the virtual network's Secondary DNS Server to 168.63.129.16 to allow access to the Internet.

2/1/2016 -- Updated to reflect my adoption of premium storage and also to remove the change above and replace it with a crucial technique for ensuring that the DNS root hints list on the DC is populated correctly.

 

Continuous Delivery with TFS / VSTS – Laying Foundations in Azure

Posted by Graham Smith on November 19, 2015 | No Comments

[Please note that I've edited this post since first publishing it to reflect new information and / or better ways of doing things. See revision list at the end of the post.]

In the previous post in this series on Continuous Delivery with TFS / VSTS we started working with the new-style Azure PowerShell cmdlets. In this post we use the cmdlets to lay some foundational building blocks in Azure.

Resource Groups

One of the new concepts in Azure Resource Manager (ARM) is that all resources live in Resource Groups which are really just containers. Over time best practices for resource groups will undoubtedly emerge but for the moment I'm planning to use resource groups as follows:

  • PRM-COMMON -- this resource group will be the container for shared infrastructure such as a storage account and virtual network and is the focus of this post.
  • PRM-CORE -- this resource group will be the container for enduring servers such as the domain controller and the TFS server.
  • PRM-$ENV$ -- these resource groups will be containers for servers that together form a specific environment. The plan is that these environments can be killed-off and completely recreated through ARM's JSON templates feature and PowerShell DSC.

I'm using PRM as my container prefix but this could be anything that suits you.

Creating PRM-COMMON

Once you have logged in to Azure PowerShell, creating PRM-COMMON is straightforward:
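Something like this (the location is simply the one I'm using -- pick whichever region suits you):

```powershell
# Create the resource group that will hold the shared infrastructure
New-AzureRmResourceGroup -Name "PRM-COMMON" -Location "West Europe"
```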

Note that you need to specify a location for the resource group -- something to consider as you'll probably want all your groups close together. If you want to visually verify the creation of the group head over to the Azure Portal and navigate to Resource groups.

Create a Storage Account

In order to keep things simple I prefer to have just one storage account for all of the virtual hard disks of all the VMs I create, although this isn't necessarily a best practice for some production workloads such as SQL Server. One thing that is worth investigating is premium storage, which uses SSDs rather than magnetic disks. I'd originally discounted this as it looked like it would be expensive against my Azure credits; however, after experiencing how VS 2015 runs on standard storage (slow on a Standard A4 VM) I investigated and found the extra cost to be marginal and the performance benefit huge. I recommend you run your own tests before committing to premium storage in case you are on a subscription where the costs may not outweigh the performance gains, but I'm sold and most of my VMs from now on will be on premium storage. To take advantage of premium storage you need a dedicated premium storage account:
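A sketch of the command needed is below -- the account name follows my naming convention and storage account names have to be globally unique, so choose your own:

```powershell
# Create a premium (SSD-backed) storage account in PRM-COMMON
New-AzureRmStorageAccount -ResourceGroupName "PRM-COMMON" -Name "prmstorageaccountp" `
    -Type "Premium_LRS" -Location "West Europe"
```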

Even though the resource group is in West Europe you still need to supply the location to the New-AzureRmStorageAccount cmdlet, and note that I've added a 'p' on the end of the name to indicate the account is a premium one, as I may well create a standard account for other purposes.

Create a Virtual Network

We also need to create a virtual network. Slightly more complicated but only just:
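Something along these lines, assuming a single subnet and an address space that accommodates the 10.0.0.x addresses used later in the series (the names and prefixes are mine -- adjust to taste):

```powershell
# Define a subnet and then create the virtual network in PRM-COMMON
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "Subnet-1" -AddressPrefix "10.0.0.0/24"

New-AzureRmVirtualNetwork -Name "prmvirtualnetwork" -ResourceGroupName "PRM-COMMON" `
    -Location "West Europe" -AddressPrefix "10.0.0.0/16" -Subnet $subnet
```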

Create a SendGrid Account for Email Services

In later posts we'll want to configure some systems to send email notifications. The simplest way to achieve this in Azure is to sign up for a free SendGrid email service account from the Azure Marketplace. At the time of writing the documentation is still for the classic Azure portal, which doesn't allow the SendGrid account to be created in a resource group. The new portal does though, so that's the one to go for. Navigate to Marketplace > Web + Mobile and search for sendgrid to display the SendGrid Email Delivery offering, which is what you want. Creating an account is straightforward -- I called mine PRMSMTP and created it in the PRM-COMMON resource group. Make sure you choose the free option.

Using Azure Resource Explorer

As an alternative to using the portal to examine the infrastructure we have just created, we can also use the browser-based Azure Resource Explorer. This very nifty tool allows you to drill down into your Azure subscriptions and see all the resources you have created and their properties. Give it a go!

Cheers -- Graham


Revisions:

31/12/15 -- Recommendation to investigate using premium storage.

 

Continuous Delivery with TFS / VSTS – A New Way of Working with Azure Resource Manager

Posted by Graham Smith on November 12, 2015 | 10 Comments

In this second post in my Continuous Delivery with TFS / VSTS series it's time to make a fresh start in Microsoft Azure. Whaddaya mean a fresh start? Well, for a little while now there has been a new way to interact with Azure, namely through a feature known as Azure Resource Manager or ARM. This is in contrast to the ‘old' way of doing things, which is now referred to as Azure Service Management or ASM. As I mentioned in a previous blog post, ARM is the way of the future, and this new series of blog posts is going to be based entirely on ARM, using the new portal (codename Ibiza) where portal interaction is necessary. I have lots of VMs created in ASM but the plan is to clear those down and start again.

However, I'm a big fan of using PowerShell to work with Azure at every possible opportunity and a further reason for making a fresh start is that there is a new set of Azure PowerShell cmdlets for working with ARM (to avoid naming clashes with ASM cmdlets). To me it makes sense to start a new series of posts based on this new functionality.

As usual, the aim of this post isn't to teach you foundational concepts and if you need an introduction to ARM I have a Getting Started blog post with a collection of links to help you get going. Be aware that most of the resources pre-date the arrival of the new Azure PowerShell cmdlets. In the rest of this post we'll look at how to get up-and-running with the new cmdlets.

Install the new Azure PowerShell Cmdlets

First things first, you'll need to install the new-style Azure PowerShell cmdlets. At the time of writing these were in preview, and the important thing to note is that the new version (1.0 or later) introduces breaking changes, so do consider which machine you are installing them on. In the fullness of time we will be able to perform the installation from the Web Platform Installer but initially at least it's a manual process. Details are available from the announcement blog here, and Petri has a nice set of instructions as well.

Logging in has Completely Changed

If you have been used to logging in to Azure using the publish settings file method then you need to be aware that this method simply will not work with ARM since certificate-based authentication isn't supported. Instead you use the Login-AzureRmAccount cmdlet, which causes the Sign in to Microsoft Azure PowerShell dialog to display. What happens next depends on the type of account you attempt to log in with. If you use a Microsoft Account (typically this is the account your MSDN subscription is associated with) the dialog will recognise this and redirect you to a Sign in to your Microsoft account dialog. If you log in with an Azure AD account you are logged straight in -- assuming authentication is successful of course.

After a successful login you'll see the current ‘settings' for the login session. If you only have one Azure subscription associated with your login you are good to go. If you have more than one you may need to change to the subscription you want to use. The Get-AzureRmSubscription cmdlet will list your subscriptions, and then there are a couple of options for changing subscription:
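The two options look something like this (the subscription name and GUID are placeholders):

```powershell
# Option 1 -- select the subscription by name
Select-AzureRmSubscription -SubscriptionName "My MSDN Subscription"

# Option 2 -- select the subscription by ID
Select-AzureRmSubscription -SubscriptionId "12345678-abcd-1234-abcd-1234567890ab"
```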

If using the second version obviously replace with your GUID. In case you were wondering the one above is made up...

The ASM equivalent of Select-AzureRmSubscription (Select-AzureSubscription) takes a -Default parameter to set the default subscription, but this seems to be missing in the ARM version -- hopefully only a temporary thing.

But I Don't Want to Type my Password Every Time I use Azure

When you log in using Login-AzureRmAccount it seems a token is set which expires after a period of time -- about 12 hours according to this source. This means that you are going to be logging in manually quite frequently which can get to be a chore and in any case is of little use in automated scripts. There is an answer although it doesn't feel as elegant as the publish settings file method.

The technique involves firstly saving your password to disk in encrypted format (a one-time operation) and then using your login and the encrypted password to create a pscredential object that can be used with the -Credential parameter of Login-AzureRmAccount. All the details you need are explained here however do note that this technique only works with an Azure AD account and also be aware that the PowerShell is pre new-style cmdlets. The resulting new-style code ends up something like this:
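The resulting code ends up something like the sketch below -- the file path, user name and subscription name are all placeholders for your own values:

```powershell
# One-time operation: save an encrypted copy of the password to disk
Read-Host -AsSecureString | ConvertFrom-SecureString | Out-File "C:\Azure\stored-password.txt"

# In subsequent scripts: rebuild the credential and log in without being prompted
$username = "automation@yourtenant.onmicrosoft.com"
$securePassword = Get-Content "C:\Azure\stored-password.txt" | ConvertTo-SecureString
$credential = New-Object System.Management.Automation.PSCredential($username, $securePassword)

Login-AzureRmAccount -Credential $credential
Select-AzureRmSubscription -SubscriptionName "My MSDN Subscription"
```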

If you only have one Azure subscription you can of course simplify the above snippet by removing the subscription details. Is it a good idea to store even an encrypted password on disk? It doesn't feel good to me but it seems for the moment that this is what we need to use. The smart money is probably on using an Azure AD account with very limited privileges and then adding permissions to the account as required. Do let me know in the comments if a better technique emerges!

Cheers -- Graham

Getting Started with Azure Resource Manager

Posted by Graham Smith on November 11, 2015 | No Comments

Whether you have been working with Microsoft Azure for some time or are new to it there is one BIG thing you need to know about: there are now both ‘classic' and new ways of doing Azure. Classic is referred to as Azure Service Management (ASM) and new is known as Azure Resource Manager (ARM). Going forward ARM is definitely the way of the future, so it makes sense to understand what it's all about and what it can offer. The link collection below is my pick of the best resources to help you get up to speed. If your time is limited then don't miss Trevor Sullivan's MTUG Norway video -- it's a gem.

One thing to keep firmly in mind as you work your way through the resources above is that just recently a new set of Azure PowerShell cmdlets for ARM was released in preview. These cmdlets represent a breaking change from the old cmdlets, so any code in the above resources is effectively soon going to be out of date. Having said that, on the surface the differences are not huge (mostly naming differences); however, under the covers I think things have changed, as I have come across an odd bug or two. If you are just starting out with ARM it's probably worth using the new cmdlets -- just beware they are in preview for a reason.

Cheers -- Graham

Continuous Delivery with TFS / VSTS – Start of a New Journey

Posted by Graham Smith on November 4, 2015 | No Comments

[Please note: Just a couple of weeks after publishing this post Microsoft changed the name of Visual Studio Online (VSO) to Visual Studio Team Services (VSTS). I've updated the title and URL of this post for consistency with future posts but the text below remains unchanged.]

I first started investigating how to implement continuous delivery with TFS -- working almost exclusively in Microsoft Azure -- nearly two years ago. Out of these investigations (and backed up by practical experience where I work) came my original 24-post series on implementing continuous delivery with TFS and a shorter series covering continuous delivery with VSO.

Although the concepts that I covered in my original series haven't really changed, the tooling certainly has -- only what you would expect in this fast-moving industry of ours, of course. In particular there have been fundamental changes to the way Microsoft Azure works and we also have a brand new web-based implementation of Release Management coming our way. Additionally, there are aspects of continuous delivery that my original series didn't cover because the tooling I wanted to use simply wasn't in place or mature enough. Consequently it feels like the right time to start a brand new blog post series, and it is my intention in this post to set the scene for what's in store.

Aims of the new Series
  • Hopefully by now most people realise that despite its name VSO (Visual Studio Online) is Microsoft's cloud version of TFS. My original continuous delivery series focussed on TFS since the Release Management tooling didn't originally work with VSO. Although that eventually changed the story is now completely different and the original WPF-based Release Management has a brand new web-based successor. As with most new ALM features coming out of Microsoft this will initially be available in VSO. TFS 2015 will get the new release management tooling sometime later -- see here to keep track of when this might be. Despite the possible complications of different release timeframes I'm planning to make this new series of posts applicable to both TFS and VSO. This will hopefully avoid unnecessary repetition and allow anyone working through the series to pick either VSO or TFS and be confident that they can follow along without finding I have been focussing on one of the implementations to the detriment of the other.
  • Of all the things that can cause software to fail other than actual defects, application configuration is probably the one that is most troublesome. That's my experience anyway. However, there is another factor that can cause problems, which is the actual configuration of the server(s) the application is installed on. The big question here is how we can be sure that the configuration of the servers we tested on is the same as in production, because if there are differences it could spell disaster. Commonly known as configuration as code, this is an issue I'm planning to address in this new series of posts using Microsoft's PowerShell DSC technology.
  • So we've got a process for managing the configuration of our server internals, but what about for actually creating the servers I hear you ask? It's an important point, since who doesn't want to be able to create infrastructure in an automated and repeatable way? I'll be addressing this requirement using the technologies provided by Azure Resource Manager, namely what I think are going to turn out to be idempotent PowerShell cmdlets and (as a different approach) JSON templates. For sure, you are unlikely to be using these technologies in an on-premises situation; however, for me the important thing is to get hands-on experience of an infrastructure as code technology that helps me think strategically about this problem space.
  • I'm a huge advocate for IT people using cloud technologies to help them with their continuous learning activities and if you have an MSDN subscription you could have up to £95 worth of Microsoft Azure credits to use each month. Being able to create servers in Azure and take advantage of the many other services it offers opens up a whole world of possibilities that just a few years ago were out of reach for most of us. However, as well as being a useful learning tool I also feel strongly that most IT people should be learning cloud technologies as they will surely have an effect on most of our jobs at some point. Maybe not today, maybe not tomorrow but soon etc. Consequently, I use Azure both because it is a great place to build sandbox environments but also because I'm confident that learning Azure will help my future career. I hope you will feel the same way about cloud technologies, whether it's Azure or another offering.
  • Lastly, I'm planning to make each blog post shorter and to have a more specific theme. Something like the single responsibility principle for blogging. My hope is that shorter posts will make it easier for those ‘trying this at home' to follow along and will also make it easier to find where I've written about a specific piece of technology. Shorter posts will also help me as it will hopefully be an end to the nightmare blog post that takes several weeks to research, debug and explain in a coherent way.
Who is the new Series Aimed at?

Clearly I hope my blog posts will help as many people as possible. However I have purposefully chosen to work with a specific set of technologies and if this happens to be your chosen set then you are likely to get more direct mileage out of my posts than someone who uses different tools. If you do use different tools though I hope that you will still gain some benefit because many concepts are very similar. Using Chef or Puppet rather than PowerShell DSC? No problem -- go ahead and use those great tools. Your organisation has chosen Octopus Deploy as your release management tooling? My hope is that you should have little problem following along, using Octopus as a direct replacement for Microsoft's offering. As with my previous series I do assume a reasonable level of experience with the underlying technologies and for those for whom this is lacking I'll continue to publish Getting Started posts with link collections to help get up to speed with a topic.

I carry out my research activities with the benefit of an MSDN Enterprise subscription as this gives me access to all of Microsoft's tooling and also monthly Azure credits. If you don't have an MSDN subscription there are still ways you can follow along though. Anyone can sign up for a free VSO account and there is also a free Express version of TFS. Similarly there is a free Community version of Visual Studio and a free Express version of SQL Server. All this, combined with a 180-day evaluation of Windows Server which you could run using Hyper-V on a workstation with sufficient memory should allow you to get very close to the sort of setup that's possible with an MSDN account.

Looking to the Future

It might seem odd to be looking at the future at the beginning of a new blog post series however I can already see a time when this series is out of date and needs updating with a series that includes container technologies. However I'm purposefully resisting blogging about containers for the time being -- it feels just a bit too new and shiny for me at the moment and in any case there is no shortage of other people blogging in this space.

Happy learning!

Cheers -- Graham

Remote Desktop Connections to New-Style Azure VMs – Where Has The DNS Name Gone?

Posted by Graham Smith on September 17, 2015 | 2 Comments

If there's one thing that's certain about Microsoft Azure it's that it's constantly changing. If your interaction with Azure is through the old portal or through PowerShell this might not be too obvious, however if you've used the new portal to any extent then it's hard to miss. One of the obvious changes is the appearance of ‘classic' versions of resources such as Virtual machines (classic). For most people -- me included -- this will immediately pose the question "does classic mean there is a new way of doing things that we should all be using going forward?".

The short answer is ‘yes'. The slightly longer answer is that there are now two different ways to interact with Azure: Azure Service Management (ASM) and Azure Resource Manager (ARM). ASM is the classic stuff and ARM is the new world where resources you create live in Resource Groups. The recommendation is to use ARM if you can -- see this episode of Tuesdays with Corey for details.

I've been learning about ARM in recent weeks and one of the first things I did was to create a new VM in the new portal. It's not a whole lot different from the old portal once you get used to the new ‘blade' feature of the new portal, however one key difference between ASM and ARM VMs is that ARM VMs are no longer created under a cloud service. I didn't think much of it until I downloaded the RDP file for my new ARM VM only to find the computer name field was populated with an IP address and a port number. This is in contrast to ASM VMs where the computer name field is populated with the DNS name of the cloud service and a Remote Desktop port number for that VM.

So what's the problem? It's that MSDN users of Azure who need to make sure their credits don't disappear like to deallocate their VMs at the end of a session so they aren't costing anything. But when a VM is deallocated it loses its IP address, only to be allocated a new (almost certainly different) one when the VM is next started. This is a major pain if you are relying on the IP address to form part of the connection details in an RDP file or a Remote Desktop Connection Manager profile. This isn't a problem with ASM where the DNS name of the cloud service and Remote Desktop port number don't change even if the IP address of the cloud service changes (which it will if all VMs get deallocated).

So what to do? Initial investigations seemed to point to the need for a load balancer, and so (rather reluctantly it has to be said) I started to delve in to the details. However it quickly became clear that creating a load balancer and all its associated gubbins (which is quite a lot) was going to be something of a pain for a developer type of guy like me. And then...a breakthrough! Whilst looking at the settings of my VM's Public IP address in the portal I noticed a DNS name label (optional) field with a tooltip that gave the impression that filling this field in would give a DNS name for my VM.

[Screenshot: Azure portal -- virtual machine DNS name label]

So I gave my VM a DNS name label (I used the same name as the VM which fortunately resulted in a unique DNS name) and then changed the computer name of my RDP file to tst-core-dc.westeurope.cloudapp.azure.com. Result -- a successful login! This feels like problem solved for me at the moment and if you face this issue I hope it helps you out.
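Incidentally, if you'd rather script the label than set it in the portal, something like the following sketch should work with the new-style cmdlets -- the public IP and resource group names here are assumptions, so use whatever yours are called:

```powershell
# Get the VM's public IP address resource (names are assumptions -- use your own)
$pip = Get-AzureRmPublicIpAddress -Name "tst-core-dc-pip" -ResourceGroupName "TST-CORE"

# Set the DNS name label and push the change back to Azure
$pip.DnsSettings = New-Object Microsoft.Azure.Commands.Network.Models.PSPublicIpAddressDnsSettings
$pip.DnsSettings.DomainNameLabel = "tst-core-dc"
Set-AzureRmPublicIpAddress -PublicIpAddress $pip
```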

Cheers -- Graham

Azure Automation Fails with “AADSTS50055: Password is expired.”

Posted by Graham Smith on January 24, 2015 | 10 Comments

A while back I posted on how to set up Azure Automation to ensure your VMs get shut down if you accidentally leave them running. Very important for those of us with MSDN accounts that need to preserve Azure credits.

A few days ago when starting the runbook manually I noticed that it had no effect and my VMs didn't shut down. On investigation from the Azure Portal (Automation > $(AutomationAccount) > Runbooks > $(Runbook) > Jobs > $(CompletedJobThatIsFailing) > History) I saw that the job was throwing an exception:

Add-AzureAccount : AADSTS70002: Error validating credentials. AADSTS50055: Password is expired.
Trace ID: 4f7030e5-5d95-4c91-8b64-606231e3b056
Correlation ID: 7c2eb266-bf31-45ec-bb72-2677badd8ad3
Timestamp: 2015-01-22 00:32:53Z: The remote server returned an error: (401) Unauthorized.
At Stop-AzureVMExceptDomainController:5 char:5
+
+ CategoryInfo : CloseError: (:) [Add-AzureAccount], AadAuthenticationFailedException
+ FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.Profile.AddAzureAccount

So, the password on the automation account must have expired. I went through the procedure of resetting it following the original instructions for creating the automation account here and sure enough everything sprung to life again.

Two things are troublesome about all this: firstly, I had no idea that my password had expired, and secondly, I don't want it to expire. It seems that if you don't do anything passwords will expire after 90 days. You can fix this using PowerShell, but there are one or two hoops:

  1. Install the Microsoft Online Services Sign-In Assistant for IT Professionals RTW.
  2. Install the Azure Active Directory Module for Windows PowerShell (64-bit version).
  3. Import the AD module using the import-module MSOnline command in PowerShell.
  4. Connect to Azure AD and change the automation account so the password doesn't expire -- see the first snippet after this list.
  5. That snippet prompts you to supply credentials. However, for me my Microsoft account didn't work -- it just kept causing an exception. I had to use a Windows Azure Active Directory account and ended up creating a new account with the Global Administrator role. Looking back I might have been able to give the automation account I was trying to change the Global Administrator role rather than create a new one -- feel free to try this first.
  6. If you want to check the password expiry status of an Azure AD account, use the second snippet after this list.
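The two snippets referred to above end up something like this -- the user principal name is a placeholder for the account you want to change:

```powershell
# Snippet 1: connect to Azure AD (prompts for credentials) and stop the password expiring
Connect-MsolService
Set-MsolUser -UserPrincipalName "automation@yourtenant.onmicrosoft.com" -PasswordNeverExpires $true

# Snippet 2: check the password expiry status of an Azure AD account
Get-MsolUser -UserPrincipalName "automation@yourtenant.onmicrosoft.com" |
    Select-Object UserPrincipalName, PasswordNeverExpires
```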

With the expiry taken care of, it's time to wonder if there is some notification scheme in place for this. I noticed that I hadn't set an alternate email address on the automation account. I have now, but of course it's too late to know if that's the notification route. One for the forums I think...

Cheers -- Graham

Azure VM Reporting “The network location cannot be reached”

Posted by Graham Smith on January 19, 2015 | 12 Comments

For the past few days my TFS Admin server in Azure had been unable to access a share on the domain controller in the same Azure network, failing with a "The network location cannot be reached" error. The server was logging on to the domain okay, could ping the DC and could resolve domain users when setting permissions on a share (for example), and other machines could see the Drop folder on the TFS Admin machine. It wasn't the DC at fault, as all other machines could see it and access the share on the DC.

I spent ages Googling and checking DNS and other such settings but everything checked out normal. So I took the plunge and removed the server from the domain, removed its entry from the Computers section of Active Directory, rebooted the offending server and tried to rejoin the domain. Nooooooo! Exactly the same error message trying to join the domain, with a "The machine ALMTFSADMIN attempted to join the domain but failed. The error code was 1231." error in the System Event Log.

Fearing a complete rebuild of my TFS demo infrastructure coming on, I desperately tried the good old netsh int ip reset command (yes, it seems it can be done on a VM without trashing the network connection and possibly locking yourself out) but no change, and then I tried uninstalling the File and Printer Sharing for Microsoft Networks service of the network adapter. Still nothing doing!

I then came across a post where a respondent advised removing orphaned network adapters. I knew there was something funny going on here since my network adapter name was Microsoft Hyper-V Network Adapter #149, and a new adapter was presumably being installed every time the machine booted from cold (i.e. from the Stopped (Deallocated) state). With nothing to lose I opened up Device Manager, selected View > Show hidden devices and then expanded Network adapters. Sure enough there was a monster list of orphaned Microsoft Hyper-V Network Adapter entries -- 148 to be precise. Breaking all rules about automating tasks that you do more than once, I feverishly uninstalled all the entries manually bar the current one. Then rebooted. And with bated breath tried to rejoin the domain...with success!

Whew! Lucky escape this time. Follow-up actions are to find out why this is happening and how to uninstall these in a more automated way -- with the PowerShell Remove-VMNetworkAdapter cmdlet for example? Do leave a comment if you have any insight!

Cheers -- Graham

Continuous Delivery with TFS: Standing up an Environment

Posted by Graham Smith on January 9, 2015 | No Comments

At this point in my series of posts on building a continuous delivery pipeline with TFS we have installed and configured quite a lot of the TFS infrastructure that we will need; however, as yet we don't have an environment to deploy our sample application to. We'll attend to that in this post, but first a few words about the bigger environments picture.

Thinking About the Bigger Environments Picture

When thinking about what can cause an application deployment to fail the first things we probably think about are the application code itself and then any configuration settings the code needs in order to run. That's only one side of the coin though, and the configuration of the environments we are deploying to might equally be to blame when things go wrong. It makes sense then that we need tooling to manage both the application and its configuration and also the environments we deploy to. The TFS ecosystem addresses the former but doesn't address the latter. So what does? There are two large pieces to consider: provisioning the servers that form the environment our application needs to run in and then the actual configuration of those servers our application will run on.

Provisioning servers is very much a horses for courses affair: what you do depends on where your servers will run and whether they need to be created afresh each time they are used. On premises it might be fine for servers to be long-lived and always on. In the cloud you may want the ability to create a test environment for a specific task and then tear it down afterwards to save costs. In the Windows world dynamically creating environments can be achieved pretty much anywhere using scripting tools such as PowerShell.  In Azure there are further options since there is now tooling such as Brewmaster or Resource Manager that can create servers using templates that describe what is required.

Managing the internal configuration of servers using tooling once they have been created can and should be done wherever your servers are running. The idea is that the only way a server gets changed is through the tooling, so that if the server needs to be recreated (or more of the same are needed) it's pretty much a push-button exercise. (This is probably in contrast to the position most organisations are in today where servers are tweaked by hand as required until they become works of art that nobody knows how to faithfully recreate. If you are in the position of needing to know the configuration of an existing server then a product such as GuardRail can probably help.) The tooling to manage server configuration includes Puppet, Chef and Microsoft's new offering in this space, PowerShell Desired State Configuration (DSC).

The techniques and tools mentioned above are definitely the way to go; however, in this post I'm ignoring all that because the aim is to get the simplest possible thing working (although researching and writing about automating infrastructure and server configuration is on my list). Additionally, because we are setting up a demo environment we'll be working in multi-tenancy mode, i.e. multiple environments are hosted on the same server to keep the number of VMs that will be required to a minimum.

Provisioning Web and Database Servers

Our sample application consists of a web front end talking to a SQL Server database so we'll need two servers -- ALMWEB01 running IIS and ALMSQL01 running SQL Server 2014 or whatever version makes you happy. I use a basic A2 VM for the web server and a basic A3 for SQL Server, both created from a Windows Server 2012 R2 Datacenter image and configured as per my Azure foundations post. Once stood up the VMs need joining to the domain and configuring for their server roles, the details of which I'm not covering here as I'm assuming you already know or can learn. One point to note is that you need to resist the temptation to create the SQL Server VM from a preconfigured SQL Server image from the gallery. This will eat your Azure credits as you pay for the SQL Server licence time with these images. Rather, download SQL Server from MSDN and do the installation yourself on a vanilla Windows Server VM.
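For reference, standing up a VM like ALMWEB01 with the classic cmdlets looks roughly like the sketch below -- the cloud service name, credentials and image filter are placeholders, and joining the domain and configuring the server role still needs to be done afterwards as described above:

```powershell
# Find the latest Windows Server 2012 R2 Datacenter image (classic/ASM cmdlets)
$imageName = (Get-AzureVMImage |
    Where-Object { $_.Label -like "Windows Server 2012 R2 Datacenter*" } |
    Sort-Object PublishedDate -Descending |
    Select-Object -First 1).ImageName

# Build the VM configuration and create the VM in a new cloud service
New-AzureVMConfig -Name "ALMWEB01" -InstanceSize "Basic_A2" -ImageName $imageName |
    Add-AzureProvisioningConfig -Windows -AdminUsername "almadmin" -Password "UseAStrongPasswordHere1!" |
    New-AzureVM -ServiceName "prm-alm-environments" -Location "West Europe"
```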

Installing the Release Management Deployment Agent

The final configuration we need to perform to make these VMs ready to participate in the delivery pipeline is to install the Release Management Deployment Agent. The agent needs to run with a service account so create a domain account (I use ALM\RMDEPLOYER) and add it to the Administrators group of the two servers. Next open up the Release Management client and add this account (Administration > Manage Users) giving it the Service User role. Back on your deployment servers you can now run the Deployment Agent install and provide the appropriate configuration details:

[Screenshot: Release Management Deployment Agent configuration]

After clicking Apply settings the installer will run through its list of items to configure and if there are no errors the agent will be up-and-running and ready to communicate with the Release Management server. To check this open up the Release Management client and navigate to Configure Paths > Servers. Click on the down arrow next to the New button and choose Scan for Agents. This will bring up the Unregistered Servers dialog which allows one to scan for and then register servers. If all is well you'll be able to register your two servers which should appear in the Servers tab as follows:

[Screenshot: Release Management registered servers]

We're not quite ready to start building a pipeline with Release Management yet and in the next instalment we'll carry out the configuration steps that will take us to that point.

Cheers -- Graham

Continuous Delivery with TFS: Provisioning a Visual Studio Development Machine

Posted by Graham Smith on December 9, 2014 | No Comments

In this instalment of my series on building a continuous delivery pipeline with TFS we look at provisioning a Visual Studio development machine. Although we installed Visual Studio on the TFS admin server to support the build process and you may be thinking you could use this instance, in my experience it's sluggish because of all the other components that are installed. You might also have a physical machine on which Visual Studio is installed and you may be wondering if you could use this. Assuming that there are no complications such as firewalls the answer is a cautious yes -- my initial Azure setup involved connecting to a publicly accessible TFS endpoint and it was mostly okay. In this scenario though your development machine isn't part of your Azure network and the experience is a bit clunky. This can be resolved by configuring a Site-to-Site VPN but that isn't something I've done and isn't something I'm going to cover in this series. Rather, my preference is to provision and use an Azure-based development machine. In fact I like this solution so much I don't bother maintaining a physical PC for Visual Studio research and learning any more -- for me it's Azure all the way.

So if like me you decide to go down the Azure path you have a couple of options to choose from, assuming you have an MSDN subscription. You can create a VM with your chosen operating system and install any version of Visual Studio that your MSDN subscription level allows. Alternatively you can create a VM from the gallery with Visual Studio already installed (in the Portal there is a check box to display MSDN images). The first thing to note is that only MSDN subscribers get the ability to run desktop versions of Windows as VMs, so if you don't have MSDN you will need to run a version of Windows Server. The second thing to note, if you are opting for a pre-installed version of Visual Studio, is that just because you see Ultimate listed doesn't mean you have access to it. In order to activate Visual Studio Ultimate you will need to log in to Visual Studio with an MSDN Ultimate subscription or provide an Ultimate licence key. I've been there and suffered the disappointment. I have mentioned this to Microsoft but at the time of writing it hasn't been rectified.

With all that out of the way, my preference is to create a VM and install Visual Studio myself as I like the flexibility of choosing which components I want installed. Whichever route you choose, ensure that you add this VM to your domain if you are using a domain controller and that you add the domain account that you will use for everyday access with appropriate privileges. You'll also want to make sure that VS is updated to the latest version and that all Windows updates have been applied.

As a final step you might want to install any other development tools that you regularly use. As a minimum you should probably install the Microsoft Visual Studio Team Foundation Server 2013 Power Tools. These provide many useful extras but the one I use regularly is the ability to perform source control operations from Windows Explorer.

Cheers -- Graham