Archives by Graham Smith

Getting Started with Release Management for Visual Studio

Posted by Graham Smith on December 30, 2014

Release Management for Visual Studio (also known as Visual Studio Release Management) is the TFS ecosystem component that manages the deployment of code to each stage of the delivery pipeline. Microsoft bought it from InCycle in 2013, where it started life as the InRelease product (which was built specifically to work with TFS). Microsoft are gradually making it their own and adding new features and capabilities. Although there is a good selection of tools for managing code deployments in the marketplace, the fact that Release Management is included with most versions of MSDN makes it an obvious choice for anyone developing applications and building continuous delivery pipelines with TFS via the MSDN route. Originally you had to purchase extra licences for each server you planned to deploy to, however that requirement is being scrapped completely from the beginning of January 2015, so if you are covered by one of the MSDN versions there are no extra costs.

Because Release Management is fairly new on the scene there isn't the wealth of learning resources that are available for other parts of TFS. The list is growing though -- here's my pick for anyone looking to get started:

If you are looking for a quick and relatively straightforward way to get some hands-on experience of Release Management then the Brian Keller VM could be a good option. You can download it for running under Hyper-V or alternatively run it as an Azure VM. If you choose the latter option then this post could help you, although bear in mind that it was written in mid-2013 and some aspects of Azure may have changed. Unless you have a very fast Internet connection you will more than likely want to use an Azure VM to perform the downloading rather than round-trip the VM via Earth.

Cheers -- Graham

Continuous Delivery with TFS: Our Sample Application

Posted by Graham Smith on December 27, 2014

In this post that is part of my series on implementing continuous delivery with TFS we look at the sample application that will be used to illustrate various aspects of the deployment pipeline. I've chosen Microsoft's fictional Contoso University ASP.NET MVC application as it comprises components that need to be deployed to a web server and a database server and it lends itself to (reasonably) easily demonstrating automated acceptance testing. You can download the completed application here and read more about its construction here.

Out of the box Contoso University uses Entity Framework Code First Migrations for database development, however this isn't what I would use for enterprise-level software development. Instead I recommend using a tool such as Microsoft's SQL Server Data Tools (SSDT), and more specifically the SQL Server Database Project component of SSDT that can be added to a Visual Studio solution. The main focus of this post is on converting Contoso University to use a SQL Server Database Project, and if you are not already up to speed with this technology I have a Getting Started post here. Please note that I don't describe every mouse-click below, so some familiarity will be essential. I'm using the version of LocalDb that corresponds to SQL Server 2012 as this is what Contoso University has been developed against. If you want to use the LocalDb that corresponds to SQL Server 2014 ((localdb)\ProjectsV12) then it will probably work, but watch out for glitches. So, there is a little bit of work to do to get Contoso University converted, and this post will take us to the point of readying it for configuration with TFS.

Getting Started
  1. Download the Contoso University application using the link above, then unblock and extract the zip to a convenient temporary location.
  2. Navigate to ContosoUniversity.sln and open the solution. Build the solution, which should cause the NuGet packages to be restored using the Automatic Package Restore method.
  3. From the Package Manager Console issue an Update-Database command (you may have to close down and restart Visual Studio for the command to become available); a minimal sketch of the command appears after this list. This should cause a ContosoUniversity2 database (including data) to be created in LocalDb. (You can verify this by opening the SQL Server Object Explorer window and expanding the (LocalDb)\v11.0 node. ContosoUniversity2 should be visible in the Databases folder. Check that data has been added to the tables as we're going to need it.)
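As promised, a minimal sketch of the step 3 command, assuming the Package Manager Console's default project is set to the ContosoUniversity project:

    # Run from the Visual Studio Package Manager Console (Tools > NuGet Package
    # Manager > Package Manager Console); -Verbose shows the SQL being executed.
    Update-Database -Verbose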
Remove EF Code First Migrations
  1. Delete SchoolInitializer.cs from the DAL folder.
  2. Delete the DatabaseInitializer configuration from Web.config (this will probably be commented out already, but I'm removing it for completeness' sake); it will look something like the sketch after this list.
  3. Remove the Migrations folder and all its contents.
  4. Expand the ContosoUniversity2 database from the SQL Server Object Explorer window and delete dbo.__MigrationHistory from the Tables folder.
  5. Run the solution to check that it still builds and data can be edited.
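For reference, the configuration being removed in step 2 typically lives in the entityFramework section of Web.config and looks something like the following sketch -- the type names here are illustrative and may not match your copy of the solution exactly:

    <entityFramework>
      <contexts>
        <!-- Illustrative only: the initializer configured (or commented out) in your copy may differ -->
        <context type="ContosoUniversity.DAL.SchoolContext, ContosoUniversity">
          <databaseInitializer type="ContosoUniversity.DAL.SchoolInitializer, ContosoUniversity" />
        </context>
      </contexts>
    </entityFramework>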
Configure the solution to work with a SQL Server Database Project (SSDP)
  1. Add an SSDP called ContosoUniversity.Database to the solution.
  2. Import the ContosoUniversity2 database to the new project using default values.
  3. In the ContosoUniversity.Database properties enable Code Analysis in the Code Analysis tab.
  4. Create and save a publish profile called CU-DEV.publish.xml to publish to a database called CU-DEV on (LocalDb)\v11.0.
  5. In Web.config change the SchoolContext connection string to point to CU-DEV (see the sketch after this list).
  6. Build the solution to check there are no errors.
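As a rough guide, after step 5 the SchoolContext connection string should look something like this sketch -- the server and database names are the ones used in this post, but treat the remaining attributes as illustrative:

    <add name="SchoolContext"
         connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=CU-DEV;Integrated Security=True"
         providerName="System.Data.SqlClient" />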
Add Dummy Data

The next step is to provide the facility to add dummy data to a newly published version of the database. There are a couple of techniques for doing this depending on requirements -- the one I'm demonstrating only adds the dummy data if a table contains no rows, thus ensuring that a live database can't get polluted. I'll be extracting the data from ContosoUniversity2 and I'll want to maintain existing referential integrity, so I'll be using SET IDENTITY_INSERT ON | OFF on some tables to insert values into primary key columns that have the identity property set. First create a new folder in the SSDP called ReferenceData (or whatever pleases you) and then add a post deployment script file (Script.PostDeployment.sql) to the root of the ContosoUniversity.Database project (note there can only be one of these). Then follow this general procedure for each table:

  1. In the SQL Server Object Explorer window expand the tree to display the ContosoUniversity2 database tables.
  2. Right click a table and choose View Data. From the table's toolbar click the Script icon to create the T-SQL to insert the data (SET IDENTITY_INSERT ON | OFF should be added by the scripting engine where required).
  3. Amend the script with an IF statement so that the insert will only take place if the table is empty. The resulting script should look similar to the first sketch after this list.
  4. Save the file in the ReferenceData folder in the format TableName.data.sql and add it to the solution as an existing item.
  5. Use SQLCMD syntax to call the file from the post deployment script file. (The order in which the table inserts are executed will need to cater for referential integrity; Person, Department, Course, CourseInstructor, Enrollment and OfficeAssignment should work.) When editing Script.PostDeployment.sql, enabling the SQLCMD Mode toolbar button will turn off Transact-SQL IntelliSense and stop spurious 'errors' from being highlighted.
  6. When all the ReferenceData files have been processed, Script.PostDeployment.sql should look something like the second sketch after this list.
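By way of illustration, here is a minimal sketch of a per-table script (step 3), using the Department table as an example -- the column names follow the Contoso University schema but the rows shown are indicative only:

    IF NOT EXISTS (SELECT 1 FROM [dbo].[Department])
    BEGIN
        SET IDENTITY_INSERT [dbo].[Department] ON
        INSERT INTO [dbo].[Department] ([DepartmentID], [Name], [Budget], [StartDate], [InstructorID])
        VALUES (1, N'English', 350000, N'2007-09-01', 2),
               (2, N'Mathematics', 100000, N'2007-09-01', 4)
        SET IDENTITY_INSERT [dbo].[Department] OFF
    END

And a sketch of the finished post deployment script (step 6), where each reference data file is pulled in with the SQLCMD :r directive in an order that respects the foreign key relationships:

    /*
     Post-Deployment Script
    */
    :r .\ReferenceData\Person.data.sql
    :r .\ReferenceData\Department.data.sql
    :r .\ReferenceData\Course.data.sql
    :r .\ReferenceData\CourseInstructor.data.sql
    :r .\ReferenceData\Enrollment.data.sql
    :r .\ReferenceData\OfficeAssignment.data.sql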

You should now be able to use CU-DEV.publish.xml to publish a database called CU-DEV to LocalDB that contains both schema and data, and which works in the same way as the database created by EF Code First Migrations.
Finishing Touches

For the truly fussy among us (that's me) who like neat and tidy project names in Visual Studio solutions, there is an optional set of configuration steps that can be performed:

  1. Remove the ContosoUniversity ASP.NET MVC project from the solution and rename it to ContosoUniversity.Web. In the file system rename the containing folder to ContosoUniversity.Web.
  2. Add the renamed project back in to the solution and from the Application tab of the project's Properties change the Assembly name and Default namespace to ContosoUniversity.Web.
  3. Perform the following search and replace actions:
    namespace ContosoUniversity > namespace ContosoUniversity.Web
    using ContosoUniversity > using ContosoUniversity.Web
    ContosoUniversity.ViewModels > ContosoUniversity.Web.ViewModels
    ContosoUniversity.Models > ContosoUniversity.Web.Models
  4. You may need to close the solution and reopen it before checking that nothing is broken and the application runs without errors.

That's it for the moment. In the next post in this series I'll explain how to get the solution under version control in TFS and how to implement continuous integration.

Cheers -- Graham

Continuous Delivery with TFS: Pausing to Consider the Big Picture

Posted by Graham Smith on December 18, 2014

In this fifth post in my series about building a continuous delivery pipeline with TFS we pause the practical work and consider the big picture. If you have taken my advice and started to read the Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation book you will know that continuous delivery (or deployment) pipelines are all about speeding up the process of getting code from being an idea to working software that is in the hands of end users. Specifically, continuous delivery pipelines are concerned with moving code from development, through testing and in to production. In the olden days, when our applications were released once or twice a year, it didn't much matter how long this phase took because it probably wasn't the bottleneck. Painful yes, but a bottleneck probably not. However, with the increasing popularity of agile methods of software development such as Scrum, where deployment to live can sometimes be as frequent as several times a day, the journey time from development to live can become a crucial limiting step and needs to be as quick as possible. As well as being quick, the journey also needs to be repeatable and reliable, and the answer to these three requirements is automation, automation, automation.

The delivery pipeline we are building in this series will use a sample ASP.NET MVC web application that talks to a SQL Server database. The hypothetical requirements are that on the journey from development to production the application is deployed to an environment where automated acceptance tests are run and then optionally (according to an approvals workflow) to an environment where manual and exploratory tests can be carried out. I've chosen this scenario because it's probably a reasonably common one and can illustrate many of the facets of delivery pipelines and the TFS tooling used to manage them. Your circumstances can and will vary though and you will need to take the ideas and techniques I present and adapt them to your situation.

The starting point of the pipeline is the developer workstation -- what I refer to as the DEV environment. I'm slightly opinionated here in that my view of an ideal application is one that can be checked out from version control and then run entirely in DEV with only minimal configuration steps. If there is some complicated requirement to hook in to other machines or applications then I'd want to be taking a hard look at what is going on. An example of an acceptable post check-out configuration step would be creating a database in LocalDB from the publish profile of a SQL Server Database Project. Otherwise everything else just works. The solution uses automated acceptance tests? They just work. The automated acceptance tests need supporting data? They handle that automatically. The application talks to external systems? It's all taken care of automatically through service virtualisation. You get the idea...

Moving on, when code is checked back in to version control from DEV all of the changes from each developer need merging and building together in a process known as continuous integration. TFS handles this for us very nicely and can also run static code analysis and unit tests as part of the CI process. The result of CI is a build of all of an application's components that could potentially be released to production. (This answers an early question I grappled with -- whether to build as debug or release?) These components are then deployed to increasingly live-like environments where code and configuration can be tested to gain confidence in that build. One of the core tenets of continuous delivery pipelines is that the same build should be deployed to successive environments in the pipeline. If any of the tests fail in an environment the build is deemed void and the process starts again.

The next environment in the pipeline is one where automated acceptance tests will be run. Typically this will be an overnight process, especially if tests number in their hundreds and test runs take some hours to complete. I define this environment to be a test of whether code has broken the tests such that the code needs fixing or the tests need updating to accommodate the changed code. To this end all variables that could affect tests need to be controlled. This includes data, interfaces to external systems and in some cases the environment itself if the poor performance of the environment might cause tests to fail. I refer to this environment as DAT -- development automated test.

If code passes all the tests in the DAT environment a build can optionally be deployed to an environment where exploratory testing or manual test cases can be carried out. I call this DQA -- development quality assurance. This environment should be more live-like than DAT and could contain data that is representative of production and live links to any external systems. For some pipelines DQA could be the final environment before deploying to production. For other pipelines further environments might be needed for load testing (as an example) or to satisfy an organisational governance requirement.

So that's the Big Picture about what this series is all about -- back to building stuff in the next post.

Cheers -- Graham

Getting Started with SQL Server Database Projects

Posted by Graham Smith on December 13, 2014

By now hopefully all developers understand the importance of keeping their source code under version control and are actually practising this for any non-throwaway code. That's all fine and dandy for your application, but what about your database? In my experience it's pretty rare for databases to be under version control, probably because in the past the tooling has been inadequate or simply off developer radars. There are a number of tools that can help with database version control but one of the most readily accessible for Visual Studio developers is the SQL Server Database Project that can be added to a Visual Studio solution. SQL Server Database Projects are part of the Microsoft SQL Server Data Tools (SSDT) package, which is obviously aimed at developing against SQL Server. You can start with a blank database but most likely you will already have an existing database, in which case the database project has the ability to reverse engineer the schema. The result of this process is a series of files containing CREATE statements for the objects that comprise your database (tables, stored procedures and so on), with the files themselves (usually one per object) organised in a folder structure. Since these are essentially text files just like any other code file, you can check them in to version control and have any changes recorded just like you would with, for example, a C# file.
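To make that concrete, a table in a database project is just a .sql file containing its CREATE statement, along the lines of this minimal sketch (a made-up table rather than one from any particular project):

    CREATE TABLE [dbo].[Customer]
    (
        [CustomerID] INT IDENTITY (1, 1) NOT NULL,
        [FullName]   NVARCHAR (200)      NOT NULL,
        [CreatedOn]  DATETIME2 (7)       NOT NULL,
        CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
    );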

In addition to facilitating version control, database projects offer a wealth of extra functionality. A declarative approach is used with database projects, i.e. you state how you want your database to be via CREATE statements and then another process is responsible for making the schema of one or more target databases the same as the schema of your database project. You can also publish your schema to a new database -- ideal if you need to create a LocalDB version on a new development workstation, for example. This is really only the tip of the iceberg and I encourage you to use the resources below as a starting point for learning about database projects and SSDT:

Since SSDT is built in to Visual Studio 2013 the barrier to getting started is very low. Be sure to check for any updates from within Visual Studio (Tools > Extensions and Updates) before you begin. Finally, anyone who has spotted that the SQL Server installation wizard has an option to install SQL Server Data Tools has every right to be confused, since at one point in time this was also the new name for what was once BIDS (Business Intelligence Development Studio). If you want to know more then this post and also this one will help clarify. Maybe.

Cheers -- Graham

Continuous Delivery with TFS: Provisioning a Visual Studio Development Machine

Posted by Graham Smith on December 9, 2014

In this instalment of my series on building a continuous delivery pipeline with TFS we look at provisioning a Visual Studio development machine. Although we installed Visual Studio on the TFS admin server to support the build process, and you may be thinking you could use that instance, in my experience it's sluggish because of all the other components that are installed. You might also have a physical machine on which Visual Studio is installed and you may be wondering if you could use that. Assuming that there are no complications such as firewalls the answer is a cautious yes -- my initial Azure setup involved connecting to a publicly accessible TFS endpoint and it was mostly okay. In this scenario though your development machine isn't part of your Azure network and the experience is a bit clunky. This can be resolved by configuring a Site-to-Site VPN but that isn't something I've done and isn't something I'm going to cover in this series. Rather, my preference is to provision and use an Azure-based development machine. In fact I like this solution so much I don't bother maintaining a physical PC for Visual Studio research and learning any more -- for me it's Azure all the way.

So if like me you decide to go down the Azure path you have a couple of options to choose from, assuming you have an MSDN subscription. You can create a VM with your chosen operating system and install any version of Visual Studio that your MSDN subscription level allows. Alternatively you can create a VM from the gallery with Visual Studio already installed (in the Portal there is a check box to display MSDN images). The first thing to note is that only MSDN subscriptions have the ability to run desktop versions of Windows as VMs, so if you don't have MSDN you will need to run a version of Windows Server. The second thing to note, if you are opting for a pre-installed version of Visual Studio, is that just because you see Ultimate listed doesn't mean you have access to it. In order to activate Visual Studio Ultimate you will need to log in to Visual Studio with an MSDN Ultimate subscription or provide an Ultimate licence key. I've been there and suffered the disappointment. I have mentioned this to Microsoft but at the time of writing it hasn't been rectified. With all that out of the way, my preference is to create a VM and install Visual Studio myself as I like the flexibility of choosing which components are installed. Whichever route you choose, ensure that you add the VM to your domain if you are using a domain controller and that you add the domain account that you will use for everyday access with appropriate privileges. You'll also want to make sure that Visual Studio is updated to the latest version and that all Windows updates have been applied.
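If you do go down the install-it-yourself route, the VM itself can be created with the (classic, Service Management) Azure PowerShell cmdlets along these lines. This is only a sketch: the VM, cloud service, virtual network and account names are my examples and the passwords are placeholders.

    # Pick the latest Windows Server 2012 R2 Datacenter image from the gallery
    $image = Get-AzureVMImage |
        Where-Object { $_.Label -like "Windows Server 2012 R2 Datacenter*" } |
        Sort-Object PublishedDate -Descending |
        Select-Object -First 1

    # Build the VM configuration and join it to the ALM.local domain at provisioning time
    # (assumes the cloud service and virtual network already exist; add -Location if not)
    New-AzureVMConfig -Name "ALMDEV" -InstanceSize "Basic_A3" -ImageName $image.ImageName |
        Add-AzureProvisioningConfig -WindowsDomain -AdminUsername "vmadmin" -Password "**********" `
            -JoinDomain "ALM.local" -Domain "ALM" -DomainUserName "installer" -DomainPassword "**********" |
        New-AzureVM -ServiceName "almdevservice" -VNetName "ALMVNET"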

As a final step you might want to install any other development tools that you regularly use. As a minimum you should probably install the Microsoft Visual Studio Team Foundation Server 2013 Power Tools. These provide many useful extras but the one I use regularly is the ability to perform source control operations from Windows Explorer.

Cheers -- Graham

Organise RDP Connections with Remote Desktop Connection Manager

Posted by Graham Smith on December 8, 2014

Years ago when I first started working with Hyper-V I soon realised there must be a better way of remoting to servers than using the client built in to Windows. There was, in the form of a nifty utility called Remote Desktop Connection Manager, or RDCMan. There are other tools but this one is simple and does the job very nicely. For many years it was stuck on version 2.2, published in 2010, probably because it originated as a tool used by Microsoft engineering and technical staff and wasn't the focus of any official attention. Fast-forward to 2014 and there is now a new 2.7 version, as before available as a free download from Microsoft. I highly recommend this for organising your RDP connections to your Azure (or any other) Windows VMs. In addition RDCMan is able to save your logon credentials, and if you are in an environment where it's safe to do this it's a great time-saver.

There is a trick to getting RDCMan to work with Azure VMs which can cause endless frustration if you don't know it. The DNS name of the cloud service and the port of the Remote Desktop endpoint need to be entered in separate places in the RDCMan profile for your VM. See here for a post that has all the details you need to get started.

Cheers -- Graham

Continuous Delivery with TFS: Creating an All-in-One TFS Server

Posted by Graham Smith on December 7, 2014

In this third instalment of my series about creating a continuous delivery pipeline using TFS it's time to actually install TFS. In a production environment you will more than likely -- but not always -- split the installation of TFS across multiple machines; however, for demo purposes it's perfectly possible, and actually preferable from a management perspective, to install everything on to one machine. Do bear in mind that by installing everything on one server you will likely encounter fewer issues (permissions come to mind) than if you were installing across multiple machines. The message here is that the speed of configuring a demo rig in Azure will bear little resemblance to the time it takes to install an on-premises production environment.

Ben Day maintains a great guide to installing TFS and you can find more details here. My recommendation is that you follow Ben's guide, and as such I'm not planning to go through the whole process here. Rather, I will document any aspects that are different. As well as reading Ben's guide I also recommend reading the Microsoft documentation on installing TFS, particularly if you will ultimately perform an on-premises installation. See one of my previous posts for more information. One of the problems of writing installation instructions is that they date quite quickly. I've referred to the latest versions of products at the time of writing below, but feel free to use whatever is the latest when you come to do it.

  • Start by downloading all the bits of software from MSDN or the free versions if you are going down that (untried by me) route. At this stage you will need TFS2013.4 and VS2013.4. I tend to store all the software I use in Azure on a share on my domain controller for ease of access across multiple machines.
  • If you are following my recommendation and using a domain controller the first step is to create service accounts that you will use in the TFS installation. There is comprehensive guidance here but at a minimum you will need TFSREPORTS, TFSSERVICE and TFSBUILD. These are sample names and you can of course choose whatever you like (if you prefer PowerShell to the GUI, see the sketch after this list for creating them).
  • The second step is to create your VM with reference to my Azure foundations post. Mine is called ALMTFSADMIN. This is going to be an all-in-one installation if you are following my recommendation in order to keep things simple, so a basic A4 size is probably about right.
  • Ben's guide refers to Windows Server 2012 because of SharePoint Foundation 2013's lack of support for Windows Server 2012 R2. This was fixed with SharePoint 2013 SP1 so you can safely create your VM as Windows Server 2012 R2 Datacenter. Having said that, you don't actually need SharePoint for what we're doing here so feel free to leave it out. That probably makes sense, as SharePoint is a beast and might slow your VM down. Ben's guide is for an on-premises rather than an Azure installation of Windows so some parts are not relevant and can obviously be skipped.
  • Early versions of TFS 2013 didn't support SQL Server 2014 and Ben's guide covers installing both SQL Server 2012 and 2014. You might as well go for the 2014 version unless you have reason to stick with 2012.
  • The TFS installation part of Ben's guide starts on page 99 and refers to TFS2013.2. The latest version as of this post is TFS2013.4 and you should go for that. As above my recommendation is to skip SharePoint.
  • Go ahead and install the build service. On a production installation you would almost never install the build service on the TFS application tier but it's fine in this case.
  • The build service (or more correctly the build agents) will need to build Visual Studio applications. The easiest way to enable this is to install Visual Studio itself -- VS2013.4 (whatever is the best SKU you are entitled to use) without the Windows Phone components will do very nicely.
  • You can leave the test controller installation for the time being -- we will look at that in detail in a future post.
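As an aside, if you prefer PowerShell to the Active Directory Users and Computers GUI, the service accounts mentioned above can be created on the domain controller with something like this sketch (the account names are just the samples used in this post):

    # Run on the domain controller, or anywhere with the Active Directory module installed
    Import-Module ActiveDirectory
    $password = Read-Host "Service account password" -AsSecureString

    foreach ($name in "TFSREPORTS", "TFSSERVICE", "TFSBUILD")
    {
        New-ADUser -Name $name -SamAccountName $name -AccountPassword $password `
            -Enabled $true -PasswordNeverExpires $true -CannotChangePassword $true
    }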

When the installations are complete and the VM has been restarted you should be able to access the TFS Administration Console and check that all is in order.  Congratulations -- your TFS admin box is up-and-running! Watch out for the next post where we create a Visual Studio development environment.

Cheers -- Graham

Getting Started with Team Foundation Server and ALM

Posted by Graham Smith on December 5, 2014

Team Foundation Server is Microsoft's ecosystem (my term) that allows organisations to implement software application lifecycle management and continuous delivery. TFS consists of the core application on top of which run various clients that perform specialised roles. There are many articles that explain the capabilities of TFS and why you would want to use it in preference to other tools (or not, as the case may be), and the purpose of this post isn't to go over that well-trodden ground. Rather, I'm assuming that TFS is your way forward and you are looking for ways to get started with it. I'll cover a couple of scenarios for the core product as follows:

You haven't yet implemented TFS and are looking for guidance on installing and configuring it:

TFS is installed and you want to learn how to use it to best effect:

Whilst some of the training courses I link to above are free from the Microsoft Virtual Academy, I make no apology for linking to Pluralsight courses, for which one needs a subscription. For Microsoft .NET -- and increasingly other -- developers a Pluralsight subscription is in my view an indispensable tool and excellent value for money.

Enjoy getting to grips with TFS -- it's a great piece of kit. Watch out for future Getting Started posts on the other applications that form part of the ecosystem.

Cheers -- Graham

Use Azure Automation to Shut Down VMs Automatically

Posted by Graham Smith on December 4, 2014

If you have an MSDN subscription (which gives you Azure credits) you will hopefully forget to shut down your VMs after you have finished using them only once. This happened to me and I was dismayed a few days later to find my Azure credits had been used up and I had to wait until the next billing cycle to carry on using Azure. There are a few ways to keep costs down (use a basic VM, size it appropriately, and don't install an image with a pre-loaded application such as SQL Server but instead install from an ISO from your MSDN subscription), but the most effective is to deallocate your VMs when you have finished using them.

As a safeguard after the episode where I ran down my credits I created a PowerShell script that I set up as a scheduled task to run daily at 1am on an always-on server that runs in my home datacentre under the stairs. Doable but not ideal, not least because of all the components that are installed by Azure PowerShell just to run one script. However, the recent launch of Azure Automation means this script can now be run from within Azure itself. Getting started with Azure Automation used to be a bit of a pain as there were quite a lot of steps to go through to set up authentication using certificates. The process is much simpler now as Azure Active Directory can be used. If you are just getting going with Azure Automation it's worth watching the Azure Friday Automation 101, 102 and 103 videos. When you are ready to start using Automation this post has the instructions for setting up the authentication. Once that is in place it's a case of navigating to the Automation pane in the Portal and creating an Automation Account. You then create a Runbook using the Quick Create feature and start editing it in draft mode. The following code is an example of how you might go about shutting down your VMs:
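A minimal sketch of what such a runbook might look like is shown below -- the credential asset name, runbook name and excluded VM are the ones referred to in the next paragraph, while the subscription name is a placeholder you would need to change:

    workflow Stop-AzureVMExceptDomainController
    {
        # Authenticate using an Azure Active Directory credential asset called "Automation"
        $cred = Get-AutomationPSCredential -Name "Automation"
        Add-AzureAccount -Credential $cred
        Select-AzureSubscription -SubscriptionName "MySubscription"

        # Stop (deallocate) every VM that is running, except the domain controller
        $vms = Get-AzureVM | Where-Object { $_.Name -ne "ALMDC" -and $_.Status -ne "StoppedDeallocated" }
        foreach ($vm in $vms)
        {
            Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
        }
    }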

If you were to use the example code above you would need to have created a Runbook called Stop-AzureVMExceptDomainController and be using an Azure Active Directory user called Automation. (The code also ensures that a VM called ALMDC isn't shut down.) With the Runbook in place you can link it to a Schedule. You'll need to publish it first, although the typical workflow is to run in draft to test it until you are satisfied that it's working correctly. When you do finally publish you click the Schedule tab where you can link to a new or an existing schedule -- I have mine set to 1am.

Once your Runbook is in place you can of course run it manually as a convenient way to shut your VMs down when you have finished with them. No longer do you have to wait for PowerShell running locally to finish before you can turn your PC off. And if you do forget to shut your VMs off you can relax knowing that your schedule will kick in and do the job for you.

Cheers -- Graham

Continuous Delivery with TFS: Creating a Domain Controller

Posted by Graham Smith on December 3, 2014

In this second post in my series about creating a continuous delivery pipeline using TFS I describe how to create a domain controller in Azure. It's not mandatory -- it's perfectly possible to use shadow accounts and that's how I started -- however the ability to use domain accounts makes configuring all of the moving parts much simpler. It also turns out that creating a domain controller isn't that much of a chore.

Create the VM

The first step is to create a Windows Server VM using the foundations configured in the first post in the series. I use a naming convention for groups of VMs so my domain controller is ALMDC, and since this VM won't be doing a lot of work size A0 is fine. If you have other VMs already created they should be deallocated so you can specify the first non-reserved IP address in the allocated range as static. For my Virtual Network in the 10.0.0.0/25 address space this will be 10.0.0.4 -- the previous slots are reserved. If you create the VM using PowerShell you can specify which IP should be static when the VM is created. If you use the Portal you can do that later, which is the technique I'll describe here. See this article for more details.

Configure the VM for DNS

Whilst the VM is being provisioned head over to your virtual network, select the Configure panel and add your new server and its IP address as a DNS server, as it will also be performing this role. You should end up with something like this:

Virtual Network DNS Configuration

Once the DC has been provisioned you use your version of the following PowerShell command to specify a static internal IP for a previously created VM:
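For example, something along these lines -- the cloud service name is a placeholder, while the VM name and IP address are the ones used in this post:

    Get-AzureVM -ServiceName "almcloudservice" -Name "ALMDC" |
        Set-AzureStaticVNetIP -IPAddress "10.0.0.4" |
        Update-AzureVM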

This command needs to be run from an admin workstation that has been configured to work with Azure PowerShell and your Azure subscription. You need to install Azure PowerShell (the easiest way is via the Microsoft Web Platform Installer) and then work through configuring it to work with your Azure subscription, details here. If all that's too much right now you can just make sure that your DC is the first VM in the cloud service to start, so that it uses the IP specified as DNS.

Install and Configure Active Directory

Once you are logged in to the domain controller, install the Active Directory Domain Services role via Server Manager > Add roles and features. After rebooting you will be prompted to install Active Directory and to specify a Fully Qualified Domain Name -- I chose ALM.local. Defaults can be chosen for the other options. Next, install the DNS Server role. I deleted the Forwarder entries (Server Manager > Tools > DNS, then choose Properties from the shortcut menu of the DNS server node and select the Forwarders tab) but I'm not sure now if that was absolutely necessary. You can check that everything is working by accessing a well-known website in IE. One point to note is that you shouldn't manually change the NIC settings of an Azure VM as that can lead to all sorts of trouble.
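If you prefer to script these steps, the role installation and promotion can be done with PowerShell along the lines of the following sketch, using the domain name chosen above; you will be prompted for a safe mode administrator password and the server will reboot when promotion completes:

    # Install the AD DS role plus management tools, then promote the server to a domain controller
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
    Install-ADDSForest -DomainName "ALM.local" -InstallDns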

Although I've mentioned previously that you need to shut down your VMs so that they show their status as Stopped (Deallocated) in the portal to avoid being charged, I actually leave my DC running all the time, as it only costs about £4 per month and I like to know that when I start my other VMs I have a fully functioning DC for them to connect to.

Cheers -- Graham