Continuous Delivery with TFS: Configuring the Sample Application for Version Control

Posted by Graham Smith on December 31, 2014

In this instalment of my series about building a continuous delivery pipeline with TFS we pick up from a previous post (in which we configured the Contoso University sample application to work with a SQL Server Database Project) and add the Visual Studio solution to TFS version control. We'll be using the Visual Studio development machine that we created here and we'll be connecting to the TFS instance that we created here. As usual I'm assuming a degree of familiarity with the tooling and if that's missing you can look at my Getting Started posts to get acquainted.

Confirming or Creating a Team Project Collection

The first step is to check that your TFS instance is hosting at least one Team Project Collection. The installation process normally creates a collection called DefaultCollection which is fine, but if you didn't elect to create a collection now is the time to jump on to your TFS administration machine and create a new collection from the Administration Console ($ServerName$ > Application Tier > Team Project Collections > Create Collection). Use whatever name suits, though as an aside bear in mind that if you have yet to implement TFS in a production environment, most organisations will only ever need one collection and you may want to choose a name accordingly.

Creating a Team Project

With a Team Project Collection in place you can connect to TFS from your development machine and create the next-level container, which is a Team Project. For this blog series I'm keeping things simple and creating a project called ContosoUniversity (ie the Team Project will just be for the Contoso University application) but do be aware that there are other patterns that can make more sense in a production environment, eg one Team Project containing many applications. To create a new Team Project, from within Visual Studio open the Team Explorer pane and click on the two-pronged plug icon on the toolbar. This panel has a link to Create Team Project and it's just a case of stepping through the wizard, choosing the Scrum process template and Team Foundation Version Control (TFVC).

Mapping a Workspace

Next up you will need to map a Workspace. Although the Team Explorer pane might offer to do this for you I prefer to configure this manually so I get exactly what I want -- useful if your Team Project contains many applications and you want a workspace for each one. In this case though I just want to map the Contoso University source location to a corresponding folder in the Workspaces folder of my profile, so from Source Control Explorer's Workspace dropdown choose Workspaces to open the Manage Workspaces dialog, then Add to create a new one. The completed dialog should look something like this:

[Image: Add Workspace dialog]
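
If you prefer the command line, the same workspace and mapping can be created with tf.exe from a Visual Studio command prompt. This is only a sketch -- the collection URL and local path are assumptions based on my setup, and exact behaviour differs slightly between server and local workspaces, so check the result in Manage Workspaces afterwards:

    rem Create a workspace in the Team Project Collection (URL is an assumption)
    tf workspace /new ContosoUniversity /collection:http://almtfsadmin:8080/tfs/DefaultCollection /noprompt
    rem Map the Team Project to a folder in the Workspaces folder of your profile (path is an assumption)
    tf workfold /map "$/ContosoUniversity" "C:\Users\Graham\Workspaces\ContosoUniversity" /workspace:ContosoUniversity /collection:http://almtfsadmin:8080/tfs/DefaultCollection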

Adding the Solution to Version Control

Back in Source Control Explorer add a folder called Main to the ContosoUniversity node and check it in. Next convert this folder to a Branch (right-click the folder and select Branching and Merging > Convert to Branch).

In Windows Explorer you can now add the Visual Studio solution file and the ContosoUniversity.Database and ContosoUniversity.Web folders to the Main folder. If you are following my recommendation and have the Microsoft Visual Studio Team Foundation Server 2013 Power Tools installed you can select these three items and add them to TFVC:

[Image: Add to version control from Windows Explorer]

TFVC is smart enough to know which files should be included and which should be excluded, and once the add has been performed you can either use the same menu to perform a Check In or switch to Visual Studio to do the same.
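
Again, for command-line fans the Power Tools context menu is roughly equivalent to the following tf.exe commands run from within the Main folder. Treat this as a sketch: unlike the Power Tools dialog, a recursive add pends everything beneath the folders, so review the pending changes and undo anything unwanted (bin, obj and so on) before checking in:

    rem Pend adds for the solution file and the two project folders, then check in
    tf add ContosoUniversity.sln
    tf add ContosoUniversity.Database ContosoUniversity.Web /recursive
    tf checkin /comment:"Add Contoso University solution to version control" /recursive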

Launching the Sample Application

Finally, navigating to Team Explorer's Home tab in Visual Studio should show the Contoso University solution file, which you can double-click to open the solution and then run and test the application:

[Image: Team Explorer Home tab]

When you build the solution the NuGet packages will be restored and these will be detected as add(s) in the Pending Changes tab of Team Explorer. To get TFVC to ignore the packages folder click on the Detected add(s) link and then in the Promoted Candidate Changes dialog box right click one of the files and choose Ignore by folder. A .tfignore file is created which you should check in.
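
For reference, the generated .tfignore is just a plain text list of exclusion patterns relative to the folder it lives in. For this scenario it only needs to contain something like the following (the negated repositories.config line is an optional refinement I sometimes add, not something the Ignore by folder command generates for you):

    # Don't version the NuGet packages folder
    \packages
    # But do keep the record of which packages are in use (optional)
    !\packages\repositories.config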

Although that's as far as we are going with version control in this post, you should be aware that there are plenty of other configuration options, mostly around making sure check-ins are accompanied by comments or work items and so on. All very useful for production environments but out of scope here.

In the next post we'll be configuring continuous integration.

Cheers -- Graham

Continuous Delivery with TFS: Our Sample Application

Posted by Graham Smith on December 27, 2014

In this post, part of my series on implementing continuous delivery with TFS, we look at the sample application that will be used to illustrate various aspects of the deployment pipeline. I've chosen Microsoft's fictional Contoso University ASP.NET MVC application as it comprises components that need to be deployed to a web server and a database server, and it lends itself to (reasonably) easily demonstrating automated acceptance testing. You can download the completed application here and read more about its construction here.

Out of the box Contoso University uses Entity Framework Code First Migrations for database development, however this isn't what I would use for enterprise-level software development. Instead I recommend using a tool such as Microsoft's SQL Server Data Tools (SSDT), and more specifically the SQL Server Database Project component of SSDT that can be added to a Visual Studio solution. The main focus of this post is on converting Contoso University to use a SQL Server Database Project, and if you are not already up to speed with this technology I have a Getting Started post here. Please note that I don't describe every mouse-click below so some familiarity will be essential. I'm using the version of LocalDb that corresponds to SQL Server 2012 below as this is what Contoso University has been developed against. If you want to use the LocalDb that corresponds to SQL Server 2014 ((localdb)\ProjectsV12) then it will probably work, but watch out for glitches. So, there is a little bit of work to do to get Contoso University converted, and this post will take us to the point where it is ready for configuration with TFS.

Getting Started
  1. Download the Contoso University application using the link above, unblock the zip and then extract it to a convenient temporary location.
  2. Navigate to ContosoUniversity.sln and open the solution. Build the solution which should cause NuGet packages to be restored using the Automatic Package Restore method.
  3. From Package Manager Console issue an Update-Database command (you may have to close down and restart Visual Studio for the command to become available). This should cause a ContosoUniversity2 database (including data) to be created in LocalDb. (You can verify this by opening the SQL Server Object Explorer window and expanding the (LocalDb)\v11.0 node. ContosoUniversity2 should be visible in the Databases folder. Check that data has been added to the tables as we're going to need it.)
Remove EF Code First Migrations
  1. Delete SchoolInitializer.cs from the DAL folder.
  2. Delete the DatabaseInitializer configuration from Web.config (this will probably be commented out but I'm removing it for completeness' sake -- see the sketch following this list for what it looks like).
  3. Remove the Migrations folder and all its contents.
  4. Expand the ContosoUniversity2 database in the SQL Server Object Explorer window and delete dbo._MigrationHistory from the Tables folder.
  5. Run the solution to check that it still builds and data can be edited.
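
For reference, the initializer configuration removed in step 2 lives in the entityFramework section of Web.config and looks roughly like the following -- a sketch only, as the exact type attributes depend on the copy of the sample you downloaded:

    <entityFramework>
      <contexts>
        <context type="ContosoUniversity.DAL.SchoolContext, ContosoUniversity">
          <databaseInitializer type="ContosoUniversity.DAL.SchoolInitializer, ContosoUniversity" />
        </context>
      </contexts>
      <!-- defaultConnectionFactory and providers elements unchanged -->
    </entityFramework>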
Configure the solution to work with a SQL Server Database Project (SSDP)
  1. Add an SSDP called ContosoUniversity.Database to the solution.
  2. Import the ContosoUniversity2 database to the new project using default values.
  3. In the ContosoUniversity.Database properties enable Code Analysis in the Code Analysis tab.
  4. Create and save a publish profile called CU-DEV.publish.xml to publish to a database called CU-DEV on (LocalDb)\v11.0.
  5. In Web.config change the SchoolContext connection string to point to CU-DEV (see the sketch following this list).
  6. Build the solution to check there are no errors.
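
As an illustration of step 5, the amended SchoolContext entry in Web.config should end up looking something like this -- a sketch, so adjust or drop any extra settings (such as AttachDbFilename) that your original entry contains:

    <connectionStrings>
      <add name="SchoolContext"
           connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=CU-DEV;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>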
Add Dummy Data

The next step is to provide the facility to add dummy data to a newly published version of the database. There are a couple of techniques for doing this depending on requirements -- the one I'm demonstrating only adds the dummy data if a table contains no rows, thus ensuring that a live database can't get polluted. I'll be extracting the data from ContosoUniversity2 and I'll want to maintain existing referential integrity, so I'll be using SET IDENTITY_INSERT ON | OFF on some tables to insert values to primary key columns that have the identity property set. Firstly create a new folder in the SSDP called ReferenceData (or whatever pleases you) and then add a post deployment script file (Script.PostDeployment.sql) to the root of the ContosoUniversity.Database project (note there can only be one of these). Then follow this general procedure for each table:

  1. In the SQL Server Object Explorer window expand the tree to display the ContosoUniversity2 database tables.
  2. Right click a table and choose View Data. From the table's toolbar click the Script icon to create the T-SQL to insert the data (SET IDENTITY_INSERT ON | OFF should be added by the scripting engine where required).
  3. Amend the script with an IF statement so that the insert will only take place if the table is empty. The resulting script should look similar to the first sketch following this list.
  4. Save the file in the ReferenceData folder in the format TableName.data.sql and add it to the solution as an existing item.
  5. Use the SQLCMD syntax to call the file in the post deployment script file. (The order in which the table inserts are executed will need to respect referential integrity. Person, Department, Course, CourseInstructor, Enrollment and OfficeAssignment should work.) When editing Script.PostDeployment.sql, enabling the SQLCMD Mode toolbar button will turn off Transact-SQL IntelliSense and stop 'errors' from being highlighted.
  6. When all the ReferenceData files have been processed, Script.PostDeployment.sql should end up looking something like the second sketch following this list.
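
To make the above concrete, here are hedged sketches of the two types of file. The table, column names and values in the first are purely illustrative -- base yours on what the scripting engine generates, and note that the IDENTITY_INSERT lines are only needed for tables whose key column is an identity. First, a ReferenceData file such as Department.data.sql:

    -- Only insert the dummy data if the table is empty
    IF NOT EXISTS (SELECT 1 FROM [dbo].[Department])
    BEGIN
        SET IDENTITY_INSERT [dbo].[Department] ON
        INSERT INTO [dbo].[Department] ([DepartmentID], [Name], [Budget], [StartDate], [InstructorID])
        VALUES (1, N'Engineering', 350000, '2007-09-01', 1),
               (2, N'English', 120000, '2007-09-01', 2)
        SET IDENTITY_INSERT [dbo].[Department] OFF
    END

And then Script.PostDeployment.sql, which simply calls each ReferenceData file using the SQLCMD :r syntax in an order that respects referential integrity:

    /* Post-deployment script: seed reference data if the tables are empty */
    :r .\ReferenceData\Person.data.sql
    :r .\ReferenceData\Department.data.sql
    :r .\ReferenceData\Course.data.sql
    :r .\ReferenceData\CourseInstructor.data.sql
    :r .\ReferenceData\Enrollment.data.sql
    :r .\ReferenceData\OfficeAssignment.data.sql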

    You should now be able to use CU-DEV.publish.xml to actually publish a database called CU-DEV to LocalDB that contains both schema and data and which works in the same way as the database created by EF Code First Migrations.
Finishing Touches

For the truly fussy among us (that's me) who like neat and tidy project names in VS solutions, there is an optional set of configuration steps that can be performed:

  1. Remove the ContosoUniversity ASP.NET MVC project from the solution and rename it to ContosoUniversity.Web. In the file system rename the containing folder to ContosoUniversity.Web.
  2. Add the renamed project back in to the solution and from the Application tab of the project's Properties change the Assembly name and Default namespace to ContosoUniversity.Web.
  3. Perform the following search and replace actions:
    namespace ContosoUniversity > namespace ContosoUniversity.Web
    using ContosoUniversity > using ContosoUniversity.Web
    ContosoUniversity.ViewModels > ContosoUniversity.Web.ViewModels
    ContosoUniversity.Models > ContosoUniversity.Web.Models
  4. You may need to close the solution and reopen it before checking that nothing is broken and the application runs without errors.

That's it for the moment. In the next post in this series I'll explain how to get the solution under version control in TFS and how to implement continuous integration.

Cheers -- Graham

Continuous Delivery with TFS: Pausing to Consider the Big Picture

Posted by Graham Smith on December 18, 2014

In this fifth post in my series about building a continuous delivery pipeline with TFS we pause the practical work and consider the big picture. If you have taken my advice and started to read the Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation book you will know that continuous delivery (or deployment) pipelines are all about helping speed up the process of getting code from being an idea to working software that is in the hands of end users. Specifically, continuous delivery pipelines are concerned with moving code from development, through testing and into production. In the olden days, when our applications were released once or twice a year, it didn't much matter how long this phase took because it probably wasn't the bottleneck. Painful yes, but a bottleneck probably not. However, with the increasing popularity of agile methods of software development such as Scrum, where deployment to live can sometimes be as frequent as several times a day, the journey time from development to live can become a crucial limiting step and needs to be as quick as possible. As well as being quick, the journey also needs to be repeatable and reliable, and the answer to these three requirements is automation, automation, automation.

The delivery pipeline we are building in this series will use a sample ASP.NET MVC web application that talks to a SQL Server database. The hypothetical requirements are that on the journey from development to production the application is deployed to an environment where automated acceptance tests are run and then optionally (according to an approvals workflow) to an environment where manual and exploratory tests can be carried out. I've chosen this scenario because it's probably a reasonably common one and can illustrate many of the facets of delivery pipelines and the TFS tooling used to manage them. Your circumstances can and will vary though and you will need to take the ideas and techniques I present and adapt them to your situation.

The starting point of the pipeline is the developer workstation -- what I refer to as the DEV environment. I'm slightly opinionated here in that my view of an ideal application is one that can be checked-out from version control and then run with only minimal configuration steps entirely in DEV. If there is some complicated requirement to hook in to other machines or applications then I'd want to be taking a hard look at what is going on. An example of an acceptable post check-out configuration step would be creating a database in LocalDB from the publish profile of a SQL Server Database Project. Otherwise everything else just works. The solution uses automated acceptance tests? They just work. The automated acceptance tests need supporting data? They handle that automatically. The application talks to external systems? It's all taken care of automatically through service virtualisation. You get the idea...

Moving on, when code is checked back in to version control from DEV all of the changes from each developer need merging and building together in a process known as continuous integration. TFS handles this for us very nicely and can also run static code analysis and unit tests as part of the CI process. The result of CI is a build of all of an application's components that could potentially be released to production. (This answers an early question I grappled with -- to build as debug or release?) These components are then deployed to increasingly live-like environments where code and configuration can be tested to gain confidence in that build. One of the core tenets of continuous delivery pipelines is that the same build should be deployed to successive environments in the pipeline. If any of the tests fail in an environment the build is deemed void and the process starts again.

The next environment in the pipeline is one where automated acceptance tests will be run. Typically this will be an overnight process, especially if tests number in their hundreds and test runs take some hours to complete. I define this environment to be a test of whether code has broken the tests such that the code needs fixing or the tests need updating to accommodate the changed code. To this end all variables that could affect tests need to be controlled. This includes data, interfaces to external systems and in some cases the environment itself if the poor performance of the environment might cause tests to fail. I refer to this environment as DAT -- development automated test.

If code passes all the tests in the DAT environment a build can optionally be deployed to an environment where exploratory testing or manual test cases can be carried out. I call this DQA -- development quality assurance. This environment should be more live-like than DAT and could contain data that is representative of production and live links to any external systems. For some pipelines DQA could be the final environment before deploying to production. For other pipelines further environments might be needed for load testing (as an example) or to satisfy an organisational governance requirement.

So that's the Big Picture of what this series is all about -- back to building stuff in the next post.

Cheers -- Graham

Continuous Delivery with TFS: Provisioning a Visual Studio Development Machine

Posted by Graham Smith on December 9, 2014

In this instalment of my series on building a continuous delivery pipeline with TFS we look at provisioning a Visual Studio development machine. Although we installed Visual Studio on the TFS admin server to support the build process, and you may be thinking you could use that instance, in my experience it's sluggish because of all the other components that are installed. You might also have a physical machine on which Visual Studio is installed and you may be wondering if you could use this. Assuming that there are no complications such as firewalls the answer is a cautious yes -- my initial Azure setup involved connecting to a publicly accessible TFS endpoint and it was mostly okay. In this scenario though your development machine isn't part of your Azure network and the experience is a bit clunky. This can be resolved by configuring a Site-to-Site VPN but that isn't something I've done and isn't something I'm going to cover in this series. Rather, my preference is to provision and use an Azure-based development machine. In fact I like this solution so much I don't bother maintaining a physical PC for Visual Studio research and learning any more -- for me it's Azure all the way.

So if like me you decide to go down the Azure path you have a couple of options to choose from, assuming you have an MSDN subscription. You can create a VM with your chosen operating system and install any version of Visual Studio that your MSDN subscription level allows. Alternatively you can create a VM from the gallery with Visual Studio already installed (in the Portal there is a check box to display MSDN images). The first thing to note is that only MSDN subscriptions get the ability to run desktop versions of Windows as VMs, so if you don't have MSDN you will need to run a version of Windows Server. The second thing to note if you are opting for a pre-installed version of Visual Studio is that just because you see Ultimate listed doesn't mean you have access to it. In order to activate Visual Studio Ultimate you will need to log in to Visual Studio with an MSDN Ultimate subscription or provide an Ultimate licence key. I've been there and suffered the disappointment. I have mentioned this to Microsoft but at the time of writing this hasn't been rectified. With all that out of the way, my preference is to create a VM and install Visual Studio myself as I like the flexibility of choosing what components I want installed. Whichever route you choose, ensure that you add this VM to your domain if you are using a domain controller and that you add the domain account that you will use for everyday access with appropriate privileges. You'll also want to make sure that VS is updated to the latest version and that all Windows updates have been applied.
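
If you want to see which pre-built Visual Studio images are on offer before deciding, a quick way is to query the gallery from Azure PowerShell -- a minimal sketch, with the label filter being an assumption about how the MSDN images are named:

    # List gallery images whose label mentions Visual Studio (an MSDN subscription is needed to use them)
    Get-AzureVMImage | Where-Object { $_.Label -like "*Visual Studio*" } | Select-Object Label, ImageName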

As a final step you might want to install any other development tools that you regularly use. As a minimum you should probably install the Microsoft Visual Studio Team Foundation Server 2013 Power Tools. These provide many useful extras but the one I use regularly is the ability to perform source control operations from Windows Explorer.

Cheers -- Graham

Continuous Delivery with TFS: Creating an All-in-One TFS Server

Posted by Graham Smith on December 7, 2014

In this third instalment of my series about creating a continuous delivery pipeline using TFS it's time to actually install TFS. In a production environment you will more than likely -- but not always -- split the installation of TFS across multiple machines, however for demo purposes it's perfectly possible and actually preferable from a management perspective to install everything on to one machine. Do bear in mind that by installing everything on one server you will likely encounter fewer issues (permissions come to mind) than if you are installing across multiple machines. The message here is that the speed of configuring a demo rig in Azure will bear little resemblance to the time it takes to install an on-premises production environment.

Ben Day maintains a great guide to installing TFS and you can find more details here. My recommendation is that you follow Ben's guide and as such I'm not planning to go through the whole process here. Rather, I will document any aspects that are different. As well as reading Ben's guide I also recommend reading the Microsoft documentation on installing TFS, particularly if you will ultimately perform an on-premises installation. See one of my previous posts for more information. One of the problems of writing installation instructions is that they date quite quickly. I've referred to the latest versions of products at the time of writing below but feel free to use whatever is the latest when you come to do it.

  • Start by downloading all the bits of software from MSDN or the free versions if you are going down that (untried by me) route. At this stage you will need TFS2013.4 and VS2013.4. I tend to store all the software I use in Azure on a share on my domain controller for ease of access across multiple machines.
  • If you are following my recommendation and using a domain controller the first step is to create service accounts that you will use in the TFS installation. There is comprehensive guidance here but at a minimum you will need TFSREPORTS, TFSSERVICE and TFSBUILD. These are sample names and you can choose whatever you like of course -- see the sketch after this list for a scripted way to create them.
  • The second step is to create your VM with reference to my Azure foundations post. Mine is called ALMTFSADMIN. This is going to be an all-in-one installation if you are following my recommendation in order to keep things simple, so a basic A4 size is probably about right.
  • Ben's guide refers to Windows Server 2012 because of SharePoint Foundation 2013's lack of support for Windows Server 2012 R2. This was fixed with SharePoint 2013 SP1 so you can safely create your VM as Windows Server 2012 R2 Datacenter. Having said that, you don't actually need SharePoint for what we're doing here so feel free to leave it out. That probably makes sense, as SharePoint is a beast and might slow your VM down. Ben's guide is for an on-premises rather than Azure installation of Windows so some parts are not relevant and can obviously be skipped.
  • Early versions of TFS 2013 didn't support SQL Server 2014 and Ben's guide covers installing both SQL Server 2012 and 2014. You might as well go for the 2014 version unless you have reason to stick with 2012.
  • The TFS installation part of Ben's guide starts on page 99 and refers to TFS2013.2. The latest version as of this post is TFS2013.4 and you should go for that. As above my recommendation is to skip SharePoint.
  • Go ahead and install the build service. On a production installation you would almost never install the build service on the TFS application tier but it's fine in this case.
  • The build service (or more correctly the build agents) will need to build Visual Studio applications. The easiest way to enable this is to install Visual Studio itself -- VS2013.4 (whatever is the best SKU you are entitled to use) without the Windows Phone 8 components will do very nicely.
  • You can leave the test controller installation for the time being -- we will look at that in detail in a future post.
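
As an aside, if you would rather script the service accounts mentioned above than click through Active Directory Users and Computers, something like the following run on the domain controller should do the trick -- a sketch only, with ALM.local being the domain I created in an earlier post and the account names being the samples from above:

    # Create the three TFS service accounts in the ALM.local domain (run on the DC)
    Import-Module ActiveDirectory
    $password = Read-Host "Service account password" -AsSecureString
    foreach ($name in "TFSREPORTS", "TFSSERVICE", "TFSBUILD") {
        New-ADUser -Name $name -SamAccountName $name -UserPrincipalName "$name@ALM.local" `
            -AccountPassword $password -Enabled $true -PasswordNeverExpires $true
    }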

When the installations are complete and the VM has been restarted you should be able to access the TFS Administration Console and check that all is in order.  Congratulations -- your TFS admin box is up-and-running! Watch out for the next post where we create a Visual Studio development environment.

Cheers -- Graham

Continuous Delivery with TFS: Creating a Domain Controller

Posted by Graham Smith on December 3, 2014

In this second post in my series about creating a continuous delivery pipeline using TFS I describe how to create a domain controller in Azure. It's not mandatory -- it's perfectly possible to use shadow accounts and that's how I started -- however the ability to use domain accounts makes configuring all of the moving parts much simpler. It also turns out that creating a domain controller isn't that much of a chore.

Create the VM

The first step is to create a Windows Server VM using the foundations configured in the first post in the series. I use a naming convention for groups of VMs so my domain controller is ALMDC, and since this VM won't be doing a lot of work size A0 is fine. If you have other VMs already created they should be deallocated so you can specify the first non-reserved IP address in the allocated range as static. For my Virtual Network in the 10.0.0.0/25 address space this will be 10.0.0.4 -- the previous slots are reserved. If you create the VM using PowerShell you can specify which IP should be static when the VM is created. If you use the Portal you can do that later, which is the technique I'll describe here. See this article for more details.

Configure the VM for DNS

Whilst the VM is being provisioned head over to your virtual network, select the Configure panel and add your new server and its IP address as a DNS server, as it will also be performing this role. You should end up with something like this:

[Image: Virtual Network DNS configuration]

Once the DC has been provisioned you can use your version of the following PowerShell command to specify a static internal IP for a previously created VM (the cloud service name below is a placeholder for whatever yours is called):
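
    # Sketch only: substitute your own cloud service and VM names
    Get-AzureVM -ServiceName "ALMCloudService" -Name "ALMDC" |
        Set-AzureStaticVNetIP -IPAddress 10.0.0.4 |
        Update-AzureVM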

This command needs to be run from an admin workstation that has been configured to work with Azure PowerShell and your Azure subscription. You need to install Azure PowerShell (easiest way is via the Microsoft Web Platform Installer) and then work through configuring it to work with your Azure subscription, details here. If all that's too much right now you can just make sure that your DC is the first VM in the cloud service to start so it uses the IP specified as DNS.

Install and Configure Active Directory

Once you are logged in to the domain controller install the Active Directory Domain Services role via Server Manager > Add roles and features. After rebooting you will be prompted to install Active Directory and to specify a Fully Qualified Domain Name -- I chose ALM.local. Defaults can be chosen for other options. Next, install the DNS Server role. I deleted the Forwarder entries (Server Manager > Tools > DNS, then choose Properties from the shortcut menu of the DNS server node and select the Forwarders tab) but I'm not sure now if that was absolutely necessary. You can check if everything is working by accessing a well-known website in IE. One point to note is that you shouldn't manually change the NIC settings of an Azure VM as that can lead to all sorts of trouble.
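
For what it's worth, the role installation and promotion can also be scripted. A hedged PowerShell sketch, run in an elevated session on the new VM, with the domain name matching the one above:

    # Install the AD DS role and promote the server to a domain controller for a new forest
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
    Install-ADDSForest -DomainName "ALM.local" -InstallDns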

Although I've mentioned previously that you need to shut down your VMs so they show their status as Stopped (Deallocated) in the portal to avoid being charged, I actually leave my DC running all the time as it only costs about £4 per month, and I like to know that when I start my other VMs I have a fully functioning DC for them to connect to.

Cheers -- Graham

Continuous Delivery with TFS: Laying the Azure Foundations

Posted by Graham Smith on December 2, 2014

For anyone interested in creating a continuous delivery pipeline with TFS this is the first article in a series of posts that will explain how I created my demo pipeline. I've used Azure IaaS (ie virtual machines configured with specific roles such as IIS) however if that's not an option for you then it should be easy to translate everything to your way of working -- virtualisation, actual physical servers etc. I won't be going into very much detail about using Azure so if you are new to Azure see one of my previous posts here for details of how to get started, and make sure you are familiar with its IaaS capabilities, ie how to create and remote desktop to VMs.

Many technologies have both a quick-and-dirty way of doing things and also a more considered way. Azure is no exception and if you don't make any alternative arrangements creating a new virtual machine in Azure will create a new storage account and a new cloud service. Things can quickly get out of hand and you end up with stuff all over the place and the possibility of using credits faster than you intended. To keep things tidy it's best to create some foundations upon which you can create your VMs. Buck Woody has a great post here which explains the steps and is mostly still relevant even though the post dates back to April 2013. I'm not going to repeat the detail of the post here but this is a summary of what you need to set up:

  • Affinity Group -- keeps a set of resources together which can help to minimise costs. Choose a region that makes sense for all of the VMs you will create.
  • Virtual Network -- allows all of the VMs to talk to each other. At the time of Buck Woody's post it was possible to specify an Affinity Group for a Virtual Network but this has now changed and you can only specify a region. You can read more about this here, but when you create your network choose the same region as the Affinity Group.
  • Storage Account and Container -- this is where the VHDs of your VMs are stored. Choose the Affinity Group you created previously. When your Storage Account is running create a Container called vhds.
  • Cloud Service -- this is the container in which all of your VMs will live. You can create a new Cloud Service at the same time as creating a new VM but I prefer to create one up front, specifying the previously created Affinity Group.
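
If you'd rather script these foundations than click through the Portal, the following Azure PowerShell sketch shows the general idea. The names and region are examples only (the storage account name must be globally unique and lowercase), and the Virtual Network is easier to create in the Portal as scripting it is driven by an XML configuration file:

    # Affinity group, storage account plus vhds container, and cloud service
    New-AzureAffinityGroup -Name "AlmAffinityGroup" -Location "West Europe"
    New-AzureStorageAccount -StorageAccountName "almstorageaccount" -AffinityGroup "AlmAffinityGroup"
    Set-AzureSubscription -SubscriptionName "My Subscription" -CurrentStorageAccountName "almstorageaccount"
    New-AzureStorageContainer -Name "vhds"
    New-AzureService -ServiceName "AlmCloudService" -AffinityGroup "AlmAffinityGroup"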

In the spirit of keeping things neat and tidy when I create a new set of the above resources I like to use a naming convention which consists of a short prefix plus the name of the resource I'm creating, eg AlmAffinityGroup, AlmVirtualNetwork and so on. Bear in mind that some of the names -- in particular the Cloud Service -- need to be globally unique so it's worth navigating to the first page of the Cloud Service wizard (which will validate names you put in) to find something that is likely to work before you start in earnest.

Once all the above are in place it's worth creating a test VM -- it's simple to delete it once you have finished testing. I tend to create my VMs with PowerShell and will post about this in the future, but for the meantime in the Portal you can choose New > Compute > Virtual Machine > From Gallery. Choose Windows Server 2012 R2 Datacenter (or whatever might have superseded this) and move on to the next step of the wizard. I always use the latest version and choose the Basic tier as it's cheaper, and for a test VM choose an A0 size, again to keep costs down. Supply a name for the VM, a username and a strong password, and then at the next step in the wizard you can supply all the resources you created above.
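
For the curious, a PowerShell version of that test VM might look something like the sketch below -- the image name in particular is something you would need to look up at the time, and the other names are just examples from my setup:

    # Find the current Windows Server 2012 R2 Datacenter image name, then create the VM
    $image = (Get-AzureVMImage | Where-Object { $_.Label -like "Windows Server 2012 R2 Datacenter*" } |
        Sort-Object PublishedDate -Descending | Select-Object -First 1).ImageName
    New-AzureVMConfig -Name "AlmTestVM" -InstanceSize "Basic_A0" -ImageName $image |
        Add-AzureProvisioningConfig -Windows -AdminUsername "almadmin" -Password "<strong password>" |
        New-AzureVM -ServiceName "AlmCloudService" -VNetName "AlmVirtualNetwork"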

On the subject of keeping costs down it's worth noting that to ensure you don't get charged when you are not using your VMs you have to actually shut them down from the portal or use the Stop-AzureVM PowerShell cmdlet so that their status is Stopped (Deallocated). Simply powering off from within the VM isn't enough. Automating this is the way forward -- another future post!
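
For example, to shut down and deallocate every VM in a cloud service in one go (the -Force switch suppresses the prompt warning that stopping the last VM releases the deployment's IP address):

    # Stop (deallocate) all the VMs in the cloud service so they stop accruing compute charges
    Get-AzureVM -ServiceName "AlmCloudService" | Stop-AzureVM -Force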

Cheers -- Graham

Creating a Continuous Delivery Pipeline from Scratch using the TFS Ecosystem

Posted by Graham Smith on November 28, 2014

For anyone working with the Team Foundation Server ecosystem there are plenty of articles on how to get started with the individual components – TFS itself, Visual Studio, Release Management, Microsoft Test Manager and so on. Frustratingly though there is very little guidance on how to piece everything together to create continuous delivery pipelines that build code, automatically deploy it to an environment and then run automated acceptance tests.

Until now that is, because over the next few months I’m going to document my experience of building a continuous delivery pipeline – in my case using IaaS in Microsoft Azure. As each post is written I’ll update my Continuous Delivery with TFS page which will serve as a handy reference for anyone wanting to work through the process. If you are interested in knowing more about the philosophy of continuous delivery then have a look at this article and also this one. Both of these refer to the Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation book, which is the current authoritative text on this subject and which I definitely recommend anyone working in this area should read.

Cheers – Graham