Continuous Delivery with TFS / VSTS – Server Configuration as Code with PowerShell DSC

Posted by Graham Smith on April 7, 2016

I suspect I'm on reasonably safe ground when I venture to suggest that most software engineers developing applications for Windows servers (and the organisations they work for) have yet to make the leap from just writing the application code to writing both the application code and the code that will configure the servers the application will run on. Why do I suggest this? It's partly from experience in that I've never come across anyone developing for the Windows platform who is doing this (or at least they haven't mentioned it to me) and partly because up until fairly recently Microsoft haven't provided any tooling for implementing configuration as code (as this engineering practice is sometimes referred to). There are products from other vendors of course but they tend to have their roots in the Linux world and use languages such as Ruby (or DSLs based on Ruby) which is probably going to seriously muddy the waters for organisations trying to get everyone up to speed with PowerShell.

This has all changed relatively recently with the introduction of PowerShell DSC, Microsoft's solution for implementing configuration as code on Windows (and other platforms as it happens). With PowerShell DSC (and related technologies) the configuration of servers is expressed as programming code that can be versioned in source control. When a change is required to a server the code is updated and the new configuration is then applied to the server. This process is usually idempotent, ie the configuration can be applied repeatedly and will always give the same result. It also won't generate errors if the configuration is already in the desired state. Through version control we can audit how a configuration changes over time and being code it can be applied as required to ensure server roles in different environments, or multiple instances of the same server role in the same environment, have a consistent configuration.

So ostensibly Windows server developers now have no excuse not to start implementing configuration as code. But if we've managed so far without this engineering practice why all the fuss now? What benefit is it going to bring to the table? The key benefit is that it's a cure for that age-old problem of servers that might start life from a build script, but over the months (and possibly years) different technicians make necessary tweaks here and there until one day the server becomes a unique work of art that nobody could ever hope to reproduce. Server backups become critical and everyone dreads the day that the server will need to be upgraded or replaced.

If your application is very simple you might just get away with this state of affairs -- not that it makes it right or a best practice. However if your application is constantly evolving with concomitant configuration changes and / or you are going down the microservices route then you absolutely can't afford to hand-crank the configuration of your servers. Not only is the manual approach very error prone it's also hugely time-consuming, and has no place in a world of continuous delivery where shortening lead times and increasing reliability and repeatability is the name of the game.

So if there's no longer an excuse to implement configuration as code on the Windows platform why isn't there a mad rush to adopt it? In my view, for most mid-size IT departments working with existing budgets and staffing levels and an existing landscape of hand-cranked servers it's going to be a real slog to switch the configuration of a live estate to being managed by code. Once you start thinking about the complexities of analysing existing servers (some of which might have been around for years and which might have all sorts of bespoke applications running on them) combined with devising a system of managing scores or even hundreds of servers it's clear that a task of this nature is almost certainly going to require a dedicated team. And despite the potential benefits that configuration as code promises most mid-size IT departments are likely to struggle to stand up such a team.

So if it's going to be hard how does an organisation get started with configuration as code and PowerShell DSC? Although I don't have anywhere near all of the answers it is already clear to me that if your organisation is in the business of writing applications for Windows servers then you need to approach the problem from both ends of the server spectrum. At the far end of the spectrum is the live estate where server 'drift' needs to be controlled using PowerShell DSC's 'pull' mode. This is where servers periodically reach out to a central repository to pull their 'true' configuration and make any adjustments accordingly. At the near end of the spectrum are the servers that form the continuous delivery pipeline which need to have configuration changes applied to them just before a new version of the application gets deployed to them. Happily PowerShell DSC has a 'push' mode which will work nicely for this purpose. There is also the live deployment situation. Here, live servers will need to have configuration changes pushed to them before application deployment takes place and then will need to switch over to pull mode to keep them true.

The way I see things at the moment is that PowerShell DSC pull mode is going to be hard to implement at scale because of the lack of tooling to manage it. Whilst you could probably manage a handful of servers in pull mode using PowerShell DSC script files, any more than a handful is going to cause serious pain without some kind of management framework such as the one that is available for Chef. The good news though is that getting started with PowerShell DSC push mode for configuring servers that comprise the deployment pipeline as part of application development activities is a much more realistic prospect.

Big Picture Time

I'm not going to be able to cover everything about making PowerShell DSC push mode work in one blog post so it's probably worth a few words about the bigger picture. One key concept to establish early on is that the code that will configure the server(s) that an application will reside on has to live and change alongside the application code. At the very least the server configuration code needs to be in the same version control branch as the application code and frequently it will make sense for it to be part of the same Visual Studio solution. I won't be presenting that approach in this blog post and instead will concentrate on the mechanics of getting PowerShell DSC push mode working and writing the configuration code that enables the Contoso University sample application (which requires IIS and SQL Server) to run. In a future post I'll have the code in the same Visual Studio solution as the Contoso University sample application and will explain how to build an artefact that is then deployed by the release management tooling in TFS / VSTS prior to deploying the application.

For anyone who has come across this post by chance it is part of my ongoing series about Continuous Delivery with TFS / VSTS, and you may find it helpful to refer to some of the previous posts to understand the full context of what I'm trying to achieve. I should also mention that this post isn't intended to be a PowerShell DSC tutorial and if you are new to the technology I have a Getting Started post here with a link collection of useful learning resources. With all that out of the way let's get going!

Getting Started

Taking the Infrastructure solution from this blog post as a starting point (available as a code release at my Infrastructure repo on GitHub, final version of this post's code here) add a new PowerShell Script Project called ConfigurationScripts. To this new project add a new PowerShell Script file called ContosoUniversity.ps1 and add a hash table and empty Configuration block called WebAndDatabase as follows:
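
A minimal sketch of this starting point might look like the following, where the hash table describes the node(s) the configuration will target (the variable name and node entry are illustrative):

$configurationData = @{
    AllNodes = @(
        @{
            NodeName = 'PRM-DAT-AIO'
        }
    )
}

Configuration WebAndDatabase
{
    Node $AllNodes.NodeName
    {
    }
}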

We're going to need an environment to deploy into so using the techniques described in previous posts (here and here) create a PRM-DAT-AIO server that is joined to the domain. This server will need to have Windows Management Framework 5.0 installed -- a manual process as far as this particular post is concerned but something that is likely to need automating in the future.

To test a basic working configuration we'll create a folder on PRM-DAT-AIO to act as the IIS physical path to the ContosoUniversity web files. Add the following lines of code to the beginning of the configuration block:
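
Something like the following File resource will do the job, using the folder that will later become the web site's physical path:

File WebContentFolder
{
    Ensure          = 'Present'
    Type            = 'Directory'
    DestinationPath = 'C:\inetpub\ContosoUniversity'
}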

To complete the skeleton code add the following lines of code to the end of ContosoUniversity.ps1:
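
These closing lines compile the configuration to a MOF file and push it to the target node (a sketch, assuming the names used above):

WebAndDatabase -ConfigurationData $configurationData -OutputPath C:\Dsc\Mof
Start-DscConfiguration -Path C:\Dsc\Mof -Wait -Verbose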

The code contained in ContosoUniversity.ps1 should now be as follows:
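
Assembled, ContosoUniversity.ps1 amounts to something like this (again, a sketch using the names above):

$configurationData = @{
    AllNodes = @(
        @{
            NodeName = 'PRM-DAT-AIO'
        }
    )
}

Configuration WebAndDatabase
{
    Node $AllNodes.NodeName
    {
        File WebContentFolder
        {
            Ensure          = 'Present'
            Type            = 'Directory'
            DestinationPath = 'C:\inetpub\ContosoUniversity'
        }
    }
}

# Compile to C:\Dsc\Mof and push the resulting MOF to the target node.
WebAndDatabase -ConfigurationData $configurationData -OutputPath C:\Dsc\Mof
Start-DscConfiguration -Path C:\Dsc\Mof -Wait -Verbose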

Although you can create this code from any developer workstation you need to ensure that you can run it from a workstation that is joined to the same domain as PRM-DAT-AIO and has a folder called C:\Dsc\Mof. In order to keep authentication simple I'm also assuming that you are logged on to your developer workstation with domain credentials that allow you to perform DSC operations on PRM-DAT-AIO. Running this code will create a PRM-DAT-AIO.mof file in C:\Dsc\Mof, which is then pushed to PRM-DAT-AIO to create the folder. Magic!

Installing Resource Modules Locally

To do anything much more sophisticated than create a folder we'll need to import resources to our local workstation from the PowerShell Gallery. We'll be working with xWebAdministration and xSQLServer and they can be installed locally as follows:
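
Both modules live in the PowerShell Gallery, so something along these lines will install them (PowerShellGet ships with WMF 5.0):

Install-Module -Name xWebAdministration -Force
Install-Module -Name xSQLServer -Force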

These same commands will also install the latest version of the resources if a previous version exists. Referencing these resources in our configuration script seems to have changed with the release of WMF 5.0, and version information is now a requirement. Consequently, these resources are referenced in the configuration as follows:
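
These lines go at the top of the Configuration block, before the Node block. The four-part version strings below are examples -- use whatever Get-Module -ListAvailable reports for your installs:

Import-DscResource -ModuleName @{ModuleName = 'xWebAdministration'; ModuleVersion = '1.10.0.0'}
Import-DscResource -ModuleName @{ModuleName = 'xSQLServer'; ModuleVersion = '1.5.0.0'}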

Obviously change the above code to reference the version of the module that you actually install. The resources are continually being updated with new versions and this requires a strategy to upgrade on a periodic basis.

Making Resource Modules Available Remotely

Whilst the additions in the previous section allow us to create advanced configurations on our developer workstation, these configurations are not going to run against target nodes since, as things stand, the target nodes don't know anything about custom resources (as opposed to resources such as PSDesiredStateConfiguration which ship with the Windows Management Framework). We can fix this by telling the Local Configuration Manager (LCM) of target nodes where to get the custom resources from. The procedure (which I've adapted from Nana Lakshmanan's blog post) is as follows:

  • Choose a server in the domain to host a fileshare. I'm using my domain controller (PRM-CORE-DC) as it's always guaranteed to be available under normal conditions. Create a folder called C:\Dsc\DscResources (Dsc purposefully repeated) and share it as Read/Write for Everyone as \\PRM-CORE-DC\DscResources.
  • Custom resources need to be zipped in the format required by the DSC pull protocol. The PowerShell to do this for version 1.10 of xWebAdministration and 1.5 of xSQLServer (using a local C:\Dsc\Resources folder) is shown in the first sketch after this list.

    Of course, depending on how frequently you have to do this to cope with updates, and on the number of resources you end up working with, you will probably want to wrap all this up into some sort of reusable function.
  • With the packages now in the right format in the fileshare we need to tell the LCM of target nodes where to look. We do this by creating a new configuration decorated with the [DscLocalConfigurationManager()] attribute -- see the revised ContosoUniversity.ps1 listing below.

    The Settings block is used to set various properties of the LCM which are required in order for configurations we'll be writing to run. The ResourceRepositoryShare block obviously specifies the location of the zipped resource packages.
  • The final requirement is to add the line of code (Set-DscLocalConfigurationManager -Path C:\Dsc\Mof -Verbose) to apply the LCM settings.
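
The packaging step referenced above might look something like this. The pull protocol expects a ModuleName_Version.zip with an accompanying checksum file; the inner folder structure is worth verifying against a target node if downloads fail:

$resources = @(
    @{ Name = 'xWebAdministration'; Version = '1.10.0.0' }
    @{ Name = 'xSQLServer';         Version = '1.5.0.0'  }
)

foreach ($resource in $resources)
{
    $zipPath = "C:\Dsc\Resources\$($resource.Name)_$($resource.Version).zip"

    # Zip the installed module and generate the checksum file the LCM uses
    # to validate the download. Adjust the source path if your modules are
    # not installed in versioned folders.
    Compress-Archive -Path "C:\Program Files\WindowsPowerShell\Modules\$($resource.Name)\$($resource.Version)\*" -DestinationPath $zipPath -Force
    New-DscChecksum -Path $zipPath -Force

    # Copy the zip and its .checksum file to the fileshare.
    Copy-Item -Path "$zipPath*" -Destination '\\PRM-CORE-DC\DscResources' -Force
}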

The revised version of ContosoUniversity.ps1 should now be as follows:
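
A sketch of the revised file follows. The specific Settings values (RebootNodeIfNeeded, ConfigurationMode) are my choices rather than hard requirements; the ResourceRepositoryShare block is the part that matters here:

$configurationData = @{
    AllNodes = @(
        @{
            NodeName = 'PRM-DAT-AIO'
        }
    )
}

# Meta-configuration for the LCM of the target node(s).
[DscLocalConfigurationManager()]
Configuration LcmSettings
{
    Node $AllNodes.NodeName
    {
        Settings
        {
            RebootNodeIfNeeded = $true
            ConfigurationMode  = 'ApplyOnly'
        }

        # Tells the LCM where to download zipped resource modules from.
        ResourceRepositoryShare DscResources
        {
            SourcePath = '\\PRM-CORE-DC\DscResources'
        }
    }
}

Configuration WebAndDatabase
{
    Import-DscResource -ModuleName @{ModuleName = 'xWebAdministration'; ModuleVersion = '1.10.0.0'}
    Import-DscResource -ModuleName @{ModuleName = 'xSQLServer'; ModuleVersion = '1.5.0.0'}

    Node $AllNodes.NodeName
    {
        File WebContentFolder
        {
            Ensure          = 'Present'
            Type            = 'Directory'
            DestinationPath = 'C:\inetpub\ContosoUniversity'
        }
    }
}

# Apply the LCM settings (compiled to a *.meta.mof), then the configuration.
LcmSettings -ConfigurationData $configurationData -OutputPath C:\Dsc\Mof
Set-DscLocalConfigurationManager -Path C:\Dsc\Mof -Verbose
WebAndDatabase -ConfigurationData $configurationData -OutputPath C:\Dsc\Mof
Start-DscConfiguration -Path C:\Dsc\Mof -Wait -Verbose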

At this stage we now have our complete working framework in place and we can begin writing the configuration blocks that collectively will leave us with a server that is capable of running our Contoso University application.

Writing Configurations for the Web Role

Configuring for the web role requires consideration of the following factors:

  • The server features that are required to run your application. For Contoso University that's IIS, .NET Framework 4.5 Core and ASP.NET 4.5.
  • The mandatory IIS configurations for your application. For Contoso University that's a web site with a custom physical path.
  • The optional IIS configurations for your application. I like things done in a certain way so I want to see an application pool called ContosoUniversity and the Contoso University web site configured to use it.
  • Any tidying-up that you want to do to free resources and start thinking like you are configuring NanoServer. For me this means removing the default web site and default application pools.

Although you'll know if your configurations have generated errors, how will you know if they've generated the desired result? The following 'debugging' options can help:

  • I know that the home page of Contoso University will load without a connection to a database, so I copied a build of the website to C:\inetpub\ContosoUniversity on PRM-DAT-AIO so I could test the site with a browser. You can download a zip of the build from here although be aware that AV software might mistakenly regard it as malware.
  • The IIS management tools can be installed on target nodes whilst you are in configuration mode so you can see graphically what's happening. The configuration shown in the sketch after this list does the trick.
  • If you are testing with a local version of Internet Explorer make sure you turn off Compatibility View or your site may render with odd results. From the IE toolbar choose Tools > Compatibility View Settings and uncheck Display intranet sites in Compatibility View.
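
A sketch of that management tools configuration, added to the Node block (Web-Mgmt-Console is the feature name for the IIS management console):

WindowsFeature IISManagementTools
{
    Ensure = 'Present'
    Name   = 'Web-Mgmt-Console'
}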

Whilst you are in configuration mode the following resources will be of assistance:

  • The xWebAdministration documentation on GitHub: https://github.com/PowerShell/xWebAdministration.
  • The example files that ship with xWebAdministration: C:\Program Files\WindowsPowerShell\Modules\xWebAdministration\n.n.n.n\Examples.
  • A Google search for xWebAdministration.

The configuration settings required to meet my requirements stated above are as follows:
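
A sketch of the blocks that implement the four requirements above, inside the Node block and assuming the xWebsite and xWebAppPool resources from xWebAdministration 1.10 (property names are worth checking against the module's examples):

# Server features required by the application.
WindowsFeature IIS
{
    Ensure = 'Present'
    Name   = 'Web-Server'
}

WindowsFeature DotNet45Core
{
    Ensure = 'Present'
    Name   = 'NET-Framework-45-Core'
}

WindowsFeature AspNet45
{
    Ensure = 'Present'
    Name   = 'Web-Asp-Net45'
}

# Mandatory and optional IIS configurations: a web site with a custom
# physical path, running in its own application pool.
xWebAppPool ContosoUniversity
{
    Ensure    = 'Present'
    Name      = 'ContosoUniversity'
    State     = 'Started'
    DependsOn = '[WindowsFeature]IIS'
}

xWebsite ContosoUniversity
{
    Ensure          = 'Present'
    Name            = 'ContosoUniversity'
    PhysicalPath    = 'C:\inetpub\ContosoUniversity'
    ApplicationPool = 'ContosoUniversity'
    State           = 'Started'
    DependsOn       = '[xWebAppPool]ContosoUniversity', '[File]WebContentFolder'
}

# Tidying up: remove the default web site and application pool.
xWebsite RemoveDefaultWebSite
{
    Ensure       = 'Absent'
    Name         = 'Default Web Site'
    PhysicalPath = 'C:\inetpub\wwwroot'
    DependsOn    = '[WindowsFeature]IIS'
}

xWebAppPool RemoveDefaultAppPool
{
    Ensure = 'Absent'
    Name   = 'DefaultAppPool'
}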

There is one more piece of the jigsaw to finish the configuration and that's amending the application pool to use a domain account that has permissions to talk to SQL Server. That's a more advanced topic so I'm dealing with it later.

Writing Configurations for the Database Role

Configuring for the SQL Server database role is slightly different from the web role since we need to install SQL Server which is a separate application. The installation files need to be made available as follows:

  • Choose a server in the domain to host a fileshare. As above I'm using my domain controller. Create a folder called C:\Dsc\DscInstallationMedia and share it as Read/Write for Everyone as \\PRM-CORE-DC\DscInstallationMedia.
  • Download a suitable SQL Server ISO image to the server hosting the fileshare -- I used en_sql_server_2014_enterprise_edition_with_service_pack_1_x64_dvd_6669618.iso from MSDN Subscriber Downloads.
  • Mount the ISO and copy the contents of its drive to a folder called SqlServer2014 created under C:\Dsc\DscInstallationMedia.

In contrast to configuring for the web role there are fewer configurations required for the database role. There is a requirement to supply a credential though and for this I'm using the Key Vault technique described in this post. This gives rise to new code within and preceding the configuration hash table as follows:
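
A sketch of both pieces follows. The vault name, secret name and account name are placeholders, and allowing plain text passwords is a lab-only convenience (certificates are the production approach):

# Preceding the hash table: build the credential that SQL Server setup will
# run under. Get-AzureKeyVaultSecret returns the password as a SecureString.
# 'prmkeyvault', 'SqlSetupPassword' and 'PRM\SqlSetup' are placeholders.
$sqlSetupPassword = (Get-AzureKeyVaultSecret -VaultName 'prmkeyvault' -Name 'SqlSetupPassword').SecretValue
$sqlSetupCredential = New-Object System.Management.Automation.PSCredential('PRM\SqlSetup', $sqlSetupPassword)

# Within the hash table: permit the credential to be compiled into the MOF.
$configurationData = @{
    AllNodes = @(
        @{
            NodeName                    = 'PRM-DAT-AIO'
            PSDscAllowPlainTextPassword = $true
            PSDscAllowDomainUser        = $true
        }
    )
}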

For a server such as the one we are configuring where the database is on the same machine as the web server and only the database engine is required there are just two configuration blocks needed to install SQL Server. For more complicated scenarios the following resources will be of assistance:

  • The xSQLServer documentation on GitHub: https://github.com/PowerShell/xSQLServer.
  • The example files that ship with xSQLServer: C:\Program Files\WindowsPowerShell\Modules\xSQLServer\n.n.n.n\Examples.
  • A Google search for xSQLServer.

The configuration settings required for the single server scenario are as follows:
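
A sketch of those two blocks, using the xSQLServerSetup resource; the property names are as I understand them for xSQLServer 1.5 and should be checked against the module's examples:

# SQL Server 2014 setup requires the .NET Framework 3.5 feature.
WindowsFeature DotNet35Core
{
    Ensure = 'Present'
    Name   = 'NET-Framework-Core'
}

# Installs the database engine plus the management tools; trim SSMS and
# ADV_SSMS from Features once 'debugging' is done. Additional parameters
# (collation, sysadmin accounts and so on) are omitted here.
xSQLServerSetup SqlServer2014
{
    SourcePath      = '\\PRM-CORE-DC\DscInstallationMedia'
    SourceFolder    = 'SqlServer2014'
    SetupCredential = $sqlSetupCredential
    InstanceName    = 'MSSQLSERVER'
    Features        = 'SQLENGINE,SSMS,ADV_SSMS'
    DependsOn       = '[WindowsFeature]DotNet35Core'
}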

In order to assist with ‘debugging' activities I've included the installation of the SQL Server management tools but this can be omitted when the configuration has been tested and deemed fit for purpose. Later in this post we'll manually install the remaining parts of the Contoso University application to prove that the installation worked but for the time being you can run SQL Server Management Studio to see the database engine running in all its glory!

Amending the Application Pool Identity

The Contoso University website is granted access to the database via a domain account that firstly gets configured as the Identity for the website's application pool and then gets configured as a SQL Server login associated with a user which has the appropriate permissions to the database. The SQL Server configuration is taken care of by a permissions script that we'll come to shortly, and the immediate task is concerned with amending the Identity property of the ContosoUniversity application pool so that it references a domain account.

Initially this looked like it was going to be painful since xWebAdministration doesn't currently have the ability to configure the inner workings of application pools. Whilst investigating the possibilities I had the good fortune to come across a fork of xWebAdministration on the PowerShell.org GitHub site where those guys have created a module which does what we want. I need to introduce a slight element of caution here since the fork doesn't look like it's under active development. On the other hand maybe there are no major issues that need fixing. And if there are and they aren't going to get fixed at least the code is there to be forked. Because this fork isn't in the PowerShell Gallery getting it to work locally is a manual process:

  • Download the code to C:\Dsc\Resources and unblock and extract it. Change the folder name from cWebAdministration-master to cWebAdministration and copy to C:\Program Files\WindowsPowerShell\Modules.
  • In the configuration block reference the module as Import-DscResource -ModuleName @{ModuleName="cWebAdministration";ModuleVersion="2.0.1"}.

The configuration required to make the resource available to target nodes has an extra manual step:

  • In the root of C:\DSC\Resources\cWebAdministration create a folder named 2.0.1 and copy the contents of C:\DSC\Resources\cWebAdministration to this folder.
  • The following code can now be used to package the resource and copy it to the fileshare:
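
As a sketch, mirroring what we did for the gallery modules:

$zipPath = 'C:\Dsc\Resources\cWebAdministration_2.0.1.zip'
Compress-Archive -Path 'C:\Dsc\Resources\cWebAdministration\*' -DestinationPath $zipPath -Force
New-DscChecksum -Path $zipPath -Force
Copy-Item -Path "$zipPath*" -Destination '\\PRM-CORE-DC\DscResources' -Force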

I tend towards using a different domain account for the Identity properties of the website application pools in the different environments that make up the deployment pipeline. Doing so protects the pipeline from a complete failure if something happens to one of those domain accounts -- if it gets locked out, for example. To support this scenario the configuration block to configure the application pool identity needs to support dynamic configuration and takes the following form:
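
A sketch of such a block using the fork's application pool resource. The resource name (cWebAppPool) and its identity-related property names are assumptions here and should be verified against the fork's source:

# Runs the application pool as a domain account; the account name and
# password come from the node's entry in the configuration data, which is
# what makes the configuration dynamic per environment.
cWebAppPool ContosoUniversityAppPool
{
    Name         = 'ContosoUniversity'
    Ensure       = 'Present'
    State        = 'Started'
    IdentityType = 'SpecificUser'
    UserName     = $Node.WebAppPoolUserName
    Password     = $Node.WebAppPoolPassword
}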

The dynamic configuration is supported by Key Vault code to retrieve the password of the domain account used to configure the application pool (not shown) and the following additions to the configuration hash table:
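
With the Key Vault retrieval populating a $webAppPoolPassword variable (not shown), the node's entry in the hash table gains properties along these lines:

@{
    NodeName                    = 'PRM-DAT-AIO'
    PSDscAllowPlainTextPassword = $true
    WebAppPoolUserName          = 'PRM\CU-DAT'
    WebAppPoolPassword          = $webAppPoolPassword
}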

The code does of course rely on the existence of the PRM\CU-DAT domain account (set so the password doesn't expire). This is the last piece of configuration, and you can view the final result on GitHub here.

The Moment of Truth

After all that configuration, is it enough to make the Contoso University application work? To find out:

  • If you haven't already, download, unblock and unzip the ContosoUniversityConfigAsCode package from here, although as mentioned previously be aware that AV software might mistakenly regard it as malware.
  • The contents of the Website folder should be copied (if not already) to C:\inetpub\ContosoUniversity on the target node.
  • Edit the SchoolContext connection string in Web.config if required -- the download has the server set to localhost and the database to ContosoUniversity.
  • On the target node run SQL Server Management Studio and install the database as follows:
    • In Object Explorer right-click the Databases node and choose Deploy Data-tier Application.
    • Navigate through the wizard, and at Select Package choose ContosoUniversity.Database.dacpac from the database folder of the ContosoUniversityConfigAsCode download.
    • Move to the next page of the wizard (Update Configuration) and change the Name to ContosoUniversity.
    • Navigate past the Summary page and the DACPAC will be deployed:
      [Screenshot: deploying the DACPAC via the Deploy Data-tier Application wizard in SSMS]
  • Still in SSMS, apply the permissions script as follows:
    • Open Create login and database user.sql from the Database\Scripts folder in the ContosoUniversityConfigAsCode download.
    • If the pre-configured login/user (PRM\CU-DAT) is different from the one you are using update accordingly, then execute the script.

You can now navigate to http://prm-dat-aio (or whatever your server is called) and if all is well make a mental note to pour a well-deserved beverage of your choosing.

Looking Forward

Although getting this far is certainly an important milestone it's by no means the end of the journey for the configuration as code story. Our configuration code now needs to be integrated into the Contoso University Visual Studio solution so that it can be built as an artefact alongside the website and database artefacts. We then need to be able to deploy the configuration before deploying the application -- all automated through the new release management tooling that has just shipped with TFS 2015 Update 2 or through VSTS if you are using that. Until next time...

Cheers -- Graham

  • Zac

    Nice post, I too was unsure of using a fork of xWebAdministration. In the end I used Get, Test and Set scripts inside my ps1 deployment PowerShell script.

    The Write-Verbose calls are so I can see what’s happening in Release Management when it executes.

    I also have a custom module(cHosts) that gets deployed with a release to add entries to my hosts file.

    #
    # Change the AppPool Identity
    #
    Script ChangeAppPoolIdentity
    {
        GetScript = { return @{ AppPoolName = "$($using:WebAppPoolName)" } }
        TestScript = {
            Import-Module WebAdministration -Verbose:$false
            $pool = Get-Item "IIS:\AppPools\$($using:WebAppPoolName)"
            $TestReturn = $true

            if ($pool.processModel.userName -ne $using:AppPoolUserName) { $TestReturn = $false }
            if ($pool.managedRuntimeVersion -ne $using:AppPoolRuntimeVersion) { $TestReturn = $false }
            if ($pool.processModel.identityType -ne 'SpecificUser') { $TestReturn = $false }

            Write-Verbose $pool.Name
            Write-Verbose "Mismatched Identity Type:- $($pool.processModel.identityType -ne 'SpecificUser') - Current Identity $($pool.processModel.identityType)"
            Write-Verbose "Mismatched Username :- $($pool.processModel.userName -ne $using:AppPoolUserName) - Current Username $($pool.processModel.userName)"
            Write-Verbose "Mismatched RuntimeVersion :- $($pool.managedRuntimeVersion -ne $using:AppPoolRuntimeVersion) - Current RuntimeVersion $($pool.managedRuntimeVersion)"
            Write-Verbose "Reset the AppPool :- $(-not $TestReturn)"

            return $TestReturn
        }
        SetScript = {
            Import-Module WebAdministration -Verbose:$false

            $pool = Get-Item "IIS:\AppPools\$($using:WebAppPoolName)"

            $pool.processModel.userName = [string]($using:AppPoolUserName)
            $pool.processModel.password = [string]($using:AppPoolPassword)
            $pool.processModel.identityType = 'SpecificUser'
            $pool.managedRuntimeVersion = [string]($using:AppPoolRuntimeVersion)

            $pool | Set-Item
        }
        DependsOn = '[xWebAppPool]NewWebAppPool'
    }

    Script originally based on one from Michael Kaufmann; I’ll include a link if I can find it again.

    • Graham Smith

      Many thanks for sharing your code Zac – looks like you are doing some great work! I’m just about to start work on implementing DSC with the new RM tooling. Several possible ways to go so I’ll probably end up doing a few options.

      Cheers – Graham

  • Sam

    Nice article as always!
    Just wanted to pick your brain on my new development. I’m developing a release pipeline using TFS 2015 U2 for on-prem .NET web apps. Did you ever try using a DSC pull server for sharing custom resources and other common configuration for target nodes, while using TFS RM activities to push application-specific configurations to those target nodes?

    Thanks for your help!
    Sam

    • Graham Smith

      Hi Sam

      Many thanks for your kind comment! I haven’t used DSC Pull in the way you suggest but I think it could be a great fit for on premises servers. I think I say in a later post that refactoring the common configurations (eg server roles such as IIS) is probably the way forward as you don’t really want these repeated for every application, and on premises DSC Pull would be a good way to ensure each server was properly provisioned before deploying to it. For completeness, in Azure the ARM template would be able to handle this after the infrastructure had been created. Good luck and do post back with how you get on.

      Cheers – Graham

  • Stu

    Hi Graham,

    Another on premise TFS/Release Manager user here.

    Do you have any thoughts on using DSC where you have multiple applications per server?

    As background, to date, we have had a very application centric view of the world, and as such, each application has its own Git repo and release pipeline. When deploying, multiple apps get placed on the same server, and depending on the environment, grouping of applications on the servers may differ. Our deployments, when performed manually, are typically along the lines of:

    1. Stop our bespoke services
    2. robocopy application/service code
    3. Publish website
    4. Manually run database scripts
    5. Restart bespoke services
    6. Done

    On the face of it, this is something that could be automated with TFS/Release Manager, using PowerShell scripts for pre and post deploy actions such as stopping and starting the services. However, despite being fine for updates to an already installed application, it doesn’t get us to the level where DSC would be able to create a fresh installation on a new box (ie, installing the services and setting up the website).

    I have been looking at DSC to possibly help here and would like to ask your opinion on best approaches.

    Firstly, from watching some Pluralsight DSC courses, they seem to drill home the concept of having one DSC per SERVER, which makes it difficult for us to manage the applications with DSC. Each of our servers is potentially going to need a different DSC setup if we do it at a server level.

    Q: Do you think it is acceptable to break this and write DSC scripts at an application rather than server level? Is this even possible?

    Secondly, if I am to use DSC here, it looks like there should be three states, and the release pipeline should be:

    1) ** Start-DSCConfiguration : Services + website configured
    2) ** Start-DSCConfiguration : Services Stopped
    3) Copy application/service code to server
    4) Publish website
    5) Apply database updates
    6) ** Start-DSCConfiguration : Services Started
    7) Done

    Though, whilst typing that, it does seem that I’d be configuring a website and service before actually deploying the code to the box… but that said, I can’t deploy if the services are running… and I can’t stop the services unless they are deployed – a nice chicken and egg mess I’ve got myself into there 

    Have you any thoughts on how DSC could be used in this context, or whether it should? Does the above approach look sensible, or should I limit DSC use to configuring a server, and handle the application updates using pre/post powershell scripts?

    Finally, I’ve found your blog really helpful. Thank you very much – it’s really appreciated!

    • Graham Smith

      Hi Stu

      Fantastic to hear that you are finding my blog useful, and thanks for your kind comment!

      Hopefully it’s obvious that my blog posts using DSC were only meant to illustrate the range of possibilities and I think I say somewhere that production use is likely to be very different from the way I’ve used it in my blog.

      The way things seem to be shaping up is as per your comment near the end of your question where on premises at least (where servers tend to be treated more as pets than livestock) DSC would be used to configure servers for their roles (IIS for example) and drift corrected using DSC Pull, whilst application specific configuration is handled using imperative PowerShell. As I found in my blog post, deploying a new set of web files to an already existing website that needs to be stopped before the file copy is problematic using DSC. One of those situations where just because you can doesn’t mean you should…

      One of the guys on my team where I work is developing considerable expertise in using PowerShell for configuring the deployment pipeline. I’ll ask him to take a look at your question to see if he has anything to add.

      Cheers – Graham

    • thealmguy

      Hi Stu,

      I’m currently working on Graham’s team, and we’ve recently put together a full deployment solution using ARM, DSC and imperative PowerShell.

      Graham’s pretty much covered it in his comment – in our scenario, we have one product consisting of multiple websites, services and APIs. The approach we settled on was to use DSC to configure the servers in the environment, then use imperative (& idempotent) PowerShell Script to perform the actual application deployment, catering for both a clean environment, and one that has been deployed to previously. We still deploy both the server configuration and the application at the same time through TFS Release Management, server first, then application. This does require some upfront work; however, we found that you only need to write a couple of sets of functions to cover most of a deployment (e.g. Add/Update/Remove App Pools and Websites from IIS, and Add/Edit/Remove Windows Services).

      As you’ve mentioned, your scenario of shared infrastructure does mean you would violate the one MOF per server rule if you used DSC for multiple application deployments to the same server, and DSC is not designed to apply many different configurations to one server. Again, I would lean towards separate imperative PowerShell at an application level.

      The real challenge in a shared infrastructure environment is not alienating server configuration, as inevitably it will be held in a separate repository to avoid duplicating it in all your application repositories. (If all your applications’ code is in one repository you have escaped this complication!) The server configuration should effectively be your platform configuration, and should not be duplicated at an application configuration level. You could have a repository for your server DSC, which produces a versioned package which is then deployed with any application. This will enable you to keep the server configuration as part of the deployment process, but also ensure all applications are deploying the same version of the server configuration and that there is only one version of the truth for your server configuration. We also stored the parameters for the server configuration outside the actual DSC script, passing it in as configuration data. This allowed us to also use the server configuration objects in the application configuration to reduce duplication (server names, ip addresses, login credentials etc.)

      Hope this helps! We are still refining our technique, but what this has taught us is that PowerShell (and DSC) is highly versatile, and there will be a way of using it that will fulfil your needs – it’s just a case of finding what works best in your scenario :)

  • Sam

    Hi Graham,

    If I may seek your professional advice here. What’s your take on the location of these configuration and deployment scripts? Do you think they should sit alongside the project in the same VS solution, or should they be somewhere on a DFS share available to all projects? I’m leaning towards the latter approach as it gives me more governance control, since project teams don’t want or need to worry about these deployment configuration scripts. And this approach gives me consistency in deployments across applications without worrying about dev teams interfering with these configurations. Thoughts?

    • Graham Smith

      Hi Sam

      Very good question! I think it probably comes down to reusability, by which I mean if you have many servers to configure you probably don’t want to have the code repeated in every VS solution. If that’s the case then I think yes – having this code somewhere central where it can be reused makes good sense. However, the config code that is specific to the application is probably best living with the application code.

      Cheers – Graham
