Continuous Delivery with TFS: Standing up an Environment

Posted by Graham Smith on January 9, 2015

At this point in my series of posts on building a continuous delivery pipeline with TFS we have installed and configured quite a lot of the TFS infrastructure that we will need; however, as yet we don't have an environment to deploy our sample application to. We'll attend to that in this post, but first a few words about the bigger environments picture.

Thinking About the Bigger Environments Picture

When thinking about what can cause an application deployment to fail, the first things we probably think about are the application code itself and then any configuration settings the code needs in order to run. That's only one side of the coin though, and the configuration of the environments we are deploying to might equally be to blame when things go wrong. It makes sense then that we need tooling to manage both the application and its configuration and also the environments we deploy to. The TFS ecosystem addresses the former but doesn't address the latter. So what does? There are two large pieces to consider: provisioning the servers that form the environment our application needs to run in, and then the actual configuration of those servers.

Provisioning servers is very much a horses for courses affair: what you do depends on where your servers will run and whether they need to be created afresh each time they are used. On premises it might be fine for servers to be long-lived and always on. In the cloud you may want the ability to create a test environment for a specific task and then tear it down afterwards to save costs. In the Windows world dynamically creating environments can be achieved pretty much anywhere using scripting tools such as PowerShell. In Azure there are further options, since there is now tooling such as Brewmaster or Azure Resource Manager that can create servers from templates describing what is required.

Once servers have been created, managing their internal configuration through tooling can and should be done wherever they are running. The idea is that the only way a server gets changed is through the tooling, so that if the server needs to be recreated (or more of the same are needed) it's pretty much a push-button exercise. (This is probably in contrast to the position most organisations are in today, where servers are tweaked by hand as required until they become works of art that nobody knows how to faithfully recreate. If you are in the position of needing to know the configuration of an existing server then a product such as GuardRail can probably help.) The tooling to manage server configuration includes Puppet, Chef and Microsoft's new offering in this space, PowerShell Desired State Configuration (DSC).
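To give a flavour of the DSC approach, here is a minimal sketch that ensures the IIS role is present on a web server. The node name is just an example, and pushing the configuration assumes WinRM connectivity to the target:

    # Minimal DSC sketch: ensure the IIS role is present on a web server.
    # ALMWEB01 is an example node name; push mode assumes WinRM connectivity.
    Configuration WebServerConfig
    {
        Node "ALMWEB01"
        {
            WindowsFeature IIS
            {
                Ensure = "Present"
                Name   = "Web-Server"
            }
        }
    }

    # Compile the configuration to a MOF file and push it to the node
    WebServerConfig -OutputPath "C:\Dsc"
    Start-DscConfiguration -Path "C:\Dsc" -Wait -Verbose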

The techniques and tools mentioned above are definitely the way to go, however in this post I'm ignoring all that because the aim is to get the simplest possible thing working (although researching and writing about automating infrastructure and server configuration is on my list). Additionally, because we are setting up a demo environment we'll be working in multi-tenancy mode, ie multiple environments are hosted on the same server to keep the number of VMs that will be required to a minimum.

Provisioning Web and Database Servers

Our sample application consists of a web front end talking to a SQL Server database, so we'll need two servers -- ALMWEB01 running IIS and ALMSQL01 running SQL Server 2014 or whatever version makes you happy. I use a basic A2 VM for the web server and a basic A3 for SQL Server, both created from a Windows Server 2012 R2 Datacenter image and configured as per my Azure foundations post. Once stood up, the VMs need joining to the domain and configuring for their server roles, the details of which I'm not covering here as I'm assuming you already know how or can learn. One point to note is that you need to resist the temptation to create the SQL Server VM from a preconfigured SQL Server image from the gallery, as this will eat your Azure credits: with these images you pay for the SQL Server licence as part of the running costs. Rather, download SQL Server from MSDN and do the installation yourself on a vanilla Windows Server VM.
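For anyone who prefers scripting to the portal, the web server VM could be stood up with the (classic) Azure PowerShell module along these lines. This is a hedged sketch: the cloud service name, admin credentials and region are placeholders you would substitute with your own.

    # Hedged sketch: stand up the web server VM with the classic Azure
    # PowerShell module. Assumes Add-AzureAccount has been run and a default
    # subscription and storage account are set. Names below are placeholders.
    $imageName = (Get-AzureVMImage |
        Where-Object { $_.Label -like "Windows Server 2012 R2 Datacenter*" } |
        Select-Object -First 1).ImageName

    # -Location is only needed if the cloud service does not already exist
    New-AzureQuickVM -Windows `
        -ServiceName "alm-environments" `
        -Name "ALMWEB01" `
        -ImageName $imageName `
        -InstanceSize "Basic_A2" `
        -AdminUsername "almadmin" `
        -Password "<your password>" `
        -Location "West Europe"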

Installing the Release Management Deployment Agent

The final piece of configuration we need to perform to make these VMs ready to participate in the delivery pipeline is to install the Release Management Deployment Agent. The agent needs to run with a service account, so create a domain account (I use ALM\RMDEPLOYER) and add it to the Administrators group of the two servers (this can also be scripted; see the sketch at the end of this section). Next, open up the Release Management client and add this account (Administration > Manage Users), giving it the Service User role. Back on your deployment servers you can now run the Deployment Agent installer and provide the appropriate configuration details:

[Image: Release Management Deployment Agent configuration]

After clicking Apply settings the installer will run through its list of items to configure and, if there are no errors, the agent will be up and running and ready to communicate with the Release Management server. To check this, open up the Release Management client and navigate to Configure Paths > Servers. Click the down arrow next to the New button and choose Scan for Agents. This will bring up the Unregistered Servers dialog, which allows one to scan for and then register servers. If all is well you'll be able to register your two servers, which should appear in the Servers tab as follows:

[Image: registered servers in the Release Management client]
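As an aside, the service account steps above can be scripted rather than clicked through. A hedged sketch, assuming a domain-joined machine with the ActiveDirectory PowerShell module available and PowerShell remoting enabled on both servers:

    # Hedged sketch: create the RMDEPLOYER service account and make it a local
    # administrator on both deployment servers. Assumes the ActiveDirectory
    # module is available and remoting is enabled on the targets.
    Import-Module ActiveDirectory

    $password = Read-Host -AsSecureString -Prompt "Password for RMDEPLOYER"
    New-ADUser -Name "RMDEPLOYER" -SamAccountName "RMDEPLOYER" `
        -AccountPassword $password -Enabled $true

    foreach ($server in "ALMWEB01", "ALMSQL01")
    {
        Invoke-Command -ComputerName $server -ScriptBlock {
            net localgroup Administrators "ALM\RMDEPLOYER" /add
        }
    }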

We're not quite ready to start building a pipeline with Release Management yet, and in the next instalment we'll carry out the configuration steps that will take us to that point.

Cheers -- Graham

Continuous Delivery with TFS: Our Sample Application

Posted by Graham Smith on December 27, 2014

In this post, part of my series on implementing continuous delivery with TFS, we look at the sample application that will be used to illustrate various aspects of the deployment pipeline. I've chosen Microsoft's fictional Contoso University ASP.NET MVC application as it comprises components that need to be deployed to a web server and a database server, and it lends itself to (reasonably) easily demonstrating automated acceptance testing. You can download the completed application here and read more about its construction here.

Out of the box Contoso University uses Entity Framework Code First Migrations for database development; however, this isn't what I would use for enterprise-level software development. Instead I recommend using a tool such as Microsoft's SQL Server Data Tools (SSDT), and more specifically the SQL Server Database Project component of SSDT that can be added to a Visual Studio solution. The main focus of this post is on converting Contoso University to use a SQL Server Database Project, and if you are not already up to speed with this technology I have a Getting Started post here. Please note that I don't describe every mouse-click below, so some familiarity will be essential. I'm using the version of LocalDb that corresponds to SQL Server 2012 ((LocalDb)\v11.0) below as this is what Contoso University has been developed against. If you want to use the LocalDb that corresponds to SQL Server 2014 ((localdb)\ProjectsV12) then it will probably work, but watch out for glitches. So, there is a little bit of work to do to get Contoso University converted, and this post will take us to the point of readying it for configuration with TFS.

Getting Started
  1. Download the Contoso University application using the link above, then unblock and extract the zip to a convenient temporary location.
  2. Navigate to ContosoUniversity.sln and open the solution. Build the solution, which should cause NuGet packages to be restored using the Automatic Package Restore method.
  3. From the Package Manager Console issue an Update-Database command (you may have to close down and restart Visual Studio for the command to become available). This should cause a ContosoUniversity2 database (including data) to be created in LocalDb. (You can verify this by opening the SQL Server Object Explorer window and expanding the (LocalDb)\v11.0 node. ContosoUniversity2 should be visible in the Databases folder. Check that data has been added to the tables as we're going to need it.)
Remove EF Code First Migrations
  1. Delete SchoolInitializer.cs from the DAL folder.
  2. Delete the databaseInitializer configuration from Web.config (this will probably be commented out but I'm removing it for completeness' sake). A sketch of the section in question appears after this list.
  3. Remove the Migrations folder and all its contents.
  4. Expand the ContosoUniversity2 database from the SQL Server Object Explorer window and delete dbo.__MigrationHistory from the Tables folder.
  5. Run the solution to check that it still builds and data can be edited.
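For reference, the initializer section removed in step 2 looks something like the following. The exact type names depend on how the initializer was wired up in your copy, so treat this as an approximation:

    <entityFramework>
      <contexts>
        <!-- Approximate shape of the databaseInitializer registration to remove -->
        <context type="ContosoUniversity.DAL.SchoolContext, ContosoUniversity">
          <databaseInitializer type="ContosoUniversity.DAL.SchoolInitializer, ContosoUniversity" />
        </context>
      </contexts>
    </entityFramework>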
Configure the solution to work with a SQL Server Database Project (SSDP)
  1. Add an SSDP called ContosoUniversity.Database to the solution.
  2. Import the ContosoUniversity2 database to the new project using default values.
  3. In the ContosoUniversity.Database properties enable Code Analysis in the Code Analysis tab.
  4. Create and save a publish profile called CU-DEV.publish.xml to publish to a database called CU-DEV on (LocalDb)\v11.0.
  5. In Web.config change the SchoolContext connection string to point to CU-DEV (a sketch follows this list).
  6. Build the solution to check there are no errors.
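For step 5, the amended connection string should end up looking broadly like this; the exact attributes in your copy may differ slightly:

    <connectionStrings>
      <!-- Broad shape only; the key changes are Data Source and Initial Catalog -->
      <add name="SchoolContext"
           connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=CU-DEV;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>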
Add Dummy Data

The next step is to provide the facility to add dummy data to a newly published version of the database. There are a couple of techniques for doing this depending on requirements -- the one I'm demonstrating only adds the dummy data if a table contains no rows, ensuring that a live database can't get polluted. I'll be extracting the data from ContosoUniversity2 and I'll want to maintain existing referential integrity, so I'll be using SET IDENTITY_INSERT ON | OFF on some tables to insert values to primary key columns that have the identity property set. Firstly create a new folder in the SSDP called ReferenceData (or whatever pleases you) and then add a post deployment script file (Script.PostDeployment.sql) to the root of the ContosoUniversity.Database project (note there can only be one of these). Then follow this general procedure for each table:

  1. In the SQL Server Object Explorer window expand the tree to display the ContosoUniversity2 database tables.
  2. Right click a table and choose View Data. From the table's toolbar click the Script icon to create the T-SQL to insert the data (SET IDENTITY_INSERT ON | OFF should be added by the scripting engine where required).
  3. Amend the script with an IF statement so that the insert will only take place if the table is empty. The resulting script should look similar to the first sketch after this list.
  4. Save the file in the ReferenceData folder in the format TableName.data.sql and add it to the solution as an existing item.
  5. Use the SQLCMD syntax to call the file from the post deployment script file. (The order in which the table inserts are executed will need to cater for referential integrity; Person, Department, Course, CourseInstructor, Enrollment and OfficeAssignment should work.) When editing Script.PostDeployment.sql, enabling the SQLCMD Mode toolbar button will turn off Transact-SQL IntelliSense and stop ‘errors' from being highlighted.
  6. When all the ReferenceData files have been processed, Script.PostDeployment.sql should look something like the second sketch after this list.
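As an illustration of step 3, a ReferenceData script for the Department table might look broadly like this. The column names and values here are assumptions based on the stock Contoso University schema, so expect yours to differ:

    -- Department.data.sql: hedged sketch. Column names and values are assumed
    -- from the stock Contoso University schema; the insert only runs when the
    -- table is empty so that a live database can't get polluted.
    IF NOT EXISTS (SELECT 1 FROM [dbo].[Department])
    BEGIN
        SET IDENTITY_INSERT [dbo].[Department] ON

        INSERT INTO [dbo].[Department] ([DepartmentID], [Name], [Budget], [StartDate], [InstructorID])
        VALUES (1, N'Engineering', 350000, N'2007-09-01', 1),
               (2, N'English', 120000, N'2007-09-01', 2)

        SET IDENTITY_INSERT [dbo].[Department] OFF
    END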
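And for step 6, with all the ReferenceData files in place, Script.PostDeployment.sql should end up broadly as follows, with the :r includes ordered for referential integrity:

    /*
     Post deployment script: seed the reference data in dependency order.
     :r is SQLCMD syntax for including another script file.
    */
    :r .\ReferenceData\Person.data.sql
    :r .\ReferenceData\Department.data.sql
    :r .\ReferenceData\Course.data.sql
    :r .\ReferenceData\CourseInstructor.data.sql
    :r .\ReferenceData\Enrollment.data.sql
    :r .\ReferenceData\OfficeAssignment.data.sql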

    You should now be able to use CU-DEV.publish.xml to publish a database called CU-DEV to LocalDb that contains both schema and data and works in the same way as the database created by EF Code First Migrations.
Finishing Touches

For the truly fussy among us (that's me) who like neat and tidy project names in VS solutions, there is an optional set of configuration steps that can be performed:

  1. Remove the ContosoUniversity ASP.NET MVC project from the solution and rename it to ContosoUniversity.Web. In the file system rename the containing folder to ContosoUniversity.Web.
  2. Add the renamed project back in to the solution and from the Application tab of the project's Properties change the Assembly name and Default namespace to ContosoUniversity.Web.
  3. Perform the following search and replace actions:
    namespace ContosoUniversity > namespace ContosoUniversity.Web
    using ContosoUniversity > using ContosoUniversity.Web
    ContosoUniversity.ViewModels > ContosoUniversity.Web.ViewModels
    ContosoUniversity.Models > ContosoUniversity.Web.Models
  4. You may need to close the solution and reopen it before checking that nothing is broken and the application runs without errors.

That's it for the moment. In the next post in this series I'll explain how to get the solution under version control in TFS and how to implement continuous integration.

Cheers -- Graham

Getting Started with SQL Server Database Projects

Posted by Graham Smith on December 13, 2014

By now hopefully all developers understand the importance of keeping their source code under version control and are actually practising this for any non-throwaway code. That's all fine and dandy for your application, but what about your database? In my experience it's pretty rare for databases to be under version control, probably because in the past the tooling has been inadequate or simply off developer radars. There are a number of tools that can help with database version control, but one of the most readily accessible for Visual Studio developers is the SQL Server Database Project that can be added to a Visual Studio solution. SQL Server Database Projects are part of the Microsoft SQL Server Data Tools (SSDT) package, which is obviously aimed at developing against SQL Server. You can start with a blank database, but most likely you will already have an existing database, in which case the database project has the ability to reverse engineer the schema. The result of this process is a series of files containing CREATE statements for the objects that comprise your database (tables, stored procedures and so on), with the files themselves (usually one per object) organised in a folder structure. Since these are essentially text files just like any other code file, you can check them in to version control and have any changes recorded just like you would with, for example, a C# file.
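To give a feel for what the reverse-engineering process produces, a single-object file in a database project looks something like this (the table itself is made up for the example):

    -- Illustrative single-object file as it might appear in a database project;
    -- the Student table here is invented for the example.
    CREATE TABLE [dbo].[Student]
    (
        [ID]        INT            IDENTITY (1, 1) NOT NULL,
        [LastName]  NVARCHAR (50)  NOT NULL,
        [FirstName] NVARCHAR (50)  NOT NULL,
        CONSTRAINT [PK_Student] PRIMARY KEY CLUSTERED ([ID] ASC)
    );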

In addition to facilitating version control, database projects offer a wealth of extra functionality. A declarative approach is used with database projects, ie you state how you want your database to be via CREATE statements, and then another process is responsible for making the schema of one or more target databases the same as the schema of your database project. You can also publish your schema to a new database -- ideal if you need to create a LocalDb version on a new development workstation, for example. This is really only the tip of the iceberg and I encourage you to use the resources below as a starting point for learning about database projects and SSDT.

Since SSDT is built in to Visual Studio 2013 the barrier to getting started is very low. Be sure to check for any updates from within Visual Studio (Tools > Extensions and Updates) before you begin. Finally, anyone who has spotted that the SQL Server installation wizard has an option to install SQL Server Data Tools has every right to be confused, since at one point in time this was also the new name for what was once BIDS (Business Intelligence Development Studio). If you want to know more then this post and also this one will help clarify. Maybe.

Cheers -- Graham