Deploy a Dockerized ASP.NET Core Application to Kubernetes on Azure Using a VSTS CI/CD Pipeline: Part 2

Posted by Graham Smith on March 21, 2018

If you need to provision a new environment for your deployment pipeline, what's your process and how long does it take? For many of us the process probably starts with a request to an infrastructure team for new virtual machines. If the new VMs are in Azure the request might be completed quite quickly; if they are on premises it might take much longer. In both scenarios you might have to justify your request: there will be actual cost in Azure and on premises it's another chunk of the datacentre ‘gone'.

With the help of containers and container orchestrators I predict (and sincerely hope) that this sort of pain will become a distant memory for much of the software development community for whom it is currently an issue. The reason is that container orchestration technologies abstract away the virtual (or physical) server layer and allow you to focus on configuring services and how they communicate with each other—all through configuration files. The only time you'd need to think of virtual (or physical) servers is if the cluster running your orchestrator needed more capacity, in which case someone will need to add more nodes. A whole new environment for your pipeline just by doing some work with a configuration file? What's not to like?

In this blog post I hope to make my prediction come alive by showing you how new environments can be quickly created using Kubernetes running in Microsoft's Azure Container Service (AKS), crucially using declarative configuration files that get deployed as part of a VSTS release pipeline. This post follows directly on from a previous post, both in terms of understanding and also the components that were built in that first post, so if you haven't already done so I recommend working your way through that post before going further.

Housekeeping

In the previous post we deployed to the default namespace so it probably makes sense to clean all this up. This can all be done by the command line of course but to mix it up a bit I'll illustrate using the Kubernetes Dashboard. You can start the dashboard using the following command, substituting in the name of your resource group and the name of the cluster:
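    az aks browse --resource-group yourResourceGroup --name yourClusterName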

This should open the dashboard in a browser displaying the default namespace. Navigate to Workloads > Deployments and using the hamburger menu delete the deployment.

Navigate to Discovery and Load Balancing > Services and delete the service.

Navigate to Config and Storage > Secret and delete the secret.

Environments and Namespaces

The Kubernetes feature that we'll use to create environments that together form part of our pipeline is Namespaces. You can think of namespaces as a way to divide the Kubernetes cluster into virtual clusters. Within a namespace resource names need to be unique, but they don't have to be across namespaces. This is great because it gives us effective isolation between environments, so resource names can stay the same in each one. Say goodbye to having to append the environment name to all the resources in your environment to make them unique.

In this post I'll make a pipeline consisting of two environments. I'm sticking with a convention I established several years ago so I'll be creating DAT (developer automated test) and PRD (production) environments. In a complete pipeline I might also create a DQC (developer quality control) environment to sit between DAT and PRD but that won't really add anything extra to this exercise.

First up is to create the namespaces. There is an argument for saying that namespace creation should be part of the release pipeline however in this post I'm going to create everything manually as I think it helps to understand what's going on. Create a file called namespaces.yaml and add the following contents:
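(Something like the following will do for the two environments described above; both namespace declarations live in one file, separated by ---.)

    apiVersion: v1
    kind: Namespace
    metadata:
      name: dat
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: prd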

Note that namespace names need to be in lower case as they need to be DNS compatible. Open a command prompt at the same location as namespaces.yaml and execute the following command: kubectl create -f namespaces.yaml. You should get a message back advising the namespaces have been created, and at one level that's all there is to it. However there are a couple of extra bits worth knowing.

When you first start working with kubectl at the command line you are working in the default namespace. Working with other namespaces requires some extra configuration.

To return details of the configuration stored in C:\Users\<username>\.kube\config use:
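    kubectl config view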

My cluster returned the following output:
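(I've replaced my actual values with placeholders here, but the shape of the output is what matters.)

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: REDACTED
        server: https://yourclustername-xxxx.hcp.westeurope.azmk8s.io:443
      name: yourClusterName
    contexts:
    - context:
        cluster: yourClusterName
        user: clusterUser_yourResourceGroup_yourClusterName
      name: yourClusterName
    current-context: yourClusterName
    kind: Config
    preferences: {}
    users:
    - name: clusterUser_yourResourceGroup_yourClusterName
      user:
        client-certificate-data: REDACTED
        client-key-data: REDACTED
        token: REDACTED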

From this output you need to determine your cluster name (which you probably already know) as well as the name of the user. These details are fed in to the following command for creating a new context for an environment (in this case the DAT environment):
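I name the context after the namespace it targets, although the context name itself is arbitrary:

    kubectl config set-context dat --cluster=yourClusterName --namespace=dat --user=clusterUser_yourResourceGroup_yourClusterName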

To switch to working in this context (and hence the dat namespace) use:
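    kubectl config use-context dat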

To confirm (or check) the current context use:
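    kubectl config current-context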

To get back to the default namespace use:
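(The context that az aks get-credentials originally created is named after the cluster and uses the default namespace.)

    kubectl config use-context yourClusterName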

Normally that would be most of what you need to know to work with namespaces, however as of the time of writing there is a bug in the VSTS Deploy to Kubernetes task which requires some extra work. The bug may be fixed by the time you read this however it's handy to examine the issue to further understand what is going on behind the scenes.

Each namespace needs to access the Azure Container Registry (ACR) we created in the previous post to pull down images. This is a private registry so we don't want open access and so some form of authentication is required. This is provided by the creation of a Kubernetes secret that holds the authentication details to the ACR. The VSTS Deploy to Kubernetes task can create this secret for us however the bug is that it only creates the secret for the default namespace and fails to create the secret when a different namespace is specified. The workaround is to create the secret manually in each namespace using the following command:
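(For an ACR with the admin user enabled the user name is the same as the registry name, hence acr-name appears twice below.)

    kubectl create secret docker-registry secret-name --namespace=namespace --docker-server=acr-name.azurecr.io --docker-username=acr-name --docker-password=acr-admin-password --docker-email=any-valid-email-address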

In the above command secret-name is any arbitrary name you choose for the secret, namespace is the namespace in which to create the secret, acr-name is the name of your ACR, acr-admin-password is the password from the Access keys panel of your ACR and any-valid-email-address is just that. You'll need to run this command for each namespace of course. One final thing: you'll need to make sure that in the codebase the imagePullSecrets name in deployment.yaml matches the name of the secret you just created.

Amend the VSTS Pipeline to Support Multiple Environments

In this section we amend the release pipeline that was built in the previous post to support multiple environments.

  1. In the Pipeline tab rename Environment 1 to DAT.
  2. In the Variables tab create a variable to hold the name of the secret created above to authenticate with ACR. Create a second variable for the DAT environment namespace and change its scope to DAT. Remember that the value needs to be lower case.
  3. In the Tasks tab amend all three Deploy to Kubernetes tasks so that the Namespace field contains the $(DatEnvironment) variable. At the same time ensure that the Secret name field matches the name of the secret variable created above.
  4. In order to test that deploying to DAT works, either trigger a build or, if you updated deployment.yaml on your workstation as described above, commit your code. If the deployment was successful, find the external IP address of the LoadBalancer by executing kubectl get services --namespace=dat and paste it into a browser to confirm that the ASP.NET Core website is running.

Amend the VSTS Pipeline to Support a New Environment

Now for the fun bit where we see just how easy it is to configure a new, isolated environment.

  1. In the Pipeline tab use the arrow next to Environments > Add to show the menu and then select Clone environment.
  2. Rename the cloned environment to PRD. Create a new variable (ie PrdEnvironment) scoped to PRD to hold the prd namespace and amend each of the three Deploy to Kubernetes tasks so that the Namespace field contains the $(PrdEnvironment) variable.
  3. Trigger a build and check the deployment was successful by executing kubectl get services --namespace=prd to get the external IP address of the LoadBalancer which you can paste in to a browser to confirm that the ASP.NET Core website is running.

And That's It!

Yep—that really is all there is to it! Okay, this is just a trivial example, however even with more services the procedure would be the same. Granted, in a more complex application there might be environment variables or secrets that change between environments, but even so, it's just configuration.

I'm thrilled by the power that Kubernetes gives to developers—no more thinking about VMs or tin, no more having to append resources with environment names, and the ability to create a new environment in the blink of an eye—wow!

There's lots more I'm planning to cover in the deployment pipeline space however next time I'll be looking at the development inner loop and the options for running Kubernetes whilst developing code.

Cheers—Graham

Deploy a Dockerized ASP.NET Core Application to Kubernetes on Azure Using a VSTS CI/CD Pipeline: Part 1

Posted by Graham Smith on February 20, 2018

Over the past 18 months or so I've written a handful of blog posts about deploying Docker containers using Visual Studio Team Services (VSTS). The first post covered deploying a container to a Linux VM running Docker and other posts covered deploying containers to a cluster running DC/OS—all running in Microsoft Azure. Fast forward to today and everything looks completely different from when I wrote that first post: Docker is much more mature with features such as multi-stage builds dramatically streamlining the process of building source code and packaging it in to containers, and Kubernetes has emerged as a clear leader in the container orchestration battle and looks set to be a game-changing technology. (If you are new to Kubernetes I have a Getting Started blog post here with plenty of useful learning resources and tips for getting started.)

One of the key questions that's been on my mind recently is how to use Kubernetes as part of a CI/CD pipeline, specifically using VSTS to deploy to Microsoft's Azure Container Service (AKS), which is now specifically targeted at managing hosted Kubernetes environments. So in a new series of posts I'm going to be examining that very question, with each post building on previous posts as I drill deeper in to the details. In this post I'm starting as simply as I possibly can whilst still answering the key question of how to use VSTS to deploy to Kubernetes. Consequently I'm ignoring the Kubernetes experience on the development workstation, I only deploy a very simple application to one environment and I'm not looking at scaling or rolling updates. All this will come later, but meantime I hope you'll find that this walkthrough will whet your appetite for learning more about CI/CD and Kubernetes.

Development Workstation Configuration

These are the main tools you'll need on a Windows 10 Pro development workstation (I've documented the versions of certain tools at the time of writing but in general I'm always on the latest version):

  • Visual Studio 2017—version 15.5.6 with the ASP.NET and web development workload.
  • Docker for Windows—stable channel 17.12.0-ce.
  • Windows Subsystem for Linux (WSL)—see here for installation details. I'm still using Bash on Ubuntu on Windows that I installed before WSL moved to the Microsoft Store and in this post I assume you are using Ubuntu. The aim of installing WSL is to run Azure CLI, although technically you don't need WSL as Azure CLI will run happily under a Windows command prompt. However using WSL facilitates running Azure CLI commands from a Bash script.
  • Azure CLI on Windows Subsystem for Linux—see here for installation (and subsequent upgrade) instructions. There are several ways to login to Azure from the CLI however I've found that the interactive log-in works well since once you're logged-in you remain so for quite a long time (many days for me so far). Use az -v to check which version you are on (2.0.27 was latest at time of writing).
  • kubectl on Azure CLI—the kubectl CLI is used to interact with a Kubernetes cluster. Install using sudo az aks install-cli.

Create Services in Microsoft Azure

There are several services you will need to set up in Microsoft Azure:

  • Azure Container Registry—see here for an overview and links to the various methods for creating an ACR. I use the Standard SKU for the better performance and increased storage.
  • Azure Container Service (AKS) cluster—see here for more details about AKS and how to create a cluster, however you may find it easier to use my script below. I started off by creating a cluster and then destroying it after each use, until some tests showed that a one-node cluster was costing pennies per day rather than the pounds per day I had assumed it would cost, so now I just keep the cluster running.
    • From a WSL Bash prompt run nano create_k8s_cluster.sh to bring up the nano editor with a new empty file. Copy and paste (by pressing the right mouse key) the script sketched at the end of this list.
    • Change the variables to suit your requirements. If you only have one Azure subscription you can delete the lines that set a particular subscription as the default, otherwise use az account list to list your subscriptions and find the ID.
    • Exit out of nano making sure you save the changes (Ctrl +X, Y) and then apply permissions to make it executable by running chmod 700 create_k8s_cluster.sh.
    • Next run the script using ./create_k8s_cluster.sh.
    • Once the cluster is fully up and running you can show the Kubernetes dashboard using az aks browse --resource-group $resourceGroup --name $clusterName.
    • You can also start to use the kubectl CLI to explore the cluster. Start with kubectl get nodes and then have a look at this cheat sheet for more commands to run.
    • The cluster will probably be running an older version of Kubernetes—you can check and find the procedure for upgrading here.
  • Private VSTS Agent on Linux—you can use the hosted agent (called Hosted Linux Preview at time of writing) but I find it runs very slowly and additionally because a new agent is used every time you perform a build it has to pull docker images down each time which adds to the slowness. In a future post I'll cover running a VSTS agent from a Docker image running on the Kubernetes cluster but for now you can create a private Linux agent running on a VM using these instructions. Although they date back to October 2016 they still work fine (I've checked them and tweaked them slightly).
    • Since we will only need this agent to build using Docker you can skip steps 5b, 5c and 5d.
    • Install a newer version of Git—I used these instructions.
    • Install docker-compose using these instructions and choosing the Linux tab.
    • Make the docker-user a member of the docker group by executing usermod -aG docker ${USER}.
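
For reference, here's the sort of thing create_k8s_cluster.sh contains. Treat it as a sketch: the variable values are examples, and the last line simply copies the kubeconfig file to C:\Users\Public so it's easy to get at from Windows.

    #!/bin/bash

    # Example values only - change to suit your requirements
    subscriptionId="your-subscription-id"
    resourceGroup="yourResourceGroup"
    clusterName="yourClusterName"
    location="westeurope"

    # Set the default subscription (delete these lines if you only have one subscription)
    az account set --subscription $subscriptionId

    # Create a resource group and a one-node AKS cluster
    az group create --name $resourceGroup --location $location
    az aks create --resource-group $resourceGroup --name $clusterName --node-count 1 --generate-ssh-keys

    # Merge the cluster credentials into ~/.kube/config and copy the file to where Windows tools can see it
    az aks get-credentials --resource-group $resourceGroup --name $clusterName
    cp ~/.kube/config /mnt/c/Users/Public/config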

Create VSTS Endpoints

In order to talk to the various Azure services you will need to create the following endpoints in VSTS (from the cog icon on the toolbar choose Services > New Service Endpoint):

  • Azure Resource Manager—to point to your MSDN subscription. You'll need to authenticate as part of the process.
  • Kubernetes Service Connection—to point to your Kubernetes cluster. You'll need the FQDN to the cluster (prepended with https://) which you can get from the Azure CLI by executing az aks show --resource-group $resourceGroup --name $clusterName, passing in your own resource group and cluster names. You'll also need the contents of the kubeconfig file. If you used the script above to create the cluster then the script copied the config file to C:\Users\Public and you can use Notepad to copy the contents.

Configure a CI Build

The first step to deploying containers to a Kubernetes cluster is to configure a CI build that creates a container and then pushes the container to a Docker registry—Azure Container Registry in this case.

Create a Sample App
  • Within an existing Team Project create a new Git repository (Code > $current repository$ > New repository) called k8s-aspnetcore. Feel free to select the options to add a README and a VisualStudio .gitignore.
  • Clone this repo on your development workstation:
    • Open PowerShell at the desired root folder.
    • Copy the URL from the VSTS code view of the new repository.
    • At the PowerShell prompt execute git clone along with the pasted URL.
  • Make sure Docker for Windows is running.
  • In Visual Studio create an ASP.NET Core Web Application in the folder the git clone command created.
  • Choose an MVC app and enable Docker support for Linux.
  • You should now be able to run your application using the green Docker run button on the Standard toolbar. What is interesting here is that the build process is using a multi-stage Dockerfile, ie the tooling to build the application is running from a Docker container (there's a sketch of the generated Dockerfile at the end of this list). See Steve Lasker's post here for more details.
  • In the root of the repository folder create a folder named k8s-config, which we'll use later to store Kubernetes configuration files. In Visual Studio create a New Solution Folder with the same name and back in the file system folder create empty files named service.yaml and deployment.yaml. In Visual Studio add these files as existing items to the newly created solution folder.
  • The final step here is to commit the code and sync it with VSTS.
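
For reference, the multi-stage Dockerfile that Visual Studio generates looks broadly like the sketch below (assuming the build context is the project folder; image tags and the assembly name depend on your project and tooling versions, so treat it as illustrative):

    # Runtime image
    FROM microsoft/aspnetcore:2.0 AS base
    WORKDIR /app
    EXPOSE 80

    # Build image: restore and publish using the SDK tooling running inside a container
    FROM microsoft/aspnetcore-build:2.0 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet restore
    RUN dotnet publish -c Release -o /app

    # Final image: copy the published output into the lightweight runtime image
    FROM base AS final
    WORKDIR /app
    COPY --from=build /app .
    ENTRYPOINT ["dotnet", "k8s-aspnetcore.dll"]
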
Create a VSTS Build
  • In VSTS create a new build based on the repository created above and start with an empty process.
  • After the wizard stage of the setup supply an appropriate name for the build and select the Agent queue created above if you are using the recommended private agent or Hosted Linux Preview if not.
  • Go ahead and perform a Save & queue to make sure this initial configuration succeeds.
  • In the Phase 1 panel use + to add two Docker Compose tasks and one Publish Build Artifacts task.
  • If you want to be able to perform a Save & queue after configuring each task (recommended) then right-click the second and third tasks and disable them.
  • Configure the first Docker Compose task as follows:
    • Display name = Build service images
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Docker Compose File = **/docker-compose.yml
    • Project Name = $(Build.Repository.Name)
    • Qualify Image Names = checked
    • Action = Build service images
    • Additional Image Tags = $(Build.BuildId)
    • Include Latest Tag = checked
  • Configure the second Docker Compose task as follows:
    • Display name = Push service images
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Docker Compose File = **/docker-compose.yml
    • Project Name = $(Build.Repository.Name)
    • Qualify Image Names = checked
    • Action = Push service images
    • Additional Image Tags = $(Build.BuildId)
    • Include Latest Tag = checked
  • Configure the Publish Build Artifacts task as follows:
    • Display name = Publish k8s config
    • Path to publish = k8s-config (this is the folder we created earlier in the repository root folder)
    • Artifact name = k8s-config
    • Artifact publish location = Visual Studio Team Services/TFS
  • Finally, in the Triggers section of the build editor check Enable continuous integration so that the build will trigger on a commit from Visual Studio.

So what does this build do? The first Docker Compose task uses the docker-compose.yml file to work out what images need building, as specified by the Dockerfile(s) for the different services. We only have one service (k8s-aspnetcore) but there could (and usually would) be more. With the image built on the VSTS agent the second Docker Compose task pushes the image to the Azure Container Registry. If you navigate to this ACR in the Azure portal and drill in to the Repositories section you should see your image. The build also publishes the yaml configuration files needed to deploy to the cluster.
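For context, the docker-compose.yml that Visual Studio generates for this solution looks something like this (the service and image names come from the project name, so yours may differ):

    version: '3'

    services:
      k8s-aspnetcore:
        image: k8s-aspnetcore
        build:
          context: .
          dockerfile: k8s-aspnetcore/Dockerfile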

Configure a Release Pipeline

We are now ready to configure a release to deploy the image that's hosted in ACR to our Kubernetes cluster. Note that you'll need to complete all of this section before you can perform a release.

Create a VSTS Release Definition
  • In VSTS create a new release definition, starting with an empty process and changing the name to k8s-aspnetcore.
  • In the Artifacts panel click on Add artifact and wire-up the build we created above.
  • With the build now added as an artifact click on the lightning bolt to enable the Continuous deployment trigger.
  • In the default Environment 1 click on 1 phase, 0 task and in the Agent phase click on + to create three Deploy to Kubernetes tasks.
  • Configure the first Deploy to Kubernetes task as follows:
    • Display name = Create Service
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = apply
    • Use Configuration files = checked
    • Configuration File = $(System.DefaultWorkingDirectory)/k8s-aspnetcore/k8s-config/service.yaml
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any name of your choosing, to be used consistently across all tasks]
  • Configure the second Deploy to Kubernetes task as follows:
    • Display name = Create Deployment
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = apply
    • Use Configuration files = checked
    • Configuration File = $(System.DefaultWorkingDirectory)/k8s-aspnetcore/k8s-config/deployment.yaml
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any name of your choosing, to be used consistently across all tasks]
  • Configure the third Deploy to Kubernetes task as follows:
    • Display name = Update with Latest Image
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = set
    • Arguments = image deployment/k8s-aspnetcore-deployment k8s-aspnetcore=$yourAcrNameHere$.azurecr.io/k8s-aspnetcore:$(Build.BuildId)
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any name of your choosing, to be used consistently across all tasks]
  • Make sure you save the release but don't bother testing it out just yet as it won't work.
Create the Kubernetes configuration
  • In Visual Studio paste the service configuration in to the service.yaml file created above (sketched at the end of this list).
  • Paste the deployment configuration in to the deployment.yaml file created above (also sketched at the end of this list). The code references my ACR so you will need to amend accordingly.
  • You can now commit these changes and then head over to VSTS to check that the release was successful.
  • If the release was successful you should be able to see the ASP.NET Core website in your browser. You can find the IP address by executing kubectl get services from wherever you installed kubectl.
  • Another command you might try running is kubectl describe deployment $nameOfYourDeployment, where $nameOfYourDeployment is the metadata > name in deployment.yaml. A quick tip here is that if you only have one deployment you only need to type the first letter of it.
  • It's worth noting that splitting the service and deployment configurations in to separate files isn't necessarily a best practice however I'm doing it here to try and help clarify what's going on.
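As promised above, here are sketches of the two configuration files. The names, port and replica count are examples, although the deployment and container names do need to match the Update with Latest Image task above, the imagePullSecrets name needs to match the secret name you chose in the release tasks, and the image needs to point at your own ACR. Depending on your cluster version you may also need a different apiVersion for the Deployment.

service.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: k8s-aspnetcore-service
    spec:
      type: LoadBalancer
      ports:
      - port: 80
      selector:
        app: k8s-aspnetcore

deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: k8s-aspnetcore-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: k8s-aspnetcore
      template:
        metadata:
          labels:
            app: k8s-aspnetcore
        spec:
          containers:
          - name: k8s-aspnetcore
            image: yourAcrName.azurecr.io/k8s-aspnetcore:latest
            ports:
            - containerPort: 80
          imagePullSecrets:
          - name: yourSecretName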

In terms of a very high level explanation of what we've just configured in the release pipeline, for a simple application such as an ASP.NET Core website we need to deploy two key objects:

  1. A Kubernetes Service which (in our case) is configured with an external IP address and acts as an abstraction layer for Pods which are killed off and recreated every time a new release is triggered. This is handled by the first Deploy to Kubernetes task.
  2. A Kubernetes Deployment which describes the nature of the deployment—number of Pods (via Replica Sets), how they will be upgraded and so on. This is handled by the second Deploy to Kubernetes task.

On first deployment these two objects are all that is needed to perform a release. However, because of the declarative nature of these objects they do nothing on subsequent releases if they haven't changed. This is where the third Deploy to Kubernetes task comes in to play—ensuring that after the first release subsequent releases do cause the container to be updated.

Wrapping Up

That concludes our initial look at CI/CD with VSTS and Azure Container Service (AKS)! As I mentioned at the beginning of the post I've purposely tried to keep this walkthrough as simple as possible, so watch out for the next installment where I'll build on what I've covered here.

Cheers—Graham

Getting Started with Kubernetes

Posted by Graham Smith on February 1, 2018

If you've been following the containers story you'll probably know that 2017 was a big year for Docker. You may also know that 2018 looks set to be a big year for Kubernetes, "an open-source system for automating deployment, scaling, and management of containerized applications". There are several systems competing in the same space as Kubernetes but for many the jury has voted and Kubernetes is the winner.

For many of us ‘containerized applications' means applications that have been containerised using Docker, and if you've been learning and working with Docker then learning Kubernetes is an obvious next step. I've been learning Kubernetes for a couple of months now and in this post I share some of the resource links that I've found most useful and provide pointers to the different ways I've created Kubernetes environments to provide practical hands-on experience.

Learning Resources

Run Kubernetes on your Development Machine Using Minikube

A quick and easy way to get started with Kubernetes is to install Minikube on your development machine. Minikube is a tool that runs a single-node Kubernetes cluster on a virtual machine running on your laptop or workstation. You can find the installation guide here and the getting started guide here. I installed Minikube on my Windows 10 workstation running Hyper-V and the minikube start command just worked. Don't forget you'll need to install kubectl as well as Minikube. As part of the installation process a kubeconfig file is created at %userprofile%\.kube\config (config is the actual file) which ‘connects' kubectl to Minikube. If you are connecting to different Kubernetes installations from your development machine you'll need to manage kubeconfig files—see later for more details.

If you've got Minikube installed and working you might be wondering what next, especially if you are a Windows user as the documentation isn't hugely Windows-friendly. If you are in this situation head over to this Getting Started with Kubernetes on your Windows Laptop with Minikube tutorial. Skip past the installation instructions to Starting our Cluster and follow on from there.

Run Kubernetes in Microsoft Azure

If you have a Microsoft Azure subscription or are prepared to sign up for a free trial it's ridiculously easy to start working with Kubernetes in Azure. There's actually a couple of ways to do it but the easiest is to create an Azure Container Service (AKS) cluster as this service abstracts away much of the complicated cluster stuff leaving you to focus on Kubernetes itself.

The Deploy an Azure Container Service (AKS) cluster walkthrough gets you up and running in no time with an actual app that you can run in your browser. Using Azure Cloud Shell is the easier way to use the Azure CLI although it does have an annoying habit of timing out on you. If you do switch to using the local version of the CLI watch out for the az aks get-credentials --resource-group myResourceGroup --name myK8sCluster command, which will merge connection information about the cluster you are creating with any previously created %userprofile%\.kube\config file that might be present (after installing Minikube for example), which may not be what you want.

Run Kubernetes on a Raspberry Pi Cluster

If you want to take your knowledge a step further and learn how to perform a bare-metal installation of Kubernetes then one option is to create a Raspberry Pi cluster and install and run Kubernetes on it. I've gone down this route and have had great fun doing so. There are a couple of key resources that will help you get this project off the ground:

My cluster ended up looking like this:

Some familiarity with Raspberry Pi obviously helps with this sort of project however I wouldn't say it's a definite prerequisite as there is plenty of help out there for anyone getting started with Raspberry Pi. You do really need to start off with at least three Pis so there is a modest cost involved, but if you don't want your cluster to be portable then you don't need to hook it up to a switch or a router; WiFi works fine for me. A tip worth mentioning is that the 6-port RAVpower USB charger is slightly smaller than the 6-port Anker USB charger and fitted my enclosure much better.

Dealing with Multiple Kubernetes Instances

If you end up managing more than one instance of Kubernetes with the same instance of kubectl you'll somehow need to manage the issue of multiple kubeconfig files. There is detailed guidance about this here. My needs are very modest and at the moment I simply save different kubeconfig files with different extensions and then remove the extension of the one I want to work with. Not very elegant but it serves my simple needs for the moment.

And There's More...

The three techniques I've described for working with Kubernetes are only the tip of the iceberg. Google, for example, has a similar offering to Microsoft Azure with its Google Kubernetes Engine, as does CodeFresh. I haven't tried these services but the point is that there are lots of options if none of the ones I've covered takes your fancy.

One particularly exciting development that I haven't tried yet but will soon is Kubernetes running in Docker for Windows. Scott Hanselman has a nice walkthrough here as does Stefan Stranger here. Enjoy!

Cheers—Graham