Deploy a Dockerized ASP.NET Core Application to Azure Kubernetes Service Using a VSTS CI/CD Pipeline: Part 3

Posted by Graham Smith on May 24, 2018

In this blog post series I'm working my way through the process of deploying and running an ASP.NET Core application on Microsoft's hosted Kubernetes environment. Formerly known as Azure Container Service (AKS), it has recently been renamed Azure Kubernetes Service, which is why the title of my blog series has changed slightly. In previous posts in this series I covered the key configuration elements both on a developer workstation and in Azure and VSTS and then how to actually deploy a simple ASP.NET Core application to AKS using VSTS. This is the full series of posts to date:

In this post I introduce MegaStore (just a fictional name), a more complicated ASP.NET Core application (in the sense that it has more moving parts), and I show how to deploy MegaStore to an AKS cluster using VSTS. Future posts will use MegaStore as I work through more advanced Kubernetes concepts. To follow along with this post you will need to have completed the following, variously from parts 1 and 2:

Introducing MegaStore

MegaStore was inspired by Elton Stoneman's evolution of NerdDinner for his excellent book Docker on Windows, which I have read and can thoroughly recommend. The concept is a sales application that, rather than saving a 'sale' directly to a database, adds it to a message queue. A handler monitors the queue and pulls new messages off it for saving to an Azure SQL Database. The main components are as follows:

  • MegaStore.Web—an ASP.NET Core MVC application with a CreateSale method in the HomeController that gets called every time there is a hit on the home page.
  • NATS message queue—to which a new sale is published.
  • MegaStore.SaveSalehandler—a .NET Core console application that monitors the NATS message queue and saves new messages to the database.
  • Azure SQL Database—I recently heard Brendan Burns comment in a podcast that hardly anybody designing a new cloud application should be managing storage themselves. I agree and for simplicity I have chosen to use Azure SQL Database for all my environments including development.

You can clone MegaStore from my GitHub repository here.

In order to run the complete application you will first need to create an Azure SQL Database. The easiest way is probably to create a new database via the portal (which creates a server at the same time) and then manage it with SQL Server Management Studio (SSMS). The high-level procedure is as follows:

  1. In the portal create a new database called MegaStoreDev and at the same time create a new server (name needs to be unique). To keep costs low I start with the Basic configuration knowing I can scale up and down as required.
  2. Still in the portal add a client IP to the firewall so you can connect from your development machine.
  3. Connect to the server/database in SSMS and create a new table called dbo.Sale:
  4. In Security > Logins create a New Login called sales_user_dev, noting the password.
  5. In Databases > MegaStoreDev > Security > Users create a New User called sales_user mapped to the sales_user_dev login and with the db_owner role.

In order to avoid exposing secrets via GitHub the credentials to access the database are stored in a file called db-credentials.env which I've not committed to the repo. You'll need to create this file in the docker-compose project in your VS solution and add the following, modified for your server name and database credentials:
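I can't show my actual file, but as a sketch it holds a single environment variable; I'm assuming here that the variable is called DB_CONNECTION_STRING, to match the Kubernetes secret used later in this post:

    DB_CONNECTION_STRING=Server=tcp:yourserver.database.windows.net,1433;Initial Catalog=MegaStoreDev;User ID=sales_user;Password=yourstrongpassword;Encrypt=True;Connection Timeout=30;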

If you are using version control make sure you exclude db-credentials.env from being committed.

With docker-compose set as the startup project, and Docker for Windows running and set to Linux containers, you should now be able to run the application. If everything is working you should be able to see sales being created in the database.

To understand how the components are configured you need to look at docker-compose.yml and docker-compose-override.yml. Image building is handled by docker-compose.yml, which can't contain anything else, otherwise VSTS complains when you use the compose file to build the images. The configuration of the components is specified in docker-compose-override.yml, which gets merged with docker-compose.yml at run time. Notice the k8s folder: this contains the configuration files needed to deploy the application to AKS.
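To illustrate the split (this is indicative rather than lifted from the repo; the service name and Dockerfile path are assumptions):

    # docker-compose.yml - image builds only, so VSTS can use it to build and push
    version: '3'
    services:
      megastore-web:
        image: megastoreweb
        build:
          context: .
          dockerfile: MegaStore.Web/Dockerfile

    # docker-compose-override.yml - runtime-only settings, merged in when running locally
    version: '3'
    services:
      megastore-web:
        ports:
          - "80"
        env_file:
          - db-credentials.env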

By now you may be wondering if MegaStore should be running locally under Kubernetes rather than in Docker via docker-compose. It's a good question and the answer is probably yes. However at the time of writing there isn't a great story to tell about how Visual Studio integrates with Kubernetes on a developer workstation (ie to allow debugging as is possible with Docker) so I'm purposely ignoring this for the time being. This will change over time though, and I will cover this when I think there is more to tell.

Create Azure SQL Databases for Different Release Pipeline Environments

I'll be creating a release pipeline consisting of DAT and PRD environments. I explain more about these below but to support these environments you'll need to create two new databases—MegaStoreDat and MegaStorePrd. You can do this either through the Azure portal or through SQL Server Management Studio, however be aware that if you use SSMS you'll end up on the standard pricing tier rather than the cheaper basic tier. Either way, you then use SQL Server Management Studio to create dbo.Sale and set up security as described above, ensuring that you create different logins for the different environments.

Create a Build in VSTS

Once everything is working locally the next step is to switch over to VSTS and create a build. I'm assuming that you've cloned my GitHub repo to your own GitHub account however if you are doing it another way (your repo is in VSTS for example) you'll need to amend accordingly.

  1. Create a new Build definition in VSTS. The first thing you get asked is to select a repository—link to your GitHub account and select the MegaStore repo:
  2. When you get asked to Choose a template go for the Empty process option.
  3. Rename the build to something like MegaStore and under Agent queue select your private build agent.
  4. In the Triggers tab check Enable continuous integration.
  5. In the Options tab set Build number format to $(Date:yyyyMMdd)$(Rev:.rr), or something meaningful to you based on the available options described here.
  6. In the Tasks tab use the + icon to add two Docker Compose tasks and a Publish Build Artifacts task. Note that when configuring the tasks below only the required entries and changes to defaults are listed.
  7. Configure the first Docker Compose task as follows:
    1. Display name = Build service images
    2. Action = Build service images
    3. Azure subscription = [name of existing Azure Resource Manager endpoint]
    4. Azure Container Registry = [name of existing Azure Container Registry]
    5. Additional Image Tags = $(Build.BuildNumber)
  8. Configure the second Docker Compose task as follows:
    1. Display name = Push service images
    2. Azure subscription = [name of existing Azure Resource Manager endpoint]
    3. Azure Container Registry = [name of existing Azure Container Registry]
    4. Action = Push service images
    5. Additional Image Tags = $(Build.BuildNumber)
  9. Configure the Publish Build Artifacts task as follows:
    1. Display name = Publish k8s config
    2. Path to publish = k8s
    3. Artifact name = k8s-config
    4. Artifact publish location = Visual Studio Team Services/TFS

You should now be able to test the build by committing a minor change to the source code. The build should pass and if you look in the Repositories section of your Container Registry you should see megastoreweb and megastoresavesalehandler repositories with newly created images.

Create a DAT Release Environment in VSTS

With the build working it's now time to create the release pipeline, starting with an environment I call DAT, which is where automated acceptance testing might take place. At this point there is a style choice to be made for creating Kubernetes Secrets and ConfigMaps: they can be configured from files or from literal values. I've gone down the literal values route since the files route needs to specify the namespace, and this would require either a separate file for each namespace (creating a DRY problem) or editing the config files as part of the release pipeline. To me the literal values technique seems cleaner. Either way, as far as I can tell there is no way to update a Secret or ConfigMap via a VSTS Deploy to Kubernetes task, as updating is a two-step process and the task can't handle this. The workaround is a task to delete the Secret or ConfigMap and then a task to create it. You'll see that I've also chosen to explicitly create the image pull secret. This is partly because of a bug in the Deploy to Kubernetes task, however it also avoids having to repeat a lot of the Secrets configuration in the Deploy to Kubernetes tasks that deploy service or deployment configurations.

  1. Create a new release definition in VSTS, electing to start with an empty process and rename it MegaStore.
  2. In the Pipeline tab click on Add artifact and link the build that was just created which in turn makes the k8s-config artifact from step 9 above available in the release.
  3. Click on the lightning bolt to enable the Continuous deployment trigger.
  4. Still in the Pipeline tab rename Environment 1 to DAT, with the overall changes resulting in something like this:
  5. In the Tasks tab click on Agent phase and under Agent queue select your private build agent.
  6. In the Variables tab create the following variables with Release Scope:
    1. AcrAuthenticationSecretName = prmcrauth (or the name you are using for imagePullSecrets in the Kubernetes config files)
    2. AcrName = [unique name of your Azure Container Registry, eg mine is prmcr]
    3. AcrPassword = [password of your Azure Container Registry from Settings > Access keys], use the padlock to make it a secret
  7. In the Variables tab create the following variables with DAT Scope:
    1. DatDbConn = Server=tcp:megastore.database.windows.net,1433;Initial Catalog=MegaStoreDat;Persist Security Info=False;User ID=sales_user;Password=mystrongpwd;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30; (you will need to alter this connection string for your own Azure SQL server and database)
    2. DatEnvironment = dat (ie in lower case)
  8. In the Tasks tab add 15 Deploy to Kubernetes tasks and disable all but the first one so the release can be tested after each task is configured. Note that when configuring the tasks below only the required entries and changes to defaults are listed.
  9. Configure the first Deploy to Kubernetes task as follows:
    1. Display name = Delete image pull secret
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = delete
    5. Arguments = secret $(AcrAuthenticationSecretName)
    6. Control Options > Continue on error = checked
  10. Configure the second Deploy to Kubernetes task as follows:
    1. Display name = Create image pull secret
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = create
    5. Arguments = secret docker-registry $(AcrAuthenticationSecretName) --namespace=$(DatEnvironment) --docker-server=$(AcrName).azurecr.io --docker-username=$(AcrName) --docker-password=$(AcrPassword) --docker-email=fred@bloggs.com (note that the email address can be anything you like)
  11. Configure the third Deploy to Kubernetes task as follows:
    1. Display name = Delete ASPNETCORE_ENVIRONMENT config map
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = delete
    5. Arguments = configmap aspnetcore.env
    6. Control Options > Continue on error = checked
  12. Configure the fourth Deploy to Kubernetes task as follows:
    1. Display name = Create ASPNETCORE_ENVIRONMENT config map
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = create
    5. Arguments = configmap aspnetcore.env --from-literal=ASPNETCORE_ENVIRONMENT=$(DatEnvironment)
  13. Configure the fifth Deploy to Kubernetes task as follows:
    1. Display name = Delete DB_CONNECTION_STRING secret
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = delete
    5. Arguments = secret db.connection
    6. Control Options > Continue on error = checked
  14. Configure the sixth Deploy to Kubernetes task as follows:
    1. Display name = Create DB_CONNECTION_STRING secret
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = create
    5. Arguments = secret generic db.connection --from-literal=DB_CONNECTION_STRING="$(DatDbConn)"
  15. Configure the seventh Deploy to Kubernetes task as follows:
    1. Display name = Delete MESSAGE_QUEUE_URL config map
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = delete
    5. Arguments = configmap message.queue
    6. Control Options > Continue on error = checked
  16. Configure the eighth Deploy to Kubernetes task as follows:
    1. Display name = Create MESSAGE_QUEUE_URL config map
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = create
    5. Arguments = configmap message.queue --from-literal=MESSAGE_QUEUE_URL=nats://message-queue-service.$(DatEnvironment):4222
  17. Configure the ninth Deploy to Kubernetes task as follows:
    1. Display name = Create message-queue service
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = apply
    5. Use Configuration files = checked
    6. Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/message-queue-service.yaml
  18. Configure the tenth Deploy to Kubernetes task as follows:
    1. Display name = Create megastore-web service
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = apply
    5. Use Configuration files = checked
    6. Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/megastore-web-service.yaml
  19. Configure the eleventh Deploy to Kubernetes task as follows:
    1. Display name = Create message-queue deployment
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = apply
    5. Use Configuration files = checked
    6. Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/message-queue-deployment.yaml
  20. Configure the twelfth Deploy to Kubernetes task as follows:
    1. Display name = Create megastore-web deployment
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = apply
    5. Use Configuration files = checked
    6. Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/megastore-web-deployment.yaml
  21. Configure the thirteenth Deploy to Kubernetes task as follows:
    1. Display name = Update megastore-web with latest image
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = set
    5. Arguments = image deployment/megastore-web-deployment megastoreweb=$(AcrName).azurecr.io/megastoreweb:$(Build.BuildNumber)
  22. Configure the fourteenth Deploy to Kubernetes task as follows:
    1. Display name = Create megastore-savesalehandler deployment
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = apply
    5. Use Configuration files = checked
    6. Configuration File = $(System.DefaultWorkingDirectory)/_MegaStore/k8s-config/megastore-savesalehandler-deployment.yaml
  23. Configure the fifteenth Deploy to Kubernetes task as follows:
    1. Display name = Update megastore-savesalehandler with latest image
    2. Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint]
    3. Namespace = $(DatEnvironment)
    4. Command = set
    5. Arguments = image deployment/megastore-savesalehandler-deployment megastoresavesalehandler=$(AcrName).azurecr.io/megastoresavesalehandler:$(Build.BuildNumber)

That's a heck of a lot of configuration, so what exactly have we built?

The first eight tasks deal with the configuration that supports the services and deployments:

  • The image pull secret stores the credentials to the Azure Container Registry so that deployments that need to pull images from the ACR can authenticate.
  • The ASPNETCORE_ENVIRONMENT config map sets the environment for ASP.NET Core. I don't do anything with this but it could be handy for troubleshooting purposes.
  • The DB_CONNECTION_STRING secret stores the connection string to the Azure SQL database and is used by the megastore-savesalehandler-deployment.yaml configuration.
  • The MESSAGE_QUEUE_URL config map stores the URL to the NATS message queue and is used by the megastore-web-deployment.yaml and megastore-savesalehandler-deployment.yaml configurations.

As mentioned above, a limitation of the VSTS Deploy to Kubernetes task means that in order to update Secrets and ConfigMaps they need to be deleted first and then created again. This does mean that an exception is thrown the first time a delete task is run, however the Continue on error option ensures that the release doesn't fail.
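As an aside, one possible alternative (not what the pipeline above uses) is the kubectl 'create with --dry-run, then apply' idiom, which is idempotent and so avoids the delete step, at the cost of needing a task that can run an arbitrary command:

    # Sketch only: creates the ConfigMap if it's missing, updates it if it already exists
    kubectl create configmap aspnetcore.env --from-literal=ASPNETCORE_ENVIRONMENT=dat --namespace=dat --dry-run -o yaml | kubectl apply --namespace=dat -f -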

The remaining seven tasks deal with the deployment and configuration of the components (other than the Azure SQL database) that make up the MegaStore application:

  • The NATS message queue requires a service so that other components can talk to it, plus a deployment that specifies how its image should be run.
  • The MegaStore.Web front end requires a service so that it is exposed to the outside world, plus a deployment that specifies how its image should be run.
  • The MegaStore.SaveSalehandler monitoring component only needs a deployment that specifies how its image should be run, as nothing connects to it directly.

If everything has been configured correctly then triggering a release should result in a megastore-web-service being created. You can check the deployment was successful by executing kubectl get services --namespace=dat to get the external IP address of the LoadBalancer which you can paste in to a browser to confirm that the ASP.NET Core website is running. On the backend, you can use SQL Server Management Studio to connect to the database and confirm that records are being created in dbo.Sale.

If you are running into problems, you can run the Kubernetes Dashboard to find out what is failing. Typically it's deployments that fail, and navigating to Workloads > Deployments can highlight the failing deployment. You can find out what the error is from the New Replica Set panel by clicking on the Logs icon, which brings up a new browser tab with a command line style output of the error. If there is no error it displays any Console.WriteLine output. Very neat:

Create a PRD Release Environment in VSTS

With a DAT environment created we can now create other environments on the route to production. This could be whatever else is needed to test the application, however here I'm just going to create a production environment I'll call PRD. I described this process in my previous post so here I'll just list the high-level steps:

  1. Clone the DAT environment and rename it PRD.
  2. In the Variables tab rename the cloned DatDbConn and DatEnvironment variables (the ones with PRD scope) to PrdDbConn and PrdEnvironment and change their values accordingly.
  3. In the Tasks tab visit each task and change all references of $(DatDbConn) and $(DatEnvironment) to $(PrdDbConn) and $(PrdEnvironment). All Namespace fields will need changing and many of the tasks that use the Arguments field will need attention.
  4. Trigger a build and check the deployment was successful by executing kubectl get services --namespace=prd to get the external IP address of the LoadBalancer which you can paste in to a browser to confirm that the ASP.NET Core website is running.

Wrapping Up

Although the final result is a CI/CD pipeline that certainly works there are more tasks than I'm happy with due to the need to delete and then recreate Secrets and ConfigMaps and this also adds quite a bit of overhead to the time it takes to deploy to an environment. There's bound to be a more elegant way of doing this that either exists now and I just don't know about it or that will exist in the future. Do post in the comments if you have thoughts.

Although I'm three posts in I've barely scratched the surface of the different topics that I could cover, so plenty more to come in this series. Next time it will probably be around health and / or monitoring.

Cheers—Graham

Deploy a Dockerized ASP.NET Core Application to Kubernetes on Azure Using a VSTS CI/CD Pipeline: Part 2

Posted by Graham Smith on March 21, 2018

If you need to provision a new environment for your deployment pipeline, what's your process and how long does it take? For many of us the process probably starts with a request to an infrastructure team for new virtual machines. If the new VMs are in Azure the request might be completed quite quickly; if they are on premises it might take much longer. In both scenarios you might have to justify your request: there will be actual cost in Azure and on premises it's another chunk of the datacentre ‘gone'.

With the help of containers and container orchestrators I predict (and sincerely hope) that this sort of pain will become a distant memory for much of the software development community for whom it is currently an issue. The reason is that container orchestration technologies abstract away the virtual (or physical) server layer and allow you to focus on configuring services and how they communicate with each other—all through configuration files. The only time you'd need to think of virtual (or physical) servers is if the cluster running your orchestrator needed more capacity, in which case someone will need to add more nodes. A whole new environment for your pipeline just by doing some work with a configuration file? What's not to like?

In this blog post I hope to make my prediction come alive by showing you how new environments can be quickly created using Kubernetes running in Microsoft's Azure Container Service (AKS), crucially using declarative configuration files that get deployed as part of a VSTS release pipeline. This post follows directly on from a previous post, both in terms of understanding and also the components that were built in that first post, so if you haven't already done so I recommend working your way through that post before going further.

Housekeeping

In the previous post we deployed to the default namespace so it probably makes sense to clean all this up. This can all be done from the command line of course, but to mix it up a bit I'll illustrate using the Kubernetes Dashboard. You can start the dashboard using the following command, substituting in the name of your resource group and the name of the cluster:
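    az aks browse --resource-group yourResourceGroup --name yourClusterName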

This should open the dashboard in a browser displaying the default namespace. Navigate to Workloads > Deployments and using the hamburger menu delete the deployment:

Navigate to Discovery and Load Balancing > Services and delete the service:

Navigate to Config and Storage > Secret and delete the secret:

Environments and Namespaces

The Kubernetes feature that we'll use to create environments that together form part of our pipeline is Namespaces. You can think of namespaces as a way to divide the Kubernetes cluster in to virtual clusters. Within a namespace resource names need to be unique, but they don't have to be across namespaces. This is great because we effectively get isolation between environments, so resource names can stay the same in each environment. Say goodbye to having to append the environment name to all the resources in your environment to make them unique.

In this post I'll make a pipeline consisting of two environments. I'm sticking with a convention I established several years ago so I'll be creating DAT (developer automated test) and PRD (production) environments. In a complete pipeline I might also create a DQC (developer quality control) environment to sit between DAT and PRD but that won't really add anything extra to this exercise.

First up is to create the namespaces. There is an argument for saying that namespace creation should be part of the release pipeline however in this post I'm going to create everything manually as I think it helps to understand what's going on. Create a file called namespaces.yaml and add the following contents:
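    # One namespace per environment - names must be lower case
    apiVersion: v1
    kind: Namespace
    metadata:
      name: dat
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: prd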

Note that namespace names need to be in lower case as they need to be DNS compatible. Open a command prompt at the same location as namespaces.yaml and execute the following command: kubectl create -f namespaces.yaml. You should get a message back advising that the namespaces have been created, and at one level that's all there is to it. However there are a couple of extra bits worth knowing.

When you first start working with kubectl at the command line you are working in the default namespace. Working with other namespaces needs some configuration.

To return details of the configuration stored in C:\Users\<username>\.kube\config use:
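    kubectl config view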

My cluster returned the following output:

From this output you need to determine your cluster name (which you probably already know) as well as the name of the user. These details are fed in to the following command for creating a new context for an environment (in this case the DAT environment):
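Something like this, where dat is my choice of context name and the cluster and user values come from the output above:

    # dat is an arbitrary context name; substitute your own cluster and user names
    kubectl config set-context dat --cluster=yourClusterName --user=yourUserName --namespace=dat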

To switch to this context (and hence the dat namespace) use:
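    kubectl config use-context dat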

To confirm (or check) the current context use:
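    kubectl config current-context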

To get back to the default namespace use:
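    # The context created by az aks get-credentials is typically named after the cluster
    kubectl config use-context yourClusterName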

Normally that would be most of what you need to know to work with namespaces, however as of the time of writing there is a bug in the VSTS Deploy to Kubernetes task which requires some extra work. The bug may be fixed by the time you read this however it's handy to examine the issue to further understand what is going on behind the scenes.

Each namespace needs to access the Azure Container Registry (ACR) we created in the previous post to pull down images. This is a private registry so we don't want open access and so some form of authentication is required. This is provided by the creation of a Kubernetes secret that holds the authentication details to the ACR. The VSTS Deploy to Kubernetes task can create this secret for us however the bug is that it only creates the secret for the default namespace and fails to create the secret when a different namespace is specified. The workaround is to create the secret manually in each namespace using the following command:
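    kubectl create secret docker-registry secret-name --namespace=namespace --docker-server=acr-name.azurecr.io --docker-username=acr-name --docker-password=acr-admin-password --docker-email=any-valid-email-address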

In the above command secret-name is any arbitrary name you choose for the secret, namespace is the namespace in which to create the secret, acr-name is the name of your ACR, acr-admin-password is the password from the Access keys panel of your ACR and any-valid-email-address is just that. You'll need to run this command for each namespace of course. One final thing: you'll need to make sure that in the codebase the imagePullSecrets name in deployment.yaml matches the name of the secret you just created.

Amend the VSTS Pipeline to Support Multiple Environments

In this section we amend the release pipeline that was built in the previous post to support multiple environments.

  1. In the Pipeline tab rename Environment 1 to DAT:
  2. In the Variables tab create a variable to hold the name of the secret created above to authenticate with ACR. Create a second variable for the DAT environment namespace and change its scope to DAT. Remember that the value needs to be lower case:
  3. In the Tasks tab amend all three Deploy to Kubernetes tasks so that the Namespace field contains the $(DatEnvironment) variable. At the same time ensure that the Secret name field matches the secret name variable created above:
  4. In order to test that deploying to DAT works, either trigger a build or, if you updated deployment.yaml on your workstation, commit your code. If the deployment was successful find the external IP address of the LoadBalancer by executing kubectl get services --namespace=dat and paste it in to a browser to confirm that the ASP.NET Core website is running.

Amend the VSTS Pipeline to Support a New Environment

Now for the fun bit where we see just how easy it is to configure a new, network-isolated environment.

  1. In the Pipeline tab use the arrow next to Environments > Add to show and then select Clone environment:
  2. Rename the cloned environment to PRD. Create a new variable (ie PrdEnvironment) scoped to PRD to hold the prd namespace and amend each of the three Deploy to Kubernetes tasks so that the Namespace field contains the $(PrdEnvironment) variable.
  3. Trigger a build and check the deployment was successful by executing kubectl get services --namespace=prd to get the external IP address of the LoadBalancer which you can paste in to a browser to confirm that the ASP.NET Core website is running.

And That's It!

Yep—that really is all there is to it! Okay, this is just a trivial example, however even with more services the procedure would be the same. Granted, in a more complex application there might be some environment variables or secrets that change, but even so, it's just configuration.

I'm thrilled by the power that Kubernetes gives to developers—no more thinking about VMs or tin, no more having to append resources with environment names, and the ability to create a new environment in the blink of an eye—wow!

There's lots more I'm planning to cover in the deployment pipeline space however next time I'll be looking at the development inner loop and the options for running Kubernetes whilst developing code.

Cheers—Graham

Deploy a Dockerized ASP.NET Core Application to Kubernetes on Azure Using a VSTS CI/CD Pipeline: Part 1

Posted by Graham Smith on February 20, 2018

Over the past 18 months or so I've written a handful of blog posts about deploying Docker containers using Visual Studio Team Services (VSTS). The first post covered deploying a container to a Linux VM running Docker and other posts covered deploying containers to a cluster running DC/OS—all running in Microsoft Azure. Fast forward to today and everything looks completely different from when I wrote that first post: Docker is much more mature with features such as multi-stage builds dramatically streamlining the process of building source code and packaging it in to containers, and Kubernetes has emerged as a clear leader in the container orchestration battle and looks set to be a game-changing technology. (If you are new to Kubernetes I have a Getting Started blog post here with plenty of useful learning resources and tips for getting started.)

One of the key questions that's been on my mind recently is how to use Kubernetes as part of a CI/CD pipeline, specifically using VSTS to deploy to Microsoft's Azure Container Service (AKS), which is now specifically targeted at managing hosted Kubernetes environments. So in a new series of posts I'm going to be examining that very question, with each post building on previous posts as I drill deeper in to the details. In this post I'm starting as simply as I possibly can whilst still answering the key question of how to use VSTS to deploy to Kubernetes. Consequently I'm ignoring the Kubernetes experience on the development workstation, I only deploy a very simple application to one environment and I'm not looking at scaling or rolling updates. All this will come later, but meantime I hope you'll find that this walkthrough will whet your appetite for learning more about CI/CD and Kubernetes.

Development Workstation Configuration

These are the main tools you'll need on a Windows 10 Pro development workstation (I've documented the versions of certain tools at the time of writing but in general I'm always on the latest version):

  • Visual Studio 2017—version 15.5.6 with the ASP.NET and web development workload.
  • Docker for Windows—stable channel 17.12.0-ce.
  • Windows Subsystem for Linux (WSL)—see here for installation details. I'm still using Bash on Ubuntu on Windows that I installed before WSL moved to the Microsoft Store and in this post I assume you are using Ubuntu. The aim of installing WSL is to run Azure CLI, although technically you don't need WSL as Azure CLI will run happily under a Windows command prompt. However using WSL facilitates running Azure CLI commands from a Bash script.
  • Azure CLI on Windows Subsystem for Linux—see here for installation (and subsequent upgrade) instructions. There are several ways to login to Azure from the CLI however I've found that the interactive log-in works well since once you're logged-in you remain so for quite a long time (many days for me so far). Use az -v to check which version you are on (2.0.27 was latest at time of writing).
  • kubectl on Azure CLI—the kubectl CLI is used to interact with a Kubernetes cluster. Install using sudo az aks install-cli.

Create Services in Microsoft Azure

There are several services you will need to set up in Microsoft Azure:

  • Azure Container Registry—see here for an overview and links to the various methods for creating an ACR. I use the Standard SKU for the better performance and increased storage.
  • Azure Container Service (AKS) cluster—see here for more details about AKS and how to create a cluster, however you may find it easier to use my script below. I started off by creating a cluster and then destroying it after each use, until I did some tests and found that a one-node cluster was costing pennies per day rather than the pounds per day I had assumed it would cost, so now I just keep the cluster running.
    • From a WSL Bash prompt run nano create_k8s_cluster.sh to bring up the nano editor with a new empty file. Copy and paste (by pressing the right mouse key) the contents of the script, a sketch of which appears after this list.
    • Change the variables to suit your requirements. If you only have one Azure subscription you can delete the lines that set a particular subscription as the default, otherwise use az account list to list your subscriptions and find the ID.
    • Exit out of nano making sure you save the changes (Ctrl+X, Y) and then apply permissions to make it executable by running chmod 700 create_k8s_cluster.sh.
    • Next run the script using ./create_k8s_cluster.sh.
    • Once the cluster is fully up-and-running you can show the Kubernetes dashboard using az aks browse --resource-group $resourceGroup --name $clusterName.
    • You can also start to use the kubectl CLI to explore the cluster. Start with kubectl get nodes and then have a look at this cheat sheet for more commands to run.
    • The cluster will probably be running an older version of Kubernetes—you can check and find the procedure for upgrading here.
  • Private VSTS Agent on Linux—you can use the hosted agent (called Hosted Linux Preview at the time of writing) but I find it runs very slowly; additionally, because a new agent is used every time you perform a build, it has to pull Docker images down each time, which adds to the slowness. In a future post I'll cover running a VSTS agent from a Docker image running on the Kubernetes cluster, but for now you can create a private Linux agent running on a VM using these instructions. Although they date back to October 2016 they still work fine (I've checked them and tweaked them slightly).
    • Since we will only need this agent to build using Docker you can skip steps 5b, 5c and 5d.
    • Install a newer version of Git—I used these instructions.
    • Install docker-compose using these instructions and choosing the Linux tab.
    • Make the docker-user a member of the docker group by executing usermod -aG docker ${USER}.
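Going back to the AKS cluster bullet above, here's a sketch of the sort of thing create_k8s_cluster.sh contains. The variable values are placeholders and you may well want different options, but the final step of copying the kubeconfig file to C:\Users\Public matches what the VSTS endpoint section below relies on:

    #!/bin/bash
    # Placeholder values - change to suit
    subscriptionId="your-subscription-id"
    resourceGroup="your-resource-group"
    clusterName="your-cluster-name"
    location="westeurope"

    # Only needed if you have more than one subscription
    az account set --subscription $subscriptionId

    # Create the resource group and a one-node AKS cluster
    az group create --name $resourceGroup --location $location
    az aks create --resource-group $resourceGroup --name $clusterName --node-count 1 --generate-ssh-keys

    # Merge the cluster credentials in to ~/.kube/config and copy the file where Windows tools can see it
    az aks get-credentials --resource-group $resourceGroup --name $clusterName
    cp ~/.kube/config /mnt/c/Users/Public/config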

Create VSTS Endpoints

In order to talk to the various Azure services you will need to create the following endpoints in VSTS (from the cog icon on the toolbar choose Services > New Service Endpoint):

  • Azure Resource Manager—to point to your MSDN subscription. You'll need to authenticate as part of the process.
  • Kubernetes Service Connection—to point to your Kubernetes cluster. You'll need the FQDN to the cluster (prepended with https://) which you can get from the Azure CLI by executing az aks show --resource-group $resourceGroup --name $clusterName, passing in your own resource group and cluster names. You'll also need the contents of the kubeconfig file. If you used the script above to create the cluster then the script copied the config file to C:\Users\Public and you can use Notepad to copy the contents.

Configure a CI Build

The first step to deploying containers to a Kubernetes cluster is to configure a CI build that creates a container and then pushes the container to a Docker registry—Azure Container Registry in this case.

Create a Sample App
  • Within an existing Team Project create a new Git repository (Code > $current repository$ > New repository) called k8s-aspnetcore. Feel free to select the options to add a README and a VisualStudio .gitignore.
  • Clone this repo on your development workstation:
    • Open PowerShell at the desired root folder.
    • Copy the URL from the VSTS code view of the new repository.
    • At the PowerShell prompt execute git clone along with the pasted URL.
  • Make sure Docker for Windows is running.
  • In Visual Studio create an ASP.NET Core Web Application in the folder the git clone command created.
  • Choose an MVC app and enable Docker support for Linux.
  • You should now be able to run your application using the green Docker run button on the Standard toolbar. What is interesting here is that the build process is using a multi-stage Dockerfile, ie the tooling to build the application is running from a Docker container. See Steve Lasker's post here for more details.
  • In the root of the repository folder create a folder named k8s-config, which we'll use later to store Kubernetes configuration files. In Visual Studio create a New Solution Folder with the same name and back in the file system folder create empty files named service.yaml and deployment.yaml. In Visual Studio add these files as existing items to the newly created solution folder.
  • The final step here is to commit the code and sync it with VSTS.
Create a VSTS Build
  • In VSTS create a new build based on the repository created above and start with an empty process.
  • After the wizard stage of the setup supply an appropriate name for the build and select the Agent queue created above if you are using the recommended private agent or Hosted Linux Preview if not.
  • Go ahead and perform a Save & queue to make sure this initial configuration succeeds.
  • In the Phase 1 panel use + to add two Docker Compose tasks and one Publish Build Artifacts task.
  • If you want to be able to perform a Save & queue after configuring each task (recommended) then right-click the second and third tasks and disable them.
  • Configure the first Docker Compose task as follows:
    • Display name = Build service images
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Docker Compose File = **/docker-compose.yml
    • Project Name = $(Build.Repository.Name)
    • Qualify Image Names = checked
    • Action = Build service images
    • Additional Image Tags = $(Build.BuildId)
    • Include Latest Tag = checked
  • Configure the second Docker Compose task as follows:
    • Display name = Push service images
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Docker Compose File = **/docker-compose.yml
    • Project Name = $(Build.Repository.Name)
    • Qualify Image Names = checked
    • Action = Push service images
    • Additional Image Tags = $(Build.BuildId)
    • Include Latest Tag = checked
  • Configure the Publish Build Artifacts task as follows:
    • Display name = Publish k8s config
    • Path to publish = k8s-config (this is the folder we created earlier in the repository root folder)
    • Artifact name = k8s-config
    • Artifact publish location = Visual Studio Team Services/TFS
  • Finally, in the Triggers section of the build editor check Enable continuous integration so that the build will trigger on a commit from Visual Studio.

So what does this build do? The first Docker Compose task uses the docker-compose.yml file to work out which images need building, as specified by the Dockerfile(s) for the different services. We only have one service (k8s-aspnetcore) but there could (and usually would) be more. With the image built on the VSTS agent the second Docker Compose task pushes the image to the Azure Container Registry. If you navigate to this ACR in the Azure portal and drill in to the Repositories section you should see your image. The build also publishes the yaml configuration files needed to deploy to the cluster.

Configure a Release Pipeline

We are now ready to configure a release to deploy the image that's hosted in ACR to our Kubernetes cluster. Note that you'll need to complete all of this section before you can perform a release.

Create a VSTS Release Definition
  • In VSTS create a new release definition, starting with an empty process and changing the name to k8s-aspnetcore.
  • In the Artifacts panel click on Add artifact and wire-up the build we created above.
  • With the build now added as an artifact click on the lightning bolt to enable the Continuous deployment trigger.
  • In the default Environment 1 click on 1 phase, 0 task and in the Agent phase click on + to create three Deploy to Kubernetes tasks.
  • Configure the first Deploy to Kubernetes task as follows:
    • Display name = Create Service
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = apply
    • Use Configuration files = checked
    • Configuration File = $(System.DefaultWorkingDirectory)/k8s-aspnetcore/k8s-config/service.yaml
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any secret word of your choosing, to be used consistently across all tasks]
  • Configure the second Deploy to Kubernetes task as follows:
    • Display name = Create Deployment
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = apply
    • Use Configuration files = checked
    • Configuration File = $(System.DefaultWorkingDirectory)/k8s-aspnetcore/k8s-config/deployment.yaml
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any secret word of your choosing, to be used consistently across all tasks]
  • Configure the third Deploy to Kubernetes task as follows:
    • Display name = Update with Latest Image
    • Kubernetes Service Connection = [name of Kubernetes Service Connection endpoint created above]
    • Command = set
    • Arguments = image deployment/k8s-aspnetcore-deployment k8s-aspnetcore=$yourAcrNameHere$.azurecr.io/k8s-aspnetcore:$(Build.BuildId)
    • Container Registry Type = Azure Container Registry
    • Azure subscription = [name of Azure Resource Manager endpoint created above]
    • Azure Container Registry = [name of Azure Container Registry created above]
    • Secret name = [any secret word of your choosing, to be used consistently across all tasks]
  • Make sure you save the release but don't bother testing it out just yet as it won't work.
Create the Kubernetes configuration
  • In Visual Studio paste the service configuration in to the service.yaml file created above; a sketch of both files appears after this list.
  • Paste the deployment configuration in to the deployment.yaml file created above. The code references my ACR so you will need to amend accordingly.
  • You can now commit these changes and then head over to VSTS to check that the release was successful.
  • If the release was successful you should be able to see the ASP.NET Core website in your browser. You can find the IP address by executing kubectl get services from wherever you installed kubectl.
  • Another command you might try running is kubectl describe deployment $nameOfYourDeployment, where $nameOfYourDeployment is the metadata > name in deployment.yaml. A quick tip here is that if you only have one deployment you only need to type the first letter of it.
  • It's worth noting that splitting the service and deployment configurations in to separate files isn't necessarily a best practice however I'm doing it here to try and help clarify what's going on.
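For completeness, here's a sketch of roughly what the two files contain. The deployment and container names match the ones used elsewhere in this post, but the service name, ports, replica count, image pull secret name and ACR name are placeholders, and the apiVersion may need adjusting for your cluster version:

    # service.yaml - exposes the pods to the outside world via an external IP
    apiVersion: v1
    kind: Service
    metadata:
      name: k8s-aspnetcore-service
    spec:
      type: LoadBalancer
      ports:
        - port: 80
      selector:
        app: k8s-aspnetcore

    # deployment.yaml - describes the pods themselves
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: k8s-aspnetcore-deployment
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: k8s-aspnetcore
        spec:
          containers:
            - name: k8s-aspnetcore
              image: yourAcrName.azurecr.io/k8s-aspnetcore:latest
              ports:
                - containerPort: 80
          imagePullSecrets:
            - name: yoursecretname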

In terms of a very high level explanation of what we've just configured in the release pipeline, for a simple application such as an ASP.NET Core website we need to deploy two key objects:

  1. A Kubernetes Service which (in our case) is configured with an external IP address and acts as an abstraction layer for Pods which are killed off and recreated every time a new release is triggered. This is handled by the first Deploy to Kubernetes task.
  2. A Kubernetes Deployment which describes the nature of the deployment—number of Pods (via Replica Sets), how they will be upgraded and so on. This is handled by the second Deploy to Kubernetes task.

On first deployment these two objects are all that is needed to perform a release. However, because of the declarative nature of these objects they do nothing on subsequent release if they haven't changed. This is where the third Deploy to Kubernetes task comes in to play—ensuring that after the first release subsequent releases do cause the container to be updated.

Wrapping Up

That concludes our initial look at CI/CD with VSTS and Azure Container Service (AKS)! As I mentioned at the beginning of the post I've purposely tried to keep this walkthrough as simple as possible, so watch out for the next installment where I'll build on what I've covered here.

Cheers—Graham

Getting Started with Kubernetes

Posted by Graham Smith on February 1, 2018

If you've been following the containers story you'll probably know that 2017 was a big year for Docker. You may also know that 2018 looks set to be a big year for Kubernetes, "an open-source system for automating deployment, scaling, and management of containerized applications". There are several systems competing in the same space as Kubernetes but for many the jury has voted and Kubernetes is the winner.

For many of us ‘containerized applications' means applications that have been containerised using Docker, and if you've been learning and working with Docker then learning Kubernetes is an obvious next step. I've been learning Kubernetes for a couple of months now and in this post I share some of the resource links that I've found most useful and provide pointers to the different ways I've created Kubernetes environments to provide practical hands-on experience.

Learning Resources

Run Kubernetes on your Development Machine Using Minikube

A quick and easy way to get started with Kubernetes is to install Minikube on your development machine. Minikube is a tool that runs a single-node Kubernetes cluster on a virtual machine running on your laptop or workstation. You can find the installation guide here and the getting started guide here. I installed Minikube on my Windows 10 workstation running Hyper-V and the minikube start command just worked. Don't forget you'll need to install kubectl as well as Minikube. As part of the installation process a kubeconfig file is created at %userprofile%\.kube\config (config is the actual file) which ‘connects' kubectl to Minikube. If you are connecting to different Kubernetes installations from your development machine you'll need to manage kubeconfig files—see later for more details.

If you've got Minikube installed and working you might be wondering what next, especially if you are a Windows user as the documentation isn't hugely Windows-friendly. If you are in this situation head over to this Getting Started with Kubernetes on your Windows Laptop with Minikube tutorial. Skip past the installation instructions to Starting our Cluster and follow on from there.

Run Kubernetes in Microsoft Azure

If you have a Microsoft Azure subscription or are prepared to sign up for a free trial it's ridiculously easy to start working with Kubernetes in Azure. There's actually a couple of ways to do it but the easiest is to create an Azure Container Service (AKS) cluster as this service abstracts away much of the complicated cluster stuff leaving you to focus on Kubernetes itself.

The Deploy an Azure Container Service (AKS) cluster walkthrough gets you up-and-running in no time with an actual app that you can run in your browser. Using Azure Cloud Shell is the easier way to use the Azure CLI although it does have an annoying habit of timing out on you. If you do switch to using the local version of the CLI watch out for the az aks get-credentials --resource-group myResourceGroup --name myK8sCluster command which will merge connection information about the cluster you are creating with any previously created %userprofile%\.kube\config file that might be present (after installing Minikube for example), which may not be what you want.

Run Kubernetes on a Raspberry Pi Cluster

If you want to take your knowledge a step further and learn how to perform a bare-metal installation of Kubernetes then one option is to create a Raspberry Pi cluster and install and run Kubernetes on it. I've gone down this route and have had great fun doing so. There are a couple of key resources that will help you get this project off the ground:

My cluster ended up looking like this:

Some familiarity with Raspberry Pi obviously helps with this sort of project however I wouldn't say it's a definite prerequisite as there is plenty of help out there for anyone getting started with Raspberry Pi. You do really need to start off with at least three Pis so there is a modest cost involved but if you don't want your cluster to be portable then you don't need to hook it up to a switch or a router and WiFi works fine for me. A tip worth mentioning is that the 6-port RAVpower USB charger is slightly smaller than the 6-port Anker USB charger and fitted my enclosure much better.

Dealing with Multiple Kubernetes Instances

If you end up managing more than one instance of Kubernetes with the same instance of kubectl you'll somehow need to manage the issue of multiple kubeconfig files. There is detailed guidance about this here. My needs are very modest and at the moment I simply save different kubeconfig files with different extensions and then remove the extension of the one I want to work with. Not very elegant but it serves my simple needs for the moment.

And There's More...

The three techniques I've described for working with Kubernetes are only the tip of the iceberg. Google has a similar offering to Microsoft Azure with its Google Kubernetes Engine for example as does CodeFresh. I haven't tried these services but the point is that there are lots of options if none of the ones I've covered takes your fancy.

One particularly exciting development that I haven't tried yet but will soon is Kubernetes running in Docker for Windows. Scott Hanselman has a nice walkthrough here as does Stefan Stranger here. Enjoy!

Cheers—Graham

Build a Raspberry Pi Vehicle Interior Monitor – Running Code at Startup

Posted by Graham Smith on November 21, 2017

In this blog series I'm documenting my maker journey as I build a Raspberry Pi-based vehicle interior monitor (PiVIM). This is the full list of posts in the series:

Originally I'd assumed that the topic of this blog—running code at startup—was so straightforward that it wouldn't warrant a post of its own. However I ran in to a nasty gotcha, the fix for which might help somebody else, and I also learned a continuous delivery lesson (I'm slightly ashamed to say, given that much of what I blog about is continuous delivery) which is worth explaining.

When it comes to running code or programs at startup on Linux there are several ways to tackle the problem—see this useful Dexter Industries post which describes five different methods. I had no prior experience of this, and since I'd seen quite a few forum posts advising using rc.local and it was the first in the Dexter Industries list of five I started with that. The starting point is to edit the file:
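    # rc.local lives at /etc/rc.local on Raspbian
    sudo nano /etc/rc.local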

and then add this command before the exit 0 line:
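Mine looked roughly like this; the path to the script is an assumption and the access key is a placeholder:

    sudo python3 /home/pi/PiVIM-py/my_pivim.py abc123yourinitialstateaccesskey &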

In case you were wondering, the long alphanumeric being passed in to my_pivim.py is the (obscured) access key for my InitialState streaming analytics account, and the ampersand at the end ‘forks the process' so the Raspberry Pi can continue booting. Anyway, quick reboot and sit back to watch my code spring in to life...

Or not since nothing happened. Check the syntax and try again—nothing. So I tried running the command from a prompt and ran in to this error:

So the requests module can't be found. Very weird since the shortened version of the syntax works absolutely fine from the command line in the PiVIM-py directory:

Next stop was to try without sudo, since everything in rc.local runs (apparently) as root anyway. The result of that was that the command didn't run in rc.local but did run from a command prompt. So, some sort of permissions thing, but was the problem limited to just the requests module? To cut a long story short I found out that it was just the requests module that was causing the problem, which led to installing requests again but ultimately no joy. And Google searches didn't turn up anything useful at that stage either.

I decided to abandon rc.local and try systemd instead based on the recommendation of the Dexter Industries post. My initial unit file (as /lib/systemd/system/pivim.service) looked like this:
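This is the shape of it; the description and paths are indicative, and note that there is no User= line yet:

    [Unit]
    Description=PiVIM
    After=network.target

    [Service]
    ExecStart=/usr/bin/python3 /home/pi/PiVIM-py/my_pivim.py your-access-key-here
    WorkingDirectory=/home/pi/PiVIM-py
    Restart=always

    [Install]
    WantedBy=multi-user.target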

With the permissions set on the file and the service enabled I rebooted...to nothing. Fortunately, I was able to see what the service was doing using this command:
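    # systemctl status shows the service state plus its most recent output; journalctl -u pivim.service gives the fuller log
    sudo systemctl status pivim.service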

This resulted in some useful, albeit frustrating output:

Back to the same old problem. However, this time a Google search with systemd as part of the query came up trumps with this StackOverflow question where a respondent had the same problem as me. The answer was simple: add a User=pi line beneath the [Service] section.

With this change PiVIM's control panel sprang in to life on startup—as I had expected it to several hours earlier:

Why the requests module alone caused this issue I didn't manage to find out and probably never will. On the plus side I now know how useful systemd is, not least because of the range of useful commands that work with it (see here and here for useful guides).

And so to the moral of this story, which is that if you don't practice continuous delivery—ie if you are not frequently pushing your code to ‘production'—you are likely to get bitten by problems that only surface towards the end of your development effort. This wasn't too much of an issue for me at home but in a professional setting not practicing some form of continuous delivery causes major headaches for developers all the time. Even if you don't develop professionally you might get caught out at a fun event such as a hackathon, so it's definitely a habit worth trying to adopt.

Cheers—Graham

Build a Raspberry Pi Vehicle Interior Monitor – Temperature Monitoring

Posted by Graham Smith on September 9, 2017

In this blog series I'm documenting my maker journey as I build a Raspberry Pi-based vehicle interior monitor (PiVIM). This is the full list of posts in the series:

In this post I get to the main aim of the project, which is to be able to monitor temperature. In so doing I'm entering the exciting world of physical computing with the Raspberry Pi by hooking up a temperature probe to the Pi's GPIO pins. I'm using the Dallas DS18B20 sensor, which comes in two forms: one looks like a transistor and the other is packaged into a probe with a long wire attached. I'm using both: the transistor format for prototyping and the probe version for the final version of PiVIM.

The big question that had been on my mind was how to get temperature measurements displayed on a mobile phone. In the previous post in this series I described how I'm giving my Raspberry Pi connectivity through a mobile broadband connection, but then what? There are probably multiple ways to do this but for now at least I've solved it using a data analytics service for IoT projects called Initial State.

Monitoring Temperature with the DS18B20 Sensor

Using the DS18B20 in conjunction with the Raspberry Pi is straightforward and there are a couple of handy tutorials that explain the process:

The tutorials cover the process well and there's little point in repeating it in full here, however in summary the steps are:

  1. Build the circuit on a breadboard and connect the jumper wires to the Raspberry Pi header pins.
  2. Configure the Raspberry Pi to work with 1-wire devices (the DS18B20 is a 1-wire device).
  3. Write code (ie a Python script) to read the temperature from the 1-wire interface.

You can find the code, which I adapted from the two tutorials listed above, on my GitHub site as temperature.py. Outside of the PiVIM module I wrote a simple script called temperature_debug.py to ensure temperature.py was working.
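For anyone who just wants the gist without following the tutorial links, the essence of the approach is sketched below. This isn't the actual temperature.py, just a minimal example of reading the 1-wire interface via sysfs:

    import glob
    import time

    def read_temperature():
        # The DS18B20 appears under /sys/bus/w1/devices as a folder starting 28-
        device_file = glob.glob('/sys/bus/w1/devices/28-*/w1_slave')[0]
        while True:
            with open(device_file) as f:
                lines = f.readlines()
            # The first line ends in YES when the reading passed its CRC check
            if lines[0].strip().endswith('YES'):
                break
            time.sleep(0.2)
        # The second line contains the reading in thousandths of a degree, e.g. t=23125
        return float(lines[1].split('t=')[1]) / 1000.0

    print(read_temperature())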

Streaming Temperature Data to Initial State

Initial State is a cloud service that allows you to stream data to its portal and then analyse and display it in various formats. For Python users there is a supplied library that does all of the heavy lifting and it's surprisingly easy to get started. I recommend watching their "From Login to Live Data Stream in 2 Minutes" YouTube video:

Do be sure to create the example (explained in the video) as it's a great way to get a feel for how everything works. You can find their complete list of tutorial videos here.

For my project I created a Python module called data_portal.py which you can find on my GitHub site by following the link. The module is as follows:
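In outline it does something like the sketch below. This is a simplified illustration rather than the module itself, using the Streamer class from Initial State's ISStreamer package and the daily bucket naming described in the next paragraph:

    from datetime import date
    from ISStreamer.Streamer import Streamer

    def create_streamer(access_key):
        # A new bucket per day: append today's date to the bucket name and key
        today = date.today().isoformat()
        return Streamer(bucket_name='PiVIM ' + today,
                        bucket_key='pivim-' + today,
                        access_key=access_key)

    def stream_temperature(streamer, temperature):
        # Each measurement is a single key-value pair with a key of T
        streamer.log('T', temperature)
        streamer.flush()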

The essence of Initial State is that you stream data to its portal as key-value pairs into what Initial State terms buckets, which are simply containers for your data. A bucket is configured by instantiating the streamer object with the name of the bucket, the bucket's key (which must be unique in your account) and your personal access key. (If you are using a public code repository such as GitHub make sure to pass your access_key value in on the command line so you don't expose it to the world.) In the free version of Initial State data only persists for a day, which is fine for me since I'm not interested in doing any historical analysis. I did want to create a new bucket every day so I could easily select the latest bucket in the web portal, and achieved this by appending the current date to the bucket name and key. A bucket can receive multiple key-value pairs but in my case only one is needed—a key of T (for temperature) and the temperature value for each measurement.

In order to test the module I created a very simple data_portal_debug.py script, the key parts of which are as follows:
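It boils down to something like this (a sketch of the idea rather than the script verbatim):

    import sys
    from ISStreamer.Streamer import Streamer

    # The access key is deliberately not hard-coded: pass it as the first argument
    access_key = sys.argv[1]

    streamer = Streamer(bucket_name='PiVIM debug', bucket_key='pivim-debug',
                        access_key=access_key)
    streamer.log('T', 21.5)  # stream a single test temperature
    streamer.flush()
    streamer.close()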

Note how the code is written to facilitate passing the access_key in on the command line.

So how does this look in the Initial State portal? While there are several ways to view your data I prefer a simple tile displaying the latest temperature:

I created this myself by ensuring I'd selected my bucket and then clicking on the View Tiles App icon and then Edit Tiles. The screenshot below shows how to configure the settings to achieve the desired result:

So far so good, but I'm viewing this in a browser on my workstation and I need to be able to access it on my mobile phone. That's no problem of course because I can just access the Initial State portal on my phone. With Chrome on Android I can actually make this even slicker by using Chrome's Add to Home screen feature (when logged in to Initial State) to place an icon on my Home screen that gets me straight to Initial State's portal, making it feel as if it's an app even though it's not. This is what I see on my phone after selecting the desired bucket:

It's pretty neat considering it's free, and on the portal side all I've had to do is a bit of quick configuration. Of course, if you want more features and/or want to store data for future analysis, Initial State have paid-for subscriptions.

Cooling Down

That about wraps up this part of my maker journey. I might revisit this section in the future and have a look at other options, since I would like to add more advanced features such as my phone receiving an alert if the temperature reaches a threshold level. This would likely require the development of a smartphone app as well as integration with a cloud service to host the data. A fun project, but not until I have a first version of PiVIM finished and working! Next time I'll be looking at options for powering PiVIM and also for turning the Raspberry Pi off.

Cheers -- Graham

Ubiquiti WiFi: How I Got Started with this Fantastic Kit on a Modest Budget

Posted by Graham Smith on August 3, 2017

It all started a few weeks ago when I was sat out in the garden on a sunny day with my wife. She was trying to do something on her tablet and was bemoaning the poor WiFi outdoors. At the time I was coincidentally reading an article on WiFi mesh systems and since WiFi wasn't too great in some parts of the house either I briefly flirted with the idea of buying something like Google Wifi or BT's Whole Home Wi-Fi. However on looking into this in more depth none of the products seemed to tick all the boxes, either being very expensive or lacking what I would consider an essential feature. For example Google Wifi is administered by an app rather than by a browser application. Fine for some perhaps, but not for me thank you.

I thought I could fix things on the cheap and bought a Netgear EX3700 WiFi Range Extender. I used this both in extender mode (I think of this as WiFi in serial with the router's WiFi) and in access point (AP) mode via an Ethernet connection (I think of this as WiFi in parallel with the router's WiFi), however I wasn't thrilled with the results. The main gripe was that the mobile devices in my home at least (phones, tablets etc) all wanted to hang on to their existing connection like grim death. So even when standing next to the EX3700 in AP mode blasting out a 100% signal, my phone could still be hanging on to almost no signal from the router. Perhaps there was something wrong with my setup—maybe the EX3700 was too close to my router? Either way it wasn't wholly satisfactory.

Fast forward a couple of weeks and I found myself working through Troy Hunt's excellent Pluralsight course on What Every Developer Must Know About HTTPS. One of the slides had a screenshot of a blog post by Troy on fixing dodgy WiFi on his jet ski with Ubiquiti's UniFi Mesh. I vaguely remembered reading about Ubiquiti somewhere and with my interest piqued I started checking out Troy's blog.

And as it has been with so many others it seems, that's where my love affair began...

Warning! Reading Further WILL Cause you to be Parted from Your Hard-Earned Cash

There are many places on the Internet that eulogise about Ubiquiti products so I'm going to resist the temptation here. These are the key posts I read (specifically about the UniFi range of products) and which I think you will enjoy and find useful and informative:

Make sure you don't miss the video in Troy's first post of him unboxing a load of Ubiquiti kit. This does a great job of explaining what all the main bits of kit are, and if you watch this in conjunction with reading the posts above you'll have a good idea of the key products in the UniFi range.

Needless to say, I was instantly hooked and I wanted in. However my existing WiFi setup wasn't so bad that I could justify spending over £1,000 on new kit. Feeling slightly deflated I continued to research the UniFi range of products, to the point where it dawned on me that you don't need to start off with a big investment, and you don't need to buy every component to make a working system. And so the fun began...

Starting off with an Access Point

My journey began by adding a wireless access point (AP) to my home network. A few things need to be in place to make this work:

  • The first thing of course is an AP. There are several in the UniFi range and like many others I plumped for the AP-AC-PRO on the basis that it was only a little more expensive than the less capable models but vastly cheaper than the AP-AC-HD daddy.
  • Generally speaking APs need a wired connection, so you are going to need an Ethernet socket near to where you will site the AP. I'm lucky in that my home had CAT 5e wired in when it was built and I have 40+ sockets all over the house and garage. An alternative would be running a dedicated cable from your modem/router, or more likely powerline networking using the domestic electricity supply.
  • In addition to Ethernet providing a data connection, UniFi APs also need to get their power over the Ethernet connection (known, logically enough, as Power over Ethernet—PoE). Although Ubiquiti sell some lovely switches that have PoE ports (see here for an example) you don't actually need one of these, because the APs (if you buy them singly at least) ship with a PoE adaptor (the POE-48-24W-G model). As long as you have an electrical power socket near your Ethernet connection you are good to go.
  • The final piece of this jigsaw is the UniFi Controller software. Ubiquiti sell a dedicated device that runs the software (the Cloud Key) but again, you don't need this. The software is free to download and runs happily on the usual platforms—even on the Raspberry Pi. Furthermore, if you are just running an AP the UniFi Controller software doesn't need to be running all the time and can be installed on a PC or a Mac and spun up as and when it's needed to configure the AP.

Putting all of this together was pretty straightforward. The AP-AC-PRO simply linked into my Ethernet network via the PoE adaptor, and I opted to position it in the middle of the house on top of a unit in our open-plan kitchen/dining room. I have an always-on Windows Server 2012 R2 machine on my network and I installed the UniFi Controller software on that. There are a few considerations to be aware of when running on Windows:

  • Java is a requirement and whilst the installation wizard takes you to a download page you seem to end up installing 32-bit Java. For reasons I'll explain below you probably don't want this so instead make sure you download and install the 64-bit version.
  • In its default configuration UniFi Controller doesn't run as a Windows service. It's easy to configure using these instructions, however it only works with 64-bit Java—see above.
  • You access UniFi Controller using a browser (https://localhost:8443 if running locally), however it's not compatible with the browsers that ship with Windows Server 2012 R2 or Windows Server 2016. If this is a problem you can easily get round it by accessing from a different machine, replacing localhost with the machine's IP address or FQDN.
  • UniFi Controller ships with a self-signed SSL certificate which causes browsers to raise warnings. These can be safely bypassed but it does leave the browser address bar looking a bit ugly.

The UniFi Controller installation wizard is a doddle and doesn't need explaining. At the end of the process you are presented with a nice dashboard:

So far so good, but it's clear that there are a lot of greyed-out features. The fix? Just a bit more expenditure to buy the UniFi Security Gateway, commonly known as the USG.

You Probably Will Want to Get a UniFi Security Gateway

That was my initial reaction on seeing the Controller dashboard without the USG. There is a choice between the rackmount USG‑PRO‑4 and the standalone USG. The former is enterprise grade and much more expensive than the USG, which is perfectly adequate for a home network and the one I opted for. There are a few steps to incorporating the USG into your home network and it helps to be clear about which role each piece of kit will perform once the USG is in and working. In my case I'm on VDSL broadband and my original setup consisted of a Netgear D6400 performing the roles of both modem and router (as well as DHCP and a few other things of course, but I'm keeping it simple). With the USG in the mix, the D6400 is configured to work in modem only mode and the USG takes on the router function. Crucially in my case, I needed to configure the USG to be the device that supplies the PPPoE credentials my broadband provider needs for a successful connection. This was a bit of a head-scratcher at first since the USG can work in two other modes (DHCP and Static IP) and I wasn't entirely sure how much configuration would be down to the D6400. None as it turns out.

Because the default D6400 gateway configuration is 192.168.0.1 and the USG is configured as 192.168.1.1, and I wasn't sure what would happen if I changed the USG to 192.168.0.1 as well, I decided to change my network to fit in with the USG. I planned to perform the initial USG configuration directly from my always-on server (running UniFi Controller on Windows Server 2012 R2), which I knew would cause issues with Internet Explorer, so I planned ahead and installed Firefox. I also made sure that my broadband provider's PPPoE credentials were available locally on that box, as well as the credentials to log in to UniFi Controller. The procedure was then as follows:

  • Configure the USG to work in PPPoE mode by attaching it directly to a laptop that did not already have a connection to another gateway (ie WiFi turned off and no Ethernet connected) and running the setup routine by pointing a browser to http://setup.ubnt.com/. This didn't work for me but pointing a browser to http://192.168.1.1 did. An Edit Configuration button allows you to change from the default DHCP setting to PPPoE.
  • Convert the Netgear D6400 from modem/router mode to modem only mode. This wasn't too hard to find in the advanced settings—you'll have to dig around for this on your own device. At this point you'll lose your broadband connection and, for many devices it seems, the ability to connect to them again without performing a factory reset.
  • Because I was planning to bring my wired devices back one-by-one I unplugged everything from my switch and the D6400. I then plugged the machine running UniFi Controller directly into the USG's LAN 1 port. Because this machine had a static IP on the D6400's subnet I changed this temporarily back to DHCP so it could communicate properly with the USG. (I could of course have given it a static IP on the USG's subnet.)
  • In UniFi Controller > Settings > Networks I amended the DHCP Range (I leave space for static IP addresses). You should end up with something like this:
  • After saving the network settings I navigated to UniFi Controller > Devices and located the USG. Under the Actions column I clicked Adopt to configure the USG with the previously defined settings.
  • Following the adoption process, I accessed the USG's properties by clicking its name (not the IP address). On the Configuration tab the WAN section allowed me to supply my ISP's PPPoE credentials and DNS details (I have an OpenDNS account):
  • Once the WAN changes had been provisioned to the USG I connected the WAN port of the USG to an Ethernet port on the D6400 in order to check broadband connectivity and speed. Note that both the WAN and LAN 1 ports should be connected at 1 Gbps. Initially my LAN 1 was showing 100/10 Mbps and it was due to a dodgy cable.
  • With broadband now connected again I took the opportunity of upgrading the USG's firmware using the handy button in the Actions column:
  • The final bit of this configuration was to plug the USG into my switch (a ZyXEL GS1100-16), plug my always-on server running UniFi Controller into the switch, and configure it with a static IP address.

With the core configuration completed I reconnected my wired devices one-by-one, fixing up any static IP address issues (due to the change of subnet) where required and giving each device (or client as they are known) a friendly name in UniFi Controller (click a client to open its properties and then navigate to Configuration > General > Alias). With this done the dashboard looks much better:

Troubleshooting and Disaster Recovery

If you do run into problems you can find logs in the UniFi Controller installation folder (C:\Users\<profile name>\Ubiquiti UniFi\logs on Windows). It's also worth enabling Auto Backup from the Settings area. I configured mine to back up every day at 1am and then added C:\Users\<profile name>\Ubiquiti UniFi\data\backup to my CrashPlan configuration. Obviously do whatever works for you.

Outstanding Issues and Future Plans

One facility which I had taken for granted with my Netgear D6400 was some local DNS resolution. I first realised this was an issue when I couldn't get to my Windows Server 2012 R2 machine using its hostname. Long story short, it would appear that many SOHO routers use a tool called Dnsmasq for DNS forwarding and as a DHCP server. This apparently allows Dnsmasq to resolve DHCP client names. The USG doesn't really do DNS (which is fair enough since it's part of an ecosystem where different boxes are expected to do specific jobs) however I've seen a few posts in the forums where some scripting has been used to implement local DNS. It's not a major deal breaker for me and for the time being I've edited the hosts file on my Windows machines whilst I figure out what, if anything, I'm going to do about it.

EDIT: My conclusion about local DNS resolution is wrong. I traced the problem back to static IP addresses, specifically to me assigning static IP addresses from within the clients themselves. (Most of my network is DHCP, however there are a few clients on my network which I like to give static IP addresses. Probably pointless though—old habits die hard.) It turns out that if you assign IP addresses from within the clients, DHCP is bypassed (of course) and the IP address doesn't get registered for DNS lookup. (It's something like that anyway.) The procedure to follow instead if you want a known IP address is to use IP address reservations. You can set these from the Properties window of a client by navigating to the Network tab under Configuration. Once I'd done this everything started working!

In terms of what's next, it will probably be a second AP-AC-PRO so I can have one at either end of the house. After that I will probably look at configuring some serious outdoor coverage via the UniFi Mesh devices. There's a huge amount to like about Ubiquiti products, but the ability to add new bits in as budget allows is one that I really appreciate.

Cheers -- Graham

Build a Raspberry Pi Vehicle Interior Monitor – Mobile Broadband

Posted by Graham Smith on July 20, 2017

In this blog series I'm documenting my maker journey as I build a Raspberry Pi-based vehicle interior monitor (PiVIM). This is the full list of posts in the series:

In this post I'm configuring PiVIM with mobile broadband connectivity. At this stage I don't yet know whether I will connect to PiVIM to query its status or whether I'll have PiVIM push notifications out (for example by SMS), however either way I do know I need some sort of connectivity. Setting the Raspberry Pi up as a WiFi hotspot would be a neat solution, however since I need a range of up to 1 km I ruled this option out in favour of mobile broadband.

Let's Get Physical

My first task was to choose the physical mobile broadband device. A Google search for raspberry pi mobile broadband turns up quite a few hits for Huawei mobile USB dongles and what seem to be quite a lot of configuration steps to get them to work. However a friend recommended the ZTE range of USB dongles and I ended up buying a ZTE MF730M for testing purposes. This is a 3G unit and is well under half the cost of the ZTE MF823 4G unit, however at some point I'll upgrade to the 4G version since it's more flexible. I was prepared for a painful experience getting it working, but on an up-to-date installation of the latest Raspbian Jessie the ZTE MF730M just worked in true plug-and-play fashion.

In order to get the ZTE MF730M working I needed a SIM. I wanted to avoid a plan where monthly credit would be lost if it weren't used, since PiVIM won't get much use in winter but will get a lot of use in summer. The Three network have a PAYG SIM which fits the bill perfectly since the credit lasts for as long as it isn't used. In the UK these can be bought from Tesco for £0.99. You'll need to install it into the dongle and leave it to activate (somewhere it can get a signal) before registering the mobile number on the Three website and adding credit.

Mobile Broadband Status

If all I wanted to do in this project was to use my mobile broadband dongle then the good news in the plug-and-play department would make for a very short blog post. I don't just want to use the dongle though; I want to display information about its status (network type, signal strength etc) on my Display-O-Tron control panel. The ZTE MF devices incorporate a web page (accessible at http://m.home) that displays status information, as well as functionality that allows the management of a phonebook and the ability to send SMSs:

It turns out that this web page gets its data via a REST API and it's possible to tap into this API to retrieve information programmatically. It's easy to see the API being used from a browser's developer tools (on the Network tab in the Chromium version that ships with Raspbian), and the good people on this GitHub site have taken the trouble to document some of the commands and provide some example code.

I used their code as a starting point and created a Python class to return the status of the mobile dongle via instance attributes. You can find the code on my PiVIM GitHub site as mobile_broadband.py and there is an accompanying mobile_broadband_debug.py file that has code to put the class through its paces. The Python class minus a few docstrings is as follows:
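To give a flavour of the shape of the class, here is a rough sketch rather than the real mobile_broadband.py; the endpoint and command names are assumptions based on the community documentation linked above and may differ by model:

    import requests

    class MobileBroadbandStatus:
        # Placeholder endpoint: the real URL and command names come from the
        # community docs for the ZTE API and may need adjusting
        API_URL = 'http://m.home/goform/goform_get_cmd_process'

        def __init__(self):
            params = {'multi_data': '1',
                      'cmd': 'network_type,signalbar,network_provider'}
            data = requests.get(self.API_URL, params=params, timeout=5).json()
            # Expose a handful of values as instance attributes
            self.network_type = data.get('network_type')
            self.signal_strength = data.get('signalbar')
            self.network_provider = data.get('network_provider')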

I'm only returning three instance variables but clearly the code can easily be amended to return as many as are needed. One slightly ugly feature of my code is a hard-coded response from the REST API to cater for when the mobile dongle isn't plugged in. My code should probably throw an exception if the dongle isn't plugged in, however when it is plugged in it's potentially using credit, which I don't really want to burn just for debugging purposes. So for the time being I'll live with my hack.

Screen Scraping for Remaining Credit

One piece of data that doesn't seem to be available from the REST API is the credit remaining on the SIM. In my case though it is available by logging in to the three.co.uk website with the SIM phone number and password and navigating to the Account balance page. There's no API in use on this website as far as I can tell, so retrieving the actual value is down to screen scraping. Python has several libraries that can help here and I've been using requests and the BeautifulSoup class that's part of bs4. Long story short, I've burned numerous hours trying to make this work and so far have drawn a blank. The problem is in authenticating properly with the Three website so that navigating to another page is successful. Although this aspect is a work in progress I'm mentioning it because in a roundabout way I learned what I think are two great Python tips:

  • If you find that a Python library fails to install on Windows with the standard pip command it might well be that a compilation step failed. In this case you can try downloading an already compiled version of the library from the Unofficial Windows Binaries for Python Extension Packages site. (Note that AFAIK the 32/64-bit versions relate to the version of Python you are running and not whether you are running 32 or 64-bit Windows. Unless you have gone out of your way to install 64-bit Python you're probably running the 32-bit version.) Open a command prompt where you downloaded the file and type pip install followed by the first few characters of the library, then use tab completion to complete the library name. Using this technique pip knows to install the library from your download rather than from the Internet.
  • Jupyter Notebooks are great for working with code on a ‘trial and error' basis where you want to repeatedly evaluate the output of a statement without having to run the whole program every time. For me this was working out which BeautifulSoup syntax would return the value of an HTML element that I was interested in (there's a sketch of the idea just after this list):

    In the example notebook above, once the first four code blocks have been run I can repeatedly run the fifth block until I get the correct syntax for the statement that returns the authenticity_token. It's a real time saver over working in a more traditional code editor where the whole program needs to be run each time. You can find a good guide to getting started with Jupyter Notebooks here.
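To give a flavour of the kind of statement being iterated on, here is a hypothetical sketch; the login URL is a placeholder and the real Three pages may well be structured differently:

    import requests
    from bs4 import BeautifulSoup

    LOGIN_URL = 'https://www.three.co.uk/'  # placeholder: use the actual login page

    session = requests.Session()
    response = session.get(LOGIN_URL)
    soup = BeautifulSoup(response.text, 'html.parser')

    # The statement being refined in the notebook: pull the hidden
    # authenticity_token field out of the login form
    token = soup.find('input', {'name': 'authenticity_token'})['value']
    print(token)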

Hopefully I'll have time to pick up this screen scraping challenge again in the future. Meantime, if you are in the UK and fancy a crack at this then all you need to do is buy a £0.99 123 SIM from Tesco, pop it in your phone to activate it over the Three network and then register the SIM on the Three website.

Tune in next time when I turn my attention to the hot topic of temperature measurement!

Cheers -- Graham

Build a Raspberry Pi Vehicle Interior Monitor – Screen Test

Posted by Graham Smith on July 2, 2017

In this blog series I'm documenting my maker journey as I build a Raspberry Pi-based vehicle interior monitor (PiVIM). This is the full list of posts in the series:

In this post I'm getting started by configuring the Raspberry Pi with a mini display which will act as a control panel. The unit I chose to put through its paces was Pimoroni's Display-O-Tron HAT (DotHAT):

Why did I choose the DotHAT? For no reason other than that I've seen it in action and was impressed, and it's a reasonably low-cost component given the functionality on offer.

Tour Starts Here

In order to make full use of the DotHAT you need to be aware that the HAT is actually a composite of several bits of hardware. The obvious component is a 16×3 character LCD display, and this is complemented by a six-zone RGB backlight, an array of six LEDs and six capacitive touch buttons (think joystick controls). Each component can be programmed separately as required—or not as the case may be.

Physical installation—as with all HATs—is straightforward as it just sits on the GPIO pins. An initial concern was whether the DotHAT would require the BCM 4 GPIO pin that I was planning to use for the DS18B20 temperature sensor, however the DotHAT pinout shows that it's not used.

In order to easily control the components of the DotHAT Pimoroni have created high-level Python libraries that wrap the lower-level libraries that interface with the hardware—a function reference is provided here. Installation of these (and supporting) libraries is straightforward with just one line of code:
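From memory the one-liner is along the following lines, although do check Pimoroni's documentation for the current URL:

    curl https://get.pimoroni.com/displayotron | bash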

As always it's best to make sure your OS is up-to-date first. The above command will ask if you want to install the example code and I definitely recommend this so you can see for yourself the highly creative ways you can use the DotHAT. One of the examples is a fully-functioning Internet radio with a menu system driven by the capacitive buttons—very impressive. Do take the time to run the examples and explore the code as there is oodles of functionality to play with.

PiVIM Control Panel

In my PiVIM project I'm planning to use the DotHAT as a control panel to display information about PiVIM's status such as current vehicle temperature, mobile broadband signal strength, error messages and so on. At this point I don't know exactly what items I want to display, however I do know it will need to be fairly simple, so I probably won't use the menu feature for example. Instead I will most likely write information either Left- or Right-aligned to each of the three rows of the LCD (Top, Middle, Bottom), giving six locations to write to:

In order to simplify writing to the six locations I wrote my own module with functions such as message_left_top() and message_right_middle(). You can find the code in my PiVIM-py GitHub repository as PiVIM-py/pivim/control_panel.py. There is also a PiVIM/control_panel_debug.py module which contains some code to put control_panel through its paces. The core ‘message' functions are straightforward, however there is an issue to be aware of when writing a new message that is shorter than the previous one: fragments of the previous message will still be displayed. I envisage updating all six positions together in a loop and will get round this problem by calling the clear_screen() function before each iteration. If you are doing something different you'll need to code accordingly.
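To give a feel for the approach, here is a simplified sketch rather than the actual control_panel.py, assuming the dothat library's lcd module:

    import dothat.lcd as lcd

    SCREEN_WIDTH = 16  # the DotHAT LCD is 16 characters wide by 3 rows

    def clear_screen():
        lcd.clear()

    def message_left_top(text):
        # Top row, left-aligned
        lcd.set_cursor_position(0, 0)
        lcd.write(text[:SCREEN_WIDTH])

    def message_right_middle(text):
        # Middle row, right-aligned: start writing so the text ends at the right edge
        text = text[:SCREEN_WIDTH]
        lcd.set_cursor_position(SCREEN_WIDTH - len(text), 1)
        lcd.write(text)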

One interesting touch (pun intended) I added was to configure the left and right capacitive buttons to turn the backlight on and off respectively. With battery life in mind I then took this a step further by implementing a function that creates a thread which calls a timer to turn the backlight off after a delay:
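Something along these lines, assuming the dothat library's backlight module (the real function lives in control_panel.py):

    import threading
    import dothat.backlight as backlight

    def backlight_auto_off(delay_seconds=30):
        # Turn the backlight off after a delay without blocking the caller
        timer = threading.Timer(delay_seconds, backlight.off)
        timer.daemon = True  # don't prevent the program from exiting
        timer.start()
        return timer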

The display_config() function (not shown above) also calls backlight_auto_off() to help conserve battery life.

Future Functionality

In the interests of YAGNI that's all the functionality I'm going to write for the time being, however I do have some ideas for the future. One exciting possibility is concerned with how I represent mobile broadband signal strength. The DotHAT supports the full ASCII character set of course, but intriguingly it also supports up to eight custom glyphs. So on the one hand I could use, for example, asterisks to represent signal strength. On the other though, I could have a go at creating the sort of glyphs that are used to represent signal strength ‘bars' on mobile phones. If you are already itching to have a go at this the documentation is here. Until next time—happy coding!

Cheers -- Graham

Build a Raspberry Pi Vehicle Interior Monitor – Overview

Posted by Graham Smith on June 20, 2017

Over the past year or so I've been teaching myself whole new areas of learning based around the Raspberry Pi, including Linux, GPIO programming, basic electronics and Windows 10 IoT Core. I'm now at a point where I'm ready to build something that might be half useful, and I thought it might be helpful to someone if I blogged about my fledgling maker journey.

For my first project I'm going to build a Raspberry Pi Vehicle Interior Monitor—PiVIM. The idea is that on the odd occasion when we need to leave our dogs in the car for a few minutes, PiVIM will provide extra reassurance that all the ventilation and safety measures we've provided (windows partially open, tailgate open but secured with a Ventlock Tailgate Lock type device, reflective windscreen shade and so on) are actually working, through some sort of messaging to our mobile phones.

Important notice: The aim of PiVIM is only to provide extra reassurance on top of an already very cautious approach to reluctantly leaving dogs in the car for very short periods. Dogs die in hot cars!

With the sombre stuff out of the way: sure, you can buy something ready-made, but where's the fun in that? Making something from scratch offers an opportunity to learn a whole new set of skills, and in a new series of blog posts I'm planning to share my journey building PiVIM. In this first post I'm setting out the big picture—the features I hope to incorporate into PiVIM and the developer tools I'll be using.

This is the full list of posts in this series:

PiVIM Features

Here's a list of potential features that I'm considering for this project:

  • Temperature measurement. The key requirement for this project is to monitor the temperature of a vehicle's interior. A popular component for temperature measurement is the DS18B20. This comes as a small three-pin unit that looks like a transistor and also a waterproof version with the sensor embedded in a metal tube at the end of an attached wire. The waterproof version looks most useful for my project due to ruggedness and flexibility of being on the end of a wire.
  • Mobile connectivity. Since PiVIM will need to work in remote locations it will need a mobile internet connection. There's a cost to this of course, and I want to keep costs as low as possible. One of the problems with most mobile broadband plans is that they are based on a monthly data allowance and at the end of each monthly period any unused allowance is lost. Given that PiVIM might be used a lot in summer and very little in winter such a plan would likely be wasteful and uneconomic. Happily the Three network have a PAYG SIM where the data allowance lasts for as long as it isn't used. I'm planning to partner this SIM with either the ZTE MF730 3G USB dongle or the ZTE MF823 4G USB dongle, and both, if Google searches are anything to go by, should work with the Raspberry Pi.
  • Data access. Related to mobile connectivity is how to access the data that PiVIM generates. In addition to sending SMS alerts, the options that I'm considering are to store all data locally and make it accessible via a website running on the Pi, or to upload it to somewhere like Microsoft Azure and access it from there. Lots of research needed here, not least because although I have plenty of experience with Microsoft Azure, right now I have no idea if it's possible to host a website on a Raspberry Pi that's accessible via a mobile broadband connection.
  • Battery powered. Although PiVIM could use a vehicle's 12V power supply via a USB adaptor the cabling would be messy and a dedicated battery feels more suitable. Tests with a RAVPower 22000mAh portable charger and a Raspberry Pi Model B with camera attached showed that the RAVPower could keep the Pi going for at least 36 hours (I stopped the test before the RAVPower was fully drained) so a unit like this feels like it will be a good choice. It would also be useful to have some power management system to monitor the battery's charge status.
  • Onboard display. I want to be able to see some basic information about PiVIM whilst it's running—mobile broadband signal strength, current temperature and so on. I've seen the Pimoroni Display-O-Tron HAT used for this purpose and was impressed, so that will probably be my starting point.
  • Power button. Raspberry Pis don't come with a power button, and if left connected they will also gradually drain a battery even when powered down, so I'll want some sort of solution to these problems.
  • Camera pictures. More of a nice to have rather than a necessity, but since the Raspberry Pi has a very handy camera module available as an accessory I might try and see if it's viable to access pictures over mobile broadband.
  • Robust case. The PiVIM internals will need to be well protected so some sort of robust case will be essential. It will need to be able to house the battery as well as the Pi, ZTE USB dongle and the Display-O-Tron. Current thinking is an electrical junction box such as the one here might be a good starting point, with the Display-O-Tron screwed to the exterior surface of the lid and connected to the Pi with something like the Pimoroni Mini Black HAT Hack3r.
  • Raspberry Pi model. I'll be prototyping on a Pi 3 Model B but might switch to a lower-powered board when it comes to building something that will be used out in the field.

Development Environment and Tools

I'll be starting off coding in Python, however a developer friend has very good things to say about developing with Kotlin for the Raspberry Pi so I'll probably try my hand at a Kotlin port once I have a Python version working.

In an ideal world I'd do all development directly on the Pi since there will be quite a lot of Python libraries that are talking directly to Pi hardware or to hardware attached to the Pi. In practice though I find that the development experience on the Pi doesn't give me what I want either in terms of performance or in the coding tools I want to use. Since I do a lot of work with Microsoft technologies my current development workstation is running Windows 10 and I use scp to push code out to the Pi which is running in headless mode on my local network. My configuration is as follows:

  • Windows 10 Pro with the Windows Subsystem for Linux (WSL) installed and a registry setting to ‘Open Bash window here'.
  • I used to go to the trouble of giving my Pis fixed IP addresses so I could always be certain which one I was connecting to. I don't bother now and instead have Bonjour Print Services for Windows installed so that I can remote to a Pi using the hostname.local format. This works a treat in applications such as FileZilla and PuTTY. Unfortunately there is currently a bug in WSL which stops this from working. WSL is still in beta so hopefully this will be fixed soon.
  • I do find it's worth configuring SSH to use certificate authentication to avoid having to deal with passwords, and have the same certificate set up for both Windows 10 and WSL.
  • Python obviously needs to be installed—I just go for the latest version from the website here which also installs pip.
  • One of the issues with Python development is that if you don't do anything about it, packages are installed globally. This creates problems if you need to create or edit Python code that needs a specific version of a package, or indeed of Python itself. The solution is to use virtual environments courtesy of Virtualenv and (on Windows) virtualenvwrapper-win. There's a great guide to configuring and using virtual environments on Windows here.
  • I'm using Git for version control and the Python version of PiVIM is on my GitHub site here.
  • My lightweight code editor of choice is Visual Studio Code. It's free and Python is fully supported with the help of Don Jayamanne's Python extension. The best way to start Visual Studio Code if you are using virtual environments is from the command line of a virtual environment using code . (make sure you don't miss off the period). Whilst you are at the command line make sure you install pylint (pip install pylint) into your virtual environment along with any other packages your code needs.
  • My heavyweight IDE of choice is Visual Studio. A free version is available and it's got a huge amount of support for Python via the Python tools. Whilst I don't use it on a daily basis for Python development it's great for remote debugging using the ptvsd package. Anyone who's used Visual Studio to develop .NET applications will love and appreciate the debugging experience and there are details on how to set up this awesomeness here.
  • I have FileZilla and PuTTY installed and have them configured to connect to my Raspberry Pi devices using SSH and certificate authentication. I have a bash script under version control on my Windows 10 workstation file system which I run from WSL (one of the handy things about WSL is that it can see the Windows 10 file system). The bash script uses scp to copy Python files to the Pi, after which I switch to PuTTY to run the code. A bit clunky but it works. (UPDATE: I've stopped using the bash script as it was too cumbersome. I now clone my code from GitHub to the Pi and then, in a PuTTY connection to the Pi—after having pushed code to GitHub—I run a command such as git pull && python3 module_to_run.py.)

That's it for now! Watch out for my next post in this series where I'll be getting stuck in to the details.

Cheers—Graham