Build a Raspberry Pi Vehicle Interior Monitor – Temperature Monitoring

Posted by Graham Smith on September 9, 2017

In this blog series I'm documenting my maker journey as I build a Raspberry Pi-based vehicle interior monitor (PiVIM). Please refer to the previous posts in this series for the story so far:

In this post I get to the main aim of the project, which is to be able to monitor temperature. In so doing I'm entering the exciting world of physical computing with the Raspberry Pi by hooking up a temperature probe to the Pi's GPIO pins. I'm using the Dallas DS18B20 sensor, which comes in two forms: one looks like a transistor and the other is packaged into a probe with a long wire attached. I'm using both: the transistor format for prototyping and the probe version for the final version of PiVIM.

The big question that had been on my mind was how to get temperature measurements displayed on a mobile phone. In the previous post in this series I described how I'm giving my Raspberry Pi connectivity through a mobile broadband connection, but then what? There are probably multiple ways to do this but for now at least I've solved it using a data analytics service for IoT projects called Initial State.

Monitoring Temperature with the DS18B20 Sensor

Using the DS18B20 in conjunction with the Raspberry Pi is straightforward and there are a couple of handy tutorials that explain the process:

It's fairly straightforward and there's little point in repeating the process in full here; in summary, the steps are:

  1. Build the circuit on a breadboard and connect the jumper wires to the Raspberry Pi header pins.
  2. Configure the Raspberry Pi to work with 1-wire devices (the DS18B20 is a 1-wire device).
  3. Write code (ie a Python script) to read the temperature from the 1-wire interface.

You can find the code, which I adapted from the two tutorials listed above, on my GitHub site as temperature.py. Outside of the PiVIM module I wrote a simple script called temperature_debug.py to ensure temperature.py was working.
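By way of illustration, reading a DS18B20 boils down to parsing the w1_slave file that the 1-wire kernel modules expose under /sys/bus/w1/devices. The sketch below shows the general approach rather than the exact contents of temperature.py:

```python
import glob
import time

BASE_DIR = '/sys/bus/w1/devices/'


def read_temperature():
    """Return the temperature in Celsius from the first DS18B20 found."""
    # Each DS18B20 appears as a folder named 28-xxxxxxxxxxxx
    device_file = glob.glob(BASE_DIR + '28*')[0] + '/w1_slave'
    with open(device_file) as f:
        lines = f.readlines()
    # The first line ends in YES when the CRC check has passed
    while not lines[0].strip().endswith('YES'):
        time.sleep(0.2)
        with open(device_file) as f:
            lines = f.readlines()
    # The second line ends with t=<temperature in thousandths of a degree C>
    return float(lines[1].split('t=')[-1]) / 1000.0


if __name__ == '__main__':
    print(read_temperature())
```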

Streaming Temperature Data to Initial State

Initial State is a cloud service that allows you to stream data to its portal and then analyse and display it in various formats. For Python users there is a supplied library that does all of the heavy lifting and it's surprisingly easy to get started. I recommend watching their "From Login to Live Data Stream in 2 Minutes" YouTube video:

Do be sure to create the example (explained in the video) as it's a great way to get a feel for how everything works. You can find their complete list of tutorial videos here.

For my project I created a Python module called data_portal.py which you can find on my GitHub site by following the link. The module is as follows:

The essence of Initial State is that you stream data to its portal as key-value pairs into what Initial State terms buckets -- very simply, containers for your data. A bucket is configured by instantiating the streamer object with the name of the bucket, the bucket's key (which must be unique in your account) and your personal access key. (If you are using a public code repository such as GitHub make sure to pass your access_key value in on the command line so you don't expose it to the world.) In the free version of Initial State data only persists for a day, which is fine for me since I'm not interested in doing any historical analysis. I did want to create a new bucket every day so I could see the latest bucket to select in the web portal, and achieved this by appending the current date to the bucket name and key. A bucket can receive multiple key-value pairs but in my case only one is needed—a key of T (for temperature) and the temperature value for each measurement.
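To give a feel for how little code is involved, here is a rough sketch of what a module like data_portal.py ends up doing with Initial State's ISStreamer library (the function and bucket names here are illustrative rather than lifted from the real module):

```python
from datetime import date

from ISStreamer.Streamer import Streamer


def create_streamer(access_key):
    """Create a streamer for a bucket whose name and key include today's date."""
    today = date.today().isoformat()
    return Streamer(bucket_name='PiVIM ' + today,
                    bucket_key='pivim_' + today,
                    access_key=access_key)


def stream_temperature(streamer, temperature):
    # Each measurement is streamed as a key-value pair; T is the key
    streamer.log('T', temperature)
    streamer.flush()
```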

In order to test the module I created a very simple data_portal_debug.py script, the key parts of which are as follows:

Note how the code is written to facilitate passing the access_key in on the command line.
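In essence the debug script just reads the access key as a command-line argument rather than hard-coding it, along these lines (argparse is my choice here and the helper name comes from the sketch above, so treat this as illustrative):

```python
import argparse

from pivim import data_portal  # assumed module layout


def main():
    parser = argparse.ArgumentParser(
        description='Stream a test temperature to Initial State.')
    parser.add_argument('access_key', help='Initial State access key')
    args = parser.parse_args()

    streamer = data_portal.create_streamer(args.access_key)
    streamer.log('T', 21.5)  # a dummy reading just to prove the plumbing works
    streamer.flush()


if __name__ == '__main__':
    main()
```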

So how does this look in the Initial State portal? While there are several ways to view your data I prefer a simple tile displaying the latest temperature:

I created this myself by ensuring I'd selected my bucket and then clicking on the View Tiles App icon and then Edit Tiles. The screenshot below shows how to configure the settings to achieve the desired result:

So far so good, but I'm viewing this in a browser on my workstation and I need to be able to access it on my mobile phone. That's no problem of course because I can just access the Initial State portal on my phone. With Chrome on Android I can actually make this even slicker by using Chrome's Add to Home screen feature (when logged in to Initial State) to place an icon on my Home screen that gets me straight to Initial State's portal, making it feel as if it's an app even though it's not. This is what I see on my phone after selecting the desired bucket:

It's pretty neat considering it's free, and on the portal side all I've had to do is a bit of quick configuration. Of course, if you want more features and/or want to store data for future analysis, Initial State have paid-for subscriptions.

Cooling Down

That about wraps up this part of my maker journey. I might revisit this section in the future and have a look at other options, since I would like to add more advanced features such as my phone receiving an alert if the temperature reached a threshold level. This would likely require the development of a smartphone app as well as integration with a cloud service to host the data. A fun project, but not until I have a first version of PiVIM finished and working! Next time I'll be looking at options for powering PiVIM and also turning the Raspberry Pi off.

Cheers -- Graham

Ubiquiti WiFi: How I Got Started with this Fantastic Kit on a Modest Budget

Posted by Graham Smith on August 3, 2017

It all started a few weeks ago when I was sat out in the garden on a sunny day with my wife. She was trying to do something on her tablet and was bemoaning the poor WiFi outdoors. At the time I was coincidentally reading an article on WiFi mesh systems, and since WiFi wasn't too great in some parts of indoors either I briefly flirted with the idea of buying something like Google Wifi or BT's Whole Home Wi-Fi. However, on looking into this in more depth none of the products seemed to tick all the boxes, either being very expensive or lacking what I would consider an essential feature. For example, Google Wifi is administered by an app rather than by a browser application. Fine for some perhaps, but not for me thank you.

I thought I could fix things on the cheap and bought a Netgear EX3700 WiFi Range Extender. I used this in both extender mode (I think of this as WiFi in serial with the router's WiFi) and also in access point (AP) mode via an Ethernet connection (I think of this as WiFi in parallel with the router's WiFi), however I wasn't thrilled with the results. The main gripe was that the mobile devices in my home at least (phones, tablets etc) all wanted to hang on to their existing connection for grim death. So even when standing next to the EX3700 in AP mode blasting out a 100% signal, my phone could still be hanging on to almost no signal from the router. Perhaps there was something wrong with my setup—the EX3700 too close to my router, maybe? Either way it wasn't wholly satisfactory.

Fast forward a couple of weeks and I found myself working through Troy Hunt's excellent Pluralsight course on What Every Developer Must Know About HTTPS. One of the slides had a screenshot of a blog post by Troy on fixing dodgy WiFi on his jet ski with Ubiquiti's UniFi Mesh. I vaguely remembered reading about Ubiquiti somewhere and with my interest piqued I started checking out Troy's blog.

And as it has been with so many others it seems, that's where my love affair began...

Warning! Reading Further WILL Cause you to be Parted from Your Hard-Earned Cash

There are many places on the Internet that eulogise about Ubiquiti products so I'm going to resist the temptation here. These are the key posts I read (specifically about the UniFi range of products) and which I think you will enjoy and find useful and informative:

Make sure you don't miss the video in Troy's first post of him unboxing a load of Ubiquiti kit. This does a great job of explaining what all the main bits of kit are, and if you watch this in conjunction with reading the posts above you'll have a good idea of the key products in the UniFi range.

Needless to say, I was instantly hooked and I wanted in. However my existing WiFi setup wasn't so bad that I could justify spending over £1,000 on new kit. Feeling slightly deflated I continued to research the UniFi range of products, to the point where it dawned on me that you don't need to start off with a big investment, and you don't need to buy every component to make a working system. And so the fun began...

Starting off with an Access Point

My journey began by adding a wireless access point (AP) to my home network. A few things need to be in place to make this work:

  • The first thing of course is an AP. There are several in the UniFi range and like many others I plumped for the AP-AC-PRO on the basis that it was only a little more expensive than the less capable models but vastly cheaper than the AP-AC-HD daddy.
  • Generally speaking APs require an Ethernet connection, so you are going to need one near to where you will site the AP. I'm lucky in that my home had CAT 5e wired in when it was built and I have 40+ sockets all over the house and garage. An alternative would be running a dedicated cable from your modem/router, or more likely powerline networking using the domestic electricity supply.
  • In addition to Ethernet providing a data connection, UniFi APs also need to get their power over an Ethernet connection (logically known as power over Ethernet—PoE). Although Ubiquiti sell some lovely switches that have PoE ports (see here for an example) you don't actually need one of these because the APs (if you buy them singly at least) ship with a PoE adaptor (the POE-48-24W-G model). As long as you have an electrical power socket near your Ethernet connection you are good to go.
  • The final piece of this jigsaw is the UniFi Controller software. Ubiquiti sell a dedicated device that runs the software (the Cloud Key) but again, you don't need this. The software is free to download and runs happily on the usual platforms—even on the Raspberry Pi. Furthermore, if you are just running an AP the UniFi Controller software doesn't need to be running all the time and can be installed on a PC or a Mac and spun up as and when needed to configure the AP.

Putting all of this together was pretty straightforward. The AP-AC-PRO simply linked in to my Ethernet network via the PoE adaptor, and I opted to position it in the middle of the house on top of a unit in our open-plan kitchen / dining room. I have an always-on Windows Server 2012 R2 machine on my network and I installed the UniFi Controller software on that. There are a few considerations to be aware of when running on Windows:

  • Java is a requirement, and whilst the installation wizard takes you to a download page you seem to end up installing 32-bit Java. For reasons I'll explain below you probably don't want this, so instead make sure you download and install the 64-bit version.
  • In its default configuration UniFi Controller doesn't run as a Windows service. It's easy to configure using these instructions, however it only works with 64-bit Java—see above.
  • You access UniFi Controller using a browser (https://localhost:8443 if running locally), however it's not compatible with the browsers that ship with Windows Server 2012 R2 or Windows Server 2016. If this is a problem you can easily get round it by accessing from a different machine, replacing localhost with the machine's IP address or FQDN.
  • UniFi Controller ships with a self-signed SSL certificate which causes browsers to raise warnings. These can be safely bypassed but it does leave the browser address bar looking a bit ugly.

The UniFi Controller installation wizard is a doddle and doesn't need explaining. At the end of the process you are presented with a nice dashboard:

So far so good, but it's clear that there are a lot of greyed-out features. The fix? Just a bit more expenditure to buy the UniFi Security Gateway, commonly known as the USG.

You Probably Will Want to get a UniFi Security Gateway

That was my initial reaction on seeing the Controller dashboard without the USG. There is a choice between the rackmount USG‑PRO‑4 and the standalone USG. The former is enterprise grade and much more expensive than the USG, which is perfectly adequate for a home network and the one I opted for. There are a few steps to incorporating the USG into your home network and it helps to be clear about which roles each piece of kit will perform when the USG is in and working. In my case I'm on VDSL broadband and my original setup consisted of a Netgear D6400 performing the roles of both modem and router (as well as DHCP and a few other things of course, but I'm keeping it simple). With the USG in the mix, the D6400 is configured to work in modem-only mode and the USG takes on the router function. Crucially in my case, I needed to configure the USG to be the device that supplies the PPPoE credentials my broadband provider needs for a successful connection. This was a bit of a head-scratcher at first since the USG can work in two other modes (DHCP and Static IP) and I wasn't entirely sure how much configuration would be down to the D6400. None, as it turns out.

Because the default D6400 gateway configuration is 192.168.0.1 and the USG is configured as 192.168.1.1, and I wasn't sure what would happen if I changed the USG to 192.168.0.1 as well, I decided to change my network to fit in with the USG. I planned to perform the initial USG configuration directly from my always-on server (running UniFi Controller on Windows Server 2012 R2) which I knew would cause issues with Internet Explorer, so I planned ahead and installed Firefox. I also made sure that my broadband provider's PPPoE credentials were available locally on that box, as well as the credentials to log in to UniFi Controller. The procedure was then as follows:

  • Configure the USG to work in PPPoE mode by attaching it directly to a laptop that did not already have a connection to another gateway (ie WiFi turned off and no Ethernet connected) and running the setup routine by pointing a browser to http://setup.ubnt.com/. This didn't work for me but pointing a browser to http://192.168.1.1 did. An Edit Configuration button allows you to change from the default DHCP setting to PPPoE.
  • Convert the Netgear D6400 from modem/router mode to modem-only mode. This wasn't too hard to find in the advanced settings—you'll have to dig around for this on your own device. At this point you'll lose your broadband connection and, for many devices it seems, the ability to connect to them without performing a factory reset.
  • Because I was planning to bring my wired devices back one-by-one I unplugged everything from my switch and the D6400. I then plugged the machine running UniFi Controller directly into the LAN 1 port. Because this machine had a static IP on the D6400's subnet I changed this temporarily back to DHCP so it could communicate properly with the USG. (I could of course have given it a static IP on the USG's subnet.)
  • In UniFi Controller > Settings > Networks I amended the DHCP Range (I leave space for static IP addresses). You should end up with something like this:
  • After saving the network settings I navigated to UniFi Controller > Devices and located the USG. Under the Actions column I clicked Adopt to configure the USG with the previously defined settings.
  • Following the adoption process, I accessed the USG's properties by clicking its name (not the IP address). On the Configuration tab the WAN section allowed me to supply my ISP's PPPoE credentials and DNS details (I have an OpenDNS account):
  • Once the WAN changes had been provisioned to the USG I connected the WAN port of the USG to an Ethernet port on the D6400 in order to check broadband connectivity and speed. Note that both the WAN and LAN 1 ports should be connected at 1 Gbps. Initially my LAN 1 was showing 100/10 Mbps, which turned out to be due to a dodgy cable.
  • With broadband now connected again I took the opportunity of upgrading the USG's firmware using the handy button in the Actions column:
  • The final bit of this configuration was to plug the USG in to my switch (a ZyXEL GS1100-16) and plug my always-on server running UniFi Controller in to the switch and configure it with a static IP address.

With the core configuration completed I reconnected my wired devices one-by-one, fixing up any static IP address issues (due to the change of subnet) where required and giving each device (or client as they are known) a friendly name in UniFi Controller (click a client to open its properties and then navigate to Configuration > General > Alias). With this done the dashboard looks much better:

Troubleshooting and Disaster Recovery

If you do run in to problems you can find logs in the UniFi Controller installation folder (C:\Users\<profile name>\Ubiquiti UniFi\logs on Windows). It's also worth enabling Auto Backup from the Settings area. I configured mine to backup every day at 1am and then added C:\Users\<profile name>\Ubiquiti UniFi\data\backup to my CrashPlan configuration. Obviously do whatever works for you.

Outstanding Issues and Future Plans

One facility which I had taken for granted with my Netgear D6400 was some local DNS resolution. I first realised this was an issue when I couldn't get to my Windows Server 2012 R2 machine using its hostname. Long story short, it would appear that many SOHO routers use a tool called Dnsmasq for DNS forwarding and as a DHCP server. This apparently allows Dnsmasq to resolve DHCP client names. The USG doesn't really do DNS (which is fair enough since it's part of an ecosystem where different boxes are expected to do specific jobs) however I've seen a few posts in the forums where some scripting has been used to implement local DNS. It's not a major deal breaker for me and for the time being I've edited the hosts file on my Windows machines whilst I figure out what, if anything, I'm going to do about it.

EDIT: My conclusion about local DNS resolution seems to be wrong. I'm researching this further but meantime please look in the comments where Chris Buechler (Principal Engineer with Ubiquiti) has posted about USG's ability to do DNS.

In terms of what's next, it will probably be a second AP-AC-PRO so I can have one at either end of the house. After that I will probably look at configuring some serious outdoor coverage via the UniFi Mesh devices. There's a huge amount to like about Ubiquiti products, but the ability to add new bits in as budget allows is one that I really appreciate.

Cheers -- Graham

Build a Raspberry Pi Vehicle Interior Monitor – Mobile Broadband

Posted by Graham Smith on July 20, 2017

In this blog series I'm documenting my maker journey as I build a Raspberry Pi-based vehicle interior monitor (PiVIM). Please refer to the previous posts in this series for the story so far:

In this post I'm configuring PiVIM with mobile broadband connectivity. At this stage I don't yet know whether I will connect to PiVIM to query its status or whether I'll have PiVIM push notifications out (for example by SMS), however either way I do know I need some sort of connectivity. Setting the Raspberry Pi up as a WiFi hotspot would be a neat solution; however, since I need a range of up to 1 km, I ruled this option out in favour of mobile broadband.

Let's Get Physical

My first task was to choose the physical mobile broadband device. A Google search for raspberry pi mobile broadband turns up quite a few hits for Huawei mobile USB dongles and what seem to be quite a lot of configuration steps to get them to work. However a friend recommended the ZTE range of USB dongles and I ended up buying a ZTE MF730M for testing purposes. This is a 3G unit and is well under half the cost of the ZTE MF823 4G unit, however at some point I'll upgrade to the 4G version since it's more flexible. I was prepared for a painful experience getting it working, but on an updated version of the latest Raspbian Jessie the ZTE MF730M just worked in true plug-and-play fashion.

In order to get the ZTE MF730M working I needed a SIM. I wanted to avoid a plan where monthly credit would be lost if it weren't used, since PiVIM won't get much use in winter but will get a lot of use in summer. The Three network have a PAYG SIM which fits the bill perfectly since the credit doesn't expire if it goes unused. In the UK these can be bought from Tesco for £0.99. You'll need to install it into the dongle and leave it to activate (somewhere it can get a signal) before registering the mobile number on the Three website and adding credit.

Mobile Broadband Status

If all I wanted to do in this project was to use my mobile broadband dongle then the good news in the plug-and-play department would make for a very short blog post. I don't just want to use the dongle though, I want to display information about its status (network type, signal strength etc) on my Display-O-Tron control panel. The ZTE MF devices incorporate a web page (accessible at http://m.home) that displays status information, as well as functionality that allows the management of a phonebook and the ability to send SMSs:

It turns out that this web page gets its data via a REST API and it's possible to tap into this API to retrieve information programmatically. It's easy to see the API being used from a browser's developer tools (on the Network tab in the Chromium version that ships with Raspbian), however the good people on this GitHub site have taken the trouble to document some of the commands and have some example code.

I used their code as a starting point and created a Python class to return the status of the mobile dongle via instance attributes. You can find the code on my PiVIM GitHub site as mobile_broadband.py and there is an accompanying mobile_broadband_debug.py file that has code to put the class through its paces. The Python class minus a few docstrings is as follows:

I'm only returning three instance variables but clearly the code can easily be amended to return as many as are needed. One slightly ugly feature of my code is a hard-coded response from the REST API to cater for when the mobile dongle isn't plugged in. My code should probably throw an exception if the mobile dongle isn't plugged in; however, when it is plugged in it's potentially using credit, which I don't really want during debugging. So for the time being I'll live with my hack.
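To give a feel for the approach (the real module is on GitHub; the endpoint and field names below are the ones commonly documented for ZTE dongles and may need adjusting for your device and firmware), the class boils down to a requests call and a handful of attributes:

```python
import requests

API_URL = 'http://m.home/goform/goform_get_cmd_process'  # assumed ZTE endpoint


class MobileBroadbandStatus:
    """Query the ZTE dongle's REST API and expose a few values as attributes."""

    def __init__(self):
        params = {
            'cmd': 'network_provider,network_type,signalbar',
            'multi_data': '1',
        }
        headers = {'Referer': 'http://m.home/index.html'}
        try:
            data = requests.get(API_URL, params=params,
                                headers=headers, timeout=5).json()
        except requests.RequestException:
            # Dongle not plugged in (or not reachable): fall back to dummy data
            data = {'network_provider': 'No dongle',
                    'network_type': '',
                    'signalbar': '0'}
        self.network_provider = data.get('network_provider')
        self.network_type = data.get('network_type')
        self.signal_bars = data.get('signalbar')
```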

Screen Scraping for Remaining Credit

One piece of data that doesn't seem to be available from the REST API is the credit remaining on the SIM. In my case though it is available by logging in to the three.co.uk website with the SIM phone number and password and navigating to the Account balance page. There's no API in use on this website as far as I can tell, so retrieving the actual value is down to screen scraping. Python has several libraries that can help here and I've been using requests and the BeautifulSoup class that's part of bs4. Long story short, I've burned numerous hours trying to make this work and so far have drawn a blank. The problem is in authenticating properly with the Three website so that navigating to another page is successful. Although this aspect is work in progress I'm mentioning it because in a roundabout way I learned what I think are two great Python tips:

  • If you find that a Python library fails to install on Windows with the standard pip command it might well be that a compilation step failed. In this case you can try downloading an already compiled version of the library from the Unofficial Windows Binaries for Python Extension Packages site. (Note that AFAIK the 32/64-bit versions relate to the version of Python you are running and not whether you are running 32 or 64-bit Windows. Unless you have gone out of your way to install 64-bit Python you're probably running the 32-bit version.) Open a command prompt where you downloaded the file and type pip install followed by the first few characters of the library, then use tab completion to complete the library name. Using this technique, pip installs the library from your download rather than from the Internet.
  • Jupyter Notebooks are great for working with code on a ‘trial and error' basis where you want to repeatedly evaluate the output of a statement without having to run the whole program every time. For me this was working out which BeautifulSoup syntax would return the value of an HTML element that I was interested in:

    In the example notebook above, once the first four code blocks have been run I can repeatedly run the fifth block until I get the correct syntax for the statement that returns the authenticity_token. It's a real time saver over working in a more traditional code editor where the whole program needs to be run each time. You can find a good guide to getting started with Jupyter Notebooks here.
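For anyone curious, the kind of probing I was doing in the notebook looks roughly like this (the URL and element names are illustrative—the Three login page has its own):

```python
import requests
from bs4 import BeautifulSoup

LOGIN_URL = 'https://www.three.co.uk/login'  # illustrative URL

session = requests.Session()

# Fetch the login page so the hidden anti-forgery token can be pulled out
response = session.get(LOGIN_URL)
soup = BeautifulSoup(response.text, 'html.parser')

# The statement I kept re-running in the notebook until the syntax was right
token_input = soup.find('input', attrs={'name': 'authenticity_token'})
authenticity_token = token_input['value'] if token_input else None
print(authenticity_token)
```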

Hopefully I'll have time to pick up this screen scraping challenge again in the future. Meantime, if you are in the UK and fancy a crack at this then all you need to do is buy a £0.99 123 SIM from Tesco, pop it in your phone to activate over the Three network and then register the SIM on the Three website.

Tune in next time when I turn my attention to the hot topic of temperature measurement!

Cheers -- Graham

Build a Raspberry Pi Vehicle Interior Monitor – Screen Test

Posted by Graham Smith on July 2, 2017

In this blog series I'm documenting my maker journey as I build a Raspberry Pi-based vehicle interior monitor (PiVIM). Please refer to the previous post in this series for an overview of the project:

In this post I'm getting started by configuring the Raspberry Pi with a mini display which will act as a control panel. The unit I chose to put through its paces was Pimoroni's Display-O-Tron HAT (DotHAT):

Why did I choose DotHAT? Actually for no other reason than I've seen it in action and I was impressed, and it's a reasonably low cost component given the functionality on offer.

Tour Starts Here

In order to make full use of the DotHAT you need to be aware that the HAT is actually a composite of several bits of hardware. The obvious component is a 16×3 character LCD display, and this is complemented by a six-zone RGB backlight, an array of six LEDs and six capacitive touch buttons (think joystick controls). Each component can be programmed separately as required—or not as the case may be.

Physical installation—as with all HATs—is straightforward as it just sits on the GPIO pins. An initial concern was whether the DotHAT would require the BCM 4 GPIO pin that I was planning to use for the DS18B20 temperature sensor however the DotHAT pinout shows that it's not used.

In order to easily control the components of the DotHAT, Pimoroni have created high-level Python libraries that wrap the lower-level libraries that interface with the hardware—a function reference is provided here. Installation of these (and supporting) libraries is straightforward with just one line of code:

As always it's best to make sure your OS is up-to-date first. The above command will ask if you want to install the example code and I definitely recommend this so you can see for yourself the highly creative ways you can use the DotHAT. One of the examples is a fully-functioning Internet radio with a menu system driven by the capacitive buttons—very impressive. Do take the time to run the examples and explore the code as there is oodles of functionality to play with.

PiVIM Control Panel

In my PiVIM project I'm planning to use the DotHAT as a control panel to display information about PiVIM's status such as current vehicle temperature, mobile broadband signal strength, error messages and so on. At this point I don't know exactly what items I want to display, however I do know it will need to be fairly simple so I probably won't use the menu feature for example. Instead I will most likely write information as either Left or Right-aligned and to each of the three rows of the LCD (Top, Middle, Bottom), giving six locations to write to:

In order to simplify writing to the six locations I wrote my own module with functions such as message_left_top() and message_right_middle(). You can find the code in my PiVIM-py GitHub repository as PiVIM-py/pivim/control_panel.py. There is also a PiVIM/control_panel_debug.py module which contains some code to put control_panel through its paces. The core ‘message' functions are straightforward, however there is an issue to be aware of when writing a new message that is shorter than the previous one: fragments of the previous message will still be displayed. I envisage updating all six positions together in a loop and will get round this problem by calling the clear_screen() function before each iteration. If you are doing something different you'll need to code accordingly.
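To give a flavour of the message functions, here is a simplified sketch (not the exact contents of control_panel.py) using the lcd module from Pimoroni's dothat library:

```python
import dothat.lcd as lcd

SCREEN_WIDTH = 16  # the DotHAT LCD is 16 characters wide


def clear_screen():
    lcd.clear()


def message_left_top(text):
    # Row 0, starting at the left-hand edge
    lcd.set_cursor_position(0, 0)
    lcd.write(text)


def message_right_middle(text):
    # Row 1, right-aligned by offsetting the cursor by the message length
    lcd.set_cursor_position(SCREEN_WIDTH - len(text), 1)
    lcd.write(text)
```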

One interesting touch (pun intended) I added was to configure the left and right capacitive buttons to turn the backlight on and off respectively. With battery life in mind I then took this a step further by implementing a function that creates a thread which calls a timer to turn the backlight off after a delay:
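The auto-off behaviour is essentially a threading.Timer that turns the backlight off after a delay; a minimal sketch of the idea (again using the dothat library, and not the exact code from control_panel.py) is:

```python
from threading import Timer

import dothat.backlight as backlight


def backlight_auto_off(delay_seconds=30):
    """Turn the backlight off after a delay without blocking the caller."""
    def turn_off():
        backlight.rgb(0, 0, 0)  # set all zones to black, i.e. off

    timer = Timer(delay_seconds, turn_off)
    timer.daemon = True  # don't keep the program alive just for this timer
    timer.start()
    return timer
```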

The display_config() function (not shown above) also calls backlight_auto_off() to help conserve battery life.

Future Functionality

In the interests of YAGNI that's all the functionality I'm going to write for the time being, however I do have some ideas for the future. One exciting possibility is concerned with how I represent mobile broadband signal strength. The DotHAT supports the full ASCII character set of course, but intriguingly it also supports up to eight custom glyphs. So on the one hand I could use, for example, asterisks to represent signal strength. On the other though, I could have a go at creating the sort of glyphs that are used to represent signal strength ‘bars' on mobile phones. If you are already itching to have a go at this the documentation is here. Until next time—happy coding!

Cheers -- Graham

Build a Raspberry Pi Vehicle Interior Monitor – Overview

Posted by Graham Smith on June 20, 2017

Over the past year or so I've been teaching myself whole new areas of learning based around the Raspberry Pi, including Linux, GPIO programming, basic electronics and Windows 10 IoT Core. I'm now at a point where I'm ready to build something that might be half useful, and I thought it might be helpful to someone if I blogged about my fledgling maker journey.

For my first project I'm going to build a Raspberry Pi Vehicle Interior Monitor—PiVIM. The idea is that on the odd occasion when we need to leave our dogs in the car for a few minutes, PiVIM will provide extra reassurance that all the ventilation and safety measures we've provided (windows partially open, tailgate open but secured with a Ventlock Tailgate Lock type device, reflective windscreen shade and so on) are actually working, through some sort of messaging to our mobile phones.

Important notice: The aim of PiVIM is only to provide extra reassurance on top of an already very cautious approach to reluctantly leaving dogs in the car for very short periods. Dogs die in hot cars!

With the sombre stuff out of the way, sure you can buy something off the shelf, but where's the fun in that? Making something from scratch offers an opportunity to learn a whole new set of skills, and in a new series of blog posts I'm planning to share my journey building PiVIM. In this first post I'm setting out the big picture—the features I hope to incorporate into PiVIM and the developer tools I'll be using.

PiVIM Features

Here's a list of potential features that I'm considering for this project:

  • Temperature measurement. The key requirement for this project is to monitor the temperature of a vehicle's interior. A popular component for temperature measurement is the DS18B20. This comes as a small three-pin unit that looks like a transistor and also a waterproof version with the sensor embedded in a metal tube at the end of an attached wire. The waterproof version looks most useful for my project due to ruggedness and flexibility of being on the end of a wire.
  • Mobile connectivity. Since PiVIM will need to work in remote locations it will need a mobile internet connection. There's a cost to this of course, and I want to keep costs as low as possible. One of the problems with most mobile broadband plans is that they are based on a monthly data allowance and at the end of each monthly period any unused allowance is lost. Given that PiVIM might be used a lot in summer and very little in winter such a plan would likely be wasteful and uneconomic. Happily the Three network have a PAYG SIM where the data allowance doesn't expire if it goes unused. I'm planning to partner this SIM with either the ZTE MF730 3G USB dongle or the ZTE MF823 4G USB dongle, and both, if Google searches are anything to go by, should work with the Raspberry Pi.
  • Data access. Related to mobile connectivity is how to access the data that PiVIM generates. In addition to sending SMS alerts, the options that I'm considering are to store all data locally and make it accessible via a website running on the Pi, or to upload it to somewhere like Microsoft Azure and access it from there. Lots of research needed here, not least because although I have plenty of experience with Microsoft Azure, right now I have no idea if it's possible to host a website on a Raspberry Pi that's accessible via a mobile broadband connection.
  • Battery powered. Although PiVIM could use a vehicle's 12V power supply via a USB adaptor the cabling would be messy and a dedicated battery feels more suitable. Tests with a RAVPower 22000mAh portable charger and a Raspberry Pi Model B with camera attached showed that the RAVPower could keep the Pi going for at least 36 hours (I stopped the test before the RAVPower was fully drained) so a unit like this feels like it will be a good choice. It would also be useful to have some power management system to monitor the battery's charge status.
  • Onboard display. I want to be able to see some basic information about PiVIM whilst it's running—mobile broadband signal strength, current temperature and so on. I've seen the Pimoroni Display-O-Tron HAT used for this purpose and was impressed, so that will probably be my starting point.
  • Power button. Raspberry Pis don't come with a power button and if left connected they will also gradually drain a battery even when powered down so I'll want some sort of solution to these problems.
  • Camera pictures. More of a nice to have rather than a necessity, but since the Raspberry Pi has a very handy camera module available as an accessory I might try and see if it's viable to access pictures over mobile broadband.
  • Robust case. The PiVIM internals will need to be well protected so some sort of robust case will be essential. It will need to be able to house the battery as well as the Pi, ZTE USB dongle and the Display-O-Tron. Current thinking is an electrical junction box such as the one here might be a good starting point, with the Display-O-Tron screwed to the exterior surface of the lid and connected to the Pi with something like the Pimoroni Mini Black HAT Hack3r.
  • Raspberry Pi model. I'll be prototyping on a Pi 3 Model B but might switch to a lower-powered board when it comes to building something that will be used out in the field.

Development Environment and Tools

I'll be starting off coding in Python, however a developer friend has very good things to say about developing with Kotlin for the Raspberry Pi so I'll probably try my hand at a Kotlin port once I have a Python version working.

In an ideal world I'd do all development directly on the Pi since there will be quite a lot of Python libraries that are talking directly to Pi hardware or to hardware attached to the Pi. In practice though I find that the development experience on the Pi doesn't give me what I want either in terms of performance or in the coding tools I want to use. Since I do a lot of work with Microsoft technologies my current development workstation is running Windows 10 and I use scp to push code out to the Pi which is running in headless mode on my local network. My configuration is as follows:

  • Windows 10 Pro with the Windows Subsystem for Linux (WSL) installed and a registry setting to ‘Open Bash window here'.
  • I used to go to the trouble of giving my Pis fixed IP addresses so I could always be certain which one I was connecting to. I don't bother now and instead have Bonjour Print Services for Windows installed so that I can remote to a Pi using the hostname.local format. This works a treat in applications such as FileZilla and PuTTY. Unfortunately there is currently a bug in WSL which stops this from working. WSL is still in beta so hopefully this will be fixed soon.
  • I do find it's worth configuring SSH to use certificate authentication to avoid having to deal with passwords, and have the same certificate set up for both Windows 10 and WSL.
  • Python obviously needs to be installed—I just go for the latest version from the website here which also installs pip.
  • One of the issues with Python development is that if you don't do anything about it packages are installed globally. This creates problems if you need to create or edit Python code that needs a specific version of a package, or indeed Python itself. The solution to this is to use virtual environments courtesy of Virtualenv and (on Windows)  virtualenvwrapper-win. There's a great guide to configuring and using virtual environments on Windows here.
  • I'm using Git for version control and the Python version of PiVIM is on my GitHub site here.
  • My lightweight code editor of choice is Visual Studio Code. It's free and Python is fully supported with the help of Don Jayamanne's Python extension. The best way to start Visual Studio Code if you are using virtual environments is from the command line of a virtual environment using code . (make sure you don't miss off the period). Whilst you are at the command line make sure you install pylint (pip install pylint) into your virtual environment along with any other packages your code needs.
  • My heavyweight IDE of choice is Visual Studio. A free version is available and it's got a huge amount of support for Python via the Python tools. Whilst I don't use it on a daily basis for Python development it's great for remote debugging using the ptvsd package. Anyone who's used Visual Studio to develop .NET applications will love and appreciate the debugging experience and there are details on how to set up this awesomeness here.
  • I have FileZilla and PuTTY installed and have them configured to connect to my Raspberry Pi devices using SSH and certificate authentication. I have a bash script under version control on my Windows 10 workstation file system which I run from WSL (one of the handy things about WSL is that it can see the Windows 10 file system). The bash script uses scp to copy Python files to the Pi, after which I switch to PuTTY to run the code. A bit clunky but it works. (UPDATE: I've stopped using the bash script as it was too cumbersome. I now clone my code from GitHub to the Pi and then, in a PuTTY connection to the Pi—after having pushed code to GitHub—I run a command such as git pull && python3 module_to_run.py.)

That's it for now! Watch out for my next post in this series where I'll be getting stuck in to the details.

Cheers—Graham

Continuous Delivery with Containers – Say Goodbye to IIS Express and LocalDB, with Visual Studio 2017, Docker and Windows Containers

Posted by Graham Smith on May 10, 2017

A view I've heard expressed a few times recently, and which I completely agree with, is that we need to be discovering problems with our applications as far to the left as possible, since it's much cheaper to fix problems there than further down the line towards—or even in—production. So with this in mind, is it just me who feels slightly uneasy that in the Visual Studio world the development and debugging of applications destined for Windows servers tends to be on Windows desktop machines, using lightweight counterparts of server applications such as IIS Express to host ASP.NET websites and LocalDB to host SQL Server databases? With this setup it seems like we could be storing up trouble for later in the pipeline...

Whether my unease is justified or not, I need feel troubled no more since the world of containers offers us a solution! Since Docker for Windows now supports Windows Containers and Visual Studio 2017 has Docker support built in, we can now develop server applications on Windows 10 and run and debug them on the exact same operating systems they will run on in production.

In this post I take my version of Contoso University that I've been using for several years now and amend it so that in the developer inner loop phase (ie everything that happens before code is checked in to the build server) the website runs in a Windows Server 2016 container running IIS (rather than IIS Express) and the SQL Server Database Project runs on SQL Server 2016 (rather than LocalDB).

Development Environment

The world of containers is evolving rapidly and the tooling might have changed by the time you read this. At the time of writing my environment is as follows:

  • Windows 10 Professional version 1703 (OS Build 15063.250)
  • Visual Studio Enterprise 2017 version 15.1 (26403.7) with the ASP.NET and web development workload
  • Docker for Windows 17.03.1-ce running Windows containers (I recommend the stable channel as at the time of writing the edge version had a bug that caused a problem for Docker support in Visual Studio)

Depending on the speed of your internet connection you might want to docker pull the following images if you are planning on following along:

It's perhaps worth saying here that I'm using these images for convenience because they are available on Docker Hub. In a production scenario you probably wouldn't want to rely on an image as fully formed as microsoft/aspnet and you would probably start with microsoft/windowsservercore or microsoft/nanoserver and have full control of what is installed. You definitely wouldn't start with microsoft/mssql-server-windows-developer of course.

The Contoso University sample application is essentially the same as Microsoft's version except I've changed the database from Entity Framework Code First to a SQL Server Database Project. I've also changed the application to work with SQL Server authentication (rather than Windows authentication) thus removing the need for a domain controller to supply a domain account. You can get the starting point code from here and the final code here.

Adding Initial Docker Support

The first step towards Dockerizing Contoso University is to add initial Docker support for the ASP.NET web application (out-of-the-box support for SQL Server Database Projects isn't available). This is as simple as right-clicking the ContosoUniversity.Web project and choosing Add > Docker Support. This has three main visible effects:

  • A new docker-compose ‘project' is added at Solution level and is made the Startup Project. This project contains several .yml files.
  • A Dockerfile file and a (nested) .dockerignore file are added to ContosoUniversity.Web.
  • The toolbar button that normally launches a browser has now switched to launching Docker:

The Dockerfile added to ContosoUniversity.Web is based on the microsoft/aspnet image so at this point you should now be able to run the application using the Docker toolbar button and have the website run in a Windows Container based on that image. The database side of things isn't working at this stage of course—Web.config is pointing to LocalDB and the container running the website can't see LocalDB.

To understand what has been created, open a PowerShell session and run docker images followed by docker ps. You should see that an image called contosouniversity.web has been created with a dev tag, and that this image has been used to create a container called something like dockercompose362878786_contosouniversity.web_1.

Adding Docker Support for the SQL Server Database Project

Adding Docker support for the SQL Server Database Project requires the following steps:

  1. Manually add a Dockerfile file and .dockerignore file to the root of ContosoUniversity.Database. Given that these files don't have file extensions and that database projects are quite prescriptive about what they think you should be adding it's easier to add them outside of Visual Studio and then add them in as existing items. (Note that if you are using Windows Explorer you'll need to create .dockerignore as .dockerignore.—Windows will drop the trailing period).
  2. Optionally, close Visual Studio and reopen the solution folder in a text editor such as Visual Studio Code. Open ContosoUniversity.Database.sqlproj and search for the Dockerfile and .dockerignore entries. Change them to look as follows to achieve the nested file effect in Visual Studio:
  3. .dockerignore just needs to contain an asterisk—meaning everything should be ignored.
  4. Dockerfile should contain the following code:
  5. Switching to the docker-compose ‘project', docker-compose.yml should be amended to the following:
  6. A change is also needed to docker-compose.vs.debug.yml which should be amended to the following:

At this point you should be able to run the application using the Docker toolbar button and again see the website running—in a Windows container. However this time a second image (contosouniversity.database, tagged with dev) and corresponding container (named something like dockercompose362878786_contosouniversity.database_1) will have been created, with the container now running SQL Server. This is a newly minted instance of SQL Server and doesn't have a database for our website to connect to, which is the next issue to address.

Connecting the Contoso University Website to its Database

These next steps assume you are following on from the previous section, ie that the website is open in a browser and that Visual Studio is still debugging.

  1. Leave the browser open but stop debugging in Visual Studio.
  2. In ContosoUniversity.Web edit Web.config so that the connection string Data Source points to contosouniversity.database:
  3. In a PowerShell session, find the IP address of the container running SQL Server using docker inspect and passing in enough of the container's ID to make it unique:
  4. In ContosoUniversity.Database edit ContosoUniversity.publish.xml so that the Target database connection points to the IP address of the SQL Server container and change the authentication to SQL Server Authentication. The User Name should be sa (yes—I know) and the password should be the same as the one specified in the Dockerfile used to build the database image. Save the profile and then click Publish.
  5. Back in the web browser running the Contoso University website, click on one of the menu bar links (eg Departments) that causes a database query. If everything has worked you should now have a fully functioning application.

Understanding the Developer Inner Loop Workflow

At this point we have achieved our aim of running and debugging both the website and database components of Contoso University in containers running operating systems that are the same as would be used in production. Once the images and containers have been created they will—as far as my testing is concerned—continue to be used as long as nothing changes. This is the case even if Visual Studio, Docker or even the workstation are restarted. The great thing is that any changes made to the containers—for example updating the database schema—will be preserved. Of course, if something changes in one of the Dockerfile files the images and containers will be rebuilt and in the case of the database the publish file will need to be updated with a new IP address and the database will need to be published again from scratch. Also, if the solution is cleaned (ie Build > Clean Solution) the containers are removed and rebuilt, again necessitating publishing the database from scratch. Overall though, the developer inner loop workflow feels quite slick.

Next Steps

As things stand the compose and Dockerfile files are not ready to be used in a continuous delivery pipeline. The website Dockerfile for example has Contoso University being deployed as the Default Web Site rather than a ContosoUniversity website and the database Dockerfile doesn't cater for any persistent storage. There is also the problem of checking in the database project's publish profile with an IP address specific to one developer's workstation—a real pain for other developers. I'll address these issues as part of getting Contoso University working in a Docker-based continuous delivery pipeline in the next post in this series.

Cheers -- Graham

Continuous Delivery with Containers – Azure CLI Command for Creating a Docker Release Pipeline with VSTS Part 2

Posted by Graham Smith on March 14, 2017

In my previous post I described my experience of working through Microsoft's Continuous Integration and Deployment of Multi-Container Docker Applications to Azure Container Service tutorial which is a walkthrough of how to use an Azure CLI 2.0 command to create a VSTS deployment pipeline to push Docker images to an Azure Container Registry and then deploy and run them on an Azure Container Service running a DC/OS cluster. Whilst it's great to be able to issue some commands and have stuff magically appear it's unlikely that you would use this approach to create production-grade infrastructure: having precise control over naming things is one good reason. Another problem with commands that create infrastructure is that you don't always get a good sense of what they are up to, and that's what I found with the az container release create command.

So I spent quite a bit of time ‘reverse engineering' az container release create in order to understand what it's doing, and in this post I describe, step-by-step, how to build what the command creates. In doing so I gained first-hand experience of what I think will be an important pattern for the future -- running VSTS agents in a container. If your infrastructure is in place it's quick and easy to set up, and if you want more agents it takes just seconds to scale to as many as you need. In fact, once I had figured out what was going on I found that working with Azure Container Service and DC/OS was pretty straightforward and even a great deal of fun. Perhaps it's just me but I found being able to create 50 VSTS agents at the ‘flick of a switch' put a big smile on my face. Read on to find out just how awesome all this is...

Getting Started

If you haven't already worked through Microsoft's tutorial and my previous post I strongly recommend those as a starting point so you understand the big picture. Either way, you'll need to have the Azure CLI 2.0 installed and also to have forked the sample code to your own GitHub account and renamed it to something shorter (I used TwoSampleApp). My previous post has all the details. If you already have the Azure CLI installed do make sure you've updated it (pip install azure-cli --upgrade) since version 2.0 was recently officially released.

Creating the Azure Infrastructure

You'll need to create the following infrastructure in Azure:

  • A dedicated resource group (not strictly necessary but helps considerably with cleaning up the 30+ resources that get created).
  • An Azure container registry.
  • An Azure container service configured with a DC/OS cluster.

The Azure CLI 2.0 commands to create all this are as follows:

The az acs create command in particular is doing a huge amount of work behind the scenes, and if configuring a container service for a production environment you'd most likely want greater control over the names of all the resources that are created. I'm not worried about that here and the output of these commands is fine for my research purposes. If you do want to delve further you can examine the automation script for the top level resources these commands create.

Configuring VSTS

Over in your VSTS account you'll need to attend to the following items:

  • Create a new team project (I called mine TwoServiceApp) configured for Git. (A new project isn't strictly necessary but it helps when cleaning up.)
  • Create an Agent Pool called TwoServiceApp. You can get to the page that manages agent pools from the agent queues tab of your team project:
  • Create a service endpoint of type Github that grants VSTS access to your GitHub account. The procedure is detailed here -- I used the personal access token method and called the connection TwoServiceAppGh.
  • Create a service endpoint of type Docker Registry that grants access to the Azure container registry created above. I describe the process in this blog post and called the endpoint TwoServiceAppAcr.
  • Create a personal access token (granting permission to all scopes) and store the value for later use.
  • Ensure the Docker Integration extension is installed from the Marketplace.

Create a VSTS Agent

This is where the fun begins because we're going to create a VSTS agent in DC/OS using a Docker container. Yep -- you read that right! If you've only ever created an agent on ‘bare metal' servers then you need to forget everything you know and prepare for awesomeness. Not least because if you suddenly feel that you want a dozen agents a quick configuration setting will have them created for you in a flash!

The first step is to configure your workstation to connect to the DC/OS cluster running in your Azure container service. There are several ways to do this but I followed these instructions (Connect to a DC/OS or Swarm cluster; Create an SSH tunnel on Windows) to configure PuTTY to create an SSH tunnel. The host name will be something like azureuser@twoserviceappacsmgmt.westeurope.cloudapp.azure.com (you can get the master FQDN from the overview blade of your Azure container service and the default login name used by az acs create is azureuser) and you will need to have created a private key in .ppk format using PuTTYGen. Once you have successfully connected (you actually SSH to a DC/OS master VM) you should be able to browse to these URLs:

  • DC/OS -- http://localhost
  • Marathon -- http://localhost/marathon
  • Mesos -- http://localhost/mesos

If you followed the Microsoft tutorial then much of what you see will be familiar, although there will be nothing configured of course. To create the application that will run the agent you'll need to be in Marathon:

Clicking Create Application will display the configuration interface:

Whilst it is possible to work through all of the pages and enter in the required information, a faster way is to toggle to JSON Mode and paste in the following script (overwriting what's there):

You will need to amend some of the settings for your environment:

  • id -- choose an appropriate name for the application (note that /vsts-agents/ creates a folder for the application).
  • VSTS_POOL -- the name of the agent pool created above.
  • VSTS_TOKEN -- the personal access token created above.
  • VSTS_ACCOUNT -- the name of your VSTS account (ie if the URL is https://myvstsaccount.visualstudio.com then use myvstsaccount).

It will only take a few seconds to create the application after which you should see something that looks like this:

For fun, click on the Scale Application button and enter a number of instances to scale to. I scaled to 50 and it literally took just a few seconds to configure them all. The result, which is pretty awesome in my book for just a few seconds' work, looked like this:

Scaling down again is even quicker -- pretty much instant in Marathon and VSTS was very quick to get back to displaying just one agent. With the fun over, what have we actually built here?

The concept is that rather than configure an agent by hand in the traditional way, we are making use of one of the Docker images Microsoft has created specifically to contain the agent and build tools. You can examine all the different images from this page on Docker Hub. Looking at the Marathon configuration code above in the context of the instructions for using the VSTS agent images, it's hopefully clear that the configuration is partially around hosting the image and creating the container, and partially around passing variables into the container to configure the agent to talk to your VSTS account and a specific agent pool.

Create a Build Definition

We're now at a point where we can switch back to VSTS and create a build definition in our team project. Most of the tasks are of the Docker Compose type and you can get further details here. Start with an empty process and name the definition TwoServiceApp. On the Options tab set the Default agent queue to be TwoServiceApp. On the tasks tab in Get sources configure the build to point to your GitHub account:

Now add and configure the following tasks (only values that need adding or amending, or which need a special mention are listed):

Task #1 -- Docker Compose
  • Display name = Build repository
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.ci.build.yml
  • Action = Run a specific service image
  • Service name = ci-build

Save the definition and queue a build. The source code will be pulled down and then the instructions in the ci-build node of docker-compose.ci.build.yml will be executed which will cause service-b to be built.

Task #2 -- Docker Compose
  • Display name = Build service images
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Qualify Image Names = checked
  • Action = Build service images
  • Additional Image Tags = $(Build.BuildId) $(Build.SourceBranchName) $(Build.SourceVersion) (on separate lines)
  • Include Source Tags = checked
  • Include Latest Tag = checked

Save the definition and queue a build. The addition of this task causes Docker images to be created in the agent container for service-a and service-b.

Task #3 -- Docker Compose
  • Display name = Push service images
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Qualify Image Names = checked
  • Action = Push service images
  • Additional Image Tags = $(Build.BuildId) $(Build.SourceBranchName) $(Build.SourceVersion) (on separate lines)
  • Include Source Tags = checked
  • Include Latest Tag = checked

Save the definition and queue a build. The addition of this task causes the Docker images to be pushed to the Azure container registry.

Task #4 -- Docker Compose
  • Display name = Write service image digests
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Qualify Image Names = checked
  • Action = Write service image digests
  • Image Digest Compose File = $(Build.StagingDirectory)/docker-compose.images.yml

Save the definition and queue a build. The addition of this task creates immutable identifiers for the previously built images which provide a guaranteed way of referring back to a specific image in the container registry. The identifiers are stored in a file called docker-compose.images.yml, the contents of which will look something like:
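The exact contents depend on your registry and builds, but a representative sketch (with the login server and digests as placeholders, assuming a registry whose login server is twoserviceappacr.azurecr.io) is:

    version: '2'
    services:
      service-a:
        image: twoserviceappacr.azurecr.io/twoserviceapp_service-a@sha256:<digest>
      service-b:
        image: twoserviceappacr.azurecr.io/twoserviceapp_service-b@sha256:<digest>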

Task #5 -- Docker Compose
  • Display name = Combine configuration
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Docker Compose File = **/docker-compose.yml
  • Additional Docker Compose Files = $(Build.StagingDirectory)/docker-compose.images.yml
  • Qualify Image Names = checked
  • Action = Combine configuration
  • Remove Build Options = checked

Save the definition and queue a build. The addition of this task creates a new docker-compose.yml that is a composite of the original docker-compose.yml and docker-compose.images.yml. The contents will look something like:
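Again this is a sketch rather than a verbatim copy -- the digests are placeholders and the mycache image is an assumption carried over from the original docker-compose.yml:

    version: '2'
    services:
      service-a:
        image: twoserviceappacr.azurecr.io/twoserviceapp_service-a@sha256:<digest>
        ports:
          - "8080:80"
      service-b:
        image: twoserviceappacr.azurecr.io/twoserviceapp_service-b@sha256:<digest>
      mycache:
        image: redis   # assumption - use whatever image the original docker-compose.yml specifies for mycache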

This is the file that is used by the release definition to deploy the services to DC/OS.

Task #6 -- Copy Files
  • Display name = Copy Files to: $(Build.StagingDirectory)
  • Contents = **/docker-compose.env.*.yml
  • Target Folder = $(Build.StagingDirectory)

Save the definition but don't bother queuing a build since, as things stand, this task doesn't have any files to copy over. Instead, the task comes into play when using environment files (see later).

Task #7 -- Publish Build Artifacts
  • Display name = Publish Artifact: docker-compose
  • Path to Publish = $(Build.StagingDirectory)
  • Artifact Name = docker-compose
  • Artifact Type = Server

Save the definition and queue a build. The addition of this task creates the build artefact containing the contents of the staging directory, which happen to be docker-compose.yml and docker-compose.images.yml, although only docker-compose.yml is needed. The artifact can be downloaded of course so you can examine the contents of the two files for yourself.

Create a Release Definition

Create a new empty release definition and configure the Source to point to the TwoServiceApp build definition, the Queue to point to the TwoServiceApp agent queue and check the Continuous deployment option:

With the definition created, edit the name to TwoServiceApp, rename the default environment to Dev and rename the default phase to AcsDeployPhase:

Add a Docker Deploy task to the AcsDeployPhase and configure it as follows (only values that need changing are listed):

  • Display Name = Deploy to ACS DC/OS
  • Docker Registry Connection = TwoServiceAppAcr (or the name of the Docker Registry endpoint created above if different)
  • Target Type = Azure Container Service (DC/OS)
  • Docker Compose File = **/docker-compose.yml
  • ACS DC/OS Connection Type = Direct

The final result should be as follows:

Trigger a release and then switch over to DC/OS (ie at http://localhost) and the Services page. Drill down through the Dev folder and the three services defined in docker-compose.yml should now be deployed and running:

To complete the exercise the Dev environment can now be cloned (click the ellipsis in the Dev environment to show the menu) to create Test and Production environments with manual approvals. If you want to view the sample application in action follow the View the application instructions in the Microsoft tutorial.

At this point there is no public endpoint for the production instance of TwoServiceApp. To remedy that follow the Expose public endpoint for production instructions in the Microsoft tutorial. Additionally, you will need to amend the production version of the Docker Deploy task so the Additional Docker Compose Files section contains docker-compose.env.production.yml.

Final Thoughts

Between Microsoft's tutorial and my two posts relating to it you have seen a glimpse of the powerful tools that are available for hosting and orchestrating containers. Yes, this has all been using Linux containers but indications are that similar functionality -- if perhaps not using exactly the same tools -- is on the way for Windows containers. Stay tuned!

Cheers -- Graham

Continuous Delivery with Containers – Azure CLI Command for Creating a Docker Release Pipeline with VSTS Part 1

Posted by Graham Smith on January 30, 2017 | 6 Comments (click here to comment)

One of the aims of my blog series on Continuous Delivery with Containers is to try and understand how best to use Visual Studio Team Services with Docker, so I was very interested to learn that Azure CLI 2.0 has a command to create a VSTS deployment pipeline to push Docker images to an Azure Container Registry and then deploy and run them on an Azure Container Service running a DC/OS cluster. Even better, Microsoft have written a tutorial (Continuous Integration and Deployment of Multi-Container Docker Applications to Azure Container Service) on how to use this command.

Whilst I'm somewhat sceptical about using generic scaffolding tooling to create production-ready workloads (I find that the naming conventions used are usually unsuitable, for example) there is no doubt that they are great for quickly building proofs of concept and also for learning (what are hopefully!) best practices. It was with this aim that, armed with a large cup of tea, I sat down one afternoon to plough my way through the tutorial. It was a great learning experience, however I went down some blind alleys to get the pipeline working and then ended up doing quite a lot of head scratching (due to my ignorance I hasten to add) to fully understand what had been created.

So in this post I'm writing up my experience of working through the tutorial with notes that I hope will help anyone else using it. In a follow-up post I'll attempt to document what the az container release create command actually creates and configures. Just a reminder that with this tutorial we're still very much in the Linux container world. Whilst this might be frustrating for those eager to see advanced tutorials based on Windows containers the learning focus here is mostly Docker and VSTS so the fact that the containers are running Linux shouldn't put you off.

On a final note before we get started, I'm using a Windows 10 Professional workstation with the beta version (1.13.0 at the time of writing) of Docker for Windows installed and running.

Getting Started with the Azure CLI

The tutorial requires version 2.0 of Azure CLI which is based on Python. The Azure CLI installation documentation suggests running Azure CLI in Docker but don't go down that path as it's a dead end as far as the tutorial is concerned. Instead follow these installation steps:

  1. Install the latest version of Python from here.
  2. From a command prompt upgrade pip (package management system for Python) using the python -m pip install --upgrade pip command.
  3. Install Azure CLI 2.0 using pip install azure-cli. (If you have previously installed Azure CLI 2.0 you should check for an upgrade using pip install azure-cli --upgrade.)
  4. Check Azure CLI is working using the az command. You should see this:

The next step is to actually log in to the Azure CLI. The process is as follows:

  1. At a command prompt type az login.
  2. Navigate to https://aka.ms/devicelogin in a browser.
  3. Enter the one-time authentication code supplied by the az login command.
  4. Complete the authentication process using your Azure credentials.

If you have multiple subscriptions you may need to set the default subscription:

  1. At the command prompt type az account list to show details of all your accounts.
  2. Each account has an isDefault property which will tell you the default account.
  3. If you need to make a change use az account set --subscription <Id> -- you can copy and paste the subscription Id from the accounts list.

Creating the Azure Container Service Cluster with DC/OS

This step is pretty straightforward and the tutorial doesn't need any further explanation. My commands to create the resource group and the ACS cluster were:
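Based on the resource names used later in this post (TwoServiceAppRg and TwoServiceAppAcs), the commands will have looked something like this; the location is only an example, so pick a region where you have spare core quota (see below):

    az group create --name TwoServiceAppRg --location westeurope
    az acs create --name TwoServiceAppAcs --resource-group TwoServiceAppRg --orchestrator-type dcos --generate-ssh-keys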

Be aware that the az acs create command results in a request to provision 18 cores. This might exceed your quota for a given region, even if you have previously contacted Microsoft Support to request an increase in the total number of cores allowed for your subscription (which you might have to do anyway if you have cores already provisioned). I found that choosing a region where I didn't have any cores provisioned fixed a quotaExceeded exception that I was getting.

For simplicity I used the --generate-ssh-keys option to save having to do this manually. This creates id_rsa and id_rsa.pub files (ie a private / public key pair) in C:\Users\<username>\.ssh.

A word of warning -- if you are using an Azure subscription with MSDN credits be aware that an ACS cluster will eat your credits at an alarming rate. As of the time of writing this post I've not found a reliable way of turning everything off and turning it back on again with everything fully working (specifically the build agent). Consequently I tend to delete the resource group and the VSTS project when I'm finished using them and then recreate them from scratch when I next need them. If you do this do be aware that if you have multiple Azure subscriptions the az account set --subscription <Id> command to set the default subscription can't be relied upon to be ‘sticky', and you can find yourself creating stuff in a different subscription by mistake.

Working with the Sample Code

The tutorial uses sample code that consists of an Angular.js-based web app (with a Node.js backend) that calls a separate .NET Core application, and these are deployed as two separate services. The problem I found was that the name of the GitHub repo (container-service-dotnet-continuous-integration-multi-container) is extremely long and is used to name some of the artefacts that get created by the Azure CLI container release command. This makes for some very unwieldy names which I found somewhat irksome. You can fix this as follows:

  1. Fork the sample code to your own GitHub account.
  2. Switch to the Settings tab:
  3. Use the Rename option to give the forked repo a more manageable name -- I chose TwoServiceApp.
  4. Clone the repo to your workstation in your preferred way -- for me this involved opening a command prompt at C:\Source\GitHub and running git clone https://github.com/GrahamDSmith/TwoServiceApp.git.

At this point it's probably a good idea to get the sample app working locally which will help with understanding how multi-container Docker deployments work. If you want to examine the source code then Visual Studio Code is an ideal tool for the job. To run the application the first step is to build the .NET Core component. At a command prompt at the root of the application run the following command:
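Given the file and service names described below, the command is along these lines:

    docker-compose -f docker-compose.ci.build.yml run ci-build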

This runs docker-compose with a specific .yml file, and executes the instructions at the ci-build node. The really neat thing about this command is that it uses a Docker container to build the .NET Core app (service-b), which means your workstation doesn't need .NET Core to be installed for this to work. Looking at the key parts of the docker-compose.ci.build.yml file:

  • image: microsoft/dotnet:1.0.0-preview2.1-sdk -- this specifies that this particular Microsoft official Docker image for .NET Core on Linux should be used.
  • volumes: ./service-b:/src -- this causes the local service-b folder on your workstation to be ‘mirrored' to a folder named src in the container that will be created from the microsoft/dotnet:1.0.0-preview2.1-sdk image.
  • working_dir: /src -- set the working directory in the container to src.
  • command: /bin/bash -c "dotnet restore && dotnet publish -c Release -o bin ." -- this is the command to build and publish service-b.

Because the service-b folder on your workstation is mirrored to the src folder in the running container the result of the build command is copied from the container to your workstation. Pretty nifty!
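Putting those bullets together, docker-compose.ci.build.yml has roughly this shape (a sketch reconstructed from the points above rather than a verbatim copy):

    version: '2'
    services:
      ci-build:
        image: microsoft/dotnet:1.0.0-preview2.1-sdk
        volumes:
          - ./service-b:/src
        working_dir: /src
        command: /bin/bash -c "dotnet restore && dotnet publish -c Release -o bin ."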

To actually run the application now run this command:
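This is the standard compose start-up command, something like:

    docker-compose up

Add -d if you prefer to run the containers detached from the command prompt.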

By convention docker-compose will look for a docker-compose.yml file so there is no need to specify it. On examining docker-compose.yml it should be pretty easy to see what's going on -- three services (service-a, service-b and mycache) are specified and service-a and service-b are built according to their respective Dockerfile instructions. Both service-a and service-b containers are set to listen on port 80 at runtime and in addition service-a is accessible to the host (ie your workstation) on port 8080. Consequently, you should be able to navigate to http://localhost:8080 in your browser and see the app running.
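As a rough sketch of the shape of that file (the mycache image is an assumption, so check the actual file for the specifics):

    version: '2'
    services:
      service-a:
        build: ./service-a
        ports:
          - "8080:80"
      service-b:
        build: ./service-b
      mycache:
        image: redis   # assumption - use whatever image the sample actually specifies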

Creating the Deployment Pipeline

This step is straightforward and the tutorial doesn't need any further explanation. One extra step I included was to create an Azure Container Registry instance in the same resource group used to create the Azure Container Service. Despite repeated attempts, for some reason I couldn't create this at the command line so I ended up creating it through the portal. The command though should look similar to this:
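With a current version of the CLI the command has this general shape; the --sku parameter arrived after the preview period, so treat this as illustrative rather than the exact preview syntax, and the registry name is simply chosen to match the endpoint name used later:

    az acr create --name TwoServiceAppAcr --resource-group TwoServiceAppRg --sku Basic --admin-enabled true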

To facilitate easy teardown I also created a dedicated project in VSTS called TwoServiceApp. The command to create the pipeline (GitHub token made up of course) was then as follows:
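The flag names below are from memory and changed between CLI previews, so treat this purely as an illustration of the shape of the command and check az container release create --help for the definitive list:

    az container release create --target-name TwoServiceAppAcs --target-resource-group TwoServiceAppRg --remote-url https://github.com/<your-github-account>/TwoServiceApp.git --remote-access-token <github-token> --vsts-account-name <your-vsts-account> --vsts-project-name TwoServiceApp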

This command results in the creation of build and release definitions in VSTS (along with other supporting items) and a deploy of the image to a Dev environment.

Viewing the Application

To view the application as deployed to the Dev environment you need to launch the DC/OS dashboard. The tutorial instructions are easy to follow, however you might get tripped up by the instructions for configuring Pageant since they direct you to "Launch PuttyGen and load the private SSH key used to create the ACS cluster (%homepath%\id_rsa)". On my machine at least the id_rsa file was created at %homepath%\.ssh\id_rsa rather than %homepath%\id_rsa. If you persist with the instructions you eventually end up running the application in the Dev environment, but if like me you are new to cluster technologies such as DC/OS it all feels like some kind of sorcery.

A final observation here is that the configuration to launch the DC/OS dashboard requires your browser's proxy to be set. This knocked out the Internet connection for all my other browser tabs, and caused a few seconds of alarm when I realised that the tab I was using to edit my WordPress blog wouldn't save. If you launched the DC/OS dashboard from the command line (using az acs dcos browse --name TwoServiceAppAcs --resource-group TwoServiceAppRg) you need to use CTRL+C from the command line to close the session. In an emergency head over to Windows Settings > Network & Internet > Proxy to reset things back to normal.

Until Next Time

That concludes the write-up of my notes for use with the Continuous Integration and Deployment of Multi-Container Docker Applications to Azure Container Service tutorial. If you work through the tutorial and have any further tips that might be of use please do post in the comments.

In the next post I'll start to document what the az container release create command actually creates and configures.

Cheers -- Graham

Continuous Delivery with Containers – Amending a VSTS / Docker Hub Deployment Pipeline with Azure Container Registry

Posted by Graham Smith on December 1, 2016 | No Comments (click here to comment)

In this blog series on Continuous Delivery with Containers I'm documenting what I've learned about Docker and containers (both the Linux and Windows variety) in the context of continuous delivery with Visual Studio Team Services. It's a new journey for me so do let me know in the comments if there is a better way of doing things!

In the previous post in this series I explained how to use VSTS and Docker to build and deploy an ASP.NET Core application to a Linux VM running in Azure. It's a good enough starting point but one of the first objections anyone working in a private organisation is likely to raise is publishing Docker images to the public Docker Hub. One answer is to pay for a private repository in the Docker Hub but for anyone using Azure a more appealing option might be the Azure Container Registry. This is a new offering from Microsoft -- it's still in preview and some of the supporting tooling is only partially baked. The core product is perfectly functional though, so in this post I'm going to amend the pipeline I built in the previous post to use Azure Container Registry and find out how it differs from Docker Hub. If you want to follow along with this post you'll need to make sure you have a working pipeline as I describe in my previous post.

Create an Azure Container Registry

At the time of writing there is no PowerShell experience for ACR so unless you want to use the CLI 2.0 it's a case of using the portal. I quite like the CLI but to keep things simple I'm using the portal. For some reason ACR is a marketplace offering so you'll need to add it from New > Marketplace > Containers > Container Registry (preview). Then follow these steps:

  1. Create a new resource group that will contain all the ACR resources -- I called mine PrmAcrResourceGroup.
  2. Create a new standard storage account for the ACR -- I called mine prmacrstorageaccount. Note that at the time of writing ACR is only available in a few regions in the US and the storage account needs to be in the same region. I chose West US.
  3. Create a new container registry using the resource group and storage account just created -- I called mine PrmContainerRegistry. As above, the registry and storage account need to be in the same location. You will also need to enable the Admin user:
    azure-portal-create-container-registry

Add a New Docker Registry Connection

This registry connection will be used to replace the connection made in the previous post to Docker Hub. The configuration details you need can be found in the Access key blade of the newly created container registry:

azure-portal-container-registry-access-key-blade

Use these settings to create a new Docker Registry connection in the VSTS team project:

vsts-services-endpoints-azure-container-registry

Amend the Build

Each of the three Docker tasks that form part of the build need amending as follows:

  • Docker Registry Connection = <name of the Azure Container Registry connection>
  • Image Name = aspnetcorelinux:$(Build.BuildNumber)
  • Qualify Image Name = checked

One of the most crucial amendments turned out to be the Qualify Image Name setting. The purpose of this setting is to prefix the image name with the registry hostname, but if left unchecked it seems to default to Docker Hub. This causes an error during the push as the task tries to push to Docker Hub which of course fails because the registry connection has authenticated to ACR rather than Docker Hub:

vsts-docker-push-error

It was obvious once I'd twigged what was going on but it had me scratching my head for a little while!
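To make the behaviour concrete, here is a rough docker CLI equivalent of what the push task ends up doing; the login server name is assumed from the registry created above and the build number is just an example of $(Build.BuildNumber):

    # unqualified name - docker assumes Docker Hub, which the ACR connection can't authenticate against
    docker push aspnetcorelinux:20161201.1
    # qualified name - prefixed with the ACR login server, which is what the Qualify Image Name checkbox achieves
    docker tag aspnetcorelinux:20161201.1 prmcontainerregistry.azurecr.io/aspnetcorelinux:20161201.1
    docker push prmcontainerregistry.azurecr.io/aspnetcorelinux:20161201.1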

Final Push

With the amendments made you can now trigger a new build, which should work exactly as before except now the docker image is pushed to -- and run from -- your ACR instance rather than Docker Hub.

Your next question is probably going to be how can I get a list of the repositories I've created in ACR? Don't bother looking in the portal since -- at the time of writing at least -- there is no functionality there to list repositories. Instead one of the guys at Microsoft has created a separate website which, once authenticated, shows you this information:

acr-portal

If you want to do a bit more you can use the CLI 2.0. The syntax to list repositories for example is az acr repository list -n <Azure Container Registry name>.

It's early days yet, however ACR is looking like a great option for anyone needing a private container registry and for whom an Azure option makes sense. Do have a look at the documentation and also at Steve Lasker's Connect(); video here.

Cheers -- Graham

Continuous Delivery with Containers – Use Visual Studio Team Services and Docker to Build and Deploy ASP.NET Core to Linux

Posted by Graham Smith on October 27, 2016 | 8 Comments (click here to comment)

In this blog series on Continuous Delivery with Containers I'm documenting what I've learned about Docker and containers (both the Linux and Windows variety) in the context of continuous delivery with Visual Studio Team Services. The Docker and containers world is mostly new to me and I have only the vaguest idea of what I'm doing so feel free to let me know in the comments if I get something wrong.

Although the Windows Server Containers feature is now a fully supported part of Windows it is still extremely new in comparison to containers on Linux. It's not surprising then that, even in the world of the Visual Studio developer, the tooling is most mature for deploying containers to Linux, and that's why I chose this as my starting point for doing something useful with Docker. As I write this the documentation for deploying containers with Visual Studio Team Services is fragmented and almost non-existent. The main references I used for this post were:

However to my mind none of these blogs cover the whole process to any satisfactory depth and in any case they are all somewhat out of date. In this post I've therefore tried to piece all of the bits of the jigsaw together that form the end-to-end process of creating an ASP.NET Core app in Visual Studio and debugging it whilst running on Linux, all the way through to using VSTS to deploy the app in a container to a target node running Linux. I'm not attempting to teach the basics of Docker and containers here and if you need to get up to speed with this see my Getting Started post here.

Install the Tooling for the Visual Studio Development Inner Loop

In order to get your development environment properly configured you'll need to be running a version of Windows that is supported by Docker for Windows and have the following tooling installed:

You'll also need a VSTS account and an Azure subscription.

Create an ASP.NET Core App

I started off by creating a new Team Project in VSTS called Containers and then, from the Code tab, creating a New repository using Git called AspNetCoreLinux:

vsts-code-new-repository

Over in Visual Studio I then cloned this repository to my source control folder (in my case to C:\Source\VSTS\AspNetCoreLinux as I prefer a short filepath) and added .gitignore and .gitattributes files (see here if this doesn't make sense) and committed and synced the changes. Then from File > New > Project I created an ASP.NET Core Web Application (.NET Core) application called AspNetCoreLinux using the Web Application template (not shown):

visual-studio-create-new-asp-net-core-application

Visual Studio will restore the packages for the project after which you can run it with F5 or Ctrl+F5.

The next step is to install support for Docker by right-clicking the project and choosing Add > Docker Support. You should now see that the Run dropdown has an option for Docker:

visual-studio-run-dropdown

With Docker selected and Docker for Windows running (with Shared Drives enabled!) you will now be running and debugging the application in a Linux container. For more information about how this works see the resources on the Visual Studio Tools for Docker site or my list of resources here. Finally, if everything is working don't forget to commit and sync the changes.

Provision a Linux Build VM

In order to build the project in VSTS we'll need a build machine. We'll provision this machine in Azure using the Azure driver for Docker Machine which offers a very neat way for provisioning a Linux VM with Docker installed in Azure. You can learn more about Docker Machine from these sources:

To complete the following steps you'll need the Subscription ID of the Azure subscription you intend to use which you can get from the Azure portal.

  1. At a command prompt enter the docker-machine create command to provision the build VM (a sketch is shown after this list):

    By default this will create a Standard A2 VM running Ubuntu called vstsbuildvm (note that "Container names must be 3-63 characters in length and may contain only lower-case alphanumeric characters and hyphen. Hyphen must be preceded and followed by an alphanumeric character.") in a resource group called VstsBuildDeployRG in the West US datacentre (make sure you use your own Azure Subscription ID). It's fully customisable though and you can see all the options here. In particular I've added the option for the VM to be created with a static public IP address as without that there's the possibility of certificate problems when the VM is shut down and restarted with a different IP address.
  2. Azure now wants you to authenticate. The procedure is explained in the output of the command window, and requires you to visit https://aka.ms/devicelogin and enter the one-time code:
    command-prompt-docker-machine-create
    Docker Machine will then create the VM in Azure and configure it with Docker and also generate certificates at C:\Users\<yourname>\.docker\machine. Do have a poke around the subfolders of this path as some of the files are needed later on and it will also help to understand how connections to the VM are handled.
  3. This step isn't strictly necessary right now, but if you want to run Docker commands from the current command prompt against the Docker Engine running on the new VM you'll need to configure the shell by first running docker-machine env vstsbuildvm. This will print out the environment variables that need setting and the command (@FOR /f "tokens=*" %i IN ('docker-machine env vstsbuildvm') DO @%i) to set them. These settings only persist for the life of the command prompt window so if you close it you'll need to repeat the process.
  4. In order to configure the internals of the VM you need to connect to it. Although in theory you can use the docker-machine ssh vstsbuildvm command to do this in practice the shell experience is horrible. Much better is to use a tool like PuTTY. Donovan Brown has a great explanation of how to get this working about half way down this blog post. Note that the folder in which the id_rsa file resides is C:\Users\<yourname>\.docker\machine\machines\<yourvmname>. A tweak worth making is to set the DNS name for the server as I describe in this post so that you can use a fixed host name in the PuTTY profile for the VM rather than an IP address.
  5. With a connection made to the VM you need to issue the following commands to get it configured with the components to build an ASP.NET Core application:
    1. Upgrade the VM with sudo apt-get update && sudo apt-get dist-upgrade.
    2. Install .NET Core following the instructions here, making sure to use the instructions for Ubuntu 16.04.
    3. Install npm with sudo apt -y install npm.
    4. Install Bower with sudo npm install -g bower.
  6. Next up is installing the VSTS build agent for Linux following the instructions for Team Services here. In essence (ie do make sure you follow the instructions) the steps are:
    1. Install the Ubuntu pre-requisites using sudo apt-get install -y libunwind8 libcurl3.
    2. Create and switch to a downloads folder using sudo mkdir Downloads && cd Downloads.
    3. At the Get Agent page in VSTS select the Linux tab and the Ubuntu 16.04-x64 option and then the copy icon to copy the URL download link to the clipboard:
      vsts-download-agent-get-agent
    4. Back at the PuTTY session window type sudo wget followed by a space and then paste the URL from the clipboard. Run this command to download the agent to the Downloads folder.
    5. Go up a level using cd .. and then make and switch to a folder for the agent using sudo mkdir myagent && cd myagent.
    6. Extract the compressed agent file to myagent using sudo tar zxvf ~/Downloads/vsts-agent-ubuntu.16.04-x64-2.108.0.tar.gz (note the exact file name will likely be different).
    7. Configure the agent using ./config.sh after first making sure you have created a personal access token to use. I created my agent in a pool I created called Linux.
    8. Configure the agent to run as a service using sudo ./svc.sh install and then start it using sudo ./svc.sh start.
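For reference, the docker-machine command in step 1 has this general shape; the flags are the documented Azure driver options and the subscription ID is a placeholder:

    docker-machine create --driver azure --azure-subscription-id <your-subscription-id> --azure-resource-group VstsBuildDeployRG --azure-static-public-ip vstsbuildvm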

If the procedure was successful you should see the new agent showing green in the VSTS Agent pools tab:

vsts-agent-pools

Provision a Linux Target Node VM

Next we need a Linux VM we can deploy to. I used the same syntax as for the build VM calling the machine vstsdeployvm:
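In other words something like this, assuming the same resource group and a placeholder subscription ID:

    docker-machine create --driver azure --azure-subscription-id <your-subscription-id> --azure-resource-group VstsBuildDeployRG --azure-static-public-ip vstsdeployvm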

Apart from setting the DNS name for the server as I describe in this post there's not much else to configure on this server except for updating it using sudo apt-get update && sudo apt-get dist-upgrade.

Gearing Up to Use the Docker Integration Extension for VSTS

Configuration activities now shift over to VSTS. The first thing you'll need to do is install the Docker Integration extension for VSTS from the Marketplace. The process is straightforward and wizard-driven so I won't document the steps here.

Next up is creating three service end points -- two of the Docker Host type (ie our Linux build and deploy VMs) and one of type Docker Registry. These are created by selecting Services from the Settings icon and then Endpoints and then the New Service Endpoint dropdown:

vsts-services-endpoints-docker

To create a Docker Host endpoint:

  1. Connection Name = whatever suits -- I used the name of my Linux VM.
  2. Server URL = the DNS name of the Linux VM in the format tcp://your.dns.name:2376.
  3. CA Certificate = contents of C:\Users\<yourname>\.docker\machine\machines\<yourvmname>\ca.pem.
  4. Certificate = contents of C:\Users\<yourname>\.docker\machine\machines\<yourvmname>\cert.pem.
  5. Key = contents of C:\Users\<yourname>\.docker\machine\machines\<yourvmname>\key.pem.

The completed dialog (in this case for the build VM) should look similar to this:

vsts-services-endpoints-docker-host

Repeat this process for the deploy VM.

Next, if you haven't already done so you will need to create an account at Docker Hub. To create the Docker Registry endpoint:

  1. Connection Name = whatever suits -- I used my name
  2. Docker Registry = https://index.docker.io/v1/
  3. Docker ID = username for Docker Hub account
  4. Password = password for Docker Hub account

The completed dialog should look similar to this:

vsts-services-endpoints-docker-hub

Putting Everything Together in a Build

Now the fun part begins. To keep things simple I'm going to run everything from a single build, however in a more complex scenario I'd use both a VSTS build and a VSTS release definition. From the VSTS Build & Release tab create a new build definition based on an Empty template. Use the AspNetCoreLinux repository, check the Continuous integration box and select Linux for the Default agent queue (assuming you create a queue named Linux as I've done):

vsts-create-new-build-definition

Using Add build step add two Command Line tasks and three Docker tasks:

vsts-add-tasks

In turn right-click all but the first task and disable them -- this will allow the definition to be saved without having to complete all the tasks.

The configuration for Command Line task #1 is:

  • Tool = dotnet
  • Arguments = restore -v minimal
  • Advanced > Working folder = src/AspNetCoreLinux (use the ellipsis to select)

Save the definition (as AspNetCoreLinux) and then queue a build to make sure there are no errors. This task restores the packages specified in project.json.

The configuration for Command Line task #2 is:

  • Tool = dotnet
  • Arguments = publish -c $(Build.Configuration) -o $(Build.StagingDirectory)/app/
  • Advanced > Working folder = src/AspNetCoreLinux (use the ellipsis to select)

Enable the task and then queue a build to make sure there are no errors. This task publishes the application to $(Build.StagingDirectory)/app (which equates to home/docker-user/myagent/_work/1/a/app).

The configuration for Docker task #1 is:

  • Docker Registry Connection = <name of your Docker registry connection>
  • Action = Build an image
  • Docker File = $(Build.StagingDirectory)/app/Dockerfile
  • Build Context = $(Build.StagingDirectory)/app
  • Image Name = <your Docker ID>/aspnetcorelinux:$(Build.BuildNumber)
  • Docker Host Connection = vstsbuildvm (or your Docker Host name for the build server)
  • Working Directory = $(Build.StagingDirectory)/app

Enable the task and then queue a build to make sure there are no errors. If you run sudo docker images on the build machine you should see the image has been created.

The configuration for Docker task #2 is:

  • Docker Registry Connection = <name of your Docker registry connection>
  • Action = Push an image
  • Image Name = <your Docker ID>/aspnetcorelinux:$(Build.BuildNumber)
  • Advanced Options > Docker Host Connection = vstsbuildvm (or your Docker Host name for the build server)
  • Advanced Options > Working Directory = $(System.DefaultWorkingDirectory)

Enable the task and then queue a build to make sure there are no errors. If you log in to Docker Hub you should see the image under your profile.

The configuration for Docker task #3 is:

  • Docker Registry Connection = <name of your Docker registry connection>
  • Action = Run an image
  • Image Name = <your Docker ID>/aspnetcorelinux:$(Build.BuildNumber)
  • Container Name = aspnetcorelinux$(Build.BuildNumber) (slightly different from above!)
  • Ports = 80:80
  • Advanced Options > Docker Host Connection = vstsdeployvm (or your Docker Host name for the deploy server)
  • Advanced Options > Working Directory = $(System.DefaultWorkingDirectory)

Enable the task and then queue a build to make sure there are no errors. If you navigate to the URL of your deployment server (eg http://vstsdeployvm.westus.cloudapp.azure.com/) you should see the web application running. As things stand though if you want to deploy again you'll need to stop the container first.

That's all for now...

Please do be aware that this is only a very high-level run-through of this toolchain and there are many gaps to be filled: how a website works with databases, how to host a website on something other than the Kestrel server used here and how to secure containers that should be private are just a few of the many questions in my mind. What's particularly exciting though for me is that we now have a great solution to the problem of developing a web app on Windows 10 but deploying it to Windows Server, since although this post was about Linux, Docker for Windows supports the same way of working with Windows Server Core and Nano Server (currently in beta). So I hope you found this a useful starting point -- do watch out for my next post in this series!

Cheers -- Graham