Archives for Tips, Tricks and Tools

Ubiquiti WiFi: How I Got Started with this Fantastic Kit on a Modest Budget

Posted by Graham Smith on August 3, 2017

It all started a few weeks ago when I was sat out in the garden on a sunny day with my wife. She was trying to do something on her tablet and was bemoaning the poor WiFi outdoors. At the time I was coincidentally reading an article on WiFi mesh systems, and since WiFi wasn't too great in some parts of indoors either I briefly flirted with the idea of buying something like Google Wifi or BT's Whole Home Wi-Fi. However, on looking into this in more depth none of the products seemed to tick all the boxes, either being very expensive or lacking what I would consider an essential feature. For example, Google Wifi is administered by an app rather than by a browser application. Fine for some perhaps, but not for me thank you.

I thought I could fix things on the cheap and bought a Netgear EX3700 WiFi Range Extender. I used this in both extender mode (I think of this as WiFi in series with the router's WiFi) and in access point (AP) mode via an Ethernet connection (I think of this as WiFi in parallel with the router's WiFi), however I wasn't thrilled with the results. The main gripe was that the mobile devices in my home at least (phones, tablets etc) all wanted to hang on to their existing connection like grim death. So even when standing next to the EX3700 in AP mode blasting out a 100% signal, my phone could still be hanging on to almost no signal from the router. Perhaps it was something wrong with my setup—the EX3700 too close to my router perhaps? Either way it wasn't wholly satisfactory.

Fast forward a couple of weeks and I found myself working through Troy Hunt's excellent Pluralsight course on What Every Developer Must Know About HTTPS. One of the slides had a screenshot of a blog post by Troy on fixing dodgy WiFi on his jet ski with Ubiquiti's UniFi Mesh. I vaguely remembered reading about Ubiquiti somewhere and with my interest piqued I started checking out Troy's blog.

And as it has been with so many others it seems, that's where my love affair began...

Warning! Reading Further WILL Cause you to be Parted from Your Hard-Earned Cash

There are many places on the Internet that eulogise about Ubiquiti products so I'm going to resist the temptation here. These are the key posts I read (specifically about the UniFi range of products) and which I think you will enjoy and find useful and informative:

Make sure you don't miss the video in Troy's first post of him unboxing a load of Ubiquiti kit. This does a great job of explaining what all the main bits of kit are, and if you watch this in conjunction with reading the posts above you'll have a good idea of the key products in the UniFi range.

Needless to say, I was instantly hooked and I wanted in. However my existing WiFi setup wasn't so bad that I could justify spending over £1,000 on new kit. Feeling slightly deflated I continued to research the UniFi range of products, to the point where it dawned on me that you don't need to start off with a big investment, and you don't need to buy every component to make a working system. And so the fun began...

Starting off with an Access Point

My journey began by adding a wireless access point (AP) to my home network. A few things need to be in place to make this work:

  • The first thing of course is an AP. There are several in the UniFi range and like many others I plumped for the AP-AC-PRO on the basis that it was only a little more expensive than the less capable models but vastly cheaper than the AP-AC-HD daddy.
  • Generally speaking APs require an Ethernet connection so you are going to need an Ethernet connection near to where you will site the AP. I'm lucky in that my home had CAT 5e wired in when it was built and I have 40+ sockets all over the house and garage. An alternative would be to run a dedicated cable from your modem/router or, more likely, to use powerline networking over the domestic electricity supply.
  • In addition to Ethernet providing a data connection, UniFi APs also need to get their power over an Ethernet connection (logically known as power over Ethernet—PoE). Although Ubiquiti sell some lovely switches that have PoE ports (see here for an example) you don't actually need one of these because the APs (if you buy them singly at least) ship with a PoE adaptor (the POE-48-24W-G model). As long as you have an electrical power socket near your Ethernet connection you are good to go.
  • The final piece of this jigsaw is the UniFi Controller software. Ubiquiti sell a dedicated device that runs the software (the Cloud Key) but again, you don't need this. The software is free to download and runs happily on the usual platforms—even on the Raspberry Pi. Furthermore, if you are just running an AP the UniFi Controller software doesn't need to be running all the time and can be installed on a PC or a Mac and spun up as and when needed to configure the AP.

Putting all of this together was pretty straightforward. The AP-AC-PRO simply linked into my Ethernet network via the PoE adaptor, and I opted to position it in the middle of the house on top of a unit in our open-plan kitchen / dining room. I have an always-on Windows Server 2012 R2 machine on my network and I installed the UniFi Controller software on that. There are a few considerations to be aware of when running on Windows:

  • Java is a requirement, and whilst the installation wizard takes you to a download page you seem to end up installing 32-bit Java. For reasons I'll explain below you probably don't want this, so instead make sure you download and install the 64-bit version.
  • In its default configuration UniFi Controller doesn't run as a Windows service. It's easy to configure using these instructions (see the sketch after this list), however it only works with 64-bit Java—see above.
  • You access UniFi Controller using a browser (https://localhost:8443 if running locally), however it's not compatible with the browsers that ship with Windows Server 2012 R2 or Windows Server 2016. If this is a problem you can easily get round it by accessing from a different machine, replacing localhost with the machine's IP address or FQDN.
  • UniFi Controller ships with a self-signed SSL certificate which causes browsers to raise warnings. These can be safely bypassed but it does leave the browser address bar looking a bit ugly.
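For reference, a minimal sketch of the service registration steps, assuming the default install location under your user profile, 64-bit Java on the PATH and an elevated prompt:

```powershell
# Run from an elevated PowerShell prompt after installing UniFi Controller.
cd "$env:UserProfile\Ubiquiti UniFi"    # default install location (assumption)
java -jar lib\ace.jar installsvc        # registers the 'UniFi Controller' service
net start "UniFi Controller"            # start it now rather than waiting for a reboot
```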

The UniFi Controller installation wizard is a doddle and doesn't need explaining. At the end of the process you are presented with a nice dashboard:

So far so good, but it's clear that there are a lot of greyed-out features. The fix? Just a bit more expenditure to buy the UniFi Security Gateway, commonly known as the USG.

You Probably Will Want to get a UniFi Security Gateway

That was my initial reaction on seeing the Controller dashboard without the USG. There is a choice between the rackmount USG‑PRO‑4 and the standalone USG. The former is enterprise grade and much more expensive than the USG, which is perfectly adequate for a home network and the one I opted for. There are a few steps to incorporating the USG into your home network and it helps to be clear about which roles each piece of kit will perform once the USG is in and working. In my case I'm on VDSL broadband and my original setup consisted of a Netgear D6400 performing the roles of both modem and router (as well as DHCP and a few other things of course, but I'm keeping it simple). With the USG in the mix, the D6400 is configured to work in modem-only mode and the USG takes on the router function. Crucially in my case, I needed to configure the USG to be the device that supplies the PPPoE credentials my broadband provider needs for a successful connection. This was a bit of a head-scratcher at first since the USG can work in two other modes (DHCP and Static IP) and I wasn't entirely sure how much configuration would be down to the D6400. None, as it turns out.

Because the default D6400 gateway configuration is 192.168.0.1 and the USG is configured as 192.168.1.1, and I wasn't sure what would happen if I changed the USG to 192.168.0.1 as well, I decided to change my network to fit in with the USG. I planned to perform the initial USG configuration directly from my always-on server (running UniFi Controller on Windows Server 2012 R2) which I knew would cause issues with Internet Explorer, so I planned ahead and installed Firefox. I also made sure that my broadband provider's PPPoE credentials were available locally on that box, as well as the credentials to log in to UniFi Controller. The procedure was then as follows:

  • Configure the USG to work in PPPoE mode by attaching it directly to a laptop that did not already have a connection to another gateway (ie WiFi turned off and no Ethernet connected) and running the setup routine by pointing a browser to http://setup.ubnt.com/. This didn't work for me but pointing a browser to http://192.168.1.1 did. An Edit Configuration button allows you to change from the default DHCP setting to PPPoE.
  • Convert the Netgear D6400 from modem/router mode to modem-only mode. This wasn't too hard to find in the advanced settings—you'll have to dig around for this on your own device. At this point you'll lose your broadband connection and, for many devices it seems, the ability to connect to them without performing a factory reset.
  • Because I was planning to bring my wired devices back one-by-one I unplugged everything from my switch and the D6400. I then plugged the machine running UniFi Controller directly into the LAN 1 port. Because this machine had a static IP on the D6400's subnet I changed this temporarily back to DHCP so it could communicate properly with the USG. (I could of course have given it a static IP on the USG's subnet.)
  • In UniFi Controller > Settings > Networks I amended the DHCP Range (I leave space for static IP addresses). You should end up with something like this:
  • After saving the network settings I navigated to UniFi Controller > Devices and located the USG. Under the Actions column I clicked Adopt to configure the USG with the previously defined settings.
  • Following the adoption process, I accessed the USG's properties by clicking its name (not the IP address). On the Configuration tab the WAN section allowed me to supply my ISP's PPPoE credentials and DNS details (I have an OpenDNS account):
  • Once the WAN changes had been provisioned to the USG I connected the WAN port of the USG to an Ethernet port on the D6400 in order to check broadband connectivity and speed. Note that both the WAN and LAN 1 ports should be connected at 1 Gbps. Initially my LAN 1 was showing 100/10 Mbps and it was due to a dodgy cable.
  • With broadband now connected again I took the opportunity of upgrading the USG's firmware using the handy button in the Actions column:
  • The final bit of this configuration was to plug the USG in to my switch (a ZyXEL GS1100-16) and plug my always-on server running UniFi Controller in to the switch and configure it with a static IP address.

With the core configuration completed I reconnected my wired devices one-by-one, fixing up any static IP address issues (due to the change of subnet) where required and giving each device (or client as they are known) a friendly name in UniFi Controller (click a client to open its properties and then navigate to Configuration > General > Alias). With this done the dashboard looks much better:

Troubleshooting and Disaster Recovery

If you do run into problems you can find logs in the UniFi Controller installation folder (C:\Users\<profile name>\Ubiquiti UniFi\logs on Windows). It's also worth enabling Auto Backup from the Settings area. I configured mine to back up every day at 1am and then added C:\Users\<profile name>\Ubiquiti UniFi\data\backup to my CrashPlan configuration. Obviously do whatever works for you.

Outstanding Issues and Future Plans

One facility which I had taken for granted with my Netgear D6400 was some local DNS resolution. I first realised this was an issue when I couldn't get to my Windows Server 2012 R2 machine using its hostname. Long story short, it would appear that many SOHO routers use a tool called Dnsmasq for DNS forwarding and as a DHCP server. This apparently allows Dnsmasq to resolve DHCP client names. The USG doesn't really do DNS (which is fair enough since it's part of an ecosystem where different boxes are expected to do specific jobs) however I've seen a few posts in the forums where some scripting has been used to implement local DNS. It's not a major deal breaker for me and for the time being I've edited the hosts file on my Windows machines whilst I figure out what, if anything, I'm going to do about it.

EDIT: My conclusion about local DNS resolution is wrong. I traced the problem back to static IP addresses, specifically to me assigning static IP addresses from within clients themselves. (Most of my network is DHCP, however there are a few clients on my network which I like to give static IP addresses. Probably pointless though—old habits die hard.) It turns out that if you assign IP addresses from within the clients, DHCP is bypassed (of course) and the IP address doesn't get registered for DNS lookup. (It's something like that anyway.) The procedure to follow instead if you want a known IP address is to use IP address reservations. You can set these from the Properties window of a client by navigating to the Network tab under Configuration. Once I'd done this everything started working!

In terms of what's next, it will probably be a second AP-AC-PRO so I can have one at either end of the house. After that I will probably look at configuring some serious outdoor coverage via the UniFi Mesh devices. There's a huge amount to like about Ubiquiti products, but the ability to add new bits in as budget allows is one that I really appreciate.

Cheers -- Graham

Post Deployment Configuration with the PowerShell DSC Extension for Azure Resource Manager Templates

Posted by Graham Smith on April 28, 2016

As part of a forthcoming blog post I'm writing for my series about Continuous Delivery with TFS / VSTS I want to be able to deploy PowerShell DSC scripts to Windows Server target nodes that both configure servers and deploy my application components. Separately, I want to automate the creation of target nodes so I can easily destroy and recreate them -- great for testing. In this previous post I explained how to do this with Azure Resource Manager templates, however the journey didn't end there since I also wanted to join the nodes to a domain and install Windows Management Framework 5.0 in order to get the latest version of PowerShell DSC. Despite all that the journey still wasn't over, because my server configuration and application deployment technique with PowerShell DSC uses WinRM, which requires target nodes to have their firewalls configured to allow WinRM.

The solution to this problem lies with harnessing the true intended functionality of the PowerShell DSC Extension. Although you can just use it to install WMF, its real purpose is to run DSC configurations after the VM has been deployed. The configuration I used was as follows:
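A minimal sketch of such a configuration, assuming the built-in Script resource and the Set-NetFirewallProfile cmdlet (the configuration and resource names are placeholders):

```powershell
Configuration PostDeploymentConfig
{
    # Built-in resources only -- no custom firewall resource required.
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node localhost
    {
        Script DisableDomainFirewall
        {
            SetScript  = { Set-NetFirewallProfile -Profile Domain -Enabled False }
            TestScript = { (Get-NetFirewallProfile -Profile Domain).Enabled -eq 'False' }
            GetScript  = { @{ Result = (Get-NetFirewallProfile -Profile Domain).Enabled } }
        }
    }
}
```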

As you can see, rather than create any firewall rules I chose to simply turn the domain firewall off. The main reason is simplicity: creating firewall rules with DSC needs a custom resource, which adds another layer of complexity to the problem. Although another option is to use netsh commands to create firewall rules, in my case I have no issues with turning the firewall off.

The next step is to package this config into a zip file and make it available on a publicly accessible URL. GitHub is one possible location that can be used to host the zip but I chose Azure blob storage. The Publish-AzureVMDscConfiguration cmdlet exists to help here, and can create the zip locally for onward transfer to GitHub (for example) or it can push it straight to Azure blob storage. I was using the latter route of course, although I found that I couldn't get the cmdlet to work with premium storage and ended up creating a standard storage account. The code is as follows:
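A minimal sketch of the push-to-blob-storage route, assuming the classic Azure PowerShell module and a placeholder storage account name:

```powershell
$StorageAccountName = "mystorageaccount"                       # placeholder
$StorageAccountKey  = "<storage account key from the portal>"

# Build a storage context from the account name and key
$context = New-AzureStorageContext -StorageAccountName $StorageAccountName `
                                   -StorageAccountKey $StorageAccountKey

# Zip up PostDeploymentConfig.ps1 (plus any modules it needs) and push the
# archive straight to blob storage
Publish-AzureVMDscConfiguration -ConfigurationPath ".\PostDeploymentConfig.ps1" `
                                -StorageContext $context -Force
```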

The storage account key is copied from the Azure Portal via Storage account > $StorageAccount$ > Settings > Access keys. Don't try using mine as I've invalidated it. I should point out that I couldn't get this command to work consistently and it would sometimes error. I did get it to work eventually but I didn't manage to pin down the problem. The net effect of successfully running this code is a file called PostDeploymentConfig.ps1.zip in blob storage. As things stand though this file isn't accessible, and its container (windows-powershell-dsc is created by default) needs to have its access policy changed from Private to Blob.
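The access policy can also be changed with the storage cmdlets rather than the portal—a sketch, reusing the $context from above:

```powershell
# Allow anonymous read access to blobs (but not to container listing)
Set-AzureStorageContainerAcl -Name "windows-powershell-dsc" -Permission Blob -Context $context
```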

With that done it's time to amend the JSON template. The dscExtension resource that was added in this post should now look as follows:
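A sketch of the general shape of such a resource, with placeholder variable names, storage URL and extension version:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmName'), '/dscExtension')]",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.19",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "ModulesUrl": "https://mystorageaccount.blob.core.windows.net/windows-powershell-dsc/PostDeploymentConfig.ps1.zip",
      "ConfigurationFunction": "PostDeploymentConfig.ps1\\PostDeploymentConfig",
      "WmfVersion": "5.0"
    }
  }
}
```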

I've chosen to hard code the ModulesUrl and ConfigurationFunction settings because I won't need to change them but they can of course be parameterised. That's all there is to it, and the result is a VM that is completely ready to have its internals configured by PowerShell DSC scripts over WinRM. If you want to download the code that accompanies this post it's on my GitHub site as a release here.

Cheers -- Graham

Install Windows Management Framework 5.0 with Azure Resource Manager Templates

Posted by Graham Smith on April 9, 2016

In a recent post on my blog series about Continuous Delivery with TFS / VSTS I mentioned that I was having to manually install Windows Management Framework 5.0 after creating a Windows server via ARM templates, as it was a necessary precursor to running my PowerShell DSC configuration. I also mentioned that automating the install was on my to-do list. But no more!

It turns out that the PowerShell DSC extension for ARM templates will perform the installation, and that there's no need to actually run a DSC configuration if you don't need to -- just specify "WmfVersion": "5.0" in the settings section. The JSON to add to your ARM template should look similar to this:
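A sketch with placeholder variable names and extension version:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmName'), '/InstallWmf5')]",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.17",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "WmfVersion": "5.0"
    }
  }
}
```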

I say similar because the code is configured to use the variables in my template, however you can see the full template to get the context on my GitHub Infrastructure repo here.

Many thanks to Zach Alexander and the PowerShell Team for pointing me in the right direction!

Cheers -- Graham

Version Control PowerShell Scripts with Visual Studio and Visual Studio Team Services

Posted by Graham Smith on January 6, 2016

It's a new year and whilst I'm not a big fan of New Year's resolutions I do try and use this time of year to have a bit of a tidy-up of my working environments and adopt some better ways of working. Despite being a developer who's used version control for years to manage application source code, one thing I've been guilty of for some time now is not version controlling my PowerShell scripts. Shock, horror I know -- but I'm pretty sure I'm not alone. In this post I'll be sharing how I solved this, but first let's take a quick look at the problem.

The Problem

Unless you are a heavyweight PowerShell user and have adopted a specialist editing tool, chances are that you are using the PowerShell ISE (Integrated Scripting Environment) that comes with Windows for editing and running your scripts. Whilst it's a reasonably capable editor it doesn't have integration points to any version control technologies. Consequently, if you want version control and you want to continue using the ISE you'll need to manage version control from outside the ISE -- which in my book isn't the seamless experience I'm used to. No matter; if you can live without a seamless experience the next question in this scenario is what version control technologies can be used? Probably quite a few, but since Git is the hot topic of the day how about GitHub -- the hosted version of Git that's free for public repositories? Ideal since it's hosted for you and there's the rather nice GitHub Desktop to make things slightly more seamless. Hang on though -- if you are like me you probably have all sorts of stuff in your PowerShell scripts that you don't want being publicly available on GitHub. Not passwords or anything like that, just inner workings that I'd rather keep private. Okay, so not GitHub. How about running your own Git server? Nah...

A Solution

If you are a Visual Studio developer then tools you are already likely using offer one solution to this problem. And if you aren't a Visual Studio developer then the same tools can still be used -- very possibly for free. As you've probably already guessed from the blog title, the tools I'm suggesting are Visual Studio (2015, for the script editing experience) and Visual Studio Team Services (VSTS, for version control). Whoa -- Visual Studio supports PowerShell as a language? Since when did that happen? Since Adam Driscoll created the PowerShell Tools for Visual Studio extension, that's when.

The aim of this post is to explain how to use Visual Studio and VSTS to version control PowerShell scripts rather than understand how to start using those tools, so if you need a primer then good starting points are here for Visual Studio and here for VSTS. The great thing is that both these tools are free for small teams. If you want to learn about PowerShell Tools for Visual Studio I have a Getting Started blog post with a collection of useful links here. In my implementation below I'm using Git as the version control technology, so please amend accordingly if you are using TFVC.

Implementing the Solution

Now we know that our PowerShell scripts are going to be version controlled by VSTS the next thing to decide is where in VSTS you want them to reside. VSTS is based around team projects, and the key decision is whether you want your scripts located together in one team project or whether you want scripts to live in different team projects -- perhaps because they naturally belong there. It's horses for courses so I'll show both ways.

If you want your scripts to live in associated team projects then you'll want to create a dedicated Git repository to hold the Visual Studio solution. Navigate to the team project in VSTS and then to the Code tab. Click on the down arrow next to the currently selected repository and in the popup that appears click on New repository:

[Screenshot: creating a new Git repository in VSTS]

A Create a new repository dialogue will appear -- I created a new Git repository called PowerShellScripts. You can ignore the Add some code! call to action as we'll address this from Visual Studio.

Alternatively, if you want to go down the route of having all your scripts in one team project then you can simply create a new team project based on Git -- called PowerShellScripts for example. The newly created project will contain a repository of the same name putting you in the same position as above.

The next step is to switch to Visual Studio and ensure you have the PowerShell Tools for Visual Studio 2015 extension installed. It's possible you do since it can be installed as part of the Visual Studio installation routine, although I can't remember whether it's selected by default. To check if it's installed navigate to Tools > Extensions and Updates > Installed > All and scroll down:

[Screenshot: PowerShell Tools for Visual Studio in the Extensions and Updates dialog]

If you don't see the extension you can install it from Online > Visual Studio Gallery.

With that done it's time to connect to the team project. Still within Visual Studio, from Team Explorer choose the green Plug icon on the menu bar and then Manage Connections, and then Connect to Team Project:

[Screenshot: Manage Connections in Team Explorer]

This brings up the Connect to Team Foundation Server dialog which (via the Servers button) allows you to register your VSTS subscription as a 'server' (the format is https://yoursubscriptionname.visualstudio.com). Once connected you will be able to select your Team Project.

Next up is cloning the repository that will hold the Visual Studio solution. If you are using a dedicated team project with just one Git repository you can just click the Home icon on the Team Explorer menu bar to get the cloning link on the Home panel:

[Screenshot: Clone this repository link in Team Explorer]

If you have created an additional repository in an existing team project you will need to expand the list of repositories and double-click the one you previously created:

[Screenshot: selecting a repository in Team Explorer]

This will take you directly to the cloning link on the Home panel -- no need to click the Home icon. Whichever way you get there, clicking the link opens up the settings to clone the repository to your local machine. If you are happy with the settings click Clone and you're done.

Solutions, Projects and Files

At the moment we are connected to a blank local repository, and the almost final push is to get our PowerShell scripts added. These will be contained in Visual Studio Projects that in turn are contained in a Visual Studio Solution. I'm a bit fussy about how I organise my projects and solutions -- I'll show you my way but feel free to do whatever makes you happy.

At the bottom of the Home tab click the New link, which brings up the New Project dialog. Navigate to Installed > Templates > Other Project Types > Visual Studio Solutions. I want to create a Blank Solution that is the same name as the repository, but I don't want a folder of the same name to be created which Visual Studio gives me no choice about. A sneaky trick is to provide the Name but delete the folder (of the same name) from the Location text box:

[Screenshot: creating a blank solution in Visual Studio]

Take that Visual Studio! PowerShellScripts.sln now appears in the Solutions list of the Home tab and I can double-click it to open it, although you will need to manually switch to the Solution Explorer window to see the opened solution:

[Screenshot: Solution Explorer showing the opened solution]

The solution has no projects so right-click it and choose Add > New Project from the popup menu. This is the same dialog as above and you need to navigate to Installed > Templates > Other Languages > PowerShell and select PowerShell Script Project. At this point it's worth having a think about how you want to organise things. You could have all your scripts in one project, but since a solution can contain many projects you'll probably want to group related scripts into their own project. I have a few scripts that deal with authorisation to Azure so I gave my first project the name Authorisation.Azure. Additional projects I might need are things like DSC.Azure and ARM.Azure. It's up to you and it can all be changed later of course.

The new project is created with a blank Script.ps1 file -- I usually delete this. There are several ways to get your scripts in -- probably the easiest is to move your existing ps1 scripts into the project's folder in Windows Explorer, make sure they have the file names you want and then back in Visual Studio right-click the project and choose Add > Existing Item. You should see your script files and be able to select them all for inclusion in the project.

Don't Forget about Version Control!

We're now at the point where we can start to version control our PowerShell scripts. This is a whole topic in itself however you can get much of what you need to know from my Git with Visual Studio 2015 and TFS 2015 blog post and if you want to know more about Git I have a Getting Started post here. For now though the next steps are as follows:

  • In Team Explorer click on the home button then click Changes. Everything we added should be listed under Included Changes, plus a couple of Git helper files.
  • Add a commit comment and then from the Commit dropdown choose Commit and Sync:
    [Screenshot: the Changes panel in Team Explorer]
  • This has the effect of committing your changes to the local repository and then syncing them with VSTS. You can confirm that from VSTS by navigating to the Code tab and selecting the repository. You should see the newly added files!

Broadly speaking the previous steps are the ones you'll use to check in any new changes that you make, either newly added files or amendments to scripts. Of course the beauty of Git is that if for whatever reason you don't have access to VSTS you can continue to work locally, committing your changes just to the local repository as frequently as makes sense. When you next have access to VSTS you can sync all the changes as a batch.
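Incidentally, the same commit-and-sync cycle can be driven from a PowerShell prompt with the Git command line if you prefer—a sketch, assuming Git is installed and the repository was cloned to C:\Source\PowerShellScripts (a hypothetical path):

```powershell
cd C:\Source\PowerShellScripts        # local clone location (assumption)
git add .                             # stage new and changed scripts
git commit -m "Add Azure authorisation scripts"
git push origin master                # sync the local commits up to VSTS
# While offline, keep committing locally and push the whole batch later.
```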

Finally, don't lose sight of the fact that as well as providing version control capabilities Visual Studio allows you to run and debug your scripts courtesy of the PowerShell Tools for Visual Studio 2015 extension. Do be sure to check out my blog post that contains links to help you get working with this great tool.

Cheers -- Graham

Remote Desktop Connections to New-Style Azure VMs – Where Has The DNS Name Gone?

Posted by Graham Smith on September 17, 2015

If there's one thing that's certain about Microsoft Azure it's that it's constantly changing. If your interaction with Azure is through the old portal or through PowerShell this might not be too obvious, however if you've used the new portal to any extent then it's hard to miss. One of the obvious changes is the appearance of 'classic' versions of resources such as Virtual machines (classic). For most people -- me included -- this will immediately pose the question "does classic mean there is a new way of doing things that we should all be using going forward?".

The short answer is 'yes'. The slightly longer answer is that there are now two different ways to interact with Azure: Azure Service Management (ASM) and Azure Resource Manager (ARM). ASM is the classic stuff and ARM is the new world where resources you create live in Resource Groups. The recommendation is to use ARM if you can -- see this episode of Tuesdays with Corey for details.

I've been learning about ARM in recent weeks and one of the first things I did was to create a new VM in the new portal. It's not a whole lot different from the old portal once you get used to the new 'blade' feature, however one key difference between ASM and ARM VMs is that ARM VMs are no longer created under a cloud service. I didn't think much of it until I downloaded the RDP file for my new ARM VM only to find the computer name field was populated with an IP address and a port number. This is in contrast to ASM VMs where the computer name field is populated with the DNS name of the cloud service and a Remote Desktop port number for that VM.

So what's the problem? It's that MSDN users of Azure who need to make sure their credits don't disappear like to deallocate their VMs at the end of a session so they aren't costing anything. But when a VM is deallocated it loses its IP address, only to be allocated a new (almost certainly different) one when the VM is next started. This is a major pain if you are relying on the IP address to form part of the connection details in an RDP file or a Remote Desktop Connection Manager profile. This isn't a problem with ASM, where the DNS name of the cloud service and Remote Desktop port number don't change even if the IP address of the cloud service changes (which it will if all VMs get deallocated).

So what to do? Initial investigations seemed to point to the need for a load balancer, and so (rather reluctantly it has to be said) I started to delve into the details. However it quickly became clear that creating a load balancer and all its associated gubbins (which is quite a lot) was going to be something of a pain for a developer type of guy like me. And then...a breakthrough! Whilst looking at the settings of my VM's Public IP address in the portal I noticed a DNS name label (optional) field with a tooltip that gave the impression that filling this field in would give a DNS name for my VM.

[Screenshot: the DNS name label field for a virtual machine's public IP address in the Azure portal]

So I gave my VM a DNS name label (I used the same name as the VM which fortunately resulted in a unique DNS name) and then changed the computer name of my RDP file to tst-core-dc.westeurope.cloudapp.azure.com. Result -- a successful login! This feels like problem solved for me at the moment and if you face this issue I hope it helps you out.
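For anyone who would rather script the change than click through the portal, a sketch using the AzureRM PowerShell module (the resource group and public IP names are hypothetical):

```powershell
# Fetch the VM's public IP resource (names are placeholders)
$pip = Get-AzureRmPublicIpAddress -ResourceGroupName "tst-core-rg" -Name "tst-core-dc-pip"

# Set a DNS name label, which gives a stable FQDN of the form
# <label>.<region>.cloudapp.azure.com even though the IP itself may change
$pip.DnsSettings = New-Object Microsoft.Azure.Commands.Network.Models.PSPublicIpAddressDnsSettings
$pip.DnsSettings.DomainNameLabel = "tst-core-dc"
Set-AzureRmPublicIpAddress -PublicIpAddress $pip
```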

Cheers -- Graham

Blogging with WordPress

Posted by Graham Smith on March 21, 2015

If you're the sort of IT professional who prefers to actively manage their career rather than just accept what comes along then chances are that deciding to blog could be one of the best career decisions you ever make. Technical blogging is a great focal point for all your learning efforts, a lasting way to give back to the community, a showcase for your talents to potential employers and a great way to make contact with other people in your field. Undecided? Seems most people initially feel that way. Try reading this, this and this.

If you do decide to make the leap one of the best sources of inspiration that I have found is John Sonmez's Free Blogging Course. It's packed full of tips on how to get started and keep going and since there is nothing to lose I wholeheartedly recommend signing up. This is just the beginning though, and it turns out that this is the start of many practical decisions you will need to make in order to turn out good quality blog posts that reach as wide an audience as possible. I thought I'd document my experience here to give anyone just starting out a flavour of what's involved.

Which Blogging Platform?

Once you have decided to blog the first question is likely to be which blogging platform to go for. The top hits for a best blogging platform Google search invariably recommend WordPress, and since there seemed no point in ignoring the very advice I had sought that's what I chose. There's more to it than that though, since there is a choice of WordPress.com and WordPress.org. The former allows you to host a blog for free at WordPress.com, however there are some restrictions on customisation and complications with the domain name. WordPress.org on the other hand is a software package that anyone can host and is fully customisable. If you are an IT professional you are almost certainly going to want all the flexibility, and despite the fact that modest costs are involved, for me WordPress.org was the clear winner.

Hosting WordPress

Next up: where are you going to host WordPress? With super fast broadband and technical skills some people might choose to host themselves at home, but on my rural 1.8 Mbps connection that wasn't an option. I started down the Microsoft Azure route and actually got a working WordPress site up-and-running with my own private Azure account. Whilst I would have enjoyed the flexibility it would have given me I decided that the cost was prohibitive, as I wanted a PaaS solution which means paying for the Azure Website and a hosted MySQL database. It might be cheaper now so don't discount this option if Azure appeals, however I found the costs of a web hosting company to be much more reasonable. There are lots of these and I chose one off the back of a computer magazine review. Obviously you need a company that hosts WordPress at a price you are happy to pay and which offers the level of support you need.

What's in a name?

If you are looking to build a brand and market yourself then your domain name will be pretty important. It could be an amusing twist on your view or niche in the technology world or perhaps your name if it's interesting enough (mine isn't). My core interest is in the technology and processes around deploying software so pleasereleaseme tickled my fancy although I know the reference doesn't make sense to every culture.

As if choosing a second-level domain name isn't hard enough, you also need to choose the top-level domain. Of course to some extent your choice might be limited by what's available, what makes sense or how much you're willing to pay. I chose .net since I have a background as a Microsoft .NET developer. Go figure!

Getting Familiar

I'm assuming that most people reading this blog post are technically minded and so I'm not going to go through the process of getting your WordPress site and domain name up-and-running, and in any case it will vary according to who you choose as a host. It's worth noting though that some companies might do some unwanted low-level configuration to the 'template' used to create your WordPress site, and if you need to reverse any of that you may need to use an FTP tool such as FileZilla to assist with the file editing process. In my case I found I couldn't edit themes and it turned out I needed to make a change to wp-config (which lives in the file system) to turn this on.

With your site now live I recommend taking some time to familiarise yourself with WordPress and go through all the out-of-the-box configuration settings before you start installing any plugins. In particular Settings > General and Settings > Permalinks are two areas to check before you start writing posts. In the former don't forget to set your Site Title and a catchy Tagline. Permalinks in particular is very important for ensuring search engine friendliness. See here for more details but the take home seems to be to use Post name.

One of the bigger decisions you'll need to make is which theme to go for (Appearance > Themes). There's oodles of choice but do choose carefully as your theme will say a lot about your blog. If you can find a free one then great, however there is a cottage industry in paid-for themes if you can't. I'm not at all a flashy type of person so my theme is one of muted tones. It's not perfect but for free who can argue? One day I will probably ask if the author can make some changes for appropriate remuneration. In the meantime I use Appearance > Edit CSS (see Jetpack below) to make a few tweaks to my site after the theme stylesheet has been processed.

Fun with Widgets

In addition to displaying your pages and posts WordPress can also display Widgets which appear to the right of your main content. I change widgets around every so often but as a minimum always have a Text widget with some details about me, widgets from Jetpack (see below) so people can follow me via Twitter, email and RSS, and also the Tag Cloud widget.

Planning Ahead

There's still a little more to do before you begin writing posts but this is probably a good point to start planning how you will use posts and pages and categories and tags. At the risk of stating the obvious, posts are associated with a publish date whilst pages are not. Consequently pages are great for static content and posts for, er, posts. The purpose of categories and tags is perhaps slightly less obvious. The best explanation I have come across is that categories are akin to the table of contents in a book and tags are akin to the index. The way I put all this together is by having themes for my posts. Some themes are tightly coupled, for example my Continuous Delivery with TFS soup-to-nuts series of posts, and some less so, for example my Getting Started series. I use categories to organise my themes and I also use pages as index pages for each category. It's a bit of extra maintenance but useful to be able to link back to them. Using tags then becomes straightforward: I have a different tag for each technology I write about and posts will often have several tags.

Backing Up

Please do get a backup strategy in place before you sink too much effort into configuring your site and writing posts. It's probably not enough to rely on your web hosting provider's arrangements and I thoroughly recommend implementing your own supplementary backup plan. There are plenty of plugins that will manage backup for you -- some free and some paid. To cut a long research story short I use BackUpWordPress, which is free if all you want to do is back up to your web space. You don't of course -- you want to copy your backups to an offline location. For this I use their BackUpWordPress To Google Drive extension which costs USD 24 per year. It does what it says on the tin and there are other flavours. Please don't skimp on backup and getting your backups to an offsite location!

Beating Spam

If you have comments turned on (and you probably should to get feedback and make connections with people who are interested in your posts) you will get a gazillion spam comments. As far as I can see the Akismet plugin is the way to go here. Don't forget to regularly clean out your spam (Comments > Spam). Your instinct will probably be to want to go through spam manually the first few times you clean it out but Akismet is so good that I don't bother any more and just use the Empty Spam button.

Search Engine Optimisation

This is one of those topics that is huge and makes my head hurt. In short it's all about trying to make sure your posts are ranked highly by the search engines when someone performs a search. For the long story I recommend reading some of the SEO guides that are out there -- I found this guide by Yoast in particular to be very useful. To help cope with the complexities there are several plugins that can manage SEO for you. I ended up choosing the free version of WordPress SEO by Yoast since the plugin and the guide complement each other. There is some initial one-time setup to perform, such as registering with the Google and Bing webmaster tools, verifying your site and submitting an XML sitemap, after which it's a case of making sure each post is as optimised for SEO as it can be. The plugin guides you through everything and there is a paid-for version if you need more.

Tracking Visitors

In order to understand who is visiting your site you will want to sign up for a Google Analytics account. You then need to insert the tracking code into every page on your site, and as you would expect a plugin can do this for you. There are a few to choose from and I went for Google Analytics Dashboard for WP.

Install Jetpack

Jetpack is a monster pack of 'stuff' from WordPress.com that can help you with all sorts of things big and small. Some items are enabled in your site as soon as you install Jetpack, such as the Edit CSS feature mentioned above, and others become available when you link it to a WordPress.com account. There is far too much to cover here and it's a case of trawling through and working out what suits your needs. To give you a flavour though:

  • Enhanced Distribution -- shares published content with third party services such as search engines
  • Extra Sidebar Widgets -- give you extra sidebar widgets
  • Monitor -- checks your site for downtime and emails you if there is a problem
  • Photon -- loads your post images from WordPress.com's content delivery network
  • Protect -- guards against brute force attacks
  • Publicize -- allows you to connect your blog to popular social networking sites and automatically share new posts
  • Shortcode Embeds -- allow you to embed media from other sites such as YouTube
  • WP.me Shortlinks -- gives you a Get Shortlink button in the post editor
  • WordPress.com Stats -- collects statistics about your site visitors similar to Google Analytics

These are just a few of the options I've enabled -- there are many, many more. Could keep you busy for hours...

Image Matters

Although blogging is about words, a picture can apparently paint a thousand of them, and as a technical blogger you are undoubtedly going to want to include screenshots of applications in your posts. I spent quite some time researching this but ultimately decided to go with one of Scott Hanselman's recommendations and chose WinSnap. It's a paid-for offering but you can use it on as many machines as you need and it's very feature-rich. I do most of my work via remote desktop connections and pretty much all of the time use WinSnap from an instance installed on my host PC. I do have it installed on the main machine I remote to, but making it work to capture menu fly-outs and so on directly on the remote machine is a work in progress. Whichever tool you choose please do try to take quality screenshots -- Scott has a guide here. I frequently find that my mouse is in the screenshot or that I've picked up some background at the edge of a portion of a dialog. I always discard these and start again. Don't forget to protect any personal data, licence keys and the like.

Code Quality

If you are blogging about a technology that involves programming code you'll soon realise that the built-in WordPress feature for displaying code is lacking and that you need something better. As always there are several plugins that can come to the rescue -- here is just one review. I tried a couple and decided that Crayon Syntax Highlighter was the one for me. Whichever plugin you choose do take some time to understand all the options and do some experimenting to ensure your readers get the best experience.

Writing Quality Posts

Your blog says a lot about you and for that reason you probably want to pay close attention to the quality of your writing. I don't mean that you should get stressed over this and never publish anything, after all one of your reasons for blogging might be to improve your writing skills. Rather, pay attention to the basics so that they are a core feature of every post and then you'll have solid foundations to improve on. Here's a list of some of the things to consider:

  • Try not to let spelling mistakes slip through. Browsers have spell checkers and highlighting these days -- do use them.
  • Proofread your posts before publishing -- and after. I'm forever writing form when I mean from and spell checkers don't catch this. WordPress has a preview feature and Jetpack has a Proofreading module -- why not try it out?
  • Try to adopt a consistent formatting style that uses white space and headings. Watch out for any extra white space that might creep in between paragraphs. Use the Text pane of the editor to check the HTML if necessary.
  • Watch out for extra spaces at the start of paragraphs (they creep in somehow) and also for double spaces.
  • If you are unsure about a word, phrase or piece of grammar Google for it to find out how it should be used or how others have used it, but only trust a reputable source or common consensus. If I have any nagging doubt I never assume I am right and will always check. I've been surprised more than a few times to find out that what I thought was correct usage was wrong.
  • Technical writing can be quite difficult because you need to refer to elements of an application as you describe how to do something, and you need to distinguish these from the other words in your sentences. I use bold to flag up the actions a reader needs to take if they are 'following along at home', and usually also at the first mention of a core technology component and the like. Have a look at some of my posts to see what I mean.
  • In my first career as a research scientist my writing (for academic journals for example) was strictly in the third person and quite formal. That doesn't work at all well for blogs where you want to write directly to the user. For most 'how-to' blogs second person is probably best with a bit of first person thrown in on occasions and that's the style I use. Have a look here for more explanation.

If you are serious about improving your written work there are plenty of books and web pages you can read. Many years ago when I was an undergraduate at the University of Wales, Bangor, there was an amazing guide to writing called The Style and Presentation of Written Work by Agricultural and Forest Sciences lecturer Colin Price. I read it over and over again and it still stands me in good stead today. My paper copy has long since vanished but I was thrilled to find it available here. The focus is on academic writing but it's packed full of useful tips for everyone and well worth reading.

Marketing your Blog

So, your blog is up-and-running and you are putting great effort in to writing quality posts that you are hoping will be of use to others in your area. Initially you might be happy with the trickle of users finding your site but then you'll write a post that takes over 10 hours of research and writing and you'll wish you had more traffic for your efforts.

Say hello to the mysterious world of marketing your blog. I was -- and to some extent still am -- uneasy about all this however there is something satisfying about seeing your Google Analytics statistics go up. So how does it work? I'm still learning but here are some of the techniques I'm using and which you might want to try:

  • Answering questions on MSDN forums and StackOverflow. Create and maintain your profiles on these forums and when answering questions link to one of your blog posts if it's genuinely helpful. Answering questions is also a great way to understand where others are having problems and where a timely blog post might help.
  • Comment on other bloggers' posts. I follow about 70 blogs and several times a week there might be an opportunity to comment and link back to a post you have written.
  • Link to the CodeProject if you are writing a blog that fits that site -- instructions on how to link here. Once connected the quick way to have a post consumed is to have a WordPress category called CodeProject and use it in the post.
  • Use Jetpack's Publicize module to automatically post your blogs to Twitter, Facebook, LinkedIn and Google+. I'm still working out the value of doing this -- my family are bemused -- but it's automatic so what the heck? In all seriousness, if you are looking to promote your blog in a big way then social media is probably going to be a big thing for you.

That's pretty much where I have got to on my blogging with WordPress journey to date. Looking back it's been hugely enjoyable getting all this configured and sorted out. Concepts that were once very hazy are now a little less so and I have learned a huge amount. I'm sure I've missed lots of important bits out and maybe you have your own thoughts as to which plugins are must-haves. Do share through the comments!

Cheers -- Graham

Azure Automation Fails with “AADSTS50055: Password is expired.”

Posted by Graham Smith on January 24, 2015

A while back I posted on how to set up Azure Automation to ensure your VMs get shut down if you accidentally leave them running. Very important for those of us with MSDN accounts that need to preserve Azure credits.

A few days ago when starting the runbook manually I noticed that it had no effect and my VMs didn't shut down. On investigation from the Azure Portal (Automation > $(AutomationAccount) > Runbooks > $(Runbook) > Jobs > $(CompletedJobThatIsFailing) > History) I saw that the job was throwing an exception:

Add-AzureAccount : AADSTS70002: Error validating credentials. AADSTS50055: Password is expired.
Trace ID: 4f7030e5-5d95-4c91-8b64-606231e3b056
Correlation ID: 7c2eb266-bf31-45ec-bb72-2677badd8ad3
Timestamp: 2015-01-22 00:32:53Z: The remote server returned an error: (401) Unauthorized.
At Stop-AzureVMExceptDomainController:5 char:5
+
+ CategoryInfo : CloseError: (:) [Add-AzureAccount], AadAuthenticationFailedException
+ FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.Profile.AddAzureAccount

So, the password on the automation account must have expired. I went through the procedure of resetting it following the original instructions for creating the automation account here and sure enough everything sprung to life again.

Two things are troublesome about all this: firstly I had no idea that my password had expired, and secondly I don't want it to expire. It seems that if you don't do anything passwords will expire after 90 days. You can fix this using PowerShell but there are one or two hoops:

  1. Install the Microsoft Online Services Sign-In Assistant for IT Professionals RTW.
  2. Install the Azure Active Directory Module for Windows PowerShell (64-bit version).
  3. Import the AD module using the Import-Module MSOnline command in PowerShell.
  4. Connect to Azure AD and change the automation account so the password doesn't expire, using the Set-MsolUser command in the sketch after this list.
  5. Connecting to Azure AD prompts you to supply credentials. However, for me my Microsoft account didn't work -- it just kept causing an exception. I had to use a Windows Azure Active Directory account and ended up creating a new account with the Global Administrator role. Looking back I might have been able to give the automation account I was trying to change the Global Administrator role rather than create a new one -- feel free to try this first.
  6. If you want to check the password expiry status of an Azure AD account, the Get-MsolUser command in the same sketch does the job.
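A minimal sketch of both pieces of code, assuming the MSOnline module and a placeholder UPN for the automation account:

```powershell
# Prompts for credentials -- use an Azure AD account with sufficient rights
Connect-MsolService

# Step 4: stop the automation account's password from expiring
Set-MsolUser -UserPrincipalName "automation@yourtenant.onmicrosoft.com" -PasswordNeverExpires $true

# Step 6: check the password expiry status of an account
Get-MsolUser -UserPrincipalName "automation@yourtenant.onmicrosoft.com" |
    Select-Object DisplayName, PasswordNeverExpires, LastPasswordChangeTimestamp
```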

With the expiry taken care of it's time to wonder if there is some notification scheme in place for this. I noticed that I hadn't set an alternate email address on the automation account. I have now, but of course it's too late to know if that's the notification route. One for the forums I think...

Cheers -- Graham

Azure VM Reporting “The network location cannot be reached”

Posted by Graham Smith on January 19, 2015

For the past few days my TFS Admin server in Azure had been reporting that it couldn't access a share on the domain controller in the same Azure network, failing with a "The network location cannot be reached" error. The server was logging on to the domain okay, could ping the DC and could resolve domain users when setting permissions on a share (for example), and other machines could see the Drop folder on the TFS Admin machine. It wasn't the DC at fault as all other machines could see it and access the share on the DC.

I spent ages Googling and checking DNS and other such settings but everything checked out normal. So I took the plunge and removed the server from the domain, removed its entry from the Computers section of Active Directory, rebooted the offending server and tried to rejoin the domain. Nooooooo! Exactly the same error message trying to join the domain, with a "The machine ALMTFSADMIN attempted to join the domain but failed. The error code was 1231." error in the System Event Log.

Fearing a complete rebuild of my TFS demo infrastructure coming on, I desperately tried the good old netsh int ip reset command (yes, it seems it can be done on a VM without trashing the network connection and possibly locking yourself out) but no change, and I then tried uninstalling the File and Printer Sharing for Microsoft Networks service of the network adapter. Still nothing doing!

I then came across a post where a respondent advised removing orphaned network adapters. I knew there was something funny going on here since my network adapter name was Microsoft Hyper-V Network Adapter #149, and presumably a new adapter was being installed every time the machine booted from cold (ie from the Stopped (Deallocated) state). With nothing to lose I opened up Device Manager, selected View > Show hidden devices and then expanded Network adapters. Sure enough there was a monster list of orphaned Microsoft Hyper-V Network Adapter entries -- 148 to be precise. Breaking all rules about automating tasks that you do more than once, I feverishly and manually uninstalled all the entries bar the current one. Then rebooted. And with bated breath tried to rejoin the domain...with success!

Whew! Lucky escape this time. Follow-up actions are to find out why this is happening and how to uninstall these in a more automated way -- with the PowerShell Remove-VMNetworkAdapter cmdlet for example? Do leave a comment if you have any insight!
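In the meantime, a sketch that at least enumerates the ghost adapters from inside the guest, assuming Windows Server 2012 R2 or later where the PnpDevice cmdlets are available (removal itself still needs Device Manager or a tool such as devcon):

```powershell
# List hidden/not-present Hyper-V network adapters -- these typically show a
# status of 'Unknown' because the hardware is no longer present.
Get-PnpDevice -Class Net |
    Where-Object { $_.FriendlyName -like 'Microsoft Hyper-V Network Adapter*' -and
                   $_.Status -eq 'Unknown' } |
    Select-Object FriendlyName, InstanceId
```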

Cheers -- Graham

Organise RDP Connections with Remote Desktop Connection Manager

Posted by Graham Smith on December 8, 2014

Years ago when I first started working with Hyper-V I soon realised there must be a better way of remoting to servers than using the client built in to Windows. There was, in the form of a nifty utility called Remote Desktop Connection Manager, or RDCMan. There are other tools but this one is simple and does the job very nicely. For many years it was stuck on version 2.2, published in 2010, probably because it originated as a tool used by Microsoft engineering and technical staff and wasn't the focus of any official attention. Fast-forward to 2014 and there is now a new 2.7 version, as before available as a free download from Microsoft. I highly recommend this for organising your RDP connections to your Azure (or any other) Windows VMs. In addition RDCMan is able to save your logon credentials, and if you are in an environment where it's safe to do this it's a great time-saver.

There is a trick to getting RDCMan to work with Azure VMs which can cause endless frustration if you don't know it. The DNS name of the cloud service and the port of the Remote Desktop endpoint need to be entered in separate places in the RDCMan profile for your VM. See here for a post that has all the details you need to get started.

Cheers -- Graham

Use Azure Automation to Shut Down VMs Automatically

Posted by Graham Smith on December 4, 2014

If you have an MSDN subscription (which gives you Azure credits) you will hopefully only forget to shut down your VMs after you have finished using them once. This happened to me and I was dismayed a few days later to find my Azure credits had been used up and I had to wait until the next billing cycle to carry on using Azure. There are a few ways to keep costs down (use a basic VM, size appropriately and don't install an image with a pre-loaded application such as SQL Server and instead install from an ISO from your MSDN subscription) but the most effective is to deallocate your VMs when you are finished using them.

As a safeguard after the episode where I ran down my credits I created a PowerShell script that I set up as a scheduled task to run daily at 1am on an always-on server that runs in my home datacentre under the stairs. Doable but not ideal, not least because of all the components that are installed by Azure PowerShell just to run one script. However, the recent launch of Azure Automation means this script can now be run from within Azure itself. Getting started with Azure Automation used to be a bit of a pain as there were quite a lot of steps to go through to set up authentication using certificates. The process is much simpler now as Azure Active Directory can be used. If you are just getting going with Azure Automation it's worth watching the Azure Friday Automation 101, 102 and 103 videos. When you are ready to start using Automation this post has the instructions for setting up the authentication. Once that is in place it's a case of navigating to the Automation pane in the Portal and creating an Automation Account. You then create a Runbook using the Quick Create feature and start editing it in draft mode. The following code is an example of how you might go about shutting down your VMs:
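A minimal sketch of such a runbook—the runbook name, credential asset and ALMDC exclusion match the description in the next paragraph, while the subscription name is a placeholder:

```powershell
workflow Stop-AzureVMExceptDomainController
{
    # Credential asset for the Azure AD user called Automation
    $cred = Get-AutomationPSCredential -Name "Automation"
    Add-AzureAccount -Credential $cred

    # Placeholder -- substitute your own subscription name
    Select-AzureSubscription -SubscriptionName "MySubscription"

    # Shut down (deallocate) every VM that isn't the domain controller
    $vms = Get-AzureVM
    foreach ($vm in $vms)
    {
        if ($vm.Name -ne "ALMDC" -and $vm.Status -ne "StoppedDeallocated")
        {
            Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
        }
    }
}
```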

If you were to use the example code above you would need to have created a Runbook called Stop-AzureVMExceptDomainController and be using an Azure Active Directory user called Automation. (The code also ensures that a VM called ALMDC isn't shut down.) With the Runbook in place you can link it to a Schedule. You'll need to publish it first, although the typical workflow is to run in draft to test it until you are satisfied that it's working correctly. When you do finally publish you click the Schedule tab where you can link to a new or an existing schedule -- I have mine set to 1am.

Once your Runbook is in place you can of course run it manually as a convenient way to shut your VMs down when you have finished with them. No longer do you have to wait for PowerShell running locally to finish before you can turn your PC off. And if you do forget to shut your VMs off you can relax knowing that your schedule will kick in and do the job for you.

Cheers -- Graham