BM-Bloggers

The blogs of Black Marble staff

Connecting Azure Network Site-Site VPN to a SonicWall Appliance

I am with a customer this week, building a test Azure Network+IaaS/Azure AD/Office 365 environment. We struggled to get the site-site VPN connection up for a while and there wasn’t a great deal on the greater internet to help, save for a couple of posts in a discussion forum by the marvellous Marcus Robinson. We finally got it working when we found a tech note from SonicWall, published just a few days ago on the 7th October.

It turns out that we had created a gateway on Azure that used dynamic routing (I had a working lab environment using Server 2012 RRAS done that way). In SonicWall terms that is not a site-site VPN, and as we had configured the appliance for one of those we were completely adrift. When we deleted the Azure gateway and created a static routing one instead, everything worked.

For anyone embarking down this road with a SonicWall device, I can report that once we followed the instructions everything connected just fine. The tech note is available on the SonicWall site for all to enjoy.

Speaking at UK Tech.Days Online 2013

I’ve been supporting the great team of evangelists at Microsoft with their UK Tech.Days events for some time now. I am chuffed to bits that they have asked me to contribute to the fantastic UK Tech.Days Online event. If you haven’t heard about it, go look at the agenda right now! Three days of great content on the latest technologies covering client, server, cloud and dev. The whole thing will be streamed live thanks to the wonder of the internet and includes a live interview with Steve Ballmer.

I am doing a session with the marvellous Steve Plank on the technologies that enable you to move your on-premises VMs into Azure and then a solo session on Windows Azure Backup – something we use already at BM as part of our DPM configuration. Sandwiched between those will be one where Steve covers the kind of automation you can achieve with PowerShell for Windows Azure.

Robert is also involved – I believe he is speaking during the very first session of day 1, on Windows 8.1.

It’s a privilege to be involved in Tech.Days Online and I’m really looking forward to it. Go register and I look forward to answering all your great questions on the day!

Changes in SkyDrive access on Windows 8.1

After upgrading to Windows 8.1 on my Media Center PC I noticed a change in SkyDrive. The ‘upgrade’ process from 8 to 8.1 is really a reinstall of the OS and a reapplication of Windows 8 applications; some Windows desktop applications are removed. In the case of my Media Center PC the only desktop app installed was the Windows desktop SkyDrive client, which I used to sync photos from my Media Center PC to the cloud. This is no longer needed, as Windows 8.1 exposes the SkyDrive files linked to the logged-in LiveID as folders under the c:\users\[userid]\documents folder, just like the Windows desktop client used to do.

This means that, though the old desktop SkyDrive client has been removed, my existing timer-based jobs that back up files to the cloud by copying from a RAID5 box to the local SkyDrive folder still work.
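
For anyone wanting to do something similar, the job needs nothing more than a file copy on a timer. Below is a minimal sketch of the idea, not my actual script: the paths are made up, and you would run it from Task Scheduler.

using System;
using System.IO;

class SkyDriveBackup
{
    static void Main()
    {
        var source = @"\\raidbox\photos";                    // hypothetical RAID5 share
        var target = @"C:\Users\media\SkyDrive\PhotoBackup"; // local SkyDrive folder

        foreach (var file in Directory.EnumerateFiles(source, "*", SearchOption.AllDirectories))
        {
            var destination = Path.Combine(target, file.Substring(source.Length + 1));
            Directory.CreateDirectory(Path.GetDirectoryName(destination));

            // Only copy new or changed files so the job stays quick
            if (!File.Exists(destination) ||
                File.GetLastWriteTimeUtc(file) > File.GetLastWriteTimeUtc(destination))
            {
                File.Copy(file, destination, true);
            }
        }
    }
}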

A word of warning here though: don’t rely on this model as your only backup. There is a lot of ransomware around at the moment and, if you aren't careful, an infected PC can infect your automated cloud backup too. Make sure your cloud backup is versioned so you can roll back to a pre-infection copy of a file, and/or keep a more traditional offline backup too.

TF215106: Access denied from the TFS API after upgrade from 2012 to 2013

Updated 6th Nov 2013 - Also see this updated post; the API mentioned here may be an issue, but the rights change in this other post is probably the real issue.

After upgrading a test server from TFS 2012 to 2013 I started getting the following exception when trying to set the retention for a build policy via the TFS API:

"TF215106: Access denied. TYPHOONTFS\\TFSService needs Update build information permissions for build definition ClassLibrary1.Main.Manual in team project Scrum to perform the action. For more information, contact the Team Foundation Server administrator."}

This was a surprising error: the code had been working OK, and the TFSService account is, well, the service account, so it has full rights.

The issue was that I also needed to rebuild my application against the TFS 2013 API; once I rebuilt with the 2013 DLLs it all worked fine.
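
For context, the retention change was being made with code along these lines (a simplified sketch; the collection URL is an example). The same calls compile against either the 2012 or 2013 client DLLs, which is exactly why the mismatch was not obvious:

using System;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Client;

class SetRetention
{
    static void Main()
    {
        // Example collection URL - substitute your own server
        var tpc = new TfsTeamProjectCollection(new Uri("http://typhoontfs:8080/tfs/DefaultCollection"));
        var buildServer = tpc.GetService<IBuildServer>();

        var definition = buildServer.GetBuildDefinition("Scrum", "ClassLibrary1.Main.Manual");

        // Update every retention rule on the definition
        foreach (IRetentionPolicy policy in definition.RetentionPolicyList)
        {
            policy.NumberToKeep = 10;
        }

        definition.Save();
    }
}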

A fix for power saving stopping my slow application installation

I am getting sick of the fact that the Samsung 500T tablet running Windows 8.1 that I am installing applications on keeps going into sleep mode to save power. I start the install, leave it to run, and when I look back later it has been saving power, starting and stopping the WiFi, and I have one very confused install. It is not as if I am installing anything weird, just Office 2013.

So the workaround (as I admit I forgot to pick up its PSU this morning) is to pop it into presentation mode (via the Windows key + X and Mobility Center). This means it ignores power saving and just runs for me. The install finished fine and I am good to go.

Can I use the HP ALM Synchronizer with TF Service?

I recently tried to get the free HP ALM Synchronizer to link to Microsoft’s TF Service; the summary is that it does not work. However, it took me a while to realise this.

The HP ALM Synchronizer was designed for TFS 2008/2010, so the first issue you hit is that TF Service is today basically TFS 2013 (and a moving goalpost, as it is updated so often). This means that when you try to configure the TFS connection in HP ALM Synchronizer it fails because it cannot see any TFS client it supports. This is fairly simple to address: just install Visual Studio Team Explorer 2010 and patch it up to date so that it can connect to TF Service (you could go back to the 2008 version and achieve the same if you really needed to).

Once you have a suitably old client you can progress to the point where it asks you for your TFS login credentials. HP ALM Synchronizer validates that these are in the form DOMAIN\USER, and this is a problem.

On TF Service you usually log in with a LiveID, which is a non-starter in this case. However, you can configure alternative credentials, but these are in the form of just a USER name and PASSWORD. The string pattern verification on the credentials entry form in HP ALM Synchronizer does not accept them; it must have a domain and a slash. I could not find any pattern that satisfies both TF Service and the HP ALM Synchronizer setup tool, so basically you are stuck.
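
To illustrate the clash, the form appears to enforce something like the first pattern below, which a TF Service alternative credential user name can never match (the pattern is my guess at the behaviour, not HP’s actual code):

using System;
using System.Text.RegularExpressions;

class CredentialCheck
{
    static void Main()
    {
        // Roughly what the HP ALM Synchronizer form seems to insist on: DOMAIN\USER
        var domainSlashUser = new Regex(@"^[^\\]+\\[^\\]+$");

        Console.WriteLine(domainSlashUser.IsMatch(@"MYDOMAIN\someuser")); // True - accepted
        Console.WriteLine(domainSlashUser.IsMatch("myalternateuser"));    // False - rejected
    }
}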

So for my client we ended up moving to a TFS 2013 Basic install on premises, and it all worked fine; they could sync HP QC defects into TFS using the HP ALM Synchronizer, so they were happy.

However, is there a better solution? One might be to use a commercial product such as Tasktop Sync, which is designed to provide synchronisation services between a whole range of ALM-like products. I need to find out whether that supports TF Service yet.

Upgrading Windows 8 Media Center to 8.1

Just done the update of my Windows 8 Media Center to 8.1. The first issue was that I looked in the Store and could not see the update: I was looking for an ‘update’ in the top right, not a huge application entry in the middle of the screen. Strange how you can miss the obvious.

The download took about 30 minutes and ran in the background whilst I was using Media Center. It then did a reboot, and the main install of files took about 30 minutes too. It seemed to sit at 85% for a long time, then another reboot. A few ‘preparing devices’ messages and ‘getting ready’ for a good 10 minutes, then ‘Applying PC settings’, another reboot, then ‘setting up a few more things’. I then had to accept some licenses and log in with a LiveID, and eventually it started and everything was working.

I did not time it exactly but I think about 90 minutes all told.

Update 4th Nov 2013 – I have been away on holiday and came back to find not much recorded by my Media Center. The problem was that the EPG was not updating, showing ‘No data’ for about 70% of my channels (all the BBC ones plus some others); it turns out the upgrade corrupts or loses the EPG mappings. The fix is to do a full channel rescan.

Problems with Microsoft Fakes Stubs and IronPython

I have been changing the mocking framework used on a project I am planning to open source. Previously it had been using Typemock to mock out items in the TFS API. This had been working well, but it used features of the toolset that are only available in the licensed product (not the free version). As I don’t like to publish tests that people cannot run, I thought it best to swap to Microsoft Fakes, since there is a better chance that any user will have a version of Visual Studio that provides this toolset.

Most of the changes were straightforward, but I hit a problem when I tried to run a test that returned a TFS IBuildDetail object for use inside an IronPython DSL.

My working Typemock-based test was as follows:

[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();

    var tfsProvider = Isolate.Fake.Instance<ITfsProvider>();
    var emailProvider = Isolate.Fake.Instance<IEmailProvider>();
    var build = Isolate.Fake.Instance<IBuildDetail>();

    var testUri = new Uri("vstfs:///Build/Build/123");
    Isolate.WhenCalled(() => build.Uri).WillReturn(testUri);
    Isolate.WhenCalled(() => build.Quality).WillReturn("Test Quality");
    Isolate.WhenCalled(() => tfsProvider.GetBuildDetails(null)).WillReturn(build);

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsl\tfs\loadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}

Swapping to Fakes I got:

[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();
    var testUri = new Uri("vstfs:///Build/Build/123");

    var emailProvider = new Providers.Fakes.StubIEmailProvider();
    var build = new StubIBuildDetail()
    {
        UriGet = () => testUri,
        QualityGet = () => "Test Quality",
    };
    var tfsProvider = new Providers.Fakes.StubITfsProvider()
    {
        GetBuildDetailsUri = (uri) => (IBuildDetail)build
    };

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsl\tfs\loadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}

But this gave me the error:

Test Name:    Can_use_Dsl_to_get_build_details
Test FullName:    TFSEventsProcessor.Tests.Dsl.DslTfsProcessingTests.Can_use_Dsl_to_get_build_details
Test Source:    c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor.Tests\Dsl\DslTfsProcessingTests.cs : line 121
Test Outcome:    Failed
Test Duration:    0:00:01.619

Result Message:    System.MissingMemberException : 'StubIBuildDetail' object has no attribute 'Uri'
Result StackTrace:   
at IronPython.Runtime.Binding.PythonGetMemberBinder.FastErrorGet`1.GetError(CallSite site, TSelfType target, CodeContext context)
at System.Dynamic.UpdateDelegates.UpdateAndExecute2[T0,T1,TRet](CallSite site, T0 arg0, T1 arg1)
at Microsoft.Scripting.Interpreter.DynamicInstruction`3.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet](T0 arg0, T1 arg1)
at IronPython.Compiler.PythonScriptCode.RunWorker(CodeContext ctx)
at IronPython.Compiler.PythonScriptCode.Run(Scope scope)
at IronPython.Compiler.RuntimeScriptCode.InvokeTarget(Scope scope)
at IronPython.Compiler.RuntimeScriptCode.Run(Scope scope)
at Microsoft.Scripting.SourceUnit.Execute(Scope scope, ErrorSink errorSink)
at Microsoft.Scripting.SourceUnit.Execute(Scope scope)
at Microsoft.Scripting.Hosting.ScriptSource.Execute(ScriptScope scope)
at TFSEventsProcessor.Dsl.DslProcessor.RunScript(String scriptname, Dictionary`2 args, ITfsProvider iTfsProvider, IEmailProvider iEmailProvider) in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor\Dsl\DslProcessor.cs:line 78
at TFSEventsProcessor.Dsl.DslProcessor.RunScript(String scriptname, ITfsProvider iTfsProvider, IEmailProvider iEmailProvider) in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor\Dsl\DslProcessor.cs:line 31
at TFSEventsProcessor.Tests.Dsl.DslTfsProcessingTests.Can_use_Dsl_to_get_build_details() in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor.Tests\Dsl\DslTfsProcessingTests.cs:line 141

If I altered my test to not use my IronPython DSL, but call the C# DSL library directly, the error went away. So the issue lay in the dynamic IronPython engine – not something I am going to even think of trying to fix.

So I swapped the definition of the mock IBuildDetail to use Moq (I could have used the free version of Typemock or any other framework) instead of a Microsoft Fakes stub, and the problem went away.

So I had:

[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();
    var testUri = new Uri("vstfs:///Build/Build/123");

    var emailProvider = new Providers.Fakes.StubIEmailProvider();
    var build = new Moq.Mock<IBuildDetail>();
    build.Setup(b => b.Uri).Returns(testUri);
    build.Setup(b => b.Quality).Returns("Test Quality");

    var tfsProvider = new Providers.Fakes.StubITfsProvider()
    {
        GetBuildDetailsUri = (uri) => build.Object
    };

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsl\tfs\loadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}

So I have a working solution, but it is a bit of a mess: I am using Fakes stubs and Moq in the same test. There is no good reason not to swap all the mocking of interfaces to Moq. So going forward on this project I will only use Microsoft Fakes for shims, to mock out items such as TFS WorkItem objects which have no public constructor.
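
For completeness, this is the shape of the shim usage I mean – a sketch assuming a Fakes assembly has been generated for Microsoft.TeamFoundation.WorkItemTracking.Client, with illustrative values:

using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.TeamFoundation.WorkItemTracking.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client.Fakes;

class ShimExample
{
    static void Main()
    {
        // Shims only operate inside a ShimsContext
        using (ShimsContext.Create())
        {
            var shim = new ShimWorkItem()
            {
                IdGet = () => 42,                    // illustrative values
                TitleGet = () => "My test work item"
            };

            // The shim converts implicitly to the real WorkItem type,
            // so it can be passed to code under test that expects one
            WorkItem workItem = shim;
            Console.WriteLine("{0}: {1}", workItem.Id, workItem.Title);
        }
    }
}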

Generation 2 Virtual Machines on Windows 8.1 and Server 2012 R2 plus other nice new features

DDD North 2013 was a fantastic community conference, but sadly I didn’t get the chance to deliver my grok talk on Generation 2 virtual machines. A few people came up to me beforehand to say they were interested in the topic, and a few more spoke to me afterwards to ask if I would blog. I had planned to write a post anyway, but when you know it’s something people want to read you get a bit more of a push.

This post will cover two areas of Hyper-V in Windows 8.1 and Server 2012 R2: Generation 2 virtual machines, which are completely new, and a number of changes that should apply to all VMs, be they gen 1 or gen 2. What I am not going to cover, as it’s a post all of its own, is the new and improved software-defined networking in Hyper-V.

Generation Next

As you can see in the screenshot below, when creating a virtual machine in Windows 8.1 and Server 2012 R2 you are asked which generation of VM you want. The screen gives a brief and reasonable summary of what the differences are… to a point.

[Screenshot: the New Virtual Machine wizard asking which generation of VM to create]

Generation 1 virtual machines are a mix of synthetic and emulated hardware. This goes all the way back to previous virtualisation solutions where the virtual machine was usually a software emulation of the good old faithful Intel 440BX motherboard.

  • The emulated hardware delivered a high level of compatibility across a range of operating systems. Old versions of DOS, Windows NT, Netware etc would all fairly happily boot and run on the 440BX hardware. You didn’t get all the cleverness of a guest that knew it was inside a VM but it worked.
  • PXE (network) boot was not possible on the implementation of the synthetic network adapter in Hyper-V. That meant that you had to use the emulated NIC if you wanted to do this.
  • Virtual hard disks could be added to the virtual SCSI adapter whilst the machine was running, but not to the IDE adapter. You couldn’t boot from a SCSI device, however, so many machines had to have drives on both adapters.
  • Emulated keyboard controllers and other system devices were also implemented for compatibility.

Generation 2 virtual machines get rid of all that legacy, emulated hardware. From what I’ve read and heard, all the devices in a generation 2 VM are synthetic, software generated. This makes the VM leaner and more efficient in how it uses resources, and potentially faster as gen 2 VMs are much closer to the kind of hardware found in a modern PC.

There are three key changes in gen 2 as far as most users are concerned:

  • SCSI disks are now bootable. There is no IDE channel at all; all drives (VHD or virtual optical drive) are now on the SCSI channel. This is far simpler than before.
  • Synthetic network adapters support PXE boot. Gone is the old legacy network adapter.
  • The system uses UEFI rather than BIOS. That means you can implement secure boot on a VM. Whilst this might sound unnecessary it could be of great interest to organisations where security is key.

The drawback of gen 2 is that, right now, only Windows 8, Server 2012 and their respective updated versions can be run as a guest in a gen 2 VM. I’m not sure that this will change in terms of Microsoft operating systems, but I do expect a number of Linux systems to be able to join the club eventually. I have done a good deal of experimentation here, with a large range of Linux distributions. Pretty much across the board I could get the installation media to boot, but the install failed because the hardware was unknown. What this means is that when Microsoft releases new versions of the Hyper-V kernel additions for Linux we should see support expand in this regard.
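
As an aside, if you want to check programmatically which generation your existing VMs are, the Hyper-V WMI v2 provider exposes this. The following is a rough sketch of the idea, not production code: run it elevated on the host, and note that the VirtualSystemSubType property only appears on 8.1/2012 R2 hosts.

using System;
using System.Management; // reference System.Management.dll

class ListVmGenerations
{
    static void Main()
    {
        // The Hyper-V WMI v2 namespace on the local host
        var scope = new ManagementScope(@"\\.\root\virtualization\v2");
        var query = new ObjectQuery("SELECT * FROM Msvm_ComputerSystem WHERE Caption = 'Virtual Machine'");

        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject vm in searcher.Get())
            {
                // Each VM has settings objects for its current state and any snapshots;
                // we only want the realised (current) one
                foreach (ManagementObject settings in vm.GetRelated("Msvm_VirtualSystemSettingData"))
                {
                    if ((string)settings["VirtualSystemType"] == "Microsoft:Hyper-V:System:Realized")
                    {
                        // "Microsoft:Hyper-V:SubType:1" = gen 1, "...:2" = gen 2
                        Console.WriteLine("{0}: {1}", vm["ElementName"], settings["VirtualSystemSubType"]);
                    }
                }
            }
        }
    }
}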

The screenshot below shows the new hardware configuration screen for a generation 2 virtual machine. Note the much shorter list of devices in the left hand column:

[Screenshot: hardware configuration screen for a generation 2 virtual machine]

Useful changes across generations

There have been some other changes that, in theory, span generations. More on that in a bit.

Drives

When Server 2012/Windows 8 arrived, Microsoft added bandwidth management for VMs. That’s useful for IT pros who want to manage what resources servers can consume, but it’s also jolly handy for developers who would like to try low bandwidth connections during testing. We can’t do anything about latency with this approach, but it’s nice to be able to dial a connection down to 1Mb to see what the impact is.

Server 2012 R2/Windows 8.1 add a similar option for the virtual hard drive. We can now specify QoS for the virtual hard disks, in IOPS; the system allows you to set a minimum and a maximum. It’s important to remember here that this does depend on the physical tin beneath your VM. I run two SSDs in my laptops now, but before that my VMs ran on a 5400rpm drive; trying to set a high value for minimum IOPS wouldn’t get me very far there. What is more useful, however, is being able to set the maximum value so we can start to simulate slow drives for testing.

As with network bandwidth management, I think this is also a great feature for IT pros who need to manage contention between VMs and focus resources on key machines.

The screenshot below shows the disk options screen with QoS and more.

[Screenshot: virtual hard disk options, including the new QoS settings]

Also new is the ability to resize a VHD that is attached to a running machine. This is only possible with disks attached to SCSI channels, so gen 2 VMs may get more benefit here. Additionally, VHDs can now be shared between VMs. Again, this is SCSI only, but it is a really useful change because it means we can build clusters with shared storage hosted on VHDs rather than direct-attached iSCSI or Fibre Channel. The end result is to make more options available to the little guys who don’t have the resources for expensive tin. It’s also great for building test environments that need to mirror those of a customer – we do that all the time and it’s going to give us lots of options.

Networks

I already said that I’m not going to dive into the new software-defined-networking here. If terms like NVGRE get you excited then there are people with more knowledge of comms than I have writing on the subject. Suffice to say it looks really useful for IT pros but not really for developers, I don’t think.

Also not much use for developers, but incredibly useful for IT pros, is the new Protected Network functionality. The concept is really simple and so, so useful:

Imagine you have a two node cluster. Each node has a network connection for VMs, not shared by the host OS, and one for the OS itself that the cluster uses. Node 1 suddenly loses connectivity on the VM connection. What happens? Absolutely nothing with Server 2012, because the VMs are still running and nothing knows that the VMs no longer have connectivity. With Server 2012 R2/Windows 8.1 you can enable protected network for the virtual adapter. Now the system checks connectivity for the VMs and, in our scenario, all the VMs on node 1 will merrily fail over to node 2, which still has a connection.

I know we will find this new feature useful on our clustered, production VM hosts. Again, this really helps smaller organisations get better resilience from simpler hardware solutions.

The screenshot below shows the advanced options for a network adapter with network protection enabled.

[Screenshot: advanced options for a network adapter with protected network enabled]

Enhanced session mode

I said that, in theory, many of the new changes are pan-generation (and pan-guest OS). According to the documentation, enhanced session mode should work on more than just Windows 8.1 or Server 2012 R2 guest operating systems. In practice, I have not found this to be the case, even after updating the VM additions on my machines to the latest version.

It is useful, however. When you enable enhanced session mode then, providing you have enabled remote desktop on the guest, RDP will be used to connect to the VM – even if the guest has no network connection to the host OS, or indeed no network adapter at all!

The screenshot below shows the option for enhanced session mode. This is enabled by default in Windows 8.1 and disabled by default in Server 2012 R2.

[Screenshot: the enhanced session mode policy setting in Hyper-V settings]

When you have the option enabled you will see a new button on the right of the toolbar, as shown in the image below.

[Screenshot: the VM connection toolbar, with the new enhanced session toggle button on the right]

That little PC with a plus symbol toggles the VM connection between the old-style connection and the new, RDP-based one. The end result is that you get more screen resolution choices, you can copy and paste properly between your host and the VM (no more pasting as simulated keystrokes, and you can copy files and documents!) and all the USB device pass-through from the host works too.

For developers working inside a VM this is great – no more needing network connections to be able to RDP into a box. That means you can run sensitive VMs, or multiple copies of a VM on multiple machines, much more easily than before. If you enable the new connection mode on a VM and restart it, the VM connects in the old way as it begins to boot, but as soon as it detects the RDP service on the guest you get a dialog asking you for the new resolution and it switches to the RDP-style connection. It’s great.

I’m hoping that there will either be updates for older Microsoft OS versions, or updated VM additions, that will give the consistent result I have so far not experienced. In theory, updates to the Linux kernel additions could also add this new connection type but, again, my experience so far is that it doesn’t work right now.

Summary

To sum up then:

  • Generation 2 VMs – leaner, meaner and simpler all round, but limited to the latest Microsoft desktop and server OSes. I can’t see a reason not to use them for the latest OS versions.
  • Disk QoS – should be really useful for dev/test when you need to simulate a slow drive. Great for IT pros to manage environments with a mix of critical and non-critical VMs.
  • Online VHD resizing. There are so many times I’ve needed this on dev/test in the last few months alone. Shame it’s SCSI only so you can’t grow the OS disk on a gen 1 VM but you can’t have everything.
  • Shared VHD. Another useful new option that will help building dev/test environments and will also be useful for smaller organisations who want to build things like virtualised clustered file servers using a cluster shared volume (CSV).
  • Network protection. Great for IT pros running host clusters. Can’t see a use for devs.
  • Enhanced session mode. Useful all round, especially for devs who want to easily work on a VM. Useful for IT pros who need to copy stuff onto running VMs, but so far my experience is mixed as it only works on Windows 8.1 and Server 2012 R2 guests.

Windows 8.1 is already on MSDN and TechNet so if you’re a dev or IT Pro with the right subscriptions, why aren’t you trying this stuff already? For everybody else, the 18th of this month sees general availability and I expect evaluation media will be available for you to play with.