BM-Bloggers

The blogs of Black Marble staff

And the most ridiculous packaging award of the day goes to…

CPC, who managed to send two resistor-capacitor balances for LED lights, each about 1cm in size,

in a box as shown here

image

Other than loads of that inflatable packing material, there was a huge CPC catalogue with an interesting sticker

image

So an online electronics company that provides free shipping (a really good thing when the components were only a couple of pounds) chose to also send a very heavy catalogue that they know is out of date, when I had already used their quick and easy web site.

You have to wonder why.

Being allowed out in public: Forthcoming events

It’s autumn again, and that means event season is upon us once more. In the next few months I’m getting around a bit, so this post is a plug for the events I’m either attending or speaking at.

  • October 1st is VMUG Leeds. I’m registered to attend, but chances are I’ll be spending much of the day helping the Microsoft guys run their hands-on lab. The agenda has some great-sounding sessions and I believe there are still spaces, so why not register and come along?
  • October 12th is DDDNorth in Sunderland. Perhaps unsurprisingly my Lab Manager session wasn’t voted in, but I’m sure that’s because the range of speakers and topics available to choose from during voting meant hard choices had to be made <grin/>. As with last year, I’m helping out on the day but I’m also hoping to attend (heckle) some of the sessions. Black Marble should be there in force, as Richard, Steve and Gary are all speaking. Former BM staffer Iain Angus’ session looks interesting too!
  • December 4th is the Black Marble Architecture Forum in the North. With so many great speakers lined up already I’m not sure if Linda will squeeze me in, but I’ll certainly be there to join in the discussion (and make sure the AV works!).
  • January 29th is the Black Marble Annual Tech Update. I’ll be speaking in the morning, IT-focused session. It’s hard work to prep for but great fun to deliver, and the feedback we get is always really positive, so come along if you want to be informed about the Microsoft roadmap to help your planning process. The afternoon session covers the development technologies too!
  • Microsoft also have a new season of Tech.Days about to kick off, so keep an eye on their web site for details.

Building environments for Lab Manager: Why bare metal scripting fails

In the world of DevOps it’s all about the scripts: I’ve seen some great work done by some clever people to create complex environments with multiple VMs all from scratch using PowerShell. That’s great, but unfortunately in the world of Lab Manager it just doesn’t work well at all.

We’ve begun the pretty mammoth task of generating a new suite of VMs for our Lab Manager deployment to allow the developers and testers to create multi-machine environments. I had hoped to follow the scripting path and create these things much more on the fly, but it wasn’t to be.

I hope to document our progress over the next few weeks. This post is all about the aims and the big issues we have that make us take the path we are following.

Needs and Wants

Let’s start with our requirements:

  • A flexible, multi-server environment with a range of Microsoft software platforms to allow devs to work on complex projects.
  • All servers must be part of the same domain.
  • All products must be installed according to best practice – no running SharePoint as local service here!
  • Multiple versions of products are needed: SharePoint 2010 and 2013; CRM 4 and 2011; SQL 2008 R2 and 2012; BizTalk 2010 and 2013.
  • ‘Flexible’ VMs running IIS or bare Server 2008 R2/2012 are needed.
  • No multi-role servers. We learned from past mistakes: Don’t put SQL on the DC because Lab Manager Network Isolation causes trouble.
  • Environments must only consist of the VMs that are needed.
  • Lab Manager should present these as available VMs that can be composed into an environment: No saving complete environments until they have been composed for a project.
  • Developers want to be able to run the same VMs locally on their own workstations for development; Lab Environments are for testing and UAT but we need consistency across them all.

That’s quite a complex set of needs to meet. What we decided to build was the following suite of VMs:

  • Domain Controller. Server 2012.
  • SQL 2012 DB server. Server 2012.
  • SharePoint 2013 (WFE+APP on one box). Server 2012. Uses SQL 2012 for DB.
  • Office Web Apps 2013. Server 2012.
  • Azure Workflow Server (for SharePoint 2013 workflows). Server 2012.
  • CRM 2011 Server. Server 2012. Uses SQL 2012 for DB.
  • BizTalk 2013 Server. Server 2012. Uses SQL 2012 for DB.
  • IIS 8 server. Server 2012.
  • ‘Flexible’ Server 2012. For when you just want a random server for something.
  • SQL 2008 R2 server. Server 2008 R2.
  • SharePoint 2010 (WFE+APP+OWA on one box). Server 2008 R2. Uses SQL 2008 R2 for DB.
  • CRM 4. Server 2008 R2. Uses SQL 2008 R2 for DB.
  • BizTalk 2010. Server 2008 R2. Uses SQL 2008 R2 for DB.
  • IIS 7.5 server. Server 2008 R2.
  • ‘Flexible’ Server 2008 R2.

In infrastructure terms we end up with a number of important elements:

  • Our AD domain and DNS domain: <domain>.local.
  • For SharePoint 2013 Apps we need a different domain. For ease this is apps.<domain>.local.
  • To ensure we can use SSL for our web sites we need a CA. This is used to issue device certs and web certs. For simplicity a wildcard cert (*.<domain>.local) is issued.
  • Services such as SharePoint web applications all get DNS registrations. Each service gets an IP address and these addresses are bound to servers in addition to their primary IPs.
  • All services are configured to work on our private network. If we want them to work on the public network (Lab machines, excluding the DNS, can have multiple NICs and multiple networks) then we’ll deal with that once the environment is composed and deployed through Lab Manager.

Problems and constraints

The biggest problem with Lab Manager is the way Network Isolation works. Lab asks SCVMM to deploy a new environment. If network isolation is required (because you are deploying a DC and member servers, potentially as many copies of the same servers) then Lab creates a new Hyper-V virtual network (named with a GUID) and connects the VMs to that. It then configures static addresses on that network for the VMs, starting with the DC and counting up.

My experience is that trying to be clever with servers that are sysprepped and then run scripts simply confuses the life out of Lab. You really need your VMs to be fully working and finished right out of the gate. Unfortunately, that means building the full environment by hand, completely, and then storing them all in SCVMM before importing each VM into Lab.

There are still a few wrinkles that I know we have to iron out, even with this approach:

  • In the wonderful world of the SharePoint 2013 app model we need subdomains and wildcard DNS entries. We also need multiple IP addresses on the server. Right now we haven’t tested this with Lab. What we are hoping is that we can build our environment on the same address space that Lab uses. Lab counts up from 1, so our additional IPs will count down from 254. What we don’t know is whether Lab will remove all the IP addresses from the NICs when it configures the machines. If it does, then we will need some PowerShell that runs to configure the VMs correctly (see the sketch after this list).
  • Since we don’t have to use all the VMs we have built in any given environment, DNS registration becomes important. Servers should register themselves with the DNS running on our DC, but we need to make sure we don’t have incorrect registrations hanging around.
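
What that PowerShell is likely to look like is sketched below. This is a minimal sketch assuming the Server 2012 NetTCPIP and DnsServer modules; the interface alias, addresses, zone and host names are placeholders rather than our real configuration, and we won’t know the exact shape until we’ve tested against Lab.

# Sketch only: put the secondary IP addresses back after Lab has reconfigured
# the NIC, and tidy stale A records on the DC. All values are placeholders.

# Secondary IPs for the hosted services, counting down from .254
$serviceIPs = '192.168.23.254', '192.168.23.253'
foreach ($ip in $serviceIPs) {
    if (-not (Get-NetIPAddress -IPAddress $ip -ErrorAction SilentlyContinue)) {
        New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress $ip -PrefixLength 24
    }
}

# On the DC: remove A records for servers that aren't part of this particular
# environment, so names don't resolve to machines that aren't there
Get-DnsServerResourceRecord -ZoneName 'domain.local' -RRType A |
    Where-Object { $_.HostName -in 'BIZTALK2010', 'CRM4' } |
    Remove-DnsServerResourceRecord -ZoneName 'domain.local' -Force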

Both of these areas will have to be addressed during this week, so I’ll post an update on how we get on.

Consistency is still key

Even though we can’t use scripts to create our environment from bare metal, consistency is still really important. We are, therefore, using scripts to ensure that we follow a set of fixed, replicable steps for each VM build. By leaving the scripts on the VM when we’ve finished, we also have some documentation of what is configured. We are also trying, where possible, to ensure that if a script is re-run it won’t cause havoc by creating duplicate configurations or corrupting existing ones.
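
As an example of what ‘safe to re-run’ means in practice, each step is wrapped in a check of whether its work has already been done. The sketch below uses a made-up file share rather than anything from our real builds:

# Sketch of the re-runnable pattern: test for existing state before changing it.
$sharePath = 'D:\Logs'
$shareName = 'SPLogs'

if (-not (Test-Path $sharePath)) {
    New-Item -Path $sharePath -ItemType Directory | Out-Null
}

if (-not (Get-SmbShare -Name $shareName -ErrorAction SilentlyContinue)) {
    New-SmbShare -Name $shareName -Path $sharePath -FullAccess 'DOMAIN\SPFarm' | Out-Null
}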

Each of our VMs has been generated in SCVMM using templates we built for the two base operating systems. That avoids differences in OS install and allows me to get a new VM running in minutes with very little involvement. By scripting our steps, should things go badly wrong we can throw away a VM and run through those steps again. It’s getting trickier as we move forward, though; rebuilding our SQL boxes once we have SharePoint installed would be a pain.

Frustration begets good practice

In fact, one of the most useful things to come out of this project so far is a growing set of robust PowerShell modules that perform key functions for us. They are things we had scripted already, but in the past these were scripts we edited and ran manually as part of install procedures; that human intervention meant we could get away with simpler scripts. This week I have been shifting some of the things those scripts do into functions. The functions are much more complex, as they carefully check for success and failure at every step. However, the end result is a separation between the function that does the work and the parameters that change from job to job. The scripts we will create for any given installation are now simpler and are paired with an appropriate module of functions.
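
To illustrate the split, here is a simplified sketch of the pattern: the module owns a function that validates each step, and the per-installation script is reduced to parameters plus a call. The function name and values are illustrative, not one of our actual modules.

# Module function: does the work and checks for success/failure at each step
function Add-WebSiteBinding {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)][string]$SiteName,
        [Parameter(Mandatory)][string]$HostHeader,
        [Parameter(Mandatory)][string]$IPAddress
    )

    Import-Module WebAdministration -ErrorAction Stop

    if (-not (Test-Path "IIS:\Sites\$SiteName")) {
        throw "Site '$SiteName' does not exist; cannot add binding."
    }

    if (Get-WebBinding -Name $SiteName -HostHeader $HostHeader) {
        Write-Verbose "Binding for $HostHeader already present; nothing to do."
        return
    }

    New-WebBinding -Name $SiteName -Protocol http -Port 80 -IPAddress $IPAddress -HostHeader $HostHeader
}

# Per-installation script: just the parameters that change from job to job
Add-WebSiteBinding -SiteName 'SharePoint - 80' -HostHeader 'intranet.domain.local' -IPAddress '192.168.23.254' -Verbose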

Deciding where to spend time

Many will read this blog and raise their hands in despair at the time we are spending to build this environment. Surely the scripted approach is better? Interestingly, our developers would disagree. They want to be able to get a new environment up and running quickly. The truth is that our Lab/SCVMM solution can push a new multi-server rig out and have it live and usable far quicker than we could do with bare metal scripts. The potential for failure using scripts if things don’t happen in exactly the right order is quite high. More importantly, if devs are sat on their hands then billable time is being wasted. Better to spend the time up front to give us an environment with a long life.

Dev isn’t production, except it is, sort of…

The crux of this is that we need to build a production-grade environment. Multiple times. With a fair degree of variation each time. If I was deploying new servers to production my AD and network infrastructure would already be there. I could script individual roles for new servers. If I was building a throwaway test or training rig where adherence to best practice wasn’t critical then I could use the shortcuts and tricks that allow scripted builds.

Development projects run into difficulties when the dev environment doesn’t match the rigour of production. We’ve had issues with products like SharePoint, where development machines have used a simple next-next-finish wizard approach to installation. Things work in that kind of installation that fail in a production best-practice installation. I’m not jumping for joy over the time it’s taking to build our new rigs, but right now I think it’s the best way.

Links from presentation on Server 2012 R2

Thanks to all who attended the ReBuild and TechEd revisited event today. I promised that I would post the links from the final slide to this blog so you can all start evaluating Server 2012 R2 and System Center 2012 R2.

Download and evaluate the Preview software

http://www.microsoft.com/en-us/server-cloud/evaluate/trial-software.aspx

Refer to additional Windows Server 2012 R2 resources

http://www.microsoft.com/en-us/server-cloud/windows-server/windows-server-2012-r2.aspx

Windows Server 2012 R2 on TechNet

http://www.Microsoft.com/technet

Refer to additional System Center 2012 R2 resources

http://www.microsoft.com/en-us/server-cloud/system-center/system-center-2012-r2.aspx

System Center marketplace

http://systemcenter.pinpoint.microsoft.com

Server and Cloud Blog

http://blogs.technet.com/server-cloud

‘TF400499: You have not set your team field’ when trying to update Team Settings via the TFS API

I have recently been automating TFS admin processes such as creating a new team within an existing team project. The TFS team is now the primary means we use to segment work, and we need to create new teams fairly often, so automation makes good sense.

As far as I can see, there are no command line tools, like TF.EXE or WITADMIN.EXE, to do most of the team operations; they are usually browser based. So I have been using PowerShell and the TFS API for the automation.

I hit a problem when trying to save the team’s backlog iteration, iteration paths etc. for a newly created team using the SetTeamSettings method. I was seeing:

Exception calling "SetTeamSettings" with "2" argument(s): "TF400499: You have not set your team field."
At C:\Projects\tfs2012\TFS\PowerShell\Set-TfsTeamConfig.ps1:37 char:1
+ $configSvc.SetTeamSettings($configs[0].TeamId , $configs[0].TeamSettings)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : SoapException

This was strange, as I had previously tested my PowerShell script file with hard-coded values to update an existing team without error. When I inspected the parameters being passed, all looked OK: I had set a value for the team field, and the read-only property defining where to store the team data was correct (in the AreaPath).

After far too long I realised the problem: I had set the team field to the correct value, e.g. ‘My project\My team’, but I had not created this area path before trying to reference it. Once I had created the required area path, my script worked as expected.
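
For completeness, here is a minimal sketch of the fix in PowerShell against the TFS 2012 client object model. The collection URL, project and team names are placeholders, and $configSvc/$configs are the same objects as in the error above; treat this as an outline of the approach rather than production code.

# Sketch only: create the team's area path before saving the team settings.
Add-Type -AssemblyName 'Microsoft.TeamFoundation.Client'
Add-Type -AssemblyName 'Microsoft.TeamFoundation.Common'

$tpc = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection(
    [Uri]'http://tfsserver:8080/tfs/DefaultCollection')

# The common structure service manages area and iteration nodes
$css = $tpc.GetService([Microsoft.TeamFoundation.Server.ICommonStructureService4])

$project  = $css.GetProjectFromName('My project')
$areaRoot = $css.ListStructures($project.Uri) |
                Where-Object { $_.StructureType -eq 'ProjectModelHierarchy' }

# Create the area node for the new team if it doesn't already exist
try {
    $css.GetNodeFromPath('\My project\Area\My team') | Out-Null
}
catch {
    $css.CreateNode('My team', $areaRoot.Uri) | Out-Null
}

# Now the team field value 'My project\My team' is valid and the save succeeds
$configSvc.SetTeamSettings($configs[0].TeamId, $configs[0].TeamSettings)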

So the TF400499 error is a little confusing: it does not mean ‘not set’ but ‘not set to a valid value’.

Enabling Modern Apps

I’ve just finished presenting my talk on “Successfully Adopting the Cloud: TfGM Case Study” and there were a couple of questions that I said I would clarify.

1. What is the limit on the number of subscriptions per Service Bus topic? The answer is 2000. Further details can be found at: http://msdn.microsoft.com/en-us/library/windowsazure/ee732538.aspx

2. What are the differences between Windows Azure SQL Database and SQL Server 2012? The following pages provide the details:

Supported T-SQL: http://msdn.microsoft.com/en-us/library/ee336270.aspx

Partially supported T-SQL: http://msdn.microsoft.com/en-us/library/ee336267.aspx

Unsupported T-SQL: http://msdn.microsoft.com/en-us/library/ee336253.aspx

Guidelines and Limitations: http://msdn.microsoft.com/en-us/library/ff394102.aspx

3. Accessing the TfGM open data site requires you to register as a developer at: http://developer.tfgm.com

Thanks to everyone who attended; I hope you found it useful.

Moving from Ubuntu to Mint for my TEE demos

I posted a while ago about the problems with DHCP when using a Hyper-V virtual switch over WiFi with an Ubuntu VM; I never found a good solution other than hard-coding IP addresses.

I recently tried Mint 15 and was pleased to see it did not suffer the same problems; it seems happy with DHCP over Hyper-V virtual switches. I think I will use it for any cross-platform TEE demos I need for a while.

Update: I’ve just noticed that I still get a DHCP problem with Mint when I connect to my dual-band Netgear N600 router via 2.4GHz, but not when I use the same router via 5GHz. I just know I am not at the bottom of this problem yet!

Where did that email go?

We use the TFS Alerts system to signal to our teams what state project builds are at. So when a developer changes a build quality to ‘ready for test’, an email is sent to everyone in the team and we make sure the build retention policy is set to keep. Now, this is not the standard behaviour of the TFS build alerts system, so we do all this by calling a SOAP-based web service which in turn uses the TFS API.

This had all been working well until we did some tidying up and patching on our Exchange server. The new behaviour was:

  • Emails sent directly via SMTP by the TFS Alerts system worked
  • Emails sent via our web service (called by the TFS Alerts system) disappeared, but no errors were shown

As far as we could see, emails were leaving our web service (which was running as the same domain service account as TFS, but in its own AppPool) and dying inside our email system, we presumed due to some spam filter rule.

After a bit of digging we spotted the real problem.

If you look at the advanced settings of the TFS Alerts email configuration, it points out that if you don’t supply credentials for the SMTP server it passes those of the TFS service process.

image

Historically our internal SMTP server had allowed anonymous posting, so this was not an issue, but after the tidy-up it now required authentication, so this setting became important.

We thought this should not be an issue, as the TFS service account was correctly registered in Exchange and it was working for the TFS-generated alert emails. However, on checking the code of the web service we noticed a vital missing line: we were not setting the credentials on the message, so it was being sent anonymously and the email was being blocked.

using (var msg = new MailMessage())
{
    msg.To.Add(to);
    msg.From = new MailAddress(this.fromAddress);
    msg.Subject = subject;
    msg.IsBodyHtml = true;
    msg.Body = body;

    using (var client = new SmtpClient(this.smptServer))
    {
        // The vital missing line: send using the AppPool's (TFS service
        // account's) credentials rather than anonymously
        client.Credentials = CredentialCache.DefaultNetworkCredentials;
        client.Send(msg);
    }
}

Once this line was added and the web service redeployed, it worked as expected again.

Statistics, Intuition and Monty Hall

One of the things that makes data science hard is that it has a foundation in statistics, and one of the things that makes statistics hard is that it can run counter-intuitively. A great illustration of that is the Monty Hall Problem. Monty Hall was a US game show host who presented a show called “Let’s Make a Deal” and the Monty Hall Problem is modelled on that show; it goes something like this:

As a contestant, you are presented with three doors. Behind one of the doors is a car and behind the other two there are goats. You are asked to pick a door. Monty will then open one of the other doors, revealing a goat and will then ask you if you want to swap your pick to the remaining closed door. The dilemma is, should you stick or swap, and does it really matter?

When faced with this dilemma, most people get it wrong, mainly because the correct answer runs counter to intuition. Intuition runs something like this:

There are three doors, behind one of the doors there is a car. Each door is equally likely, therefore there is a 1/3 chance of guessing correctly at this stage.

Monty then opens one of the remaining doors, showing you a goat. There are now two doors left, one containing a car, the other a goat, each are equally likely, therefore the chances of guessing correctly at this stage are 50:50, but since the odds are equal there is no benefit, nor harm, in switching and so it doesn’t matter if you switch or not.

That’s the intuitive thinking, and it’s wrong. In fact if you switch, you will win twice as often as you lose. What! I hear you say. (Literally, I can hear you say that). Yeah I know, it’s hard to believe isn’t it? See, I told you it was counter-intuitive. So, let me prove it to you.

Firstly, let’s state some assumptions. Sometimes these assumptions are left implied when the problem is stated, but we’ll make them explicit here for the purposes of clarity. Our assumptions are:

1. 1 door has a car behind it.
2. The other 2 doors have goats behind them.
3. The contestant doesn’t know what is behind each door.
4. Monty knows where the car is.
5. The contestant wants to win the car not the goat. (Seems obvious but, you know…)
6. Monty must reveal a goat.
7. If Monty has a choice of which door to open, he picks with equal likelihood.

Now let’s have a concrete example to work through. Let’s say you are the contestant and you pick door 1, then Monty shows you a goat behind door 2. The question now is should you swap to door 3 or stick with door 1, and I say you’ll win twice as often as you’ll lose if you swap to door 3.

Let’s use a tree diagram to work through this example, as we have a lot of information to process:

WP_20130903_001

There are a couple of variables we have to condition on: where the car is, and which door Monty shows us. It’s that second condition that intuition ignores; remember, Monty *must* show us a goat. So looking at the diagram we can see we choose door 1 and the car can be behind doors 1, 2 or 3 with equal probability; so, 1/3, 1/3, 1/3.

Next, let’s condition for the door Monty shows us. So if we pick 1 and the car is behind 1, he can show us doors 2 or 3, with equal likelihood, so we’ll label them 1/2, 1/2. If we pick 1 and the car is behind door 2, Monty has no choice but to show us door 3, so we’ll label that 1, and finally, if we pick 1 and the car is behind 3 then Monty must show us door 2, so again, we’ll label that 1.

Now, we said in our example that Monty shows us door 2, so we must be on either the top branch or the bottom branch (circled in red). To work out the probabilities we just multiply along the branches, so the probability of the first branch is 1/3 × 1/2 = 1/6 and on the bottom branch it’s 1/3 × 1 = 1/3. Having done that, we must re-normalise so that the probabilities add to 1, so we’ll multiply each by 2, giving us 1/3 and 2/3, making 1 in total.

So now, if we just follow the branches along, we see that if we pick door 1 and Monty shows us door 2, there is a 2/3 probability that the car is behind door 3 and only a 1/3 probability that it is behind door 1, so we should swap; if we do so, we’ll win twice as often as we lose.
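
If you prefer a formula to a tree diagram, the same arithmetic is just Bayes’ theorem, conditioning on us having picked door 1 and Monty showing door 2:

\[
P(\text{car}=3 \mid \text{shown}=2)
  = \frac{P(\text{shown}=2 \mid \text{car}=3)\,P(\text{car}=3)}
         {\sum_{d=1}^{3} P(\text{shown}=2 \mid \text{car}=d)\,P(\text{car}=d)}
  = \frac{1 \times \tfrac{1}{3}}{\tfrac{1}{2} \times \tfrac{1}{3} + 0 \times \tfrac{1}{3} + 1 \times \tfrac{1}{3}}
  = \frac{1/3}{1/2} = \frac{2}{3}
\]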

The good thing about living in the age of computers is that we now have the number crunching abilities to prove this kind of thing by brute force. Below is some code to run this simulation 1,000,000 times and then state the percentage of winners:

using System;
using System.Collections.Generic;
using System.Linq;

namespace ConsoleApplication2
{
    // Define a door
    internal class Door
    {
        public bool HasCar { get; set; }
        public int Number { get; set; }
    }

    public class Program
    {
        public static void Main()
        {
            // Create a tally of winners and losers
            List<int> tally = new List<int>();

            // We'll need some random values
            var rand = new Random(DateTime.Now.Millisecond);

            // Run our simulation 1,000,000 times
            for (int i = 0; i < 1000000; i++)
            {
                // First create three numbered doors
                List<Door> doors = new List<Door>();
                for (int j = 1; j < 4; j++)
                {
                    doors.Add(new Door { Number = j });
                }

                // Randomly assign one a car
                doors[rand.Next(0, 3)].HasCar = true;

                // Next the contestant picks one
                Door contestantChoice = doors[rand.Next(0, 3)];

                // Then Monty shows a goat door
                Door montyChoice = doors.Find(x =>
                    x != contestantChoice && !x.HasCar);

                // Then the contestant swaps
                contestantChoice = doors.Find(x =>
                    x != contestantChoice && x != montyChoice);

                // Record a 1 for a win and a 0 for a loss
                tally.Add(contestantChoice.HasCar ? 1 : 0);
            }

            // state winners as a percentage
            Console.WriteLine(tally.Count(x =>
                x == 1) / (float)tally.Count() * 100);
        }
    }
}

When I run this code on my machine (YMMV) I get the following result:

image

Which is pretty much bang on what we predicted.

Well that’s all for this post, until next time, keep crunching those numbers. Smile