BM-Bloggers

The blogs of Black Marble staff

Getting started with Release Management with network isolated Lab Management environments

Our testing environments are based on TFS Lab Management. Historically we have managed deployment into them manually (or at least via a PowerShell script run manually) or using TFS Build. I thought it time I at least tried to move over to Release Management.

The process to install the components of Release Management is fairly straightforward; there are wizards that ask little other than which account to run as:

  • Install the deployment server, pointing at a SQL instance
  • Install the management client, pointing at the deployment server
  • Install the deployment agent on each box you wish to deploy to, again pointing it at the deployment server

I hit a problem with the third step. Our lab environments are usually network isolated, hence each can potentially be running its own copy of the same domain. This means the connection from the deployment agent to the deployment server is cross-domain. We don’t want to set up cross-domain trusts because:

  1. cross-domain trusts are a pain to manage
  2. as we have multiple copies of environments, there is more than one copy of some domains – all very confusing for cross-domain trusts

So this means you have to use shadow accounts, as detailed in MSDN for Release Management. The key with this process is to make sure you manually add the accounts in Release Management (step 2) – I missed this at first as it differs from what you usually need to do.

To resolve this issue, add the Service User accounts in Release Management. To do this, follow these steps:

    1. Create a Service User account for each deployment agent in Release Management (a command-line sketch follows these steps). For example, create the following:

      Server1\LocalAccount1
      Server2\LocalAccount1
      Server3\LocalAccount1

    2. Create an account in Release Management, and then assign to that account the Service User and Release Manager user rights. For example, create Release_Management_server\LocalAccount1.
    3. Run the deployment agent configuration on each deployment computer.
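
As a rough sketch, the shadow accounts in step 1 can be created on each deployment target (and matched on the Release Management server) from an elevated command prompt along these lines – the password shown is a placeholder and must be identical on every machine, and the deployment agent account typically needs to be a local administrator on the target:

net user LocalAccount1 Pa55w.rd1234 /add /expires:never
net localgroup Administrators LocalAccount1 /add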

However I still had a problem: I entered the correct details in the deployment agent configuration client, but the configuration failed with an error.


The logs showed

Received Exception : Microsoft.TeamFoundation.Release.CommonConfiguration.ConfigurationException: Failed to validate Release Management Server for Team Foundation Server 2013.
   at Microsoft.TeamFoundation.Release.CommonConfiguration.DeployerConfigurationManager.ValidateServerUrl()
   at Microsoft.TeamFoundation.Release.CommonConfiguration.DeployerConfigurationManager.ValidateAndConfigure(DeployerConfigUpdatePack updatePack, DelegateStatusUpdate statusListener)
   at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)

A quick look using Wireshark showed it was trying to access http://releasemgmt.mydomain.com:1000/configurationservice.asmx. If I tried to access this URL in a browser I got:

Server Error in '/' Application.
--------------------------------------------------------------------------------

Request format is unrecognized.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Web.HttpException: Request format is unrecognized.

Turns out the issue was that I needed to run the deployment agent configuration tool as the shadow user account, not as any other local administrator.

Once I did this the configuration worked and the management console could scan for the client. So now I can really start to play…
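
As an aside, rather than logging in to each target box as the shadow account, launching the configuration tool under that account with runas should achieve the same thing (the path is a placeholder for wherever the deployment agent configuration tool is installed, and as the tool needs administrative rights UAC may still get in the way):

runas /user:Server1\LocalAccount1 "<path to the deployment agent configuration tool>"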

Getting ready for Global Windows Azure Bootcamp 2

It’s a busy week. I’m speaking at the Black Marble-hosted GWAB2 event this Saturday, along with Steve Spencer and Andy Westgarth. Richard and Robert will also be on hand, which means between us we should be able to cover questions on much of the newly re-monikered Microsoft Azure.

I’ll be running through IaaS, Azure AD and looking at hybrid cloud solutions from an IT perspective while Steve and Andy talk through the other platform services from a developer point of view.

A better way of using TFS Community Build Extensions StyleCop activity so it can use multiple rulesets

Background

The TFS Community Build Extensions provide many activities to enhance your build. One we use a lot is the StyleCop activity, which we use to enforce code consistency in projects as part of our check-in and build process.

In most projects you will not want a single set of StyleCop rules to be applied across the whole solution. Most teams will require a higher level of ‘rule adherence’ for production code as opposed to unit test code. By this I don’t mean the test code is ‘lower quality’, just that the rules will differ; e.g. we don’t require XML documentation blocks on unit test methods, as the unit test method names should be documentation enough.

This means each of our projects in a solution may have their own StyleCop settings file. With Visual Studio these are found and used by the StyleCop runner without an issue.

However, on our TFS build boxes what we found was that when we told the build to build a solution, the StyleCop settings file in the same folder as the solution file was used for the whole solution. This meant we saw a lot of false violations, such as unit tests with no documentation headers.

The workaround we have used is not to tell the TFS build to build a solution, but to build each project individually (in the correct order). By doing this the StyleCop settings file in each project folder is picked up. This is an OK solution, but it does mean you need to remember to add new projects and remove old ones as the solution matures. Easy to forget.

Why is it like this?

Because of this, we have had a task on our engineering backlog to update the StyleCop activity so it did not use the settings file from the root solution/project folder (or any single named settings file you specified).

I eventually got around to this, mostly due to new solutions being started that I knew would contain many projects and potentially have a more complex structure than I wanted to manage by hand within the build process.

The issue is that in the activity a StyleCop console application object is created and run. This takes a single settings file and a list of .cs files as parameters, so if you want multiple settings files you need to create multiple StyleCop console application objects.

Not a problem, I thought – nothing that adding a couple of activity arguments and a foreach loop can’t fix. I even got as far as testing the logic in a unit test harness, far easier than debugging in a TFS build itself.

It was then I realised the real problem: it was the StyleCop build activity documentation, and I only have myself to blame here as I wrote it!

The documentation suggests a way to use the activity:

  1. Find the .sln or .csproj folder
  2. From this folder load a settings.stylecop file
  3. Find all the .CS files under this location
  4. Run StyleCop

It does not have to be this way; you don’t need to edit the StyleCop activity, just put in a different workflow.

A better workflow?

The key is finding the settings files, not the solution or project files. So if we assume we are building a single solution, we can use the following workflow:

  1. Using the base path of the .sln file, do a recursive search for all *.stylecop files
  2. Loop over this set of .stylecop files
  3. For each one, do a recursive search for .cs files under its location
  4. Run StyleCop for this settings file against the source files below it
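
To make that loop concrete, here is a minimal PowerShell sketch of the same logic. It is an illustration only – the real implementation is a TFS build workflow, and Invoke-StyleCopScan is a hypothetical stand-in for however you invoke the StyleCop runner from the activity:

$solutionFile = "C:\src\MySolution\MySolution.sln"   # placeholder path
$solutionDir = Split-Path -Parent $solutionFile

# 1. find every StyleCop settings file below the solution folder
$settingsFiles = Get-ChildItem -Path $solutionDir -Filter *.stylecop -Recurse

# 2. loop over the settings files
foreach ($settingsFile in $settingsFiles)
{
    # 3. gather the .cs files that sit below this settings file
    $sourceFiles = Get-ChildItem -Path $settingsFile.DirectoryName -Filter *.cs -Recurse |
                   Select-Object -ExpandProperty FullName

    # 4. run StyleCop once per settings file, against only those source files
    Invoke-StyleCopScan -SettingsFile $settingsFile.FullName -SourceFiles $sourceFiles
}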

 

This solution seems to work. You might get some files scanned twice if you have nested settings files, but that is not an issue for us as we place a StyleCop settings file with each project. We alter the rules in each of these files as needed, from full rulesets to empty ones if we want StyleCop to skip the project.

So now I have it working internally, it is time to go and update the TFS Community Build Extensions documentation.

Learning the Microsoft Platform Succinctly

The good people over at SyncFusion have taken it upon themselves to provide the Microsoft development community with a great set of books to bootstrap you into different Microsoft development technologies.

 

To find these little gems head over to http://www.syncfusion.com/resources/techportal/ebooks

You have to register, but since doing so I have had no email or hassle from SyncFusion, unlike other “free” resources on the net.

Here are some of the titles:

  • Visual Studio 2013 Succinctly
  • Windows Phone 8 Development Succinctly
  • TypeScript Succinctly
  • WPF Succinctly
  • Windows Store Apps Succinctly
  • Data Structures 1 & 2 Succinctly (well, not so succinctly, but such an important subject I think it is fair enough)
  • F# Succinctly

and there are more on Git etc.

SyncFusion have gone to a lot of effort in this and the quality of the books is very good; I would most definitely check them out.

Hopefully they will do one on secure coding soon as well.

 

b.

Migrating to SCVMM 2012 R2 in a TFS Lab Scenario

Last week I moved our SCVMM from 2012 with service pack 1 to 2012 R2. Whilst the actual process was much simpler than I expected, we had a pretty big constraint imposed upon us by Lab Manager that largely dictated our approach.

Our SCVMM 2012 deployment was running on an ageing Dell server. It had a pair of large hard drives that were software mirrored by the OS, and we were using NIC teaming in Server 2012 to improve network throughput. It wasn’t performing that well, however. Transfers from the VMM library hosted on the server to our VM hosts were limited by the speed of the ageing SATA connectors, and incoming transfers were further slowed by the software mirroring. We also had issues where Lab Manager would time out jobs whilst SCVMM was still diligently working on them.

Our grand plan involves migrating our VM hosts to Server 2012 R2. That will give us better network transfers of VMs and allow generation 2 VMs on our production servers (also managed by SCVMM). To get there we needed to upgrade SCVMM, and to do that we had to upgrade our Team Foundation Server. Richard did the latter a little while ago, which triggered the process of SCVMM upgrade.

Our big problem was that Lab Manager is connected extremely strongly to SCVMM. We discovered just how strongly when we moved to SCVMM 2012. If we changed the name of the SCVMM server we would have to disconnect Lab Manager from SCVMM. That would mean throwing away all our environments and imported machines, and I’m not going through the pain of rebuilding all that lot ever again.

I desperately wanted to move SCVMM onto better tin – more RAM, more cores and, importantly, faster disks and hardware mirroring. That led to a migration process that involved the following steps:

  1. Install Server 2012 R2 on our new server. Configure storage to give an OS drive and a data drive for the SCVMM library.
  2. Install the SCVMM pre-requisites on the new server.
  3. Using robocopy, transfer the contents of the SCVMM library to the new server (a sketch follows this list). This needed breaking into blocks as we use data deduplication, and our library share contents are about three times the size of the drive! We could repeat the robocopy script and it would transfer any updated files.
  4. Uninstall SCVMM 2012 from the old server, making sure to keep the database as we do so.
  5. Change the name of the old server, and its IP address.
  6. Change the name of the new server to that of the old one, and change the IP address.
  7. Install SCVMM 2012 R2 onto the new server.
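
For reference, step 3 can be done with something along these lines – a sketch only, with placeholder server names and paths; /COPYALL keeps timestamps and ACLs, and re-running the same command only transfers files that have changed since the last pass:

# copy the library share across in blocks, one top-level folder at a time
$folders = Get-ChildItem -Path \\OLDVMM\MSCVMMLibrary -Directory
foreach ($folder in $folders)
{
    robocopy $folder.FullName "D:\MSCVMMLibrary\$($folder.Name)" /E /COPYALL /R:2 /W:5
}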

Almost all of that worked perfectly. When installing SCVMM onto the new server I wanted to use an existing share for the library, sat on drive d: and called MSCVMMLibrary. Setup refused, saying that the server I was installing to already had a share of that name, but on drive c:. Very true – for various reasons the share was indeed on the c: drive, albeit with storage on a separate partition attached with a mount point.

What to do – I couldn’t remove the existing share as I didn’t have SCVMM installed. I didn’t want to roll back either, as the steps were painful enough to deter me. So I looked in the SCVMM database for the share.

Sure enough, there is a table in there that lists the paths for the library shares for each server (tbl_IL_LibraryShare). There was a row with the name of my SCVMM server and a c:\mscvmmlibrary path for the share. I changed the ‘c’ to a ‘d’ and reran setup. It worked like a charm.
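
For anyone hunting for the same row, a query along these lines will find it (illustration only – the SQL instance name is a placeholder and the database name assumes the SCVMM default of VirtualManagerDB):

Invoke-Sqlcmd -ServerInstance "MYSQLSERVER" -Database "VirtualManagerDB" -Query "SELECT * FROM dbo.tbl_IL_LibraryShare"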

Now, I would not recommend doing what I did, but in the Lab Manager scenario, removing and re-adding that share causes all kinds of trouble as the resources in the library are connected to lab environments. I haven’t had any problems post-upgrade, so it looks like I got away with it. Sadly, this is just another in a long list of issues with the way Lab Manager interacts with SCVMM.

Creating Azure Virtual Networks using Powershell and XML Part 4: Local networks and site-site connectivity

This is part 4 of a series of posts building powershell functions to create and modify Azure Virtual Networks. Previous posts have covered functions to create virtual networks and then delete them. In this part, I’m going to show you functions that will define local networks and configure site-site VPN connectivity between a local and virtual network.

Next on my list is to create functions to delete the local networks and remove the site-site connections. Then I really must look at functions to edit the configuration.

Adding the functionality for local networks also meant that I had to modify the get-azureNetworkXml function to create the LocalNetworkSites XML node if it does not already exist, ready to hold our local network definitions.

The Functions

get-azureNetworkXml

This is an update to the function shown in part 2.

function get-azureNetworkXml
{
    $currentVNetConfig = get-AzureVNetConfig
    if ($currentVNetConfig -ne $null)
    {
        [xml]$workingVnetConfig = $currentVNetConfig.XMLConfiguration
    }
    else
    {
        $workingVnetConfig = new-object xml
    }

    $networkConfiguration = $workingVnetConfig.GetElementsByTagName("NetworkConfiguration")
    if ($networkConfiguration.count -eq 0)
    {
        $newNetworkConfiguration = create-newXmlNode -nodeName "NetworkConfiguration"
        $newNetworkConfiguration.SetAttribute("xmlns:xsd","http://www.w3.org/2001/XMLSchema")
        $newNetworkConfiguration.SetAttribute("xmlns:xsi","http://www.w3.org/2001/XMLSchema-instance")
        $networkConfiguration = $workingVnetConfig.AppendChild($newNetworkConfiguration)
    }

    $virtualNetworkConfiguration = $networkConfiguration.GetElementsByTagName("VirtualNetworkConfiguration")
    if ($virtualNetworkConfiguration.count -eq 0)
    {
        $newVirtualNetworkConfiguration = create-newXmlNode -nodeName "VirtualNetworkConfiguration"
        $virtualNetworkConfiguration = $networkConfiguration.AppendChild($newVirtualNetworkConfiguration)
    }

    $dns = $virtualNetworkConfiguration.GetElementsByTagName("Dns")
    if ($dns.count -eq 0)
    {
        $newDns = create-newXmlNode -nodeName "Dns"
        $dns = $virtualNetworkConfiguration.AppendChild($newDns)
    }

    $localNetworks = $virtualNetworkConfiguration.GetElementsByTagName("LocalNetworkSites")
    if ($localNetworks.count -eq 0)
    {
        $newLocalNetworks = create-newXmlNode -nodeName "LocalNetworkSites"
        $localNetworks = $virtualNetworkConfiguration.AppendChild($newLocalNetworks)
    }

    $virtualNetworkSites = $virtualNetworkConfiguration.GetElementsByTagName("VirtualNetworkSites")
    if ($virtualNetworkSites.count -eq 0)
    {
        $newVirtualNetworkSites = create-newXmlNode -nodeName "VirtualNetworkSites"
        $virtualNetworkSites = $virtualNetworkConfiguration.AppendChild($newVirtualNetworkSites)
    }

    return $workingVnetConfig
}

add-azureVnetLocalNetworkSite

Add-azureVnetLocalNetworkSite takes three parameters: networkName is the name for the new local network; addressPrefix is the address prefix for the local network; and vpnGatewayAddress is the IP address of the local VPN gateway that will establish the VPN tunnel. The function checks that the local network does not already exist and then creates the appropriate XML.

function add-azureVnetLocalNetworkSite
{
    param
    (
        [string]$networkName,
        [string]$addressPrefix,
        [string]$vpnGatewayAddress
    )

    #check if the network already exists
    $siteExists = $workingVnetConfig.GetElementsByTagName("LocalNetworkSite") | where {$_.name -eq $networkName}
    if ($siteExists.Count -ne 0)
    {
        write-Output "Local Network Site $networkName already exists"
        $newNetwork = $null
        return $newNetwork
    }

    #get the parent node
    $workingNode = $workingVnetConfig.GetElementsByTagName("LocalNetworkSites")

    #add the new network node
    $newNetwork = create-newXmlNode -nodeName "LocalNetworkSite"
    $newNetwork.SetAttribute("name",$networkName)
    $network = $workingNode.appendchild($newNetwork)

    #add new address space node
    $newAddressSpace = create-newXmlNode -nodeName "AddressSpace"
    $AddressSpace = $network.appendchild($newAddressSpace)
    $newAddressPrefix = create-newXmlNode -nodeName "AddressPrefix"
    $newAddressPrefix.InnerText = $addressPrefix
    $AddressSpace.appendchild($newAddressPrefix)

    #add the new vpn gateway address
    $newVpnGateway = create-newXmlNode -nodeName "VPNGatewayAddress"
    $newVpnGateway.InnerText = $vpnGatewayAddress
    $network.AppendChild($newVpnGateway)

    #return our new network
    $newNetwork = $network
    return $newNetwork
}

add-azureVnetSiteConnectivity

add-azureVnetSiteConnectivity takes two parameters: networkName is the name of the virtual network and localNetworkName is the name of the local network. It checks to make sure both are defined before creating the appropriate XML to define the connection. In order for the site-to-site VPN configuration to be applied, the virtual network must have a subnet named GatewaySubnet, so the function checks for that too. I already have a function to create subnets, so I can use that to create the subnet. The function also currently specifies a type of IPsec for the connection, as no other options are currently available for site-to-site VPN connections.

function add-azureVnetSiteConnectivity
{
    param
    (
        [string]$networkName,
        [string]$localNetworkName
    )

    #get our target network
    $workingNode = $workingVnetConfig.GetElementsByTagName("VirtualNetworkSite") | where {$_.name -eq $networkName}
    if ($workingNode.Count -eq 0)
    {
        write-Output "Network $networkName does not exist"
        $newVnetSiteConnectivity = $null
        return $newVnetSiteConnectivity
    }

    #check that the network has a GatewaySubnet
    $subNetExists = $workingNode.GetElementsByTagName("Subnet") | where {$_.name -eq "GatewaySubnet"}
    if ($subNetExists.count -eq 0)
    {
        write-Output "Virtual network $networkName has no Gateway subnet"
        $newVnetSiteConnectivity = $null
        return $newVnetSiteConnectivity
    }

    #check that the local network site exists
    $localNetworkSite = $workingVnetConfig.GetElementsByTagName("LocalNetworkSite") | where {$_.name -eq $localNetworkName}
    if ($localNetworkSite.count -eq 0)
    {
        write-Output "Local Network Site $localNetworkName does not exist"
        $newVnetSiteConnectivity = $null
        return $newVnetSiteConnectivity
    }

    #check if the gateway node exists and if not, create
    $gateway = $workingNode.GetElementsByTagName("Gateway")
    if ($gateway.count -eq 0)
    {
        $newGateway = create-newXmlNode -nodeName "Gateway"
        $gateway = $workingNode.appendchild($newGateway)
    }

    #check if the ConnectionsToLocalNetwork node exists and if not, create
    $connections = $workingNode.GetElementsByTagName("ConnectionsToLocalNetwork")
    if ($connections.count -eq 0)
    {
        $newConnections = create-newXmlNode -nodeName "ConnectionsToLocalNetwork"
        $connections = $gateway.appendchild($newConnections)
    }

    #check to make sure our local site reference doesn't already exist
    $localSiteRefExists = $workingNode.GetElementsByTagName("LocalNetworkSiteRef") | where {$_.name -eq $localNetworkName}
    if ($localSiteRefExists.count -ne 0)
    {
        write-Output "Local Site Ref $localNetworkName already exists"
        $newVnetSiteConnectivity = $null
        return $newVnetSiteConnectivity
    }

    #add the local site ref
    $newVnetSiteConnectivity = create-newXmlNode -nodeName "LocalNetworkSiteRef"
    $newVnetSiteConnectivity.SetAttribute("name",$localNetworkName)
    $vNetSiteConnectivity = $connections.appendchild($newVnetSiteConnectivity)
    $newConnection = create-newXmlNode -nodeName "Connection"
    $newConnection.SetAttribute("type","IPsec")
    $vNetSiteConnectivity.appendchild($newConnection)

    #return our new local network site ref
    $newVnetSiteConnectivity = $vNetSiteConnectivity
    return $newVnetSiteConnectivity
}

Using the functions

These functions modify an XML configuration that needs to be held in an object named $workingVnetConfig. Part 2 of this series showed how the functions can be loaded from a PowerShell file and called. Get-azureNetworkXml is required to get the XML configuration object. The functions here can then be used to add local networks and site-to-site connections to that configuration, then save-azureNetworkXml will push the modified configuration back into Azure.
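
As a rough end-to-end sketch of this part’s functions (names and address ranges are placeholders, create-newXmlNode and save-azureNetworkXml come from the earlier parts of the series, and the exact parameters of the save function may differ from what is shown here):

# pull the current configuration from the subscription into the working object
$workingVnetConfig = get-azureNetworkXml

# define the on-premises network and the public IP of its VPN device
add-azureVnetLocalNetworkSite -networkName "HeadOffice" -addressPrefix "192.168.0.0/24" -vpnGatewayAddress "203.0.113.10"

# the virtual network must already contain a subnet named GatewaySubnet
# (it can be added with the subnet function from part 2 of the series)

# link the virtual network to the local network with an IPsec site-to-site connection
add-azureVnetSiteConnectivity -networkName "MyVirtualNetwork" -localNetworkName "HeadOffice"

# push the updated configuration back into Azure
save-azureNetworkXml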

Gary Lapointe to the rescue: Using his Office 365 powershell tools to recover from a corrupted masterpage

I also need to give credit to the Office 365 support team over this. They were very quick in their response to my support incident, but I was quicker!

Whilst working on an Office 365 site for a customer today I had a moment of blind panic. The site is using custom branding and I was uploading a new version of the master page to the site when things went badly wrong. The upload appeared to finish OK, but the dialog that was shown post-upload was not the usual content type/fill-in-the-fields form, just a plain white box. I left it for a few minutes but nothing changed. Unperturbed, I returned to the master page gallery… except I couldn’t. All I got was a white page. No errors, nothing. No pages worked at all – no settings pages, no content pages, nothing at all.

After some screaming, I tried SharePoint designer. Unfortunately, this was disabled (it is by default) and I couldn’t reach the settings page to enable it. I logged a support call and then suddenly remembered a recent post from Gary Lapointe about a release of some powershell tools for Office 365.

Those tools saved my life. I connected to the Office 365 system with:

Connect-SPOSite -Credential "<my O365 username>" -url "<my sharepoint online url>"

Success!

First of all I used Set-SPOWeb to set the masterurl and custommasterurl properties of the failed site. That allowed me back into the system (phew!):

Set-SPOWeb -Identity "/" -CustomMasterUrl "/_catalogs/masterpage/seattle.master"

Once in, I thought all was well, but I could only access content pages. Every time I tried to access the master page library or one of the site settings pages I got an error, even when using seattle.master.

Fortunately, Gary also has a command that will upload a file to a library, so I attempted to overwrite my corrupted masterpage:

New-SPOFile -List "https://<my sharepoint online>.sharepoint.com/_catalogs/masterpage" -Web "/" -File "<local path to my master page file>" -Overwrite

Once I’d done that, everything snapped back into life.

The moral of the story? Keep calm and always have PowerShell ISE open!

You can download Gary’s tools here and instructions on their use are here.

Big thanks, Gary!