BM-Bloggers

The blogs of Black Marble staff

Why is the Team Project drop down in Release Management empty?

The problem

Today I found I had a problem when trying to associate a Release Management 2013.2 release pipeline with a TFS build. When I tried to select a team project the drop down for the release properties was empty.

[screenshot: the empty team project drop down]

The strange thing was that this installation of Release Management had been working fine the previous week. What had changed?

I suspected an issue connecting to TFS, so in the Release Management client’s ‘Managing TFS’ tab I tried to verify the active TFS server linked to Release Management. As soon as I tried this I got the following error stating that the TFS server was not available.

[screenshot: the TFS server not available error]

I switched the TFS URL from HTTPS to HTTP, retried the verification and it worked. Going back to my release properties I could now see the build definitions in the drop down again. So I knew I had an SSL issue.

The strange thing was that we use SSL as our default connection, and none of our developers were complaining that they could not connect via HTTPS.

However, on checking I found that some of our build VMs did have an issue: if I tried to connect to TFS in a browser on those VMs using an HTTPS URL, I got a certificate chain error.

Stranger still, on my PC, where I was running the Release Management client, I could access TFS over HTTPS from both a browser and Visual Studio, yet the Release Management verification failed.

The solution

It turns out we had an intermediate certificate issue with our TFS server. An older DigiCert intermediate certificate had expired over the weekend, and though the new certificate was in place, and had been for a good few months since we renewed our wildcard certificate, on some machines the wildcard certificate insisted on chaining to the old version of the intermediate.

As an immediate fix we ended up deleting the old intermediate certificate manually on the machines showing the error. Once this was done the HTTPS connection worked again.
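
For reference, a minimal PowerShell sketch of that manual clean-up (the thumbprint below is purely illustrative – check the certificate chain in the MMC first and make sure you have the right certificate):

# List intermediate certificates in the local machine store that have already expired
Get-ChildItem Cert:\LocalMachine\CA | Where-Object { $_.NotAfter -lt (Get-Date) } | Format-List Subject, Thumbprint, NotAfter

# Remove the expired intermediate by its thumbprint (value illustrative)
Get-ChildItem Cert:\LocalMachine\CA\1234567890ABCDEF1234567890ABCDEF12345678 | Remove-Item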

It turns out the real culprit was a group policy used to push out intermediate certificates that need to be trusted for some document automation we use. This old policy was pushing the wrong version of the certificate to some server VMs. Once the policy was updated with the correct certificate and pushed out, it overwrote the problem certificate and the problem went away.

One potentially confusing thing here is that the ‘verify the TFS link’ check in Release Management confirms that the Release Management server can see the TFS server, not that the PC running the Release Management client can. It was on the Release Management server that I had to delete the dead certificate (and run a gpupdate /force to pull down the new policy). Hence my confusion that my own PC worked for Visual Studio but not for Release Management.

So I suspect an empty drop down is always really going to mean that the Release Management server cannot see the TFS server for some reason; check certificates, permissions and basic network connectivity.

A week with the Surface Pro 3

Robert unexpectedly (gotta love him!) gave me a surprise present in the form of a Microsoft Surface Pro 3. I’ve now been using it for a week and I thought it was time to put my thoughts into words.

You’ll pry it out of my cold, dead hands

Overall, this is a fantastic bit of kit and it’s the device I have used most at home, for meetings and even sometimes at my desk. The only reason it hasn’t replaced my stalwart ThinkPad X220T is that it has neither the memory nor the storage to run the virtual machines I still need. It’s light, comfortable to hold, has great battery life and the screen is gorgeous.

Specs – good enough?

The model I have is the Core i5 with 8GB of RAM and a 256GB SSD. It’s quick. It also has ample storage for my needs once I remove VMs from the equation. It’s true that Visual Studio hasn’t been installed yet, but I know from conversations with Robert that I am not space-poor.

It’s quick to boot up – quick enough that I rarely bother with connected standby and usually shut down fully. It has handled all of my Office-centric tasks without pause, from Word through PowerPoint to the ever-present OneNote. The screen is a pin-sharp 2160x1440 which is easy to read when typing (although a few apps appear a little blurry because of display scaling). As with many other devices, though, the glossy glass screen can suffer from reflections in very bright sunlight.

I’m also very happy with the keyboard. I’m typing this post in my front room, sat on the sofa with the Pro on my lap. The revised ‘any-position’ kickstand makes it much more comfortable than the Surface and Surface Pro – neither of which I would have endured this process with. The new ‘double fold’ design of the type cover makes it sit at a better angle than its predecessors. Yes, it still flexes on a single hinge when on my lap, but it does feel more stable than before.

The track pad is also much improved. I now have a collection of covers – touch, type and power, along with the new type cover. The power cover is great for battery life but its track pad was an abomination. This one is just fine – it feels good to touch, with enough resistance from the surface texture, and the buttons have a responsive click to them.

Shape and size

The first thing you notice about the Pro 3 is the size of it. It’s no thicker than my original RT and half the thickness of the original Pro. It’s also a different shape, and I think it’s that which makes all the difference. No longer 16:9, the device is very comfortable to use in portrait mode – much better than the Pro, although I tended to use that in portrait too. When you aren’t wanting to type, you naturally stand it on the short edge. Microsoft obviously expects that – the Windows button is on the right hand edge as you look at the tablet when using the type cover.

It’s also really light. Much lighter than the Pro, and it even feels lighter than the RT. I suspect the thickness of the glass helps a great deal, but it’s pretty impressive when you think that they’ve packed the same power as the Pro in to half the weight, half the thickness and managed to increase the battery life at the same time.

Battery Life

I’ve not run exhaustive battery tests, but I can report that I have charged the Surface about three times during the week. It lasts all night when reading web pages, using the Twitter app and other Windows Store applications; it quite happily ran through a four-hour meeting with a copy of Word open (try doing that on a generation 1 Pro), so thus far I’m impressed. I haven’t yet tried to last a full working day on a charge, though.

The Stylus

I was concerned when Microsoft switched from the Wacom technology used by the older Surface Pro to the new N-trig active pen. I have been very pleasantly surprised, however. The inking experience is wonderful. The pen has a very soft feel to it – very unlike the Dell Venue 8 Pro and better even than the Wacom. I do miss being able to erase by flipping the pen, but having used the two-button Dell pen for six months now the switch wasn’t an issue. The accuracy of writing is great. Supposedly the distance between the LCD display and the surface of the glass has been reduced, and I must say that the feel of writing is good – the lines I draw feel closer to the pen tip than on the Dell, certainly.

My one little niggle

I only have one problem, and to be fair it’s pretty minor. One of the things I use the original Pro for is pulling photos off the SD card from my Canon EOS 450D. The new Pro, with its better screen, would be great for that task. Except I can’t, because the SD card slot present on the Pro has gone, replaced by a MicroSD slot in the same place as on the RT. It makes sense for space, but it’s a bit of a pain. Time to try using a MicroSD card with an adapter in my camera, I guess – I don’t really want to carry a USB adapter.

You’d think that I’d miss a physical ethernet port (I don’t – I can use a USB one if I need to) or bemoan the single USB 3 port (if I’m stuck, my USB 3 ethernet dongle is also a hub, and how often do I need more than one USB device when this thing has a keyboard and track pad!), but the SD card slot is the only thing I’ve wished had been present.

A panoply of devices

I’ll admit to being a device fiend. I now have an original Surface RT, a generation one Surface Pro and a Dell Venue 8 Pro. Of those, the RT has been used rarely since I got the Dell, although the Pro was something I would turn to regularly at home to work on, being larger than the Dell and lighter than the X220T (although with the Power Cover on, we could debate that).

Since I got the Pro 3, I haven’t touched anything else. As I said, I still use the X220T, because I have no alternative. Yes, I could run VMs in Azure or on our Hyper-V server, but neither works without an internet connection, and it’s quick and easy to roll forwards and backwards between checkpoints when the VMs are on your own machine.

The fact that I haven’t touched the Dell is perhaps the saddest part of this. I find myself reaching for the Pro 3 every time. I am still using OneNote rather than typing or using paper, but the Pro 3 is nicer to write on than the Dell. Whether I will still use the Dell for customer meetings, where its size means I can leave my usual rucksack of equipment behind, I have yet to find out, but it’s a telling change.

Dig a little deeper – enterprise ready?

Pretty much the first thing I did with the new device was wipe it clean. We have a Windows 8.1 image at Black Marble that we usually push out with SCCM. I grabbed that image, downloaded the Surface Pro driver pack from Microsoft and used dism to install the drivers into the image. I then deployed that image onto the Pro via USB.
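
As a rough sketch, the DISM step amounts to mounting the image, adding the driver pack and committing the change – something like this (all paths and the image index are illustrative):

# Mount the Windows 8.1 image (paths and index illustrative)
dism /Mount-Image /ImageFile:C:\Images\install.wim /Index:1 /MountDir:C:\Mount

# Inject the Surface Pro 3 driver pack into the mounted image
dism /Image:C:\Mount /Add-Driver /Driver:C:\Drivers\SurfacePro3 /Recurse

# Commit the changes and unmount
dism /Unmount-Image /MountDir:C:\Mount /Commit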

Installation was completely painless, even including the automated installation of some firmware updates that were included in the driver pack. All devices were detected just fine and the end result is a Surface Pro 3 with our Enterprise image, domain-joined and hooked up to our DirectAccess service so I can work anywhere.

I have installed Office, but I will admit to not having used Outlook on this yet. Much of my usage has been in tablet mode and I prefer the Windows 8.1 Mail app over Outlook without the keyboard and trackpad. Office 2013 is not yet fully touch-friendly, whatever they try to tell you.

You know what would make it perfect?

You can see this coming, can’t you? Sure, I could get more storage and horsepower with the top-of-the-line model, but there is no point. The only reason I would need those is if I could have my one wish – 16GB of RAM.

It’s a terrible thing – no Ultrabooks come with 16GB of RAM. I don’t need a workstation replacement (like the W530s our consultants use) as I don’t run the number or size of VMs they do. But I, like Richard, do run VMs for demos and project work. 8GB doesn’t cut it; 16GB would be fine. I firmly believe that there is a market for a 16GB Ultrabook. Or a 16GB Pro 3. In all honesty, I think I’d be happy with this as my one device if I could solve the RAM problem. I think that says it all, really.

Automating TFS Build Server deployment with SCVMM and PowerShell

Rik recently posted about the work we have done to automatically provision TFS build agent VMs. This has come out of us having about 10 build agents on our TFS server, all doing different jobs, with different SDKs etc. When we needed to increase capacity for a given build type we had a problem: could another agent run the build? What exactly was on the agent anyway? An audit of the boxes made for horrible reading; they were very inconsistent.

So Rik automated the provisioning of new VMs and I looked at providing a PowerShell script to install the base tools we need on our build agents, knowing that this list is going to change a good deal over time. After some thought, for our first attempt we picked:

  • TFS itself (to provide the 2013.2 agent)
  • Visual Studio 2013.2 – you know you always end up installing it in the end to get SSDT, the SDKs, MSBuild targets etc.
  • WiX 3.8
  • Azure SDK 2.3 for Visual Studio 2013.2 – virtually all our current projects need this. It is actually why we have had capacity issues on the old build agents, as it was only installed on one.

Given this basic set of tools we can build probably 70-80% of our solutions. If we use this as the base for all build boxes we can then add extra tools if required manually, but we expect we will just end up adding to the list of items installed on all our build boxes, assuming the cost of installing the extra tools/SDKs is not too high. Also we will try to auto deploy tools as part of our build templates where possible, again reducing what needs to be placed on any given build agent.

Now the script I ended up with is a bit rough and ready, but it does the job. I think a move to DSC might help this process in the future, but I did not have time to write the custom resources now. I am assuming this script is going to be a constant work in progress as it is modified for new releases and tools. I did make the effort to have each step check whether it needs to be done, allowing the script to be re-run to ‘repair’ a build agent. All the writing to the event log is to make life easier for Rik when working out what the script is doing, which is especially useful as the installs from ISOs are a bit slow to run.

 

# make sure we have a working event logger with a suitable source
Create-EventLogSource -logname "Setup" -source "Create-TfsBuild"
write-eventlog -logname Setup -source Create-TfsBuild -eventID 6 -entrytype Information -message "Create-Tfsbuild started"

# Add the build service account as a local admin; not essential, but it makes life easier for some projects
Add-LocalAdmin -domain "ourdomain" -user "Tfsbuilder"

# Install TFS by mounting the ISO over the network and running the installer.
# The command & ($isodrive + ":\tfs_server.exe") /quiet is run.
# In the function a while loop watches for the tfsconfig.exe file to appear and assumes the installer
# is then done – dirty, but it works – and it lets me use Write-Progress to give some indication of progress.
Write-Output "Installing TFS server"
Add-Tfs "\\store\ISO Images\Visual Studio\2013\2013.2\en_visual_studio_team_foundation_server_2013_with_update_2_x86_x64_dvd_4092433.iso"

Write-Output "Configuring TFS Build"
# clear out any old config – I found this helped avoid error when re-running script

# A System.Diagnostics.ProcessStartInfo object is used to run the tfsconfig command with the argument "setup /uninstall:All"

# ProcessStartInfo is used so we can capture the error output and log it to the event log if required
Unconfigure-Tfs


# Reconfigure, again using tfsconfig, this time with the argument "unattend /configure /unattendfile:config.ini",
# where the config.ini has been created with the tfsconfig unattend /create flag (check MSDN for the details).
Configure-Tfs "\\store\ApplicationInstallers\TFSBuild\configsbuild.ini"

# Install VS2013, again by mounting the ISO and running the installer, with a loop checking for a file to appear
Write-Output "Installing Visual Studio"
Add-VisualStudio "\\store\ISO Images\Visual Studio\2013\2013.2\en_visual_studio_premium_2013_with_update_2_x86_dvd_4238022.iso"

# Install WiX by running the exe with the -q option, via ProcessStartInfo again.
Write-Output "Installing Wix"
Add-Wix "\\store\ApplicationInstallers\wix\wix38.exe"

# Install the Azure SDK using the Web Platform Installer, checking if the Web PI is present first and installing it if needed.
# The Web PI installer lets you ask to reinstall a package; if it is already installed it just ignores
# the request, so you don’t need to check whether the Azure SDK is already present.
Write-Output "Installing Azure SDK"
Add-WebPIPackage "VWDOrVs2013AzurePack"

write-eventlog -logname Setup -source Create-TfsBuild -eventID 7 -entrytype Information -message "Create-Tfsbuild ended"
Write-Output "End of script"
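
For anyone curious what the helper functions look like, here is a minimal sketch of something along the lines of Add-Tfs, based on the comments above (the tfsconfig.exe path and the use of Mount-DiskImage are assumptions for illustration, not the actual implementation):

function Add-Tfs
{
    param([string]$IsoPath)

    # Mount the ISO from the network share and work out its drive letter
    $mount = Mount-DiskImage -ImagePath $IsoPath -PassThru
    $drive = ($mount | Get-Volume).DriveLetter

    # Kick off the quiet TFS install
    & "$($drive):\tfs_server.exe" /quiet

    # Crude completion check – wait for tfsconfig.exe to appear on disk (assumed TFS 2013 path)
    $tfsConfig = "${env:ProgramFiles}\Microsoft Team Foundation Server 12.0\Tools\tfsconfig.exe"
    while (-not (Test-Path $tfsConfig))
    {
        Write-Progress -Activity "Installing TFS" -Status "Waiting for the installer to finish"
        Start-Sleep -Seconds 30
    }

    Dismount-DiskImage -ImagePath $IsoPath
}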

 

So for a first pass this seems to work. I now need to make sure all our builds can use this cut-down build agent; if they can’t, do I need to modify the build template, add more tools to our standard install, or decide that the build needs a special agent definition?

Once this is all done, the hope is that when the TFS build agents all need patching for TFS 2013.x we will just redeploy new VMs or run a modified script to silently do the update. We shall see if this delivers on that promise.

SharePoint 2013: Creating Managed Metadata Columns that allow Fill-In Choices

This is a relatively quick post. There’s a fair bunch of stuff written about creating columns in SharePoint 2013 that use Managed Metadata termsets. However, some of it is a pain to find, and then some. I have had to deal with two frustrating issues lately, both of which boil down to poor SharePoint documentation.

Wictor Wilén wrote the post I point people at for most stuff on managed metadata columns, but this time the internet couldn’t help.

First of all, I wanted to create a custom column which used a termset to hold data. This is well documented. However, I wanted to allow fill-in choices – could I make that work? My termset was open and my XML column definition looked right, but there was no fill-in choice. Updating the column via the web UI, I could turn fill-in on and off with no problem. In the end, I examined the column with PowerShell before and after the change. It turns out (and this is not the only place they do it) that the UI stays the same, but the settings changed in the background are different. For metadata columns the FillInChoice attribute is ignored – you must add a custom property called Open:

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Field
       ID="{b7406e8e-47aa-40ac-a061-5188422a58d6}"
       Name="FeatureGroup"
       DisplayName="Feature Group"
       Type="TaxonomyFieldType"
       Required="FALSE"
       Group="Nimbus"
       ShowField="Term1033"
       ShowInEditForm="TRUE"
       ShowInNewForm="TRUE"
       FillInChoice="TRUE">
    <Customization>
      <ArrayOfProperty>
        <Property>
          <Name>TextField</Name>
          <Value xmlns:q6="http://www.w3.org/2001/XMLSchema" p4:type="q6:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">{164B63F0-3424-4A9B-B6E4-5EC675EF5C75}</Value>
        </Property>
        <Property>
          <Name>Open</Name>
          <Value xmlns:q5="http://www.w3.org/2001/XMLSchema" p4:type="q5:boolean" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">true</Value>
        </Property>
        <Property>
          <Name>IsPathRendered</Name>
          <Value xmlns:q7="http://www.w3.org/2001/XMLSchema" p4:type="q7:boolean" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">false</Value>
        </Property>
      </ArrayOfProperty>
    </Customization>
  </Field>

  <Field
    Type="Note"
    DisplayName="FeatureGroupTaxHTField0"
    StaticName="FeatureGroupTaxHTField0"
    Name="FeatureGroupTaxHTField0"
    Group="Nimbus"
    ID="{164B63F0-3424-4A9B-B6E4-5EC675EF5C75}"
    Hidden="TRUE"
    DisplaceOnUpgrade="TRUE">
  </Field>
</Elements>
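
Incidentally, the before/after comparison mentioned above only needs a couple of lines in the SharePoint Management Shell – something like this (the URL and field name are illustrative):

$web = Get-SPWeb "http://intranet/sites/mysite"
$field = [Microsoft.SharePoint.Taxonomy.TaxonomyField]$web.Fields["Feature Group"]
$field.Open          # true when fill-in choices are allowed – this is the ‘Open’ custom property above
$web.Dispose()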

Whilst we’re on the subject, if you want metadata fields to be correctly indexed by search, the hidden field MUST follow the strict naming convention of <fieldname>TaxHTField0.

When using the Content by Search web part, what gets indexed by search is the text of the term in the column within the content type. However, if you write a search query that wants to match the value of the current page or item, what gets pushed into the query is the ID of the term (a GUID), not the text. It is possible to match the GUID against a different search managed property, but that property only gets created if you name your hidden field correctly. Hat-tip to Martin Hatch for that one, in an obscure forum post I have not been able to find since.

Automating TFS Build Server deployment with SCVMM and PowerShell

Richard and I have been busy this week. It started with a conversation about automating the installation of new build servers. Richard was looking at writing PowerShell to install and configure the TFS build agent, along with all the various SDKs that we use across all our projects. Our current array of build servers have all been built by hand and each has a different set of SDKs to build specific project types. Richard’s aim is to make a single, homogeneous build server configuration so we can then scale out for capacity much more quickly than before.

Enter, stage left, SCVMM. For my part, I’ve been looking at what can be done with VM templates and, more importantly, service templates. It seemed to me that creating a Build Server service in SCVMM with a standard template would allow us to quickly and easily add servers to, and remove servers from, the group.

There isn’t much written about the application/script side of SCVMM server templates, so I thought I’d write up my part.

Note: I’m not a System Center specialist. We use Config Manager, Virtual Machine Manager and Data Protection Manager at Black Marble for our own services rather than being a System Center partner.

Dividing up the problem space

Our final template uses a single PowerShell script to perform the configuration and installation work. Yes, I could have created steps in the service template to install each of the items Richard’s script deployed, but we decided against that. The reasoning is relatively simple: It’s much easier to modify the PowerShell script to add, remove or change the stuff that gets deployed. It’s hard to do that with SCVMM, as far as I can tell.

However, during testing I discovered that adding Windows roles and features through the template was faster than having the various installers Richard called in his script trigger the feature addition.

The division of labour, then, became the following:

SCVMM Tasks:

  • VM Template is created for the target OS. The VM template is configured to automatically join the new machine to our domain and place the machine in the correct OU. It also sets the language correctly. More on that later.
  • Service Template is created for a Build Servers service. It’s a single-tier service that has a minimum of one machine and a maximum of twenty (that maximum is a somewhat arbitrary value on my part). The service template adds the roles and features to the machine definition and runs two script application blocks:
    • The first simply runs xcopy to pull the contents of a folder on a share to the local PC. I do this because running a PowerShell script from a network share doesn’t work – in an interactive session you are prompted before the script executes because it’s from an untrusted location, and I haven’t worked out how to suppress the prompt yet.
    • The second application executes powershell.exe and feeds in the full path to Richard’s PowerShell script, newly copied onto the local disk.

Step 1: VM Template

There’s a wealth of information about creating VM templates on the internet, so I’m not going to cover this in depth. I did, however, want to pull out a couple of things I discovered along the way.

I installed my base VM with the UK English regional settings. When SCVMM converted that into a template via sysprep, the resulting machines came up in US English, which is really annoying. Had I been paying attention, I would have noticed that we already had an unattend.xml file to correct this, which I could have referenced in the VM template settings. However, I found a much more interesting way to address the issue (which of course led me down another rabbit hole).

A bit of research led me to a very interesting post by Gunter Danzeisen. In it he shows how to use PowerShell to modify the UnattendSettings property of a VM template within System Center. This is at the same time both irritating and enlightening.

It’s irritating because I am truly fed up with ‘hidden’ functionality in products that causes me pain. The VM template clearly allows me to specify an unattend.xml file, so why have an internal one as an object property? Moreover, why not simply document its existence and let me modify that property – why do I need two different methods, which then makes me constantly wonder which gets priority?

It’s enlightening, because I can modify that property really easily – it’s simply a collection of name/value pairs that marry against the unattend.xml settings.

There is a bit of a snag with this approach, however, which I’ll come onto in a little while.

Anyway, back to the plot. I followed Gunter’s advice and used PowerShell to set the language values of the internal unattend. I then decided to use the same approach to see if I could add other settings – specifically the destination OU for the server when added to AD.

The PowerShell for the region settings is below:

$template = Get-SCVMtemplate | where {$_.Name -eq "My VM Template"} 
$settings = $template.UnattendSettings; 
$settings.add("oobeSystem/Microsoft-Windows-International-Core/UserLocale","en-GB"); 
$settings.add("oobeSystem/Microsoft-Windows-International-Core/SystemLocale","en-GB"); 
$settings.add("oobeSystem/Microsoft-Windows-International-Core/UILanguage","en-GB"); 
$settings.add("oobeSystem/Microsoft-Windows-International-Core/InputLocale","0809:00000809"); 
Set-SCVMTemplate -VMTemplate $template -UnattendSettings $settings

For reference, removing a setting is easy – simply reference the name of the setting when calling the remove method:

$settings.remove("oobeSystem/Microsoft-Windows-International-Core/UserLocale"); 

I then set the destination OU with the following setting:

$settings.add("specialize/Microsoft-Windows-UnattendedJoin/Identification/MachineObjectOU","OU=FileServers,DC=mydomain,DC=local"); 

The Snag

There is a problem with this approach. If you use an unattend.xml file, you can override that setting when you add the VM template to your service template. However, whilst I could find the UnattendSettings property of the VM template referenced by the service template, I couldn’t modify it.

If we access the Service Template object with:

$svctemplate = Get-SCServiceTemplate | where {$_.name -eq "TFS Build Service"}

We get an object that contains one or more ComputerTierTemplates (depending on how many tiers you gave your service). Each of those has a VMTemplate object that holds the information from our original VM template, and therefore has our UnattendSettings.

$svctemplate.ComputerTierTemplates[0].VMTemplate.UnattendSettings

So, we can grab those settings and modify them. Great. The trouble is, I haven’t found a way to update the stored configuration. Set-SCServiceTemplate doesn’t let me stuff the settings back in the same way as Set-SCVMTemplate does, and you can’t use the latter with a reference to the VMTemplate child of our service template.

Now, I decided that I would create a copy of my original VM template just for build servers, so I could set a different target OU for those machines. In hindsight, I’m not sure whether this is better than overlaying unattend.xml files, and I haven’t experimented with how an unattend.xml might interact with the UnattendSettings yet either. If you try, please let me know how you get on.

Step 2: Service Template

Once I’d got my VM Template sorted, the next step was to create a service. There’s a pretty nice design surface for these that allows you to pick a ‘starter’ template with the right number of tiers, although it’s dead easy to add a new tier.

I started with the Single Machine template, which gave me a single tier. You then need to drag a VM template from a list of available templates onto the tier. The screenshot below shows my single tier Service. The VM template has a single NIC and is configured to connect to my Black Marble network.

[screenshot: the single-tier service in the designer]

The light blue border on the large box (the service tier) indicates it’s selected. That will show the tier properties at the bottom of the design window. In here I have set a minimum and maximum number of servers that can be deployed in the tier.

[screenshot: the tier properties]

Notice also the availability set option – if I needed to ensure that VMs in this service were spread across multiple hosts for resilience I could tick this option. I don’t care where build servers get deployed (they go onto our Lab VM hosts and are effectively ‘disposable’) so I have left this alone.

Open the properties of the tier (right-click or choose View All Properties in the property pane) and a dialog opens with machine properties. In here I have configured the roles and features for the build server (I deliberately haven’t set these in the VM template so I can have fewer, more general VM templates).

[screenshot: roles and features in the machine properties]

Also in here are the Application Configuration settings that cause the VM to run Richard’s PowerShell. The first is a simple one that references cmd.exe to run xcopy. All the settings on this are default.

[screenshot: the xcopy application settings]

The second application runs powershell.exe and passes in a file parameter. This was a source of much frustration – I wanted to use the -ExecutionPolicy parameter to ensure the script ran successfully, but if I added it (as the first parameter, since -File has to be the last one) the whole command failed. As it happens I set the execution policy in Group Policy for all the build servers, but I like the belt and braces approach.
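
In effect, the two application scripts boil down to commands along these lines (share and local paths are illustrative; note the absence of -ExecutionPolicy for the reason above):

cmd.exe /c xcopy "\\store\BuildSetup" "C:\BuildSetup" /s /i /y
powershell.exe -File C:\BuildSetup\Create-TfsBuild.ps1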

The biggest point here is the timeout setting. Richard’s script can take an hour or so to run, so the timeout is a BIG number. The first few times I deployed, the script task failed because of this, although in reality the script itself was still running happily and completed fine.

[screenshot: the PowerShell application settings, including the timeout]

I have changed the advanced settings for this script, though. To make debugging a little easier (the VM is a black box whilst deploying, so it’s tricky to see what’s going on) I have directed standard output and standard error to a file. I’ve also turned off the options to automatically ‘detect’ failures by watching output, errors and exit codes. Richard’s script can be run repeatedly with no ill effect, so I’ve left the restart option set to restart the script. That means that if the deployment job fails and I restart it from SCVMM, the script will be run again.

[screenshot: the advanced settings for the script application]

Once the service template is created, I deployed the service with a single server. We then added a second server by scaling out the tier.

[screenshot: scaling out the tier]

We can do this via the SCVMM console, or using the virtualmachinemanager PowerShell module:

$serviceInstance = Get-SCService -Name "TFS Build Servers"
$computerTier = Get-SCComputerTier -Service $serviceInstance | where { $_.Name -eq "TFS Build Server" }
New-SCVirtualMachine -ComputerTier $computerTier -Name "Build03" -Description "" -ReturnImmediately -ComputerName "Build03" -StartAction "NeverAutoTurnOnVM" -StopAction "SaveVM"

Lessons Learned

It’s been an interesting process overall. I think we’ve made the right choice in using a single, easily modifiable PowerShell script to do the heavy lifting here. Yes, I could have created Application Profiles in SCVMM for each of the items Richard installed, but it would have been harder to make changes (and things like the Azure SDK are updated faster than I can blink!).

I’m still considering whether my choice of adding the domain location to the UnattendSettings in the VM template object was a good one. I’m happy that the language settings should go in there – we never change those. I need to experiment with how adding an unattend.xml file affects the settings in the template object.

Service Templates are a great way to go. When you think about it, most of our IT systems tend to be service-focused. Using service templates for things like SharePoint or CRM is a no-brainer, but so is using them for things like web servers, where we have a number of relatively heterogeneous VMs that host internal web services or sites. Services in SCVMM collect those VMs into manageable groups that can easily be spread across multiple hosts for resilience if required. It’s also much easier to find VMs in services than in a very long list of hosts!

In terms of futures, I’m interested in where Desired State Configuration will take us. Crafting the necessary elements for this project would be extremely complicated with DSC right now, but when all the ducks are in order it should make life much, much easier, and DSC is certainly on my learning list.

Could not load file or assembly 'Microsoft.TeamFoundation.WorkItemTracking.Common, Version=12.0.0.0’ when running a build on a new build agent on TFS 2013.2

I am currently rebuilding our TFS build infrastructure; we have too many build agents that are just too different, and they don’t need to be. So I am looking at a standard set of features for a build agent and the ability to auto-provision new instances to make scaling easier. More on this in a future post…

Anyway, whilst testing a new agent I had a problem. A build that had worked on a previous test agent failed with the error:

Could not load file or assembly 'Microsoft.TeamFoundation.WorkItemTracking.Common, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

The log showed it was failing before it even did a get latest of the files to build; nothing ran on the build agent.

[screenshot: the build log error]

 

Turns out the issue was that the PowerShell script which installs all the TFS build components and SDKs had failed when trying to install the Azure SDK for VS2013: the Web Platform Installer was not present, so when the script tried to use the command line installer to add the package it failed.

I fixed the issue with the Web PI tools and re-ran the command line to install the Azure SDK, and all was OK.
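
The manual re-run amounts to the Web PI command line with the same product ID the script uses – something like this (the WebpiCmd.exe path varies with the Web PI version installed):

& "${env:ProgramFiles}\Microsoft\Web Platform Installer\WebpiCmd.exe" /Install /Products:VWDOrVs2013AzurePack /AcceptEula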

Not sure why this happened; maybe a missing pre-requisite put on by Web PI itself was the issue. I know older versions did have a .NET 3.5 dependency. One to keep an eye on.

MSBuild targeting a project in a solution folder

Whilst working on an automated build where I needed to target a specific project, I hit a problem. I would normally expect the MSBuild argument to be:

/t:MyProject:Build

Where I want to build the project MyProject in my solution and perform the Build target (which is probably the default anyway).

However, my project was in a solution folder. The documentation says that you should be able to use the form

/t:TheSolutionFolder\MyProject:Build

but I kept getting an error saying the project did not exist.

Once I changed to

/t:TheSolutionFolder\MyProject

it worked; the default target was run, which was fine as that was Build, the one I wanted.
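
So the full working command line ended up looking something like this (solution name illustrative):

msbuild MySolution.sln /t:TheSolutionFolder\MyProject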

Not sure why this occurred; maybe I should steer clear of solution folders?

Building Azure Cloud Applications on TFS

If you are doing any work with Azure Cloud Applications there is a very good chance you will want your automated build process to produce the .CSPKG deployment file; you might even want it to do the deployment too.

On our TFS build system, it turns out this is not as straightforward as you might hope. The problem is that the MSBuild publish target creates the files in the $(build agent working folder)\source\myproject\bin\debug folder, unlike the output of the build target, which goes into the $(build agent working folder)\binaries\ folder that gets copied to the build drop location. Hence, though the files are created, they are not accessible to the team along with the rest of the built items.

I have battled to sort this out for a while, trying to avoid the need to edit our customised TFS build process template. This is something we try to avoid where possible, favouring environment variables and MSBuild arguments where we can get away with it. There is no point denying that editing build process templates is a pain point on TFS.

The solution – editing the process template

Turns out a colleague had fixed the same problem a few projects ago and the functionality was already hidden in our standard TFS build process template. The problem was that it was not documented; a lesson for all of us that it is a very good idea to put customisation information in a searchable location, so others can find customisations that are not immediately obvious. Frankly this is one of the main purposes of this blog: somewhere I can find what I did years later, as I won’t remember the details.

Anyway, the key is to make sure the MSBuild publish target uses the correct location to create the files. This is done using a pair of MSBuild arguments in the advanced section of the build configuration:

  • /t:MyCloudApp:Publish – this tells MSBuild to perform the Publish target for just the project MyCloudApp. You might be able to just use /t:Publish if only one project in your solution has a Publish target.
  • /p:PublishDir=$(OutDir) – this is the magic. We pass in the temporary variable $(OutDir); at this point we don’t know the target binary location, as it is build agent/instance specific, and a customisation in the TFS build process template converts this temporary value to the correct path. The combined arguments are shown below.
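
Together, the entry in the MSBuild Arguments box of the build definition ends up as:

/t:MyCloudApp:Publish /p:PublishDir=$(OutDir)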

In the build process template, in the Initialize Variable sequence within Run on Agent, add an If activity.

[screenshot: the If activity in the build process template]

  • Set the condition to MSBuildArguments.Contains("$(OutDir)")
  • Within the true branch add an Assignment activity that sets the MSBuildArguments variable to MSBuildArguments.Replace("$(OutDir)", String.Format("{0}\{1}\\", BinariesDirectory, "Packages"))

This will swap the $(OutDir) for the correct TFS binaries location within that build.

After that it all just works as expected; the CSPKG file etc. ends up in the drop location.

Other things that did not work (prior to TFS 2013)

I had also looked at running a PowerShell script at the end of the build process, or adding an AfterPublish target within the MSBuild process (by adding it to the project file manually) to do a file copy. Both these methods suffered from the problem that when the MSBuild command ran it did not know the location to drop the files into. Hence the need for the customisation above.

Now I should point out that though we are running TFS 2013 this project was targeting the TFS 2012 build tools, so I had to use the solution outlined above, a process template edit. However, if we had been using the TFS 2013 process template as our base for customisation then we would have had another way to get around the problem.

TFS 2013 exposes the current build settings as environment variables. This would allow us to use an AfterPublish MSBuild target, something like:

<Target Name="CustomPostPublishActions" AfterTargets="AfterPublish" Condition="'$(TF_BUILD_DROPLOCATION)' != ''">
  <Exec Command="echo Post-PUBLISH event: Copying published files to: $(TF_BUILD_DROPLOCATION)" />
  <Exec Command="xcopy &quot;$(ProjectDir)bin\$(ConfigurationName)\app.publish&quot; &quot;$(TF_BUILD_DROPLOCATION)\app.publish&quot; /y " />
</Target>

So maybe a simpler option for the future?

The moral of the story: document your customisations and let your whole team know they exist.

Interesting license change for VS Online for ‘Stakeholder’ users

All teams have ‘Stakeholders’: the people who are driving a project forward, who want the new system so they can do their job, but who are often not directly involved in the production or testing of the system. In the past this has been an awkward group to provide TFS access for; if they want to see any detail of the project they need a TFS CAL, which is expensive for the occasional casual viewer.

Last week Brian Harry announced that there would be a licensing change in VSO with a new ‘Stakeholder’ license. It has limitations, but provides the key features this group will need:

  • Full read/write/create on all work items
  • Create, run and save (to “My Queries”) work item queries
  • View project and team home pages
  • Access to the backlog, including add and update (but no ability to reprioritize the work)
  • Ability to receive work item alerts

The best news is that it will be a free license, so there is no monthly cost to have as many ‘Stakeholders’ as you like on your VSO account.

Now most of my clients are using on-premises TFS, so this change does not affect them. However, the same post mentions that the "Work Item Web Access" TFS CAL exemption will be changed in future releases of TFS to bring it in line with the ‘Stakeholder’ license.

So good news all round, making TFS adoption easier and adding more ways for clients to access their ALM information.