BM-Bloggers

The blogs of Black Marble staff

‘Windows Phone 8.1 Update’ update

I have been running Windows Phone 8.1 Update for a couple of weeks now and have to say I like it. I have not suffered the poor battery life others seem to have reported. Maybe this is a feature of the Nokia 820 not needing as many firmware updates from Nokia (which aren’t available yet), or of it not having such power-hungry features as the larger phones.

The only issue I have had is that I lost an audio channel when using a headset. Initially I was unsure whether it was a mechanical fault with the headphone socket, but I checked the headset was good; it sounded as if the balance had been faded to one side, as you could just hear something faint on the failing channel. Anyway, as is often the case in IT, a reboot of the phone fixed the issue.

VS2013 Update 2 RC–Ouch (Part 1)

So recently the Visual Studio team released the Update 2 RC for Visual Studio 2013. Among other things it has updated project templates, including the Universal Application project template.

This particular feature is something my current development team was interested in: the project we’re currently working on is a Windows 8.1 application, and our client has strongly hinted they want it for Windows Phone 8.1 in the not too distant future.

With that in mind one of our developers downloaded and installed the Update onto his development rig. The projects he added (portable class libraries), as it turned out, had a hard dependency on the Update. This meant no one else on the team could build the solution without first unloading the projects he had added.

This also meant the build agent couldn’t build the solution, since MSBuild couldn’t open the project files. With work having already been done in these projects, we all started rolling our Visual Studio installations forward onto the Update (whilst our Team Lead/Engineering Manager applied it to the build agent).

That’s when the problems started.

I’ve been working on the CodedUI tests for this project for about 8-10 weeks now. They’re all hand coded; I use the CodedUI Test Builder tooling in Visual Studio to have peeks at the various properties of the controls I’m trying to find. After the update this tooling stopped working. I’d open the tool and then… nothing. Checking the event logs on my rig showed that codeduitest.exe had crashed because it could not load any of the test extensions.

I attempted a repair installation, hoping that perhaps some of the assemblies had been corrupted, but to no avail. At this point I decided it would be best to reinstall Visual Studio 2013. My colleague kicked off an installation of VS2013 RTM for me the same evening after I left.

When I got into work the next day VS started and the CodedUI tool started! Great… but then I discovered it was not very functional: it couldn’t pick up any objects. Poking around, I found to my horror that the RTM install had decided to split itself between the root of Program Files (x86) and Program Files (x86)\Visual Studio 12.0.

I made a guess that some of the DLLs the IDE used weren’t where it was expecting, despite the numerous registry keys that should have been pointing Visual Studio to wherever its innards were strewn about. I tried moving the erroneously placed components back into the 2013 folder, but this caused the IDE to fail to even start.

Needless to say I uninstalled it again, then found to my dismay that it would only let me install into the root of Program Files (x86). Reading into it, if the VS installer detects that a dependency of the installation is already installed (I believe it’s the C++ binaries) it forces you to install VS in the same location.

Because VS does not cleanly uninstall, I had no means of getting rid of the pointer that was forcing it into the Program Files (x86) folder, short of going through my registry with regedit and destroying everything labelled Visual Studio 12.0/2013 with extreme prejudice, and even then it wasn’t guaranteed to work. So I had to have my rig wiped and re-imaged, all so I could put the correct tooling back on to do my job properly. Overall I lost about three days of the sprint we were on due to the faffing around trying to get my machine into working order.

You’d think that by this point it couldn’t get any worse. Well, it did, but that’s a separate blog post (see part 2).

The moral of this story isn’t “Don’t install VS2013 Update 2 RC!!”; it should be tested and used by the community, just not, in my opinion, on a production project. The first developer to install the update should have consulted the rest of the team before doing so, and as a team we would have weighed the risks against the gains before deciding whether to install it, the simple reason being that it’s an RC, not the finished product. I can’t help but wonder whether, if we’d waited a couple of months for the RTM version, I would have had the same problems upgrading.

The return of Visual Studio Setup projects - just because you can use them should you?

A significant blocker for some of my customers moving to Visual Studio 2013 (and 2012 previously) has been the removal of Visual Studio Setup Projects; my experience has been confirmed by UserVoice. Well Microsoft have addressed this pain point by releasing a Visual Studio Extension to re-add this Visual Studio 2010 functionality to 2013. This can be downloaded from the Visual Studio Gallery.

Given this release, the question now becomes should you use it? Or should you take the harder road in the short term of moving to Wix, but with the far greater flexibility this route offers going forward?

At Black Marble we decided, when Visual Studio Setup projects were dropped, to move all active projects over to Wix. The learning curve can be a pain, but in reality most Visual Studio Setup projects convert to fairly simple Wix projects. The key advantage for us is that you can build a Wix project on a TFS build agent via MSBuild; not something you can do with a Visual Studio Setup project without jumping through hoops after installing Visual Studio on the build box.
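Because a Wix setup project (.wixproj) is just another MSBuild project, a build agent can compile it like any other part of the solution. As a minimal sketch (the project name is made up, and it assumes the Wix toolset is installed on the agent), from a command prompt on the agent:

msbuild.exe MyInstaller.wixproj /p:Configuration=Release

In practice the .wixproj simply sits in the solution that the TFS build definition already builds.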

That said, I know that the upgrade cost of moving to Wix is a major blocker for many people, and this extension will remove that cost. However, please consider the extension a tool to allow a more staged transition of installer technology, not an end in itself. Don’t let your installers become a nest of technical debt.

All upgraded to the Windows Phone 8.1 Update

My Nokia 820 phone is now updated to 8.1 with the developer preview.


The actual upgrade was straightforward; the only issue was that the Store was down last night, so updating apps could not be done until this morning. This was made more of an issue by the fact I had had to remove all my Nokia Maps and the iPodcast application (and downloaded podcasts) to free up space on the phone to allow the upgrade. Both these apps could only store data on the phone (not the SD card) and thus blocked the upgrade. This lack of space on the actual phone has been a constant issue for me on the Nokia 820.

So what is new and immediately useful to me?

  • You can now store virtually anything on an SDCard, not just music and images
  • The notification bar is great; no need for the connectivity shortcuts any more, and it does so much more
  • And at last podcasting is built back in. The only issue is I am not sure I want the hassle of re-entering all my subscriptions; iPodcast does such a great job of storing them in the cloud, making re-installation or device swaps so easy. Time will tell on that one whether I move or not.

At this point I decided to leave my phone on UK settings, so did not get Cortana enabled; I am just letting others in the office map out any issues that may occur from playing with region settings and the Store.

So now to see what WP8.1 is like to live with…

Where has my picture password sign-in gone on Windows 8?

I have had a Surface 2 for about six months. It is great for watching videos on the train, or a bit of browsing, but I don’t like it for note taking in meetings. This is a shame, as this is what I got it for: a light device with good battery life to take to meetings. What I needed was something I could hand write on in OneNote, an electronic pad. The Surface 2 touch screen is just not accurate enough.

After Rik’s glowing review I have just got a Dell Venue 8 Pro and stylus. I set up the Dell with a picture password and all was OK for a while; I could log in via a typed password or a picture, as you would expect. However, the picture password sign-in option disappeared from the lock/login screen at some point while I was running the numerous updates and application installations I needed.

I am not 100% certain, but I think the issue is that when I configured the Windows 8 Mail application to talk to our company Exchange server I was asked to accept some security settings from our domain. I think this blocked picture password sign-in for non-domain-joined devices. I joined the Dell to our domain (you can do this as it is Atom, not ARM, based, assuming you are willing to do a reinstall with Windows 8 Pro) and this seems to have fixed my problem. I have installed all the same patches and apps and I still have the picture password option.

So roll on the next meeting, to see if I can take reasonable hand-written notes on it and whether OneNote desktop manages to convert them to text.

Handling .pubxml files with TFS MSBuild arguments

With Visual Studio 2012 there were changes in the way Web Publishing worked, the key change being that the publish configuration moved from the .csproj file to .pubxml files under the project’s Properties folder. This allows the profiles to be more easily managed under source control by a team. It does have some knock-on effects though, especially when you start to consider automated build and deployment.

Up to now we have not seen issues in this area; most of our active projects that needed web deployment packages started in the Visual Studio 2010 era, so had all the publish details in the project file, something still supported by later versions of Visual Studio. This meant that if we had three configurations (debug, lab and release), there were three different sets of settings stored in different blocks of the project file. So if you used the /p:DeployOnBuild=True MSBuild argument for your TFS build, and built all three configurations, you got the settings related to the respective configuration in each drop location.

This seems a good system, until you consider that you have built the assemblies three times; in a world of continuous deployment by binary promotion, is this what you want? It is better to build the assemblies once, but have different (or transformed) configuration files for each environment/stage in the release pipeline. This is where a swap to a .pubxml file helps.

You create a .pubxml file by running the wizard in Visual Studio, via right clicking on a project and selecting Publish.


To get TFS build to use a .pubxml file you need to pass its name as an MSBuild argument. So where in the past we would have used the argument /p:DeployOnBuild=True, now we use /p:DeployOnBuild=True;PublishProfile=MyProfile, where there is a .pubxml file at the path

[Project]/properties/PublishProfiles/MyProfile.pubxml

Once this is done your package will be built (assuming that the profile is a Web Deployment Package and not some other form of deployment) and will be available in your drop location. The values you may wish to alter are probably in the [your package name].SetParameters.xml file, which you can alter with whichever transform technology you wish to use, e.g. SlowCheetah or Release Management workflows.

One potential gotcha I hit whilst testing with MSBuild from the command line is that the .pubxml file contains a value for the property <DesktopBuildPackageLocation>. This will be the output path you used when you created the publish profile using the wizard in Visual Studio.

If you are testing your arguments with MSBuild.exe from the command line this is where the output gets built to. If you want the build to behave more like a TFS build (using the obj/bin folders) you can clear this value by passing the MSBuild argument /p:DesktopBuildPackageLocation="".
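Putting it all together, a local test run from a Visual Studio command prompt might look like this (the project and profile names are made-up examples):

msbuild.exe MyWebApp.csproj /p:Configuration=Release /p:DeployOnBuild=True /p:PublishProfile=MyProfile /p:DesktopBuildPackageLocation=""

The same /p: arguments, minus the DesktopBuildPackageLocation override, are what go into the MSBuild arguments field of the TFS build definition.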

You don’t need to worry about this for TFS build definitions, as TFS seems able to work it out and gets the correctly packaged files to the drop location.

Windows 8.1 Update 1 KB2919355 Error: 0x80070070

While I have been pootling about updating machines at home I received an Error: 0x80070070 for the KB2919355 patch. After some digging and a bit of trial and error, the problem turned out to be the need for 4.5 GB of free space to be available, which on some of our smaller tablets was a challenge.

But it is simple enough: clear the space and you are set to go.
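If you want to check whether a device has enough room before kicking the update off, a quick PowerShell one-liner does the job (4.5 GB being the figure that caught me out):

# report free space on the C: drive in GB
"{0:N1} GB free" -f ((Get-PSDrive C).Free / 1GB)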

b.

What I learnt getting Release Management running with a network Isolated environment

Updated 20 Oct 2014 – With notes on using an Action for cross domain authentication

In my previous post I described how to get a network isolated environment up and running with Release Management; it is all to do with shadow accounts. Well, getting it running is one thing; having a useful release process is another.

For my test environment I needed to get three things deployed and tested

  • A SQL DB deployed via a DACPAC
  • A WCF web service deployed using MSDeploy
  • A web site deployed using MSDeploy

My environment was a four VM network isolated environment running on our TFS Lab Management system.


The roles of the VMs were

  • A domain controller
  • A SQL 2008R2 server (Release Management deployment agent installed)
  • A VM configured as a generic IIS web server (Release Management deployment agent installed)
  • A VM configured as an SP2010 server (needed in the future, but its presence caused me issues so I will mention it)

Accessing domain shares

The first issue we encountered was that we needed the deployment agents on the VMs to be able to access domain shares on our corporate network, not just ones in the local network isolated domain. They need to be able to do this to download the actual deployment media. The easiest way I found to do this was to place a NET USE command at the start of the workflow for each VM I was deploying to. This allowed authentication from the test domain to our corporate domain, and hence gave the agent access to the files it needed. The alternatives would have been using more shadow accounts, or cross domain trusts, both things I did not want the hassle of managing.


The run command line activity runs the net command with the arguments use \\store\dropsshare [password] /user:[corpdomain\account]

I needed to use this command on each VM I was running the deployment agent on, so it appears twice in this workflow, once for the DB server and once for the web server.

Updated 20 Oct 2014: After using this technique in a few releases I realised it was a far better idea to have an action do the job. The technique I mentioned above meant the password was in clear text; a parameterised action allows it to be encrypted.

To create such an action (and it can be an action, not a component, as it does not need to know the build location) you need settings along the following lines.

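As a rough sketch (the share name comes from the command above; the parameter names are hypothetical, and the password parameter is the one defined as encrypted so it is no longer stored in clear text):

Command:    net
Arguments:  use \\store\dropsshare __DropsPassword__ /user:__DropsUser__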

Version of SSDT SQL tools

My SQL instance was SQL 2008R2; when I tried to use the standard Release Management DACPAC Database Deployer tool it failed with assembly load errors. Basically the assemblies downloaded as part of the tool deployment did not match anything on the VM.

My first step was to install the latest SQL 2012 SSDT tools on the SQL VM. This did not help, as there was still a mismatch between the assemblies. I therefore created a new tool in the Release Management inventory; this was a copy of the existing DACPAC tool command, but using the current version of the tool assemblies from SSDT 2012.


Using this version of the tools worked; my DB could be deployed/updated.
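For reference, deploying a DACPAC by hand with the SSDT SqlPackage.exe tool looks something like this (the file, server and database names are invented); running this on the VM is a quick way to prove the SSDT assemblies are usable before blaming the Release Management tool definition:

SqlPackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /TargetServerName:SQLVM01 /TargetDatabaseName:MyDatabase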

Granting Rights for SQL

Using SSDT to deploy a DB (especially if you have the package set to drop the DB) does not grant any user access rights.

I found the easiest way to grant the rights the web service AppPool accounts needed was to run a SQL script. I did this by creating a component for my release with a small block of SQL to create DB owners; this is the same technique as used for the standard SQL create/drop activities shipped in the box with Release Management.

The arguments I used for the sqlcmd were -S __ServerName__ -b -Q "use __DBname__ ; create user [__username__] for login [__username__];  exec sp_addrolemember 'db_owner', '__username__';"


Once I had created this component I could pass the parameters needed to add DB owners.
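With the parameters filled in, the call the component makes ends up looking something like this (the server and database names are invented; the user is the AppPool machine account):

sqlcmd -S SQLVM01 -b -Q "use MyAppDb; create user [Proj\ProjIIS75$] for login [Proj\ProjIIS75$]; exec sp_addrolemember 'db_owner', 'Proj\ProjIIS75$';"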

Creating the web sites

This was straightforward; I just used the standard components to create the required AppPools and web sites. It is worth noting that these commands can be run against an existing site; they don’t error if the site/AppPool already exists. This seems to be the standard model with Release Management: as there is no decision (if) branching in the workflow, all tools have to either work or stop the deployment.


I then used the irmsdeploy.exe Release Management component to run the MSDeploy publish on each web site/service.


A note here: you do need to make sure you set the path to the package to be the actual folder the .ZIP file is in, not the parent drop folder (in my case Lab\_PublishedWebsites\SABSTestHarness_Package, not Lab).


Running some integration tests

We now had a deployment that worked. It pulled the files from our corporate LAN and deployed them into a network isolated lab environment.

I now wanted to run some tests to validate the deployment. I chose to use some SQL based tests that were run via MSTest. These tests had already been added to Microsoft Test Manager (MTM) using TCM, so I thought I had all I needed.

I added the Release Management MTM component to my workflow and set the values taken from MTM for test plan and suite etc.


However I quickly hit cross domain authentication issues again. The Release Management component does all this test management via a PowerShell script that runs TCM.exe. This must communicate with TFS, which in my system was in the other domain, so it fails.

The answer was to modify the PowerShell script to also pass some login credentials.


The only change in the PowerShell script was that each time the TCM command is called the /login:$LoginCreds argument is added, where $LoginCreds holds the credentials passed in the form corpdomain\user,password

$testRunId = & "$tcmExe" run /create /title:"$Title" /login:$LoginCreds /planid:$PlanId /suiteid:$SuiteId /configid:$ConfigId /collection:"$Collection" /teamproject:"$TeamProject" $testEnvironmentParameter $buildDirectoryParameter $buildDefinitionParameter $buildNumberParameter $settingsNameParameter $includeParameter
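As a rough sketch (this is not the full Release Management script; the variable names match those in the command above, but the optional parameters are omitted and the surrounding logic simplified), the shape of the change is to accept the credentials as a script parameter and append /login: to every TCM call:

param(
    [string]$tcmExe,       # path to TCM.exe
    [string]$LoginCreds,   # corporate credentials in the form corpdomain\user,password
    [string]$Collection,
    [string]$TeamProject,
    [string]$Title,
    [int]$PlanId,
    [int]$SuiteId,
    [int]$ConfigId
)

# every TCM call now carries /login so it authenticates against the TFS server
# in the corporate domain rather than the isolated test domain
$testRunId = & "$tcmExe" run /create /title:"$Title" /login:$LoginCreds `
    /planid:$PlanId /suiteid:$SuiteId /configid:$ConfigId `
    /collection:"$Collection" /teamproject:"$TeamProject"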
   

An interesting side note is that if you try to run the TCM command at the command prompt you only need to provide the credentials the first time it is run; they are cached. This does not seem to be the case inside the Release Management script: TCM is run three times, and each time you need to pass the credentials.

Once this was in place, and suitable credentials added to the workflow, I expected my tests to run. They did, but 50% failed. Why?

It turns out the issue was that in my Lab Management environment setup I had set the roles of both the IIS server and the SharePoint server to Web Server.

My automated test plan in MTM was set to run automated tests on the Web Server role, so it sent 50% of the tests to each of the available servers. The tests were run by the Lab Agent (not the deployment agent), which was running as the Network Service machine accounts, e.g. Proj\ProjIIS75$ and Proj\ProjSp2010$. Only the former of these had been granted access to the SQL DB (it was the account being used for the AppPool), hence half the tests failed with DB access issues.

I had two options here: grant both machine accounts access, or alter my Lab Environment. I chose the latter and put the two boxes in different roles.


I then had to load the test plan in MTM so it was updated with the changes.


Once this was done my tests then ran as expected.

Summary

So I now have a Release Management deployment plan that works for a network isolated environment. I can run integration tests, and will soon add some CodedUI ones; it should only be a case of editing the test plan.

It is an interesting question how well Release Management, in its current form, works with Lab Management when using SCVMM/network isolated environments; it is certainly not its primary use case, but it can be done, as this post shows. It certainly provides more options than the TFS Lab Management build template we used to use, and it does provide an easy way to extend the process to manage deployment to production.

Fix for ‘Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid’ errors on TFS 2013.2 build

When working with web applications we tend to use MSDeploy for distribution. Our TFS build box, as well as producing a _PublishedWebsite copy of the site, produces the ZIP packaged version we use to deploy to test and production servers via PowerShell or IIS Manager.
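The package drop also contains a generated [site].deploy.cmd and [site].SetParameters.xml alongside the ZIP, so a PowerShell (or command prompt) deployment to a remote server can be as simple as the following sketch (the site name, server and credentials are made-up examples):

.\MyWebApp.deploy.cmd /Y /M:https://webserver01:8172/msdeploy.axd /U:corpdomain\deployuser /P:Passw0rd

Using /T instead of /Y performs a trial run without making any changes.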

To create this package we add the MSBuild arguments /p:CreatePackageOnPublish=True /p:DeployOnBuild=true /p:IsPackaging=True to the build definition.


This had been working fine until I upgraded our TFS build system to 2013.2. Any build queued after this upgrade that builds MSDeploy packages gives the error:

C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.targets (3883): Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid.)

If I removed the /p:DeployOnBuild=true argument the build was fine; just no ZIP package was created.

After a bit of thought I realised that I had also upgraded my PC to 2013.2 RC, where the publish options for a web project are more extensive, giving more options for Azure.

So I assumed the issue was a mismatch between MSBuild and the targets files, which were missing these new options. I replaced the contents of C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web on my build box with the version from my upgraded development PC, and my build started working again.
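One way to do that copy (the build box name is made up; note that robocopy /MIR makes the target an exact mirror of the source, so take a backup of the original folder first):

robocopy "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web" "\\buildbox\c$\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web" /MIR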

It seems there are some extra parameters set in the newer version of the build targets. Let’s see if it changes again when Visual Studio 2013.2 RTMs.