But it works on my PC!

The random thoughts of Richard Fennell on technology and software development

Team Foundation Service RTMs

Today at Build 2012 it was announced that https://tfspreview.com has RTM'd as Team Foundation Service on https://tfs.visualstudio.com.

Up until now there has been no pricing information, which had been a barrier for some people I have spoken to, as they did not want to start using something without knowing the future cost.

So to the really good news, as of now

  • It is free for up to 5 users
  • If you have an active MSDN subscription it is also free. So a team of any size can use it as long as they all have MSDN

The announcement said to look out for further price options next year.

Check the full details at Soma's blog

403 and 413 errors when publishing to a local NuGet server

We have an internal NuGet server we use to manage our software packages. As part of our upgrade to TFS2012 this needed to be moved to a new server VM, and I took the chance to upgrade it from 1.7 to 2.1.

The problem

We had had a problem whereby we could publish to the server via a file copy to its underlying Packages folder (a UNC share), but could never publish using the NuGet command line, e.g.

Nuget push mypackage.nupkg -s http://mynugetserver

I had never got around to sorting this out until now.

The reported error if I used the URL above was

Failed to process request. 'Access denied for package 'Mypackage.'.
The remote server returned an error: (403) Forbidden..

If I changed the URL to

Nuget push mypackage.nupkg -s http://mynugetserver/nuget

The error became

Failed to process request. 'Request Entity Too Large'.
The remote server returned an error: (413) Request Entity Too Large..

Important: this second error was a red herring; you don't need the /nuget on the end of the URL.

The solution

The solution was actually simple, and in the documentation, though it took me a while to find it.

I had not specified an API key in the web.config on my server; obviously my access was blocked as I did not supply the shared key. The 413 error just caused me to waste loads of time looking at WCF packet sizes, because I had convinced myself I needed to use the same URL as you enter in Visual Studio > Tools > Options > Package Management > Add Source, which you don't.

Once I had edited my web.config file to add the key (or I could have switched off the requirement as an alternative solution)

  <appSettings>
    <!--
            Determines if an Api Key is required to push\delete packages from the server.
    -->
    <add key="requireApiKey" value="true" />
    <!--
            Set the value here to allow people to push/delete packages from the server.
            NOTE: This is a shared key (password) for all users.
    -->
    <add key="apiKey" value="myapikey" />
    <!--
            Change the path to the packages folder. Default is ~/Packages.
            This can be a virtual or physical path.
    -->
    <add key="packagesPath" value="" />
  </appSettings>

I could then publish using

Nuget push mypackage.nupkg myapikey -s http://mynugetserver
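
As an aside, if you would rather not pass the key on every push, the NuGet command line can store it per source. A hedged example, using the same key and URL as above:

Nuget setApiKey myapikey -Source http://mynugetserver

After that, a plain push without the key should pick the stored key up automatically.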

Nice introduction to the new features of VS2012

If you are looking for a nice introduction to the new features of Visual Studio 2012, I can heartily recommend Richard Banks' 'Visual Studio 2012 Cookbook'.

This book covers a wide range of subjects including the IDE, .NET 4.5 features, Windows 8 development, web development, C++, debugging, async and TFS 2012. This is all done in an easy-to-read format that will get you going with the key concepts, providing samples and links to further reading. A great starting-off point.

There is stuff in the book for people new to any of the subjects, as well as nuggets for the more experienced user. I particularly like the sections on what is not in 2012 but was in previous versions, and what to do about it. This type of information is too often left out of new product books.

So a book that is well worth a look, and as it has been published by Packt there is no shortage of formats to choose from.

Problems re-publishing an Access site to SharePoint 2010

After applying SP1 to a SharePoint 2010 farm we found we were unable to run any macros in an Access Services site; it gave a –4002 error. We had seen this error in the past, but the solutions that worked then did not help. As this site was critical, as a workaround we moved the site to a non-patched SP2010 instance via a quick site collection backup and restore process (see the sketch below). This allowed us to dig into the problem at our leisure.
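
For reference, a minimal sketch of that move using the standard SharePoint 2010 PowerShell cmdlets (the URLs and backup path here are hypothetical, not our actual farm names):

Backup-SPSite http://patchedfarm/sites/accessapp -Path C:\Backups\accessapp.bak
Restore-SPSite http://unpatchedfarm/sites/accessapp -Path C:\Backups\accessapp.bak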

Eventually we fixed the problem by deleting and recreating the Access Services application within SharePoint on the patched farm. We assume some property was changed, corrupted or deleted by the application of the service pack.
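
If you need to do the same, the delete and recreate can be done through Central Administration, or sketched in PowerShell roughly as below. The application name and pool here are assumptions, so check what your farm actually uses before deleting anything:

# remove the existing Access Services application, then recreate it
Get-SPServiceApplication | Where-Object { $_.TypeName -like "*Access*" } | Remove-SPServiceApplication
New-SPAccessServiceApplication -Name "Access Services" -ApplicationPool "SharePoint Web Services Default"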

So we now had a working patched farm, but also a duplicate Access Services site with changed data. We could not just back up and restore this, as other sites in the collection had also changed. It turns out getting this data back onto the production farm took a bit of work, more than we expected. This is the process we used:

  1. Open the Access Services site in a browser on the duplicate server
  2. Select the ‘open in Access’ option; we used Access 2010, which the site had originally been created in
  3. When Access had opened the site, use the ‘save as’ option to save a local copy of the DB. We now had a disconnected local copy on a PC. We thought we could just re-publish this; how wrong we were.
  4. We ran the web compatibility checker expecting no errors, but it reported a couple. In one form and one query, extra column references had been added that referenced the auto-created SharePoint library columns (date and ID stamps, basically). These had to be deleted by hand.
  5. We could then publish back to the production server
  6. We watched as the structure and data were published
  7. Then it errored. On checking the log we saw that it claimed a lookup reference had invalid data (though we could not see the offending rows and it was working OK). Luckily the table in question contained temporary data we could just delete, so we tried to publish again
  8. Then it errored again. On checking the logs we saw it reported it could not copy to http://127.0.0.1 – no idea why it was looking for localhost! Interestingly, if we tried to publish back to another site URL on the non-patched server it worked! Very strange
  9. On a whim I repeated this whole process but using Access 2013 RC, and strangely it worked

So I now had my Access Services site re-published and fully working on a patched farm. That was all a bit too complex for my tastes

Moving to an SSD on Lenovo W520

[Also see http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/22/More-on-HDD2-boot-problems-with-my-Crucial-M4-mSATA.aspx]

I have just reinstalled Windows 8 (again) on my Lenovo W520. This time it was because I moved to a Crucial m4 256Gb 2.5” internal SSD as my primary disk. There is a special slot for this type of drive under the keyboard, so I could also keep my 750Gb Hybrid SATA drive to be used for storing virtual machines.

I had initially planned to back up and restore my previous installation using IMAGEX, as I had all I needed in my PC, but after two days of fiddling I had got nowhere. The problems were:

  • The IMAGEX capture from my hybrid drive to an external disk (only 150Gb of data after I had moved out all my virtual PCs) took well over 10 hours. I thought this was due to using an old USB1 (maybe USB2 at a push) disk caddy, but it was just as slow over eSATA. The restore from the same hardware only took an hour or so. One suggestion made, which I did not try, was to enable compression in the image, as this would reduce the bandwidth needed on the disk connection; it is not as if my i7 CPU could not handle the compression load. (A sketch of the commands involved follows this list.)
  • When the image was restored, we had to fiddle with Bcdedit to get the PC to boot
  • Eventually the Windows 8 SSD-based image came up; you could open the login page with no issues, but got no cursor for a long time and it was sloooow to do anything – I have no idea why.
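
For anyone wanting to try the same imaging route, here is a rough sketch of the commands involved, run from WinPE. The drive letters and image name are hypothetical, and /compress maximum is the untried compression suggestion mentioned above:

REM capture the old system drive to an external disk
imagex /compress maximum /capture C: E:\images\win8.wim "Win8"

REM apply the image to the new SSD (mounted here as C:)
imagex /apply E:\images\win8.wim 1 C:

REM recreate the boot environment (an alternative to fiddling with Bcdedit by hand)
bcdboot C:\Windows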

So in the end I gave up and installed anew; including Visual Studio and Office it took about 30-45 minutes. There were still a couple of gotchas though

  1. I still had to enable the Nvidia Optimus graphics mode in the BIOS, thus enabling both the Intel and Nvidia graphics subsystems. I usually run only on the discrete Nvidia one, as this does not get confused by projectors. If you don’t enable the Intel-based one then the Windows 8 install hangs after installing drivers, before the reboot that then allows you to login and choose colours etc. As soon as this stage is passed you can switch back to discrete graphics as you wish. I assume the Windows 8 media is missing some Nvidia bits that it finds after this first reboot or via Windows Update.
  2. Windows 8 is still missing a couple of drivers for the Ricoh card reader and power management, but these are both available on the http://support.lenovo.com/en_US/ site. You do have to download and install these manually. All the other Lenovo drivers (including the updated audio driver I have mentioned before) come down from Windows Update.

So the moral of the story is reinstall; don’t try to move disk images. Make sure your data is in SkyDrive, Dropbox, SharePoint, source control etc. so it is just applications you are missing, which are quick to sort out. The only painful job I had was sorting out my podcasts, but even that was not too bad.

More fun with creating TFS 2012 SC-VMM environments

Whilst setting up a new SC-VMM based lab environment I managed to find some new ways to fail, above and beyond the ones I have found before.

We needed to build a new environment for testing a CRM application; this needed to have its own DC, an IIS server and a CRM server. The aim was to have this as a network isolated environment, but you have to build the various VMs first.

So we did the following

  • On the Hyper-V hosts managed by our SC-VMM server, create 3 new VMs connected to our corporate LAN
  • Install the OS on the three VMs
  • Make one of them a DC for dev.local
  • Join the others to the DC’s domain dev.local (they are not joined to our corporate domain)
  • On the IIS box add the web server role
  • On the CRM box install Microsoft CRM

So we now have a three box domain that does what we want, but it is not network isolated. We could have used the features of SC-VMM to push these VMs into the library and hence import them into the Lab Management library. However we chose to make sure we could connect to them first as an environment.

So first I tried to create a standard environment, not using SC-VMM. I had to create a local hosts file on the PC running MTM, but once this was done I could verify the environment, so all was OK. I did not actually create the environment, though.

Next I tried to create an SC-VMM based environment, and this is where I hit a problem. I was basically trying to do something I have done before with all our pre-Lab Management test VMs: wrap existing VMs in an environment. When I tried to do this the verification failed, saying it could not connect to any of the VMs. First we made sure file sharing was enabled, firewalls were not blocking, etc. All to no effect.

To cut a long story short, I had a number of issues, mostly down to the reuse of names:

  • The SC-VMM VM names (e.g. LabDC) did not match the actual host names (e.g. DevDC). I had to rename the VMs in SC-VMM to match the names of the hosts in the operating system; see the sketch after this list. (I am still unsure if this is a red herring and not really that important, but I think it is good practice anyway)
  • We had to have a hosts file on the MTM box with the fully qualified names for the three boxes (not just the server names). Note that these hosts entries (or DNS entries if you prefer) are only needed until the environment is built
  • 192.168.200.102 devcrm.dev.local
    192.168.200.103 deviis.dev.local
    192.168.200.104 devdc.dev.local

  • The name DevDC had also been used on another VM running on one of our developers’ Windows 8 Hyper-V setups. This was causing problems when MTM tried to resolve the machine name via SMB (NetBIOS; IP resolution was fine). Switching off this other VM fixed it; we only spotted it by using Wireshark on the PC running MTM (note you have to run the installer in Win7 compatibility mode to get it to work on Windows 8)
  • When entering the login details for the development domain when creating the new environment in MTM, the user ID had to be entered as administrator@dev.local and not dev\administrator
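
For the VM rename mentioned in the first bullet, a hedged sketch using the SC-VMM PowerShell snap-in (the cmdlet behaviour here is an assumption from memory; the rename can equally be done in the VMM admin console):

Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
# rename the SC-VMM record to match the host name inside the operating system
$vm = Get-VM -Name "LabDC"
Set-VM -VM $vm -Name "DevDC"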

Once this was all done I could verify my environment and create it. The TFS agent was installed, but did not connect to the test controller. This is exactly as expected, as detailed in my previous post.

I now have a few choices

  • If I don’t want to network isolate it I can install a Test Controller in the domain
  • I can save each of the three VMs into the SC-VMM library via MTM and create an isolated environment.

So I hope this helps you avoid some of the problems I have seen. I just wish that the MTM environment creation step gave out a better log file, so I don’t have to second guess it or use Wireshark.

Getting a SQL LocalDB to create an ASPNETDB data base without aspnet_regsql

I started working today on a solution I had not worked on for a while. It makes use of an ASP.NET web application as a site to host SharePoint webparts (using Typemock to mock out any troublesome calls). The problem I had was that when I opened this VS2010 solution in VS2012 I could not run up the test web site. As the test web pages have WebPartManager controls, the site needs an ASPNETDB in the App_Data folder to persist the settings data. This is usually auto-created when SQLExpress is installed; the problem is that with VS2012 you get the newer LocalDB, and I am trying to avoid installing SQLExpress.

So the first step was to modify the web.config to point to the right place, by adding

<connectionStrings>
  <clear/>
  <add name="LocalSQLServer" connectionString="Data Source=(LocalDB)\projects;Integrated Security=true;AttachDbFileName=|DataDirectory|ASPNETDB.mdf" providerName="System.Data.SqlClient"/>
</connectionStrings>
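
Note that this connection string assumes a LocalDB instance called projects exists; VS2012 normally creates one, but if not you can create and start it with the SqlLocalDB utility, e.g.

SqlLocalDB create "projects"
SqlLocalDB start "projects"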

I then loaded the web site but got the error

An error occurred during the execution of the SQL file 'InstallMembership.sql'. The SQL error number is -2 and the SqlException message is: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.

after a retry I saw

An error occurred during the execution of the SQL file 'InstallCommon.sql'. The SQL error number is 5170 and the SqlException message is: Cannot create file 'C:\PROJECTS\SABS\SOURCE\SABS\SABSWEBSERVICETESTHARNESS\APP_DATA\ASPNETDB_TMP.MDF' because it already exists. Change the file path or the file name, and retry the operation.
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
Creating the ASPNETDB_5689a053209d438db3622d593ea632fb database...

So I decided to try aspnet_regsql.exe in wizard mode from the .NET 4 framework folder to populate a pre-created DB; this gave the same timeout errors as seen when it was run by the web process.

So finally I tried the following process

  1. Created a new empty DB in the App_Data folder, attached to my LocalDB instance in SQL Server Object Explorer
  2. From the .NET framework folder, loaded and ran the following SQL scripts, in the order they were listed in the folder (a sketch of running them from the command line follows this list)
  3. InstallCommon.sql
     InstallMembership.sql
     InstallPersistSqlState.sql
     InstallPersonalization.sql
     InstallProfile.SQL
     InstallRoles.sql

  4. Made sure my test harness targeted .NET 4
  5. My test harness then loaded
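
For reference, here is a hedged sketch of running those scripts from the command line against the same LocalDB instance and database, rather than loading them into SQL Server Object Explorer. It assumes the SQL Server 2012 version of sqlcmd, which understands LocalDB; repeat the last line for each script in the order above:

cd C:\Windows\Microsoft.NET\Framework\v4.0.30319
sqlcmd -S "(LocalDB)\projects" -d ASPNETDB -i InstallCommon.sql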

Not a great solution, but it got me working, especially as I could find little on ASPNETDB and LocalDB.

TFS Test Agent cannot connect to Test Controller – Part 2

I posted last week on the problems I had had getting the test agents and controller in a TFS2012 Standard Environment talking to each other, and a workaround. Well, after a good few emails with various people at Microsoft and other consultants at Black Marble, I have a whole range of workarounds and solutions.

First a reminder of my architecture; note that this could be part of the problem, as it is all running on a single Hyper-V host. Remember this is a demo rig to show the features of Standard Environments. I think it is unlikely that this problem will be seen in a more ‘realistic’ environment, i.e. one running on multiple boxes.

 

[Architecture diagram: the test controller on the VSTFS server and the test agent on the Server2008 VM, all running on a single Hyper-V host]

 

The problem is that the test agent running on the Server2008 VM should ask the test controller (running on the VSTFS server) to call it back on either its 169.254.x.x address or an address obtained via DHCP from the external virtual switch. However, it is actually requesting a callback on 127.0.0.1, as can be seen in the error log

Unable to connect to the controller on 'vstfs:6901'. The agent can connect to the controller but the controller cannot connect to the agent because of following reason: No connection could be made because the target machine actively refused it 127.0.0.1:6910. Make sure that the firewall on the test agent machine is not blocking the connection.

The root cause

It turns out the root cause of this problem was that I had edited the c:\windows\system32\drivers\etc\hosts file on the test server VM, to add an entry to allow a URL used in CodedUI tests to be resolved to localhost

127.0.0.1   www.mytestsite.com

Solution 1 – Edit the test agent config to bind to a specific address

The first solution is the one I outlined in my previous post: tell the test agent to bind to a specific IP address. Edit

C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\QTAgentService.exe.config

and add a BindTo line with the correct address for the controller to call the agent back on

<appSettings>
    <!-- other bits … -->
    <add key="BindTo" value="169.254.1.1"/>
</appSettings>

The problem with this solution is that you need to remember to edit a config file; it all seems a bit complex!

Solution 2 – Don’t resolve the test URL to localhost

Change the hosts file entry used by the CodedUI test to resolve to the actual address of the test VM e.g.

169.254.1.1   www.mytestsite.com

The downside here is that you need to know the test agent’s IP address, which depending on the system in use could change, and will certainly be different on each test VM in an environment. Again it all seems a bit complex and prone to human error.

Solution 3 – Add an actual loopback entry to the hosts file.

The simplest workaround, which Robert Hancock at Black Marble came up with, was to add an explicit localhost loopback entry to the hosts file alongside the test URL entry

127.0.0.1   localhost
127.0.0.1   www.mytestsite.com

Once this was done the test agent could connect; I did not have to edit any agent config files, or know the address the agent needed to bind to. By far the best solution.

 

So thanks to all who helped get to the bottom of this surprisingly complex issue.