BM-Bloggers

The blogs of Black Marble staff

Team Foundation Service RTMs

Today at Build 2012 it was announced that https://tfspreview.com has RTM'd as Team Foundation Service on https://tfs.visualstudio.com.

Up until now there has been no pricing information, which had been a barrier for some people I have spoken to, as they did not want to start using something without knowing its future cost.

So, on to the really good news. As of now:

  • It is free for up to 5 users
  • If you have an active MSDN subscription it is also free. So a team of any size can use it as long as they all have MSDN

The announcement said to look out for further pricing options next year.

Check the full details at Soma's blog

403 and 413 errors when publishing to a local NuGet server

We have an internal NuGet server that we use to manage our software packages. As part of our upgrade to TFS 2012 it needed to be moved to a new server VM, and I took the chance to upgrade it from 1.7 to 2.1.

The problem

We had a problem: we could publish to the server via a file copy to its underlying Packages folder (a UNC share), but could never publish using the NuGet command, e.g.

Nuget push mypackage.nupkg -s http://mynugetserver

I had never had the time to get around to sorting this out until now.

The reported error if I used the URL above was

Failed to process request. 'Access denied for package 'Mypackage.'.
The remote server returned an error: (403) Forbidden..

If I changed the URL to

Nuget push mypackage.nupkg -s http://mynugetserver/nuget

The error became

Failed to process request. 'Request Entity Too Large'.
The remote server returned an error: (413) Request Entity Too Large..

Important: this second error was a red herring; you don't need the /nuget on the end of the URL.

The solution

The solution was actually simple, and it is in the documentation, though it took me a while to find.

I had not specified an API key in the web.config on my server. In hindsight it is obvious: my access was blocked because I did not have the shared key. The 413 errors just caused me to waste loads of time looking at WCF packet sizes, because I had convinced myself I needed to use the same URL as you enter in Visual Studio > Tools > Options > Package Management > Add Source, which you don't.

Once I had edited my web.config file to add the key (or I could have switched off the requirement as an alternative solution):

  <appSettings>
    <!--
            Determines if an Api Key is required to push\delete packages from the server.
    -->
    <add key="requireApiKey" value="true" />
    <!--
            Set the value here to allow people to push/delete packages from the server.
            NOTE: This is a shared key (password) for all users.
    -->
    <add key="apiKey" value="myapikey" />
    <!--
            Change the path to the packages folder. Default is ~/Packages.
            This can be a virtual or physical path.
        -->
    <add key="packagesPath" value="" />
  </appSettings>

I could then publish using:

Nuget push mypackage.nupkg myapikey -s http://mynugetserver

Nice introduction to the new features of VS2012

If you are looking for a nice introduction to the new features of Visual Studio 2012, I can heartily recommend Richard Banks' 'Visual Studio 2012 Cookbook'.

This book covers a wide range of subjects including the IDE, .NET 4.5 features, Windows 8 development, web development, C++, debugging, async and TFS 2012. This is all done in an easy-to-read format that will get you going with the key concepts, providing samples and links to further reading. A great starting-off point.

There is stuff in the book for people new to any of the subjects as well as nuggets for the more experienced user. I particularly like the sections on what is not in 2012 but was in previous versions, and what to do about it. This type of information is too often left out of new product books.

So, a book that is well worth a look, and as it has been published by Packt there is no shortage of formats to choose from.

SharePoint 2013 MySite Newsfeed displays "There was a problem retrieving the latest activity. Please try again later"

This is an issue that we've been bumping up against, and we have seen a number of other users hitting the same problem with their SharePoint 2013 implementations.

When looking at the 'Everyone' tab on a user's MySite, the following message is displayed:

There was a problem retrieving the latest activity. Please try again later.

and the following entries appear in the SharePoint logs:

Failure retrieving application ID for User Profile Application Proxy 'User Profile Service Proxy': Microsoft.Office.Server.UserProfiles.UserProfileApplicationNotAvailableException: UserProfileApplicationNotAvailableException_Logging :: UserProfileApplicationProxy.ApplicationProperties ProfilePropertyCache does not have c2d5c86f-e928-4abf-b353-a8ab7809766c     at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.get_ApplicationProperties()     at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.get_AppID()           0e49dc9b-d278-1089-b021-6e2138766eae

SPMicrofeedFeedCacheService.GetUserProfile() - UserProfileApplicationProxy not available     0e49dc9b-d278-1089-b021-6e2138766eae

To correct this issue, complete the following steps:

  1. Log onto the SharePoint 2013 Central Administration site as a farm administrator
  2. Navigate to 'Manage Service Applications'
  3. Highlight the User Profile Service Application
  4. Click the 'Permissions' ribbon toolbar button:
    (screenshot: UPSA permissions)
  5. Add the account that is used to run the User Profile Service Application and give it full control:
    (screenshot: UPSA connection permissions)
  6. Click OK

At this point it is usual to see the following displayed in the 'Everyone' tab of the user's MySite:

We're still collecting the latest news. You may see more if you try again a little later.

It's worth checking the SharePoint logs at this point to see what additional errors may be reported (note that you will see 'We're still collecting the latest news' if no users have posted anything, so create a post to ensure that you have something waiting in the queue). In my case, I saw the following:

System.Data.SqlClient.SqlException (0x80131904): Cannot open database "SP_Content_MySite" requested by the login. The login failed.  Login failed for user 'Domain\UPSApp'.

This can be solved by completing the following steps:

  1. Open the SharePoint 2013 Management Shell by right-clicking and choosing 'run as administrator'
  2. Issue the following PowerShell commands
    $wa = Get-SPWebApplication http://<MySiteURL>
    $wa.GrantAccessToProcessIdentity("domain\UPSApp")

At this point, the newsfeed should be up and running successfully:

(screenshot: functional 'Everyone' newsfeed)

Visual Studio 2012 Team Explorer and Pending Changes Shortcut

The new Team Explorer in Visual Studio 2012 has taken me some getting used to.

When working with source control in VS2012, there is no longer a 'Pending Changes' view that is independent of Team Explorer. I missed it, because now I have to navigate three menus deep into Team Explorer to find out which files I have modified.

That was until I found the new Solution Explorer filters. Pressing Ctrl+[, Ctrl+P will filter Solution Explorer to show only checked-out files. Press it again to remove the filter. Sweet!

Another interesting shortcut is Ctrl+[, Ctrl+O, which filters Solution Explorer to show only the files you have open.

 


CRM 2011 Fetch-Based Reports Fail with 'rsProcessingAborted'

I recently saw a CRM 2011 instance which had had an issue with SQL Reporting Services. To correct the issues with the Reporting Services server, which was separate from the CRM 2011 server, SQL Reporting Services had been completely reinstalled on that server. Following this action, there were a few steps that needed taking to get reports working again in CRM.

  • The CRM reporting services extensions needed to be reinstalled and patched on the Reporting Services server.
  • The CRM reports needed republishing to the Reporting Services server. This was achieved by running the following command:
    PublishReports.exe <CRMOrganisationName>
    Note: The PublishReports.exe tool can be found in the C:\Program Files\Microsoft Dynamics CRM\Tools folder on the CRM server.
    Note: <CRMOrganisationName> is the organisation name displayed under 'Organizations' in the Microsoft Dynamics CRM Deployment Manager.

Once both of these steps had been taken, some of the reports still didn't work, especially reports that had been generated using CRM's report wizard. The error reported on the report display page was:

Report render failure. Error: An error has occurred during report processing. (rsProcessingAborted)

Kerberos had been set up correctly for the CRM server and had been checked (see KB2590774, but note that the account for which SPNs are set should also be trusted for delegation).

Examination of the CRM logs showed the following errors:

System.ServiceModel.EndpointNotFoundException: Could not connect to net.tcp://CRMServer/CrmSandboxSdkListener-w3wp. The connection attempt lasted for a time span of 00:00:21.0185095. TCP error code 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond XXX.XXX.XXX.XXX:808.

System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond XXX.XXX.XXX.XXX:808 ---> Microsoft.Crm.Reporting.DataExtensionShim.Common.ReportExecutionException: An unexpected error occurred. ---> Microsoft.Crm.Reporting.DataExtensionShim.Common.ReportExecutionException: Could not connect to net.tcp://CRMServer/CrmSandboxSdkListener-w3wp. The connection attempt lasted for a time span of 00:00:21.0185095. TCP error code 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond XXX.XXX.XXX.XXX:808.  ---> Microsoft.Crm.Reporting.DataExtensionShim.Common.ReportExecutionException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond XXX.XXX.XXX.XXX:808

By default, a firewall rule named 'Windows Communication Foundation Net.TCP Listener Adapter (TCP-In)' on port 808 is available in Windows, but it is not activated by the CRM installation. For fetch-based reports to work correctly when Reporting Services is installed on a different server from CRM, this firewall rule needs to be enabled on the CRM server.

Problems re-publishing an Access site to SharePoint 2010

After applying SP1 to a SharePoint 2010 farm we found we were unable to run any macros in an Access Services site; every macro gave a –4002 error. We had seen this error in the past, but the solutions that worked then did not help. As this site was critical, we moved it to an unpatched SharePoint 2010 instance as a workaround, via a quick site collection backup and restore. This allowed us to dig into the problem at our leisure.

Eventually we fixed the problem by deleting and recreating the Access Services application within SharePoint on the patched farm. We assume some property was changed/corrupted/deleted in the application of the service pack.

So we now had a working patched farm, but also a duplicate of the Access Services site with changed data. We could not just backup and restore this site, as other sites in the collection had also changed. It turns out getting this data back onto the production farm took a bit of work, more than we expected. This is the process we used:

  1. Open the Access Services site in a browser on the duplicate server
  2. Select the 'Open in Access' option; we used Access 2010, which the site had originally been created in
  3. When Access had opened the site, use the 'Save As' option to save a local copy of the database. We now had a disconnected local copy on a PC. We thought we could just re-publish this; how wrong we were.
  4. We ran the web compatibility checker expecting no errors, but it reported a couple of them. In one form and one query, extra column references had been added that referenced the auto-created SharePoint library columns (date and ID stamps, basically). These had to be deleted by hand.
  5. We then could publish back to the production server
  6. We watched as the structure and data were published
  7. Then it errored. On checking the log we saw that it claimed a lookup reference had invalid data (though we could not see the offending rows and everything had been working OK). Luckily the table in question contained temporary data we could just delete, so we tried to publish again
  8. Then it errored again. On checking the logs we saw it reported that it could not copy to http://127.0.0.1 – no idea why it was looking for localhost! Interestingly, if we tried to publish back to another site URL on the non-patched server it worked! Very strange
  9. On a whim I repeated the whole process using Access 2013 RC, and strangely it worked

So I now had my Access Services site re-published and fully working on a patched farm. That was all a bit too complex for my tastes.

Moving to an SSD on Lenovo W520

[Also see http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/22/More-on-HDD2-boot-problems-with-my-Crucial-M4-mSATA.aspx]

I have just reinstalled Windows 8 (again) on my Lenovo W520. This time it was because I moved to a Crucial m4 256Gb 2.5” internal SSD as my primary disk. There is a special slot for this type of drive under the keyboard, so I could also keep my 750Gb Hybrid SATA drive to be used for storing virtual machines.

I had initially planned to backup/restore my previous installation using IMAGEX as I had all I needed in my PC, but after two days of fiddling I had got nowhere. The problems were:

  • The IMAGEX capture from my hybrid drive to an external disk (only 150Gb of data after I had moved out all my virtual PCs) took well over 10 hours. I thought this was due to using an old USB1 (maybe USB2 at a push) disk caddy, but it was just as slow over eSATA. The restore from the same hardware only took an hour or so. One suggestion made, which I did not try, was to enable compression in the image, as this would reduce the bandwidth needed on the disk connection; it is not as if my i7 CPU could not handle the compression load.
  • When the images were restored, we had to fiddle with BCDEdit to get the PC to boot
  • Eventually the Windows 8 SSD-based image came up; the login page opened with no issues, but there was no cursor for a long time and it was sloooow to do anything – I have no idea why.

So in the end I gave up and installed anew; including Visual Studio and Office it took about 30-45 minutes. There were still a couple of gotchas though:

  1. I still had to enable the Nvidia Optimus graphics mode in the BIOS, thus enabling both the Intel and Nvidia graphics subsystems. I usually run only on the discrete Nvidia one, as it does not get confused by projectors. If you don't enable the Intel-based one then the Windows 8 install hangs after installing drivers and before the reboot that then allows you to log in and choose colours etc. As soon as this stage is passed you can switch back to discrete graphics as you wish. I assume the Windows 8 media is missing some Nvidia bits that it finds after this first reboot or via Windows Update.
  2. Windows 8 is still missing a couple of drivers for the Ricoh card reader and power management, but these are both available on the http://support.lenovo.com/en_US/ site. You do have to download and install these manually. All the other Lenovo drivers (including the updated audio driver I have mentioned before) come down from Windows Update.

So the moral of the story is to reinstall rather than try to move disk images. Make sure your data is in SkyDrive, Dropbox, SharePoint, source control etc. so it is just applications you are missing, which are quick to sort out. The only painful job I had was sorting out my podcasts, but even that was not too bad.

Many upcoming speaking engagements

I seem to have been doing a lot of presenting lately, and the next few weeks are similarly busy. As a one-stop shop to plug them all, here is a list of upcoming events at which I will be presenting, co-presenting or supporting:

 

  • VMUG Leeds, 25th October 2012
    I am co-presenting with the marvellous Andy Fryer on a range of content around Windows Server 2012 and Hyper-V. It’s the first time I’ve attended a VMUG so I’m looking forward to it!
  • Windows 8 IT Pro Camp Leeds, 13th November 2012
    The rolling thunder of IT Camps is back in Leeds for a day with Windows 8. Andy Fryer and Simon May should be there, with myself and Andrew Davidson chipping in and helping things to run smoothly. The last Leeds camp on Server 2012 was fully booked and really well received by attendees and presenters alike, so make sure you book for what should be a great day.
  • What’s New in SharePoint 2013, 21st November 2012
    When Andy gets back from the SharePoint conference we will make sure that we give you the latest information at the first of this year’s Black Marble events.
  • Architecture Forum in the North, 5th Dec 2012
    Black Marble are once again running the well received Architecture Forum and I will be speaking, once more with Andy Fryer, about how Windows Server 2012 can facilitate pragmatic cloud architectures with a mix of on-premise, hosted private cloud and public cloud hosting.
  • The Tenth Annual Tech Update, 30th January 2013
    See in the new year with our famous Tech Update. This year we’ve split the day into two, focused on IT Managers and Business Decision Makers in the morning and Developers and Technical Decision Makers in the afternoon.

 

I also have a raft of invite-only events I’ll be speaking at over the next few months. As always, please come and say hi and feel free to ask questions!

Windows Azure Queues vs Service Bus Queues

If you have been wondering whether to use the Windows Azure Queues that are part of the Storage service or the queues that are part of the Service Bus, the following MSDN article will give you the full details.

http://msdn.microsoft.com/en-us/library/hh767287(VS.103).aspx

Recent changes in pricing make the choice even harder. There are two specific areas I like that make the Service Bus queue a better offering than the storage queues:

  1. Long connection timeout
  2. Topics

The long connection timeout means that I don't have to keep polling the Service Bus queue for messages. I can make a connection for, say, 20 minutes, and when a message is added to the queue my application immediately receives the data, processes it and then reconnects to the queue to get the next message. After 20 minutes without a message the connection closes in the same way it does when a message is received, except that the message is null; you then just reconnect again for another 20 minutes. This makes your application event driven rather than polling based, and it should be more responsive. You can make multiple connections to the queue this way and load balance in the same way as you would when polling queues.

The following code shows how you can connect with a long poll.

NamespaceManager namespaceManager;
MessagingFactory messagingFactory;
Uri namespaceAddress = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", string.Empty);

namespaceManager = new NamespaceManager(namespaceAddress, TokenProvider.CreateSharedSecretTokenProvider("yourIssuerName", "yourIssuerKey"));
messagingFactory = MessagingFactory.Create(namespaceAddress, TokenProvider.CreateSharedSecretTokenProvider("yourIssuerName", "yourIssuerKey"));

WaitTimeInMinutes = 20;

// check to see if the queue exists. If not then create it
if (!namespaceManager.QueueExists(queueName))
{
    namespaceManager.CreateQueue(queueName);
}

QueueClient queueClient = messagingFactory.CreateQueueClient(queueName, ReceiveMode.PeekLock);

queueClient.BeginReceive(new TimeSpan(0, WaitTimeInMinutes, 0), this.ReceiveCompleted, messageCount);

When a message is received or the 20 minute timeout expires, the ReceiveCompleted delegate is called and a check is made to see whether the message is null before processing it. Once processed, another long-poll connection is made and the process repeats; a sketch of such a handler is shown below. The beauty of this method is that you don’t have to manage timers or separate threads to manage the queue.
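As an illustration, here is a minimal sketch of what that ReceiveCompleted handler might look like. It assumes the queueClient, WaitTimeInMinutes and messageCount variables from the snippet above are class-level members, that the message body is a string, and that ProcessMessage is a placeholder for whatever handling code you need.

private void ReceiveCompleted(IAsyncResult result)
{
    // EndReceive returns null if the long poll timed out without a message arriving
    BrokeredMessage message = queueClient.EndReceive(result);

    if (message != null)
    {
        // Process the message body, then mark it complete (the queue client uses PeekLock mode)
        ProcessMessage(message.GetBody<string>());
        message.Complete();
    }

    // Start another long poll so the event driven loop continues
    queueClient.BeginReceive(new TimeSpan(0, WaitTimeInMinutes, 0), this.ReceiveCompleted, messageCount);
}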

Topics work like queues with multiple subscriptions: each consumer subscribes to the topic and gets its own private queue (a subscription) which receives a copy of every message published to the topic, and each subscription is managed individually. Subscriptions can also apply filters to the messages so that they only receive the messages they are interested in, as in the sketch below.
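For example, here is a rough sketch of creating a topic with a filtered subscription, reusing the namespaceManager and messagingFactory objects from the queue example above; the topic, subscription and property names are purely illustrative.

// Create the topic if it does not already exist
if (!namespaceManager.TopicExists("orders"))
{
    namespaceManager.CreateTopic("orders");
}

// Each subscription behaves like a private queue that receives its own copy of matching messages;
// the SqlFilter means this subscription only sees messages flagged as high priority
if (!namespaceManager.SubscriptionExists("orders", "highpriority"))
{
    namespaceManager.CreateSubscription("orders", "highpriority", new SqlFilter("Priority = 'High'"));
}

// Publish a message with a property the subscription filter can inspect
TopicClient topicClient = messagingFactory.CreateTopicClient("orders");
BrokeredMessage message = new BrokeredMessage("new order");
message.Properties["Priority"] = "High";
topicClient.Send(message);

// Consumers receive from their subscription in the same way as from a queue
SubscriptionClient subscriptionClient = messagingFactory.CreateSubscriptionClient("orders", "highpriority", ReceiveMode.PeekLock);
BrokeredMessage received = subscriptionClient.Receive(TimeSpan.FromMinutes(1));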

Further details of Service Bus topics and queues:

http://www.windowsazure.com/en-us/develop/net/how-to-guides/service-bus-topics/