Six tips when deploying SharePoint 2013 masterpages, page layouts and display templates

I've been hat-swapping again since just before Christmas (which explains the lack of Azure IaaS posts, I'm afraid). I've been working on a large SharePoint 2013 project, most lately customising a number of elements around publishing. Getting those custom elements into SharePoint from my solution raised a number of little snags, most of which were solved by the great internet hive mind. It took me a long time to find some of those fixes, however, so I thought I'd collect them here and reference the original posts where appropriate.

1. Overwrite existing files reliably

This has been an old chestnut for as long as I have been working with SharePoint. Your solution deploys a file to the masterpage gallery or style library; you deploy an updated version and none of your changes are visible, because SharePoint hasn't replaced the file with your new version. In previous versions, careful use of settings like 'Ghostable' in the elements.xml when you deployed the file helped – files that are ghostable generally seem to be updated, unless you manually edit the file and thereby 'unghost' it.

In SharePoint 2013, however, we appear to have a new property that we can specify in our elements.xml for deployable files, ReplaceContent:

<File Path="myfile.aspx" Url="myfile.aspx" Type="GhostableInLibrary" ReplaceContent="TRUE" />

As far as I can tell, this does what it says on the tin: it overwrites existing files when the feature deploys them.

2. Provision Web Parts into page layouts safely as part of a feature

This is one I'd never personally tried before. I've seen many people struggle, pasting web part code into a masterpage or page layout and having problems during deployment. The way to do it (the 'right' way, as far as I know) is to let the feature do it. When you list your masterpage or page layout in the elements.xml you can add a property that deploys a web part, AllUsersWebPart:

<File Path="myfile.aspx" Url="myfile.aspx" Type="GhostableInLibrary" ReplaceContent="TRUE">
  <AllUsersWebPart WebPartZoneID="TopZone" WebPartOrder="0">
    <![CDATA[
      <!-- exported web part XML goes here -->
    ]]>
  </AllUsersWebPart>
</File>

Simply specify the name of the web part zone in your page and the web part will be added during the deploy. The WebPartOrder setting should allow you to define where it appears, although when adding multiple web parts I have had more success setting it to zero for each web part and just getting the order right. As you might have guessed, for multiple web parts, add multiple AllUsersWebPart sections.

But where's the web part, I hear you cry! In that CDATA block, paste the XML for your web part. Getting that is easy – simply export the web part from SharePoint and paste the resulting XML straight in. There are a couple of tweaks you may need to apply, which I'll list next.

3. Substitute &#126; for ~ in paths within web parts in CDATA blocks

This one stumped me for a while and I was fortunate to come across a post by Chris O'Brien that solved it for me. I was trying to add a custom Content By Search web part to a page. That web part had custom control and display templates specified, which reference the current site collection in their path (~sitecollection/_catalogs). The problem is that the tilde gets stripped out by SharePoint when the page is deployed, breaking the setting.

The solution turns out to be one of those typical off-the-wall 'I would never have thought of that!' solutions that crop up all the time with SharePoint: swap the ~ character for its XML entity reference: &#126;.

<property name="GroupTemplateId" type="string">&#126;sitecollection/_catalogs/masterpage/Display Templates/Content Web Parts/MyTemplate.js</property>

4. Use <value> to include content in the Content Editor web part in CDATA blocks

Export a Content Editor web part and you will see that the HTML content that is displayed within it is in the Content element, wrapped in a CDATA block. The problem is that when deploying this web part into the page using the technique above you can’t nest a CDATA block within a CDATA block.

The solution? Change the CDATA wrapper to be the <value> element. The snag? I have found that I need to swap the < and > symbols for their HTML entity counterparts: &lt; and &gt;.

<Content xmlns=""><value>&lt;h2&gt;My Content&lt;/h2&gt;</value></Content>

5. Provision Search Display Templates as draft and publish them with a feature receiver

This one is a bit contentious, as far as I can tell. I derived my (simple) approach from an article by Waldek Mastykarz. The crux of the matter is this: you can edit either the HTML part of a search display template or the JavaScript. Which is the 'correct' way is another matter, though. If you have publishing features enabled then, when you save and publish the HTML file, an event receiver fires and SharePoint generates the JavaScript file. If you don't have publishing enabled, as far as I can tell only the JavaScript files are there and the event receiver doesn't appear to be enabled.

So… which way to jump? Well, in my case I am creating customisations that depend on publishing features, so I decided to deploy just the HTML file and let SharePoint generate the JavaScript. If I needed to use these things without publishing I would probably have extracted the JavaScript from my development SharePoint and deployed that.

The first part of my simple approach is to deploy the files as draft, using the options available to me in elements.xml:

<File Path="MyTemplate.html" Url="MyTemplate.html" Type="GhostableInLibrary" Level="Draft" ReplaceContent="TRUE" />

I then use a fairly simple function that is called by the feature receiver on activation, once per file:

public static void CheckInFile(SPWeb web, string fileUrl)
{
    // get the file
    SPFile file = web.GetFile(fileUrl);

    // depending on the settings of the parent document library we may need
    // to check in and/or (publish or approve) the file
    if (file.Level == SPFileLevel.Checkout)
        file.CheckIn("", SPCheckinType.MajorCheckIn);

    if (file.Level == SPFileLevel.Draft)
    {
        if (file.DocumentLibrary.EnableModeration)
            file.Approve("");
        else
            file.Publish("");
    }
}

If you look at the original article, the solution suggested by Waldek is jolly clever, but much cleverer than I needed for a couple of display templates.
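For completeness, the receiver wiring is trivial. This is a minimal sketch rather than my exact code: it assumes a site collection–scoped feature, assumes CheckInFile lives in the same class, and the template path is a hypothetical example.

```csharp
public class DisplayTemplatesReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // assumes the feature is scoped to the site collection
        SPSite site = (SPSite)properties.Feature.Parent;
        SPWeb web = site.RootWeb;

        // hypothetical path - one call per deployed display template
        CheckInFile(web, "_catalogs/masterpage/Display Templates/Content Web Parts/MyTemplate.html");
    }
}
```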

6. Make your masterpages appear in ‘Change the look’ with a preview file

In the new SharePoint 2013 world, site admins have a great deal of flexibility over how their site looks. I wanted to enable users of my custom masterpages to continue to use the theming engine – selecting their own colours and fonts – but to keep the custom masterpage I had built. Again, it's actually really easy. Simply deploy a .preview file with the same name as your masterpage (e.g. mymaster.master and mymaster.preview). The .preview is actually a clever combination of settings, HTML and CSS that allows you to specify the default colour palette file (.spcolor) and font file (.spfont), as well as draw a little preview of your page. I was lucky on that last one, as my look was the same as the default, so I simply copied seattle.preview.

I could go a step further and create a Composed Look that would show my layout as a tile in the 'Change the look' UI, but that involves adding items to a SharePoint list and was more than I needed for this particular project. I will need to do that for my next one, however…

A Virtual Ice Cream Sandwich: Android 4 x86 in a Hyper-V VM

More and more of our projects include a stipulation from the client that any web sites must work on the tablet devices of senior management. Up until recently that was exclusively iPads, but we are now seeing more Android devices out there. I wanted to find a straightforward way for us to test on such devices, preferably without needing to build up a collection of expensive physical kit.

I read with interest Ben Armstrong’s post about running Android 2.2 (Froyo) in a VM using a build from the Android x86 project. I started my journey by replicating his steps, so I won’t document any of that here, other than to note that the generic x86 build you need is now a deprecated one, so I had to hunt a little to find what I needed.

Creating the VM was a doddle. However, once I’d got things up and running I hit a snag: The sites I needed to test were hosted on SharePoint and required authentication. The web browser on the Android 2.2 build steadfastly refused to present a logon dialog for any sites. I could rework my test sites with anonymous access or forms-authentication but that didn’t fill me with enthusiasm. I wondered, then, if a later Android version might be my salvation.

That in itself led to a long time spent digging around the corners of the internet: the Android x86 project has a number of Ice Cream Sandwich builds, but all are targeted at various types of hardware device and whilst all had support for wifi, none had support for ethernet. Since I can't present a wifi device within the Hyper-V VM I had to look elsewhere.

The build I finally used was one I found at – an Android 4 build with experimental ethernet support.

I ran through a number of installations as I edged my way through the different options each time I found that a choice I’d made prevented me from making some essential tweak. To save you all the effort, I’ve documented the steps here. Since I was a complete Android novice I’ve taken the approach of showing screenshots of every step for other novices like myself.

Step 1: Getting things installed

The Virtual Machine we need to create doesn’t have to be powerful. However, we are running an OS that is not Hyper-V aware, so we can’t just go with the defaults.

I created a machine with 512MB of RAM and a single processor. I started with a 16GB virtual disk as the hard drive, but after a few passes I increased that to 32GB to give me some headroom should I want to install apps later. The important step, however, is that you need to add a Legacy Network Adapter and remove the standard virtual adapter that Hyper-V will add.

hyper-v settings
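As an aside, the same VM can be built from PowerShell on the Hyper-V host. This is a hedged sketch rather than the exact steps I took – the VM name, file paths and switch name are all placeholders you'll want to change:

```powershell
# create the VM with modest resources
New-VM -Name "Android4" -MemoryStartupBytes 512MB `
       -NewVHDPath "C:\VMs\Android4.vhdx" -NewVHDSizeBytes 32GB

# swap the synthetic NIC for a legacy one the Android x86 build can drive
Remove-VMNetworkAdapter -VMName "Android4"
Add-VMNetworkAdapter -VMName "Android4" -IsLegacy $true -SwitchName "External"

# attach the Android ISO and boot
Set-VMDvdDrive -VMName "Android4" -Path "C:\ISO\android-x86.iso"
Start-VM -Name "Android4"
```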

Once you’ve got your VM built, insert the ISO for the Android 4 build into the DVD drive and boot the machine.

Select the option to install Android to the hard disk of the machine.


On the next screen choose Create/Modify partitions


In the partition editor, left and right cursor keys will move between the menu items; enter will select. Choose New to create a new partition.


You want to create a new primary partition


The utility defaults to the full size of the disk. Simply hit enter to confirm that.


Now we have our partition we need to mark it as bootable.


And finally we need to write the changes out to disk.


Now we have our partition we can exit the utility to continue the installation.


The installer will now show our new partition and allow us to select it as the target for the installation.


We then need to choose what format to use for the installation. I used ext3. I did try NTFS once, thinking that I could easily transfer files onto the system, but when I attached the VHD, Windows failed to recognise the file system, so I went back to ext3, figuring I'd simply transfer stuff over the network.


Unsurprisingly, the installer asks for confirmation of the format.


Then it shows progress as it formats.


Next you need to install the Grub bootloader. Honestly, I’ve not tried without this, but I modify the bootloader options later so unless you want to plough your own furrow, install Grub.


The default option at the next step is to install the system directory as read only. I discovered very quickly that some of the things I might need to fiddle with are in that system directory so I’ve chosen to make it writable.


Now the installation occurs.


Once the installation is complete you should choose to create a fake SD card. I learned the hard way that if you don’t, saving stuff in your Android web browser won’t work.


Sadly the largest size we can create is 2GB, which conveniently is the default.


Once again we get a progress bar whilst the SD card image is created.


Now we’re all done and we get the option to reboot. Note that you can’t eject the installation media yet – it’s locked, so you’ll have to reboot.


When the VM reboots you'll be back at the first screen, allowing you to choose to install or run the live CD. Turn the VM off so you can eject the media.

At this point the installation is done. You have a shiny new Android VM running Ice Cream Sandwich.

Step 2: The Android wizard

This isn't difficult at all, other than you need to remember that when you click on the VM to capture the mouse, it's really emulating your finger. That means that you need to click and drag in drop-down menus. I also discovered that the right mouse button seems to act as the hardware back button. Clicking the mouse is equivalent to tapping with your finger.

I set the language to UK English as my first step.


Then the wizard will burble for a little while.


I chose to automatically set the time. The grey outlines of the check boxes are hard to see when they are on a black background!


The next step allows you to use your Google account to keep settings and stuff in sync. I'm building a VM that will be generic and used by lots of people, so I skip this one.


I am happy to use location services though – we want to use this thing for testing, after all.


Again, because this is a build for lots of users I’ve put the company in as the owner name. Note that even though we chose United Kingdom as the location, the keyboard setting is for a US keyboard.


Next we get an obligatory screen where we agree to stuff…


…and we’re done.


The system helps you through how to use it. The important bits are the icons at the bottom. The upward-pointing outline of an arrow in the middle brings you back to the home screen.


A handy tip

This thing feels a lot like Linux to me. Conveniently, pressing alt+f1 will switch to a console screen. Alt+left arrow and alt+right arrow will switch between consoles and the graphical UI.

Inside the console you can use familiar tools like ping and nslookup. It's not a full-fat Linux box, mind you. The two commands I find myself using most in the console are reboot and halt. Odd that there's no way to cleanly shut down – no shutdown command or even an old-school init 0!

A couple of minor hiccups

Having got my VM up and running and gone through the startup wizard in Android there were a few things not quite right. First of all the screen resolution was too low at only 800×600. Step forward my very rusty Linux experience and my much less rusty internet research expertise!

More worryingly, when I boot the machine it doesn't always pick up the correct DNS settings. That turned out to be much more interesting: strangely, things worked at home but not in the office. Research showed it was down to the DHCP responses being different on the two networks – the office network was not responding to the request for DHCP option 119, domain suffix search order. Fixing that solved the problem (but that's another can of worms and I'll write up a separate post about it!).

Step 3: Setting the screen resolution

This one turned out to be quite easy, although it involves using Vi, which is a text editor whose arcane commands I have very limited knowledge of.

The first thing we need to do is find information about what display modes are available. To do this we boot the VM and use the options available to modify the boot parameters. Be aware that when you boot the VM the Grub screen only shows for a few seconds before the first option is booted automatically. When you see the screen below, hit the ‘a’ key to easily append options to the boot command.


When you hit 'a' you will be presented with the boot command to edit. Options on the command line are separated by spaces. Add a new one: vga=ask

Hit the enter key and the OS will boot. You will see a black screen with a number of options on it. Hit enter again at this screen in order to view the display modes available to us.


From the list of available modes, choose the one you want to use. The system is waiting for you to type in the three character hex code for the mode you wish to use. For 1024×768 at 32 bit, for example, enter 318


Assuming all works correctly  you will see Android running in your chosen resolution. Sadly, it’s not permanent yet. I’ve also become paranoid enough that before I edit the bootloader options permanently I like to try what I’m going to do first.

Reboot the system and hit 'a' to append boot options. This time we want to specify the display mode we want to use. Just to bend your head a little, the boot option needs the decimal equivalent of the hex value that the display modes screen showed us. For our 1024x768x32, the hex was 318; the decimal is 792, so we append vga=792 to the boot options.
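If you don't fancy doing the hex-to-decimal conversion in your head, any shell (including the Android console from the handy tip above) can check it for you:

```shell
# the mode list shows 318 (hex); the vga= boot option wants decimal
printf 'vga=%d\n' 0x318
# vga=792
```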


When Android boots, you should see it in 1024×768 once more:


Now we need to make the change permanent. To do that we need to edit the configuration file that the Grub bootloader uses.

To do that we need to reboot the system in debug mode.

Boot the system and use the cursor keys to select the second option on the boot menu.


The system will boot to a command shell:


Once you’re in the command prompt, typing clear will clear the screen and get rid of the boot messages. Then you need to enter the following commands:

cd /

mount -o remount,rw /mnt

cd /mnt/grub

vi menu.lst

What does that lot do? The part of the filesystem that stores the bootloader is attached as read-only. The mount command effectively detaches and reattaches that part of the filesystem so we can modify it. The files we want are in the grub folder within mnt. Finally, we open the text editor Vi to change the file.

Vi is a bit arcane, although extremely powerful. For help with the commands look at online tutorials, like the one hosted by Washington University.

Once we’re in the config file we are going to add the vga=792 option to the end of the default boot command. I’ll tell you what Vi commands I use to get the job done – note that they are not necessarily the best ones, they just work for me. I know about half a dozen Vi commands and they allow me to get by. If I want to do something clever I have to look it up!


In Vi, the cursor keys allow you to move around the file. Pressing escape tells Vi to listen for commands. Move down to the start of the first line of the first boot section (the first occurrence of ‘kernel’). Press Esc then ‘o’. That should give you a new line after kernel.


Now use the cursor keys to navigate to the end of that first ‘kernel…’ line and you should be able to type ‘ vga=792’


Now we want to get rid of that extra line. Move the cursor to the start of it and hit Esc then dd (escape then hit ‘d’ twice).

Finally we save the file. Esc+:wq is the command to write out the file and quit.


You should now find yourself back at the command prompt. Type reboot -f to reboot the system.

You should now find that by default your Android VM boots into your chosen resolution.

A quick side note

If you don’t have control over your own DHCP server you can use the following command to poke the dns into life:

setprop net.dns1 x.x.x.x, where x.x.x.x is the IP address of your DNS server. You can also add a second with net.dns2.

You can also give the VM more memory with no issues – mine now runs with 1024MB. I've also added a second CPU core as an experiment, which works, but I'm not sure it's any quicker.

Mix Remixed

I don’t visit the Mix community site often – historically, the content has been of little interest and infrequently updated. Imagine my surprise, then, to find a relaunched Mix Online with a new Microformats project – Oomph.

In short, it's cool – a microformats extension for IE plus other goodies to help implement them, including a Live Writer plugin for creating hCards. Go check it out, and I'll try to post more later…

Browsers are like buses

You wait around for ages and then two come along, all at once! No sooner have I downloaded IE8 beta 2 than Google announce Chrome!

I’ve been using IE8 for a few days and I’m quite impressed. I’ve just downloaded Chrome and I have to say, it’s a darn good browser. The feature I most wanted from any tab-based browser and one I’ve mentioned before in the context of IE is present in Chrome – tabs you can drag between windows.

Anyway, I was planning to post in greater detail about IE8 and my take on the new beta 2. I think I might change tack a bit and play with Chrome as well. Don’t be fooled by some of the hype – some of the ‘cool features’ are not unique to Chrome, but more choice in the browser market can only ever be a good thing for the end user.

My one worry when the first rumours started was that they might have created yet another render engine. It's interesting that Google chose WebKit rather than Gecko, given their existing close relationship with Firefox. However, having more than one WebKit implementation on Windows is a real benefit for testing.

Workflow and SQL Error: Update

I posted last week about a couple of issues we were experiencing with SharePoint. I gained some traction on the Workflow History issue at the end of last week and the revelation was pretty far-reaching, so I'm posting again.

It turns out that the stuff I said about SystemUpdate was wrong… up to a point.

There is a bug with SystemUpdate and triggering events, but it's not the one we thought it was! It turns out that the behaviour we are seeing is correct – SystemUpdate is supposed to trigger events, just not update things like the Modified By and Last Updated columns. It's actually the behaviour within a workflow which is at fault, in that events aren't being triggered when they should be.

I had a chat with our developers about this and they told me that there are plenty of articles on the web suggesting that SystemUpdate is the way to update an item in a list without triggering events. Don't do it! I was told by Microsoft that whilst the fault is not high on the list because there is a workaround (which I will describe in a moment), it will be fixed. At that point, anybody who is using SystemUpdate expecting events not to fire will get a shock.

The MSDN documentation for SystemUpdate is pretty clear:

When you implement the SystemUpdate method, events are triggered and the modifications are reported in the Change and Audit logs, but alerts are not sent and properties are not demoted into documents.

The explanation as to why events don’t fire is:

When you used in other places such as windows/console app, another workflow or webparts, you are not seeing the event trigger the workflow, this is due to the Workflow runs on separate threads from the main thread, so we cannot fire up the workflow and simply quit. Quitting an app before the async worker threads are finished causes those threads to simply abort, and in the case of workflow, nothing will appear to have happened.

And the fix:

Currently, all standalone applications must call spsite.workflowmanager.Dispose(). This call waits for the threads to complete and causes workflow to go into an orderly shutdown.
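Putting that fix into context, a standalone app that updates an item and then waits for the workflow threads might look like this minimal sketch – the site URL, list name and column are placeholders, not our actual code:

```csharp
using (SPSite site = new SPSite("http://server/sites/mysite"))
using (SPWeb web = site.OpenWeb())
{
    SPListItem item = web.Lists["Enquiries"].Items[0];
    item["Status"] = "Closed";

    // SystemUpdate(false) avoids bumping the version, but events still fire
    item.SystemUpdate(false);

    // wait for the async workflow threads to finish before the app exits,
    // otherwise the triggered workflow silently never runs
    site.WorkflowManager.Dispose();
}
```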

And the solution to the problem of wanting to not trigger events? Well, it looks like the method I described in my earlier post is the way to go.

Workflow History and SQL Error

When trying to view an item in a list which has had workflows run against it, you get an error:

Some part of your SQL statement is nested too deeply. Rewrite the query or break it up into smaller queries

Problem Background

Trying to explain the exact nature of our configuration in this case would break many people’s heads. This, therefore, is a bit of a simplification.

We have a custom webpart which allows users to log an enquiry. We create an item in a list with the enquiry details, and send an email to the account responsible for dealing with those enquiries. A copy of the list item is created in another list (we’ll leave out the why and wherefore of that for now). The two copies must be kept in sync. More details on that later.

Those enquiries must be closed within 30 minutes. If not, an escalation email is sent. An enquiry is closed if a particular column changes value. To ensure the two lists are kept in sync, when an item is changed a workflow is triggered. If the column we care about has changed we sync up the item in the other list.

The escalation process is a timer. It checks the items and sends emails. It updates a column with the time of the last email sent so we can repeat the process every 30 minutes.

What we found was that the enquiries weren’t being closed for a few days and in that time we could then not access the enquiry item at all via the web interface (although datagrid view still worked!). We saw the error at the top of this post.

The Root of the Matter

This fault is currently with our Microsoft Support team and they are working through it. I do, however, have enough knowledge and understanding of why the fault occurred to explain it, and a few dirty hacks to avoid it.

The reason we can’t access the items is because when SharePoint pulls up the item for edit/view it checks the Workflow History for that item. If there are more than about 200 entries for that item in the Workflow History list, we get the SQL query error and boom! That’s the long and short of it.

The deeper question is why? More importantly, why do we have over 200 workflows running on the item?

Workflow History first. The Workflow History list is a hidden list which does exactly what it says on the tin: items are created each time a workflow runs. It turns out that items in the Workflow History list have a time-to-live, and that time is 60 days – any item in the list will automatically be deleted after 60 days. With roughly a 200-item limit before you hit trouble, that means about 3 workflows per list item per day is your maximum.

Personally, I think that is a scalability issue. I can envision a scenario where we might want to run that many workflows by design, perhaps more.

Back to the plot. I suspect you’re sitting there thinking that in our case, having that many workflows run is bad design or a fault. Well, you’re not wrong, although you’re not quite right either.

We knew when we built the workflow that we had to avoid circular references and update the lists as little as possible. There is code to make sure that changes made by the workflow itself are not reflected back, and if the change is not to the column we care about then the workflow exits cleanly.

We also knew that because the timer job updates a different column in the list item, that would trigger the change event on the list item, running the workflow. As a result the timer performs a SystemUpdate on the list item, which should not trigger events (and indeed does not where we have used the method elsewhere).

What this means is that we actually have two problems:

  1. The SystemUpdate method, when used in our timer, is not working correctly and events on the list item are being triggered. This means that the workflow is running too often.
  2. The issue with Workflow History means that we very quickly hit the 200 item limit and meet our end with the SQL query error.

A Legion of Dirty Hacks

As I write, these issues are with Microsoft Support who are ably working to resolve them. In the meantime, we have made the problem go away with two approaches, both of which I regard as dirty hacks.

The Workflow History Conundrum

Whilst investigating this problem I came across a discussion on the TechNet support forums. Ironically this was coming at the same problem but from a wholly opposite angle, whereby people wanted to keep items in the Workflow History list for longer!

What I found in that discussion was a post by Fred Morrison containing a PowerShell script. I am re-posting it here for completeness in case the forum disappears, but all credit to Fred for this – I didn't write it!

# SPAdjustAutoCleanupDays.ps1
# Author: Fred Morrison, Senior Software Engineer, Exostar, LLC
#
# Purpose: Adjust SharePoint Workflow Association AutoCleanupDays value, where necessary,
# on all workflow associations for a specified List.
#
# Parameters:
# siteName - The SharePoint Site to look at
# listName - The SharePoint List to look at
# newCleanupDays - The number of days to set the workflow association AutoCleanupDays value to, if not already set.
#
# Example call: SPAdjustAutoCleanupDays http://workflow2/FredsWfTestSite FredsNewTestList 180
#
# The following makes it easier to work with SharePoint; it also means you have to run this script on the SharePoint server
[void] [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") | Out-Null

# capture command line arguments
$siteName = $args[0] # ex: http://workflow2/FredsWfTestSite/
$listName = $args[1] # ex: FredsNewTestList
[int] $newCleanupDays = [System.Convert]::ToInt32($args[2]) # ex: 1096
Write-Host $siteName
Write-Host $listName
Write-Host $newCleanupDays

# get a reference to the SPSite object
$wfSite = New-Object -TypeName Microsoft.SharePoint.SPSite $siteName
[Microsoft.SharePoint.SPWeb] $wfWeb = $wfSite.OpenWeb()
Write-Host $wfWeb.ToString()

# get a reference to the SharePoint list we wish to examine
[Microsoft.SharePoint.SPList] $wfList = $wfWeb.Lists[$listName]
Write-Host $wfList.Title

[Microsoft.SharePoint.Workflow.SPWorkflowAssociation] $wfAssociation = $null
[Microsoft.SharePoint.Workflow.SPWorkflowAssociation] $a = $null
[int] $assoCounter = 0
[string] $message = ''

# Look at every workflow association on the SPList and make sure the AutoCleanupDays value is correctly set to the desired value
for ($i = 0; $i -lt $wfList.WorkflowAssociations.Count; $i++)
{
    $a = $wfList.WorkflowAssociations[$i]
    [string] $assocName = $a.Name
    Write-Host $a.Name
    if ($a.AutoCleanupDays -ne $newCleanupDays)
    {
        $oldValue = $a.AutoCleanupDays
        $a.AutoCleanupDays = $newCleanupDays
        # save the changes
        $wfList.UpdateWorkflowAssociation($a)
        $message = "Workflow association $assocName AutoCleanupDays was changed from $oldValue to $newCleanupDays"
    }
    else
    {
        $message = "Workflow association $assocName AutoCleanupDays is already set to $newCleanupDays - no change needed"
    }
    Write-Host $message
}
Write-Host 'Done'

I simply ran that script on our system, setting the value for newCleanupDays to 1. I waited a day and voila! All the list items were now accessible. Note that, as repeated in the forum discussion time and again, messing about with this is not a good idea; I simply have no choice right now.

The Timer Incident

It was all very well fixing the Workflow History list, but we really shouldn’t be seeing all those workflows in the first place. For some reason, our method of updating the list item from the timer, whilst being the official approach, triggered the workflow anyway.

To the rescue came a method we found on the blog of Paul Kotlyar. In that post, Paul talks about disabling event firing for the list item to ensure that no events get triggered. Why do I think this is a hack? Because the functionality is not normally available in workflows and timers – the method is part of SPEventReceiverBase.
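For illustration, the usual shape of this hack is a tiny class that subclasses SPItemEventReceiver (which derives from SPEventReceiverBase) purely to expose the protected event-firing switches. This is a sketch of the pattern, not Paul's exact code, and the class name is my own invention:

```csharp
// hack: surface the protected DisableEventFiring/EnableEventFiring
// methods so a timer job can update an item without firing events
public class EventFiringScope : SPItemEventReceiver
{
    public void Disable() { this.DisableEventFiring(); }
    public void Enable()  { this.EnableEventFiring(); }
}
```

Inside the timer job you would call Disable() before updating the item and Enable() afterwards.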

Where Do We Go From Here?

Right now, I have support cases logged with Microsoft and engineers are working on the matter. We’ve already been via the SQL team, who looked at the original query that triggered the whole shebang, and they have returned an updated query for the SharePoint guys to look at. We also need to get to the bottom of a ‘correct’ way of updating list items without triggering events. As soon as I get a resolution from Microsoft, I will let you know.

A great article on handy SharePoint controls

I don’t know about you, but I always mean to gather various bits of knowledge into one place, but just like tidying my filing at home, I never quite get around to it. Fortunately for me, Chris O’Brien is a bit more organised and in my ever expanding blogroll today I saw a great article about really useful SharePoint controls to use in custom pages for that handy bit of functionality.

IE8 Rapid Fire Site Test

I can’t spend much longer playing with IE8 or my wife will skin me. However, from my cursory browsing experience I’m worried. Either the devs have a good deal of work to do or I’m going to be very busy with CSS rules for a while.

Here’s the University of Bradford site in IE8:


And to try to compare apples with oranges, here it is in Firefox 3 beta 3:


More surprisingly, here is the Web Standards Project site in IE8:


and again, in Firefox 3b3:

And a third site, in IE8:


and, Firefox 3b3:


As you can see, there are issues with the placement of some elements on each of these pages. I have not yet started to investigate why, but I am honestly surprised. Given the much-touted, ‘vastly-improved’ web standards support I was not expecting the number of issues I have seen. Ironically, as usual, old style table layouts look great…

I’ll try to do more comprehensive testing tomorrow. Stay tuned.

Internet Explorer 8…

Well, as expected, the public beta of IE8 appeared on the web pretty much straight after the Mix08 keynote mentioned it. I managed to grab it within mere moments and I now have it installed on my trusty laptop.

As announced only a day or two ago, it defaults to the new rendering mode, with a big toolbar button to toggle back to IE7 mode. I haven’t had time to test the browser with any sites yet, but I’ll try to do that in the next few days and maybe post again.

What did strike me, though, was that there are developer tools right out of the box. Reminiscent of the Safari tools that let you view the page code, the IE dev tools are enabled with a simple icon on the toolbar. Once enabled, what you get is cool:

HTML developer window

A nice CSS/HTML view where you can see the elements and which style rules are being inherited by the element, along with the opportunity to enable/disable individual rules. Suddenly IE is no longer Firebug’s poor cousin.

Javascript debugging window

More one for my developer colleagues: script debugging is also available. This will also come in jolly handy.

The new Favorites Bar I can take or leave – I just don't browse like that. Underneath, it feels like the old links toolbar to me. Nothing to see here, move along.

Activities look like they may have legs, though, particularly in the corporate sector where they can guarantee the desktop browser. Being able to right click on the page, element or highlighted text and call functions from other web sites, such as searching for a term or cross-connecting business applications – I can see uses for that and I'll be playing with this as soon as I can.