EMEA Project Conference – Madrid

Finally, after all the excitement that Richard and Robert had in Seattle and Barcelona, I find myself in the Auditorium Hotel, Madrid for the EMEA Project Conference.

According to the multilingual sales blurb in my room, the hotel is the largest in Europe, and I must say it’s very nice. We flew in yesterday and today is an MS Partner-only day before the conference itself kicks off tomorrow.

Project Server is something we’re very interested in using ourselves, and its integration with SharePoint (MOSS/WSS) makes it an attractive solution for anybody who has already deployed MOSS for their corporate intranet, as we have.

Also on the agenda today is VSTS integration with Project Server, which I’m keen to see more on. Closing the loop between developer activity and project planning and monitoring can make a big difference to whether a project comes in on time and budget.

I’m here for the SysAdmin track, whilst Paul and Jim cover the managerial and best practice side of things. I’ll do my best to blog on what I see, although it’s a pretty packed few days, ending in a good sprint from the end of the last session at 3:15 on Wednesday to make it to the airport in time for our 5:25 flight back to Blighty.

Mix:UK 07 Round-up

We’re back up north after Mix:UK 07 and I thought I’d follow up my earlier post with a few thoughts on the event and its content.

Before I do that, however, I need to give a cheer for our guys: Jonny performed incredibly in the Guitar Hero competition to be triumphant in front of his screaming supporters, and Sam, Mat, Tom and Jonny cleaned up in the goody-bagging stakes of the Swaggily Fortunes quiz!

Anyway, back to the plot. Day two of the event had some good sessions. Kicking off the day with good humour was a pretty inspiring talk by Beau Amber of Metaliq. He apologised for not being awake, having not slept, then showed the fruits of his sleepless night by demoing an iPhone built in Silverlight! It was a great session on the kinds of things you can do with Silverlight 1.0 and I look forward to his continued development of the Silverphone.

Next up was Todd Landstad, who was infectiously enthusiastic about mobile devices. He showed interesting stuff using tablet PCs, SideShow and a suite of UMPC devices. As an avid Engadget reader I wasn’t surprised by any of the devices, but it was a great demo of how a little lateral thinking can result in useful software for people on the move, and of the things to consider when targeting mobile devices.

Now it gets tricky. The next session was all about accessibility. It wasn’t bad, I have to say, and the guy running the session showed a couple of things I didn’t know about how to kick ASP.NET into generating some of the elements that are needed when doing accessible tables. The trouble is that it was like watching a presentation from about five years ago. The points covered were all WCAG 1 level A, with little mention of level AA. More worryingly, the speaker referenced WCAG 1 but called it WCAG 2. He didn’t seem versed in current thinking and best practice regarding semantic structure, skip links and access keys. He even admitted to using tables for layout!

I appreciate that he only had an hour, but I’m not convinced that anybody left the room really understanding what their obligations were or where to go to find out more.

So, if you were in the room and want to find out about accessibility, here are a couple of links to get you started:

  • AbilityNet – a UK organisation who give support and advice on accessibility.
  • JuicyStudio – the site of Gez Lemon, who’s involved in WCAG 2 and knows his accessibility onions.
  • Joe Clark – extremely passionate about accessibility across a broad spectrum of areas.
  • Accessify – a community site founded by Ian Lloyd and a hub for accessibility discussion.
  • Further Ahead – run by Derek Featherstone, who’s a really cool guy and knows his stuff.

Overall I was at times impressed, inspired, disappointed and frustrated at Mix:UK, but I have to say that at all times the guys running the conference were helpful and organised and all the Black Marble posse had a great time.

In the Mix

Well, it’s just after 3pm on day one of Mix:UK 07. I’m taking a break with a coffee so I thought I’d post.

It’s a mixed bag down here (sorry – no pun intended). The technology is fantastic – the stuff that can be achieved with WPF and Silverlight is excellent. I’m still a little concerned that usability is being sacrificed on the altar of bling, however. To be fair, that probably says more about the rapid-development nature of conference demos, where the wow-factor is what matters, but I think it’s a very, very significant issue which should not be allowed to get lost in the excitement.

So, the keynote was good, but a little patchy, with lots of people showing off their latest and greatest example of WPF or Silverlight. The first session was really useful for me. I’ve done some XAML, but watching a guy who really knows his way around Blend really helped gel things in my mind.

More interesting still, however, was the next session, where a great guy called Nathan Buggia from Live Search talked about SEO. It was a good session, with a lot of straight talk about SEO from a guy who works at a search engine, nicely pointing out some of the less honourable practices of SEO sharks. Overall his message was what I’ve said all along – build good, semantic pages with informative content and you’ll get good rankings. There’s a bit more to it than that, obviously, but that’s broadly it.

What I did discover during that session, which I really ought to have seen before (I may even have seen it but not had it register), was the XML sitemap format, detailed at sitemaps.org. This can be pushed to the search engines to give them prior information, if you like. It doesn’t let you ‘fix’ your results, but it can be used to give helpful hints to the search engine, particularly on refresh rates for changing pages, or even just to give them the nod that things have changed. I will research this more thoroughly now – I may even manage a post on what I find.
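For anybody else who hasn’t met it, the format itself is tiny. Here’s a minimal sketch of generating one with Python’s standard library – the URLs and change frequencies are made up for illustration, but the element names follow the sitemaps.org schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical pages for illustration only.
PAGES = [
    {"loc": "http://www.example.com/", "changefreq": "daily"},
    {"loc": "http://www.example.com/about/", "changefreq": "monthly"},
]

def build_sitemap(pages):
    """Build a minimal XML sitemap string from a list of page dicts."""
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for page in pages:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page["loc"]
        ET.SubElement(url, "changefreq").text = page["changefreq"]
    return ET.tostring(urlset, encoding="unicode")
```

The optional elements like <changefreq> and <lastmod> are the ‘helpful hints’ part – the search engine is free to ignore them, but they give it a steer on how often to come back.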

Anyway, I will sign off with an apology – sorry Nick, I’m in London and I haven’t called. Next time, I promise!

Web site development: University of Bradford Part 1

One of the last projects I was involved in before I left the University of Bradford to join Black Marble was a new design for the external web site of the institution. I’d pretty much finished the construction of the page layouts and styles before I left, but it’s only now that the site is about to go live. I’ve threatened a few people with a series of posts on how the site is constructed and although I’m not there any more it seems topical.

In this post I’ll give some background, describe the project and run through why things were done in a certain way. Over the next few posts I’ll cover the construction in more detail – what styling problems I hit and how they were fixed, and how the site tries to make use of things like microformats and opensearch.

A Brand Refresh; A whole new look

The University of Bradford old website

The University’s external web site hasn’t really changed much in years. Having said that, in spite of not necessarily being the snappiest dresser on the block, it was always extremely easy to find what you were after. Back in early 2006 the marketing department were engaged in a ‘brand refresh’ which to you and me means fiddling with the logo and corporate colours. Also to be included in the spruce-up was the web site.

The University of Bradford Website

For those of you who don’t know, my role at the University expanded to take in the web when one of my colleagues, who ran the web servers, left the organisation. I’ve always been passionate about web development (and I use that term advisedly) and I spent a fair amount of my time trying to expand the level of knowledge and appreciation of web standards, issues and technology throughout the university. It was because of this that I was asked if I could assist with the development of the new web site.

University internal page new design

The design for the site was done by the same agency responsible for the brand refresh. It is extremely striking, while still in keeping with trying to make the site as navigable as possible. A meeting was held between the designer, the University’s Web Officer, the Head of Marketing and myself. In that meeting we agreed that the University would build the site itself from the designs created by the agency. This would allow us to make sure that we met our legal obligations in terms of accessibility, and also ensure that there was knowledge and understanding within the organisation of how the site was built.

A series of laudable aims

It was agreed that the site should meet a series of requirements from a technical perspective:

  • It should be a fully fluid design – not a thin sliver down the middle of your monitor but able to flow and take up as much space as allowed.
  • It should work in all modern browsers, including mobile browsers such as Opera, and text-only browsers such as Lynx.
  • It should be as accessible as possible, using accepted best-practice for ensuring users of assistive technologies would be able to get the most out of the site.
  • It should attempt to include new technologies such as OpenSearch and Microformats if and where appropriate.

Assigning roles

There were a number of areas that required work to make the new web site a reality. It was agreed that I would build the external homepage and a template for the content pages. I would not deal with site structure or content – those would be managed by the Web Officer and the marketing team.

Starting Out

I started out with a series of visual comps given to me in PDF format. I began with the homepage and started to work out how to tackle taking the design and building the underlying HTML structure.

I’m a bit of a luddite at heart, so I printed all the comps out at A3, got some large sheets of tracing paper and traced my initial wireframe, labelling the parts as I went.

Once I’d got a basic structure I then made some scribbled notes about how certain elements should function – using remote rollovers, for example.

After that, I pulled the comps up in my bitmap editor (Corel PhotoPaint, if you care) and took some dimensions to inform the initial styling, and lifted the colour values from the design element to feed into the stylesheets.

Once I had my trusty paper notes to work from, I started to tackle the creation of the site. I code by hand – I hate GUI editors – so I did most of the work in HTML-Kit from Chami.com. I now tend to use Expression Web, although I dip into Dreamweaver occasionally, and I suspect that I will use Visual Studio 2008 more as the projects I work on at Black Marble tend to involve ASP.Net coders as well.

In my next post I’ll run through how the homepage was built and what hurdles the web browsers threw into my path along the way!

Web development helpers: Redux

After posting yesterday about useful tools for development I stumbled across another little gem of a utility. IE7Pro is much more of a usability-enhancing tool but it has a wonderfully handy tool nestling within – Save Current Tab As Image. If you need to do grabs of pages for documentation or presentations and the page is more than a single screen in length this will transform your life – no more cropping and stitching!

IE7Pro also has a raft of features such as adblocking and mouse gestures, which I will admit to switching off immediately. However, its inline search (not quite Find As You Type, but pretty close) is jolly useful.

Get IE7Pro

Web development little helpers

As web development gets more and more complex having the right tools to help you figure out what’s going on is essential. I thought I’d do a quick post on the ones I find most useful. In no particular order, then, here they are.

  1. Virtual PC
    This one is a godsend, because as we all know, running multiple versions of Internet Explorer is hard. VPC, now available as a free download from Microsoft, allows me to run the numerous variants of IE our clients require me to test against.
    If you just want IE6, Microsoft have a handy downloadable pre-built VPC:
    Download Virtual PC
    Download the Internet Explorer Compatibility VPC Image
  2. Firebug for Firefox
    Now imitated for other browsers, Firebug is fantastic. A clear and straightforward way to identify the bugs in your pages or styles, it allows you to easily identify which stylesheet rules are being applied and in what order, and to hack ’em on the fly as you test your fixes. Add to that the ability to mangle the page and debug javascript and we have a winner.
    Download Firebug
    Firebug running in Firefox
  3. Chris Pederick’s Developer Toolbar for Firefox
    Even though Firebug is great, I still use Chris Pederick’s trusty developer toolbar for enabling and disabling styles, accessing the W3C validator and other stuff. Couldn’t live without it, in fact.
    Get Developer Toolbar
  4. Nikhil Kothari’s Web Development Helper for IE
    Broadly offering the same level of information as Firebug, but without the ability to hack on the fly, this is a handy way of seeing what IE is doing with your page under the hood.
    Get Web Development Helper
    Webhelper in IE
    The Webhelper DOM Inspector
  5. Inspector for Safari (for Windows)
I have a trusty Mac Mini that I use for checking Safari as well, but the advent of Safari for Windows has made my life easier, I must admit. How excited was I, then, to find that you can get Inspector working with the Windows version. Again, loads of info about the page, although no hacking on the fly. Instructions courtesy of David Barkol’s blog. A note – as I write this the latest nightly crashes horribly – I am using the nightly from the 21st June and it works well. At some point I will try later builds but right now a stable platform that I can enable easily and consistently is more important.
    Enable Web Inspector for Safari on Windows
    Inspector in Safari for Windows
    The Inspector Information Window

I’d love to hear from anybody who uses other cool tools that I may not have come across. I’m particularly interested in these kinds of things for Opera.

SharePoint problems with access rights

I spent a while knocking my head against a problem with a SharePoint server farm that’s worth posting about. It’s also worth a big hats-off to our Technical Support Coordinator at Microsoft Partner Support who dredged up the article that finally pointed us in the right direction.

The problem

I’ll post later about our approach to SharePoint installations, but I’ll summarise thus: we create multiple user accounts for the SharePoint services – a db access account, an installation account and so on. In this instance we were building a three-server setup – db server, web server, index server. The accounts were created first, while logged in as a domain admin. I also installed SharePoint as the domain admin, but didn’t run the config wizard.

I then logged in as our installation user, which has local admin rights to the two servers and dbcreator and securityadmin roles in the SQL server. I ran the config wizard on the web server and created the farm, specifying the db access account for (shock!) db access! The web server got to host the central admin site, which was tested and worked.

Before doing anything else I ran the config wizard on the second server and connected to the farm. At this point I had three servers listed in the Central Admin site, and it was time to configure services.

At this point we hit the snag – when I tried to configure the Office Server Search service to run on the second server I got a SharePoint page telling me access was denied (‘The request failed with HTTP status 401: Unauthorized’). There was a similar error in the event log with an event ID of 1314, and we also found an event log error with ID 5000.

I bashed my head against this for a while, checking user rights, group memberships and stuff. I checked the DCOM IIS WAMREG activation rights for the users that the app pools were running as and just in case did an aspnet_regiis -ga <username> for those accounts to ensure that all the .Net registrations and rights were correct. No success.

I removed SharePoint and reinstalled the farm with the roles reversed. The fault moved to the other server. I confirmed that I could configure the service on the same server as the central admin site but never on the other server. I looked at the system registry, compared service configurations with a working system and tried manually hacking the config to no effect.

In the end I uninstalled everything, installed the farm clean and unconfigured and called in air support.

The fix

I can’t praise our support guy at Microsoft enough. He’s incredible – I emailed him and got a phone call within five minutes! We ran through the problem and he consulted his support resources. What he came back with took a few goes to make stick, but it worked, and in fixing SharePoint pointed to the root of the problem.

The solution is to edit the web.config for the Office Server Web Services site. On our system that file is in C:\Program Files\Microsoft Office Servers\12.0\WebServices\Root. The original file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <configSections>
        <sectionGroup name="microsoft.office.server" type="Microsoft.Office.Server.Administration.OfficeServerConfigurationSectionGroup, Microsoft.Office.Server, Version=, Culture=neutral, PublicKeyToken=71e9bce111e9429c" >
            <section name="sharedServices" type="Microsoft.Office.Server.Administration.SharedServiceConfigurationSection, Microsoft.Office.Server, Version=, Culture=neutral, PublicKeyToken=71e9bce111e9429c" />
        </sectionGroup>
    </configSections>
    <system.web>
        <authorization>
            <allow roles=".\WSS_ADMIN_WPG" />
            <deny users="*" />
        </authorization>
        <webServices>
            <protocols>
                <clear />
                <add name="AnyHttpSoap" />
                <add name="Documentation" />
            </protocols>
        </webServices>
    </system.web>
</configuration>

The fix is to edit the <authorization> section, adding entries that grant access to the installation and db access accounts:

        <authorization>
            <allow roles=".\WSS_ADMIN_WPG" />
            <allow users="ondemand\MOSSdba" />
            <allow users="ondemand\MOSSsetup" />
            <deny users="*" />
        </authorization>

However, the gotcha is that SharePoint puts the settings back – don’t do an IISreset; don’t recycle the app pool. Simply edit the file, then go to the page to configure the search service and it works. Once you’ve done that the service will start.

I then found that I couldn’t get back into the page because the web.config got reset (grr), but that’s not important right now.
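Given how keen SharePoint is to stamp on the file, if I had to repeat this often I’d script the edit so it could be re-applied in one go just before hitting the page. A rough sketch of the idea in Python (the sample config and account names below are stand-ins; note xml.etree loses the original file’s comments and formatting, so treat this as illustrative rather than something to run against a production web.config):

```python
import xml.etree.ElementTree as ET

# Cut-down stand-in for the real web.config.
SAMPLE = """\
<configuration>
  <system.web>
    <authorization>
      <allow roles=".\\WSS_ADMIN_WPG" />
      <deny users="*" />
    </authorization>
  </system.web>
</configuration>"""

def grant_users(config_xml, users):
    """Insert <allow users="..."/> entries ahead of the catch-all <deny>."""
    root = ET.fromstring(config_xml)
    auth = root.find(".//authorization")
    # Order matters: allow rules must appear before the deny-all rule.
    deny_idx = next(i for i, el in enumerate(auth) if el.tag == "deny")
    for offset, user in enumerate(users):
        auth.insert(deny_idx + offset, ET.Element("allow", users=user))
    return ET.tostring(root, encoding="unicode")
```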

The cause

The key in all this is that the two users I added explicit rights for were members of the WSS_ADMIN_WPG group specified in the original file. This pointed at an issue with the domain – the server was failing to get a list of members for that group.

The servers themselves were built and managed by our customer’s hosting provider, so I passed the fault to them. They checked the systems and found a domain fault affecting synchronisation. Result!

Updating firmware on SPV M3100 (HTC TyTN)

Still no Windows Mobile 6 update for my Orange SPV M3100, but they did release an update to WM5 recently.

Installing said update turned out to be slightly trickier than I expected. I don’t know if anybody else has experienced the same problem, but a word to the wise – don’t try the update on Windows Vista!

The first part works OK – it connects to the device and interrogates it, but when it actually tries to connect and download the new firmware it fails.

Luckily, I still have a PC at home running XP (Media Center Edition, as it happens), so I installed ActiveSync 4.5 and ran the update successfully on that.

I’m surprised, though – Vista’s nearly six months old now and there must be others with the same phone who don’t have recourse to an old PC (!) to run the update.

What happened to the idealists?

Douglas Coupland’s Jpod has been doing the rounds in the office of late. I enjoyed MicroSerfs, so approached Jpod with excitement.

Frankly, I’m disappointed.

It’s not the writing – I’ve enjoyed pretty much all of his books. It’s not even that the books are similar in approach and style (they are), but rather the contrast in the lives of the characters.

Overall, MicroSerfs was optimistic. The characters in the book were using their talent to make the world a better place. The technology in Jpod is cynically created to make the most money. I finished MicroSerfs feeling good about what I do for a living; I’m struggling through Jpod as it slowly destroys that feeling.

Let’s set aside whether this contrast is intentional – I don’t want to discuss what Mr Coupland is trying to say. What I want to get across is something that I have felt for a while and which Jpod merely reinforced:

The IT industry is becoming more and more cynical.

Perhaps this is a function of its age and maturity; perhaps it has more to do with the complexity of modern IT solutions; perhaps it is that we have accomplished so much so quickly that progress can only become harder and slower.

When I started working, the University for which I worked was only just embracing desktop computers. I was involved in promoting desktop PCs and workgroup servers to departments and it was an exciting time. Throughout my career there, I was involved in the creation of new services that were intended to make people’s lives better, easier, simpler, more efficient, and I got a great deal of satisfaction from it.

I still get satisfaction from delivering those kind of solutions, and I like to think that myself and my colleagues here at Black Marble still aim to make the world a better place through technology, in our own way.

I’m less convinced that the rest of the world still feels that way. What do you think?

Analysing Active Directory

I think I’ve mentioned before how I’ve been updating our IT infrastructure. Company growth has meant a need for expanded services. Add to that new versions of SharePoint and Exchange, mix in a need to run virtual servers for development and you have a need for more tin.

Over the past six months I’ve expanded our domain to keep pace with our growing needs. The number of physical servers we have has increased, with a few more virtual servers for specific roles that I prefer to keep separate but which don’t really merit their own box.

As part of this growth, I added a second domain controller. Our existing DC was also running Exchange 2003, and this situation caused me the most headaches in the sliding block puzzle of service upgrade and migration: we couldn’t demote the DC on our old server because of Exchange 2003, but I was reluctant to put in Exchange 2007 until I had redundancy of critical services (DC, DNS, etc.).

Updating Domains, getting ready for Exchange

I will admit at this point that my knowledge of AD is not as deep as I would like, although it is increasing daily. That does mean, however, that I check before I leap – find articles on MSDN, TechNet and the wider blogosphere to find the pitfalls so I avoid pratfalls.

So, I read carefully about raising the functional level of the Forest and Domain when installing a 2003 R2 domain, made sure everything was patched and service packed before starting, read and re-read the instructions. When confident I had run through all the prerequisites I ran dcpromo to add my domain controller.

I was then left with two servers, both of which had the necessary tools to manage AD, both of which were registered in DNS as DCs, both of which appeared to be fine.

Nothing I read suggested that I needed to check anything else to make sure the process had completed… (You can see where this is going, can’t you…?)

Exchange 2007 – the big transition

Over the first weekend in April we transitioned from Exchange 2003 to Exchange 2007. Once again, I did my reading. I ran the Exchange Best Practices Analyzer and made sure that our Exchange 2003 installation was in tip-top condition. I compared two or three different sets of instructions on how to run through the process, settling on one from an Exchange community site because of some extra little nuggets of insight it contained.

The transition went relatively smoothly. The new server went in, was configured correctly and the Exchange 2007 site was connected to the Exchange 2003 site. Mailboxes were transferred (we had a problem with one, but we fixed it) and we checked that clients had connected to the new server.

Once happy, we uninstalled the old Exchange, as per instructions.

It took a full day, but we were being careful and thorough. We thought it had gone fine.

The next step would be to remove our old DC from the AD and decommission the server. Being cautious, we wanted to test that things wouldn’t stop if we removed the old DC, so we unplugged the network cable…


Everything stopped – Exchange clients disconnected, logons stopped, everything!

Is there a doctor in the house?

Stage one when hit with a problem – gather as much information as possible.

We looked at our systems, we checked logs, we watched the Outlook clients connecting to Exchange. When we disconnected our old DC, nothing seemed to want to talk to the new DC. I checked the Exchange server settings and made sure the server was set to use the new DC for its configuration, and all seemed fine.

We noticed an error saying that the clients couldn’t connect to the Global Catalogue server, so I did some more reading, realised that the old DC was our Global Catalogue server and followed the steps to change the role over to the new DC. Everything said it had worked, but nothing changed.

I did some more reading about role masters and set the new DC to be the master for each role – at least I thought I did – through the AD Users and Computers tool. Still nothing.

At this point I decided that either I could spend days or weeks researching and prodding, or I could call in the cavalry. The support team we have access to as a Gold Partner are fantastic – I can never praise them enough – and sure enough I had people on the problem within an hour of logging the call.

Because we initially thought the problem was with our Exchange config, we dealt with a very efficient Exchange support guy. He worked methodically through the problem, and started to look deeper into our domain and DCs as he zeroed in on it being a domain issue.

At this point, I encountered the AD support tools being used in anger for the first time. I passed the support guys dozens of log files. We also discovered what appeared to be the problem – my new DC wasn’t really a DC!

That last statement is a bit too simplistic. Our new DC was happily replicating the AD. It reported everything as fine when examined with replmon. Both DCs agreed on their view of the world.

What I didn’t know was that in addition to the AD replicas, a NETLOGON share is created on the new DC by dcpromo. I also did not know that this process had failed – at no point did anything tell me. Because there was no share, the server was not dealing with client requests correctly, which is why our systems had a fit when I unplugged the old DC.
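With hindsight, a quick sanity check for the share would have caught this straight away. Something as trivial as the following sketch would do – the server names are hypothetical, and on a real network you’d run it from a domain-joined machine that can reach the DCs:

```python
import os

def missing_netlogon(dcs):
    """Return the DCs from the list whose NETLOGON share can't be reached."""
    return [dc for dc in dcs if not os.path.isdir(r"\\" + dc + r"\NETLOGON")]

# Hypothetical DC names; any that come back are missing the share.
suspects = missing_netlogon(["DC1", "DC2"])
```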

Peering into a deep, dark well

Having identified the fault, my Exchange guy called in an AD specialist to assist. He ably worked through the fault. There is a sequence of steps to follow which will trigger a rebuild of the NETLOGON share. We worked through them. They didn’t work. We knew they didn’t work because the share wasn’t created. Apart from a couple of event log messages which I didn’t consider to be helpful, nothing told us what was wrong.

Having failed to rebuild the share on the new DC, my AD ninja looked at the old DC. He decided to rebuild the same share on the existing DC, the thinking being that the replication was failing because of a fault on the source, rather than the destination. In order to do this, the domain group policies would be destroyed and rebuilt as defaults.

This process took some time, but to cut a very long story short, it appears that our default group policy objects were corrupted, which was blocking the replication. By deleting them and rebuilding the sysvol directory structure on our original DC, then forcing a rebuild on the new DC, the AD was fixed.

My eternal gratitude to the Microsoft support guys. My point, long and meandering though the journey has been, is this: At no point did I see anything which suggested corruption of those objects. At no point did I see anything which suggested they were the cause of the replication fault.

My toolbox is missing!

In order to get the information the support guys needed, I had to install first the Support Tools from the installation media and then the resource kit tools downloaded via the web. Those tools should have been installed by default, or at least should have been added when I created my new DC.

Even when I’d installed the tools, they didn’t really give me much information. Now, I will readily admit here that I am new to the tools, and continued reading will doubtless help me in this regard, but the key point is a simple one:

I can’t see what’s going on!

Shhh… say it quietly… NDS

I supported IT solutions including Novell servers for fifteen years before joining Black Marble. In my previous role we had some thirty servers with a fairly complex but well-structured NDS directory. Over those years we had some problems with replication and corruption, and every time we did, we started with the same procedure: we watched.

What Active Directory is lacking, in my humble opinion, is an equivalent of the Novell DSTrace tool. DSTrace allows you to watch the activity of your directory replicas. By careful use of the various options you can configure your servers to show you replication traffic, requests and responses and more. Colour coding allows you to spot errors and warnings, and after a while you start to see patterns in the mass of text. If we had an NDS problem we could use DSTrace to get a feel for the cause – you could see if there were corrupt objects which weren’t replicating between servers. You could even figure out which servers were right and which were wrong.

Once you’d seen the fault, the dsrepair tool allowed you to tackle it either with surgical precision or with heavy artillery. You could force a replication of an individual object, overwriting the corrupted copy by force, or use drastic measures like deleting a replica of the directory or a partition.

Where are those tools for Active Directory? If they exist, please tell me, because I’d like to get my hands on them. I can’t imagine dealing with huge installations of AD without that kind of toolset.

A wishlist…

What would I like to see then? I’m writing this post before I start rummaging around the web, and if I find examples of these tools I’ll post about them.

  1. A tool which checks the integrity of the directory and its objects, and identifies where replicas on different servers disagree.
  2. A tool that allows me to see all the AD traffic in real time – logging to a database might be useful, but just seeing the messages on screen would be a start. I want to be able to toggle different messages – errors, warnings, replication traffic, client requests and responses etc to get a feel for what works and what doesn’t.
  3. A tool to allow me to fix individual objects – to replace them from backup or to overwrite them with a copy from another replica (by far my preferred method).

If this lot already exists then tell me. If there are good books on the subject then point me at them. I’ve found some support articles which are helpful, but not as much as I’d like. I’m not precious – if this all stems from a fundamental misunderstanding or lack of knowledge on my part I’m happy to admit my mistake. However, at this point I’m leaning more to it being an indication that AD still hasn’t matured to the level of NDS in terms of management and control.