The blogs of Black Marble staff

Tech Ed EMEA 2008 IT – Day 2

My first session of the day was about project Gemini. Following the brief demonstration in the keynote, I was really looking forward to a better look at Gemini, its background and how it will help information workers and IT professionals; I wasn’t disappointed. Cristian Petculescu, the Principal Architect for Gemini, is an excellent speaker and gave a very good presentation. I was once again blown away by the features demonstrated. Unfortunately, due to limitations imposed upon him, Cristian couldn’t give a demo of the client part of Gemini, instead running a video which he stopped frequently so we could get a better look at the interface and explain what was happening. We did get a demo of the SharePoint administrator interface, however, which is something I’m now itching to play with. Unfortunately the release of Gemini will be with SQL Server ‘Kilimanjaro’ sometime in the first half of 2010; a beta should be available sometime in 2009, however.

Gemini is an Excel add-in giving information workers access to data stored on remote systems (it appears Gemini can load data from just about anywhere, including web feeds). It allows them to continue working in the way that they currently do (load the data into Excel, then start manipulating it), but to do this with a much larger data set than they have typically been working with until now, while at the same time providing a useful set of tools to work with the data. The demonstration during the session showed 20 million rows of data being manipulated; the demo in the keynote showed 100 million rows. All of this manipulation is done in memory, making it very quick indeed. In fact, the demonstrations on those huge data sets were almost instantaneous. Once the data has been imported and manipulated, the model developed can be shared with co-workers via SharePoint (with auto-refresh options set for the data). There are limits on the data set that can be managed; specifically, SharePoint has a 2GB memory limit for items imported into it. The 20 million row data set mentioned above resulted in a 1.2GB file being uploaded to SharePoint. Cristian stated that the 2GB limit will typically limit the data set size to between 100 million and 200 million rows of data.

SharePoint uses Excel Services to expose the data to other information workers, each data set being sandboxed. IT can keep an eye on the reports from within SharePoint Administration, with information on sandbox size, total memory use, CPU use and response time allowing easy management of resources. There is also a very neat time-based animation showing the number of users, number of queries and memory size of all of the reports currently on the system allowing easy identification of the resource hogs you may wish to move to Performance Point.

A few other things stood out during the day for me:

Gershon Levitz presented information on Threat Management Gateway (TMG), the new version of ISA Server which will be integrated into Forefront ‘Stirling’. As part of Stirling, there should be communication between the various elements of the system to help co-ordinate the response to internal and external threats. Incoming files can be scanned, even those being transported over SSL (more about this some other time).

Martin Kearn presented information on the protocols used for SharePoint communication – this was an enlightening look at what actually goes on inside a SharePoint farm; I hadn’t realised quite how much network chatter occurred even when the farm wasn’t actively serving pages.

Alexander Nikolayev presented information on the next generation of Forefront for Exchange. A number of the features mentioned should help in the fight against spam in the future, though I was a little concerned as to whether any (or all) of the features are Exchange-only. Again, more on this at some other time.

Tech Ed EMEA IT: Day 3 - Microsoft Enterprise Desktop Virtualisation (MED-V)

OK, MED-V is cool! Sadly, cool though it is, it's not something we'll use at BM, but in my previous lives doing large organisation IT, MED-V would have been a killer.

In a nutshell, it is this: create a Virtual PC image with your legacy OS and legacy app, then deploy that VPC to your users' desktops so they can run the legacy app without needing to start the VPC and juggle two desktops.

That's right - MED-V apps appear in the host OS Start Menu and fire up windows which, although using the appearance of the guest OS, are hosted straight on the desktop. Not only that, but they get task bar entries, and even tray icons!

It's really well thought out - admins create the VPCs, publish them into a server infrastructure and publish the images and apps to users. The system takes care of versioning for the images and pushes updates out to users, which reduces the amount of data transferred.

You can allow roaming users to work remotely as well, and do clever things like setting a time limit after which the virtual apps won't work until the user connects to the main system to get updates to the guest OS.

It's great. It's also not out yet. Beta 1 is expected Q1 2009, although they are looking for early access users. Release is projected for H1 2009. If you're a big organisation and migration to Vista is a pain, MED-V may be for you, although it's only available to SA customers, as far as I can tell.

The snags (there are always some, right?): host OS is Vista SP1 or XP SP2/SP3, 32-bit only. Guest OS is Windows XP or Windows 2000 only.

It was a great session, and you definitely want to find out more about this.

Tech Ed EMEA IT: Day 3 - Server 2008 R2

We were in early today, looking forward to a session on SharePoint with Bill English. Sadly, that was cancelled, so Andy and I sat in on the Server 2008 R2 overview session presented by Iain McDonald. That was very interesting, and we learned a bit more about BranchCache. It doesn't look like it will replace WAN accelerators like Riverbed, because it doesn't appear to function at their low level. However, it does a similar thing at the file level. The client requests a file from the remote server, which replies with hashes instead of the file itself. The client PC then requests the data matching those hashes from the local cache, improving performance. The cache itself is built on request so does not need to be pre-populated (which is good). I think WAN accelerators have nothing to fear from this, but for smaller organisations, or ones which aren't able to put the accelerators in (perhaps their servers are hosted, for example), BranchCache looks like a very promising technology.
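As a thought experiment, the hash-exchange flow described above might look something like this sketch. The block size, hash algorithm and all names here are my own assumptions for illustration, not published BranchCache internals:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # illustrative block size, not BranchCache's actual value

def block_hashes(data: bytes) -> list:
    """Server side: describe a file as a list of per-block content hashes."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

class BranchCache:
    """Client side: a local store keyed by block hash, built on demand."""

    def __init__(self):
        self.store = {}  # hash -> block bytes

    def fetch(self, hashes, fetch_block_from_wan):
        """Rebuild a file from its hash list, pulling only uncached blocks
        over the (slow) WAN; everything else is served locally."""
        blocks = []
        for h in hashes:
            if h not in self.store:                  # cache miss: WAN round trip
                self.store[h] = fetch_block_from_wan(h)
            blocks.append(self.store[h])             # cache hit: local copy
        return b"".join(blocks)
```

On the first fetch every block is a cache miss and comes over the WAN; later fetches of the same (or overlapping) content are served locally, which is the saving BranchCache appears to be after.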

Something I saw and got excited about is DHCP failover. We don't suffer much with DHCP outages, but because the only way to sync up two DHCP servers is to export and import, it's very hard to do resilient services. DHCP failover should solve that, and it looks good.

Also, more on the .Net on Server Core front. The key 'takeaway' is that it is a subset of .Net 2, .Net 3 (WCF and WF, not WPF) and .Net 3.5 (WF additions and LINQ). That makes sense - why include elements related to the GUI? However, a subset obviously means compatibility pitfalls, and I am still very interested to see where this goes.

We spoke to a few guys on the IIS stand yesterday about SharePoint on IIS7. I need to talk to the SharePoint guys about the same thing. The IIS chaps were optimistic that what I wanted to do would work, but there had been no effort put into testing of the scenario as yet. As far as I am concerned, at the very least I want to be able to run my WFE servers as server core for security reasons. I'd really like to be able to deploy the app server roles to core as well, which falls in line with a single-purpose server, virtualised strategy.

I'm writing this as I wait for the MED-V session to start. The brief intro to this given during the Windows 7 session made it sound exciting and I really hope to come away from this feeling energised. Whilst it's been a solid conference so far, there's not been much to give me a buzz - perhaps this is it. I'll take notes and try to post my thoughts later.

Going to conferences is worth it; well, the chats in the corridor certainly are.

I am down in Reading for the VBug conference where I am speaking on TFS tomorrow.

Whilst I was in the bar chatting to Roy Osherove from Typemock, the keynote speaker for the conference, he asked if I had looked at the SharePoint patterns & practices document that details using Typemock Isolator for unit testing in SharePoint.

On first look it seems very interesting; as usual at this point I just wonder how I missed the announcement of this document last month! Is it just me, or does everyone struggle to keep up with the blogs and sites you should read?

Update 7th Nov - 


TechEd EMEA IT: Day 2 - Threat Management Gateway

Andy and I are now in a TMG preview demo. This looks really interesting - we spoke to the guys at ATE last night and saw a few items that I hope to see now in more detail. TMG is ISA Server vnext - codenamed 'Nitrogen' and part of the 'Stirling' next wave of Forefront.

Stirling family members exchange information to allow 'dynamic response' - triggering actions from different Forefront elements (client security etc.) based on alerts from other elements (e.g. the mail scanner). That looks really powerful.

New in TMG is web client protection - threat protection. Downloaded files are scanned for malware as they pass through; malware downloads are blocked and the user is shown a message page. Finally, a way to save some users from themselves!

TMG can now also inspect SSL traffic! TMG re-encrypts the traffic between itself and the client using its own certificate, assuming the certificate from the actual site is valid. Notably, if you enable HTTPS inspection you can make TMG tell the users - warn them, if you like - that their 'secure' connection is being inspected. You can also exclude categories of sites from this inspection.

For large files, TMG will show the user a 'comforting' page informing them that the file has been downloaded by TMG and is being scanned for malware.

TMG inspects traffic and will try to detect if a download manager is being used. At that point the 'comforting' page won't be displayed. Interestingly, you can also block the download of encrypted zip files if you like - i.e. if TMG can't scan it, don't let it through.

TMG can also now do URL filtering. This is category-based, so you can block categories of sites. The site lists can be acquired through an external service. You can override the HTTPS inspection for categories of sites as well - e.g. banking sites.
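Pulling the pieces above together, the web access decision might reduce to something like this sketch. The category names, the (allow, inspect) result and the function name are purely my illustrative assumptions, not TMG's actual configuration model:

```python
# Hypothetical policy sketch: block some site categories outright, and
# exempt others (e.g. banking) from HTTPS inspection while still allowing them.
BLOCKED_CATEGORIES = {"malware", "phishing"}
INSPECTION_EXEMPT = {"banking", "healthcare"}

def web_access_decision(category, is_https, upstream_cert_valid=True):
    """Return (allow, inspect) for a request under the policy described above."""
    if category in BLOCKED_CATEGORIES:
        return (False, False)          # URL filtering: block the site entirely
    if is_https:
        if not upstream_cert_valid:
            return (False, False)      # don't re-sign a site with a broken certificate
        if category in INSPECTION_EXEMPT:
            return (True, False)       # allow, but leave the tunnel uninspected
        return (True, True)            # allow and inspect the decrypted stream
    return (True, True)                # plain HTTP is always scannable
```

The interesting design point is the exemption set: inspection is the default, and privacy-sensitive categories opt out, rather than the other way round.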

These are gathered into the heading of Web Access Policies, which cover URL filtering, https inspection and malware inspection.

Also interesting is the Intrusion Prevention System, which allows TMG to detect and block exploits for vulnerabilities even if the hotfix is not yet released (the SQL worm, for example). The demo of this was really cool, albeit in a geeky kind of a way. The exploit protection uses signatures which will be downloaded and deployed, and my understanding is that they are not limited to TMG.

The firewall can also now continue to run if the logging DB server goes away. TMG creates a log queue locally, continues to operate normally, and will update the DB when it comes back online. The log viewer also continues to work, albeit only accessing the local queued items.
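That queue-and-catch-up behaviour could be sketched like this; the class, its names and the use of ConnectionError to signal a DB outage are my own illustrative assumptions about how such a fallback works, not TMG's implementation:

```python
import collections

class QueuedLogger:
    """Sketch of a logger that queues locally while the logging DB is down,
    then drains the queue (in order) once the DB comes back online."""

    def __init__(self, db_write):
        self.db_write = db_write          # callable that raises ConnectionError when DB is down
        self.queue = collections.deque()  # local log queue, oldest first

    def log(self, entry):
        try:
            self.flush()                  # drain anything queued before this entry
            self.db_write(entry)
        except ConnectionError:
            self.queue.append(entry)      # DB unavailable: queue and keep operating

    def flush(self):
        while self.queue:
            self.db_write(self.queue[0])  # write before removing, so a failure loses nothing
            self.queue.popleft()
```

The point mirrored from the session is that logging failure never stops the firewall: entries are queued locally in order and replayed to the database when it returns.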

This is all cool stuff. There's lots more too, but the things I've mentioned here are of use to everyone, whereas some of the other stuff covered is certainly less applicable to us at BM because of the way we work. Another solid-looking new product that I would recommend anybody look into, particularly if you're currently using ISA 2006.

TechEd EMEA IT: Day 2 - Windows 7 Feature Preview

So, the first session of the day was an extremely well-attended overview of Windows 7 features. When they talk about evolution rather than revolution with regard to Windows 7, I think that's accurate. It was very much about developing and extending the foundations of Vista.

A few things stuck out, however. An almost throwaway comment about DirectAccess requiring IPsec and IPv6 means that I must dig deeper, and that the technology, whilst cool, is almost totally useless to me, stuck behind two layers of NAT in a managed building. BranchCache was again mentioned with, again, no indication of how it works - more digging required.

Most pertinent to me, however, was the development of Bitlocker. I am typing this as I sit in the room waiting for the deep dive session on Bitlocker enhancements to start. The key new feature in Windows 7 is the ability to encrypt removable drives using Bitlocker. Interestingly, admins can also use policies to enforce encryption, at which point unencrypted drives become read only. Backwards compatibility ensures that 'Windows XP and Vista' can 'read' data from the drives. I'm guessing they can't write, and I'm also guessing (as it wasn't mentioned) that non-windows systems need not apply.

That lack of cross platform (and now I'm talking about OSX and Linux) support may anger some, but for our company's needs it's irrelevant. We already ensure no customer or sensitive data is copied onto removable storage, but being able to encrypt, and force the encryption of, all removable media attached to systems I own will help me guarantee that any data copied from our systems is stored securely.

NOTE: Having now been to the deeper dive on Bitlocker, the current build of Windows 7 has no downlevel support. I'm really hoping this will change prior to launch (the presenter was carefully non-committal, and probably rightly so at this stage). If it doesn't, the technology is a dead duck for us, as I can't guarantee being able to get all our machines up to Windows 7 in a reasonable timeframe.

Also of interest to me were the developments in deployment technologies. I will try to attend the appropriate sessions on these too - the ability to add new drivers to WIM and VHD files offline (and post-sysprep) could be a big benefit to us in extending the life of our system images, particularly as we look towards more automated provisioning of virtual machines from VHD and WIM files onto varied hardware (especially when I get my hands on Hyper-V in Windows 7!).

Overall it was a very interesting session, albeit shallow. Windows 7 is exciting - not because it is new and cool, but almost precisely because it isn't. It is to Vista what Windows 2000 was to NT4 (and XP beyond that) - evolved, more stable, more trustworthy.

Barcelona Metro

I think that the Barcelona Metro is superb. So far over the last couple of days it’s been an extremely good, fast service that has got us around the city with no problems at all.

In addition, nobody seems to bat an eyelid when people take all sorts of things with them that you’d never expect to see at home. So far I’ve seen:

  • 2 guys carrying a mattress
  • a bloke on a BMX
  • a couple with two (well behaved) dogs not on leads

Tech Ed EMEA 2008 IT – Day 1 reflections

Today has been interesting. Rik and I started the day doing the sightseeing we had time for. The Gaudi cathedral had been particularly recommended, so with limited time at our disposal, that’s what we decided to see. We arrived at the gate just as it opened, and were in within a few minutes. The cathedral is very, very impressive, though there is an awful lot of construction work going on at the moment. It is an amazing structure, with a very impressive sense of light and space inside:


Following the trip to the cathedral, we headed back towards the convention centre to get lunch and to try to get into the main auditorium for the keynote early enough to get a good seat. I was glad that we made the effort as we managed to get seats near the front tucked off to one side. Here’s our view of the stage, and the auditorium once it had nearly filled:


The keynote by Brad Anderson was interesting, with a number of announcements and some very useful demos. I was particularly impressed with the drive towards virtualisation, and the available and forthcoming tools to help you manage the resulting data centre. There was a live migration demo using Server 2008 R2 which demonstrated a live move of a virtual machine from one host to another with no interruption of service. In addition, Gemini was demonstrated: a self-service BI offering allowing anyone within the organisation to view and manipulate data from sources such as SQL Server. The most impressive part of the demonstration as far as I was concerned was the ease (and speed!) with which the data could be published to SharePoint for consumption within the business:


Also mentioned were items such as Cross Platform Extensions for SCOM, allowing monitoring and management of non-Microsoft systems, and Server Application Virtualisation, which separates the server OS from the server application so each can be managed (and patched) separately – all very interesting! A number of announcements were also made; for example, System Center Operations Manager 2007 R2 Beta will be available for download at the end of November.

From there it was off to the first session, Planning and Operations Tools for SharePoint, which provided some useful pointers and allowed the possibility of some feedback to the managers of the solution accelerators programme.

After the sessions this afternoon, Rik and I spent some time wandering around the Ask The Expert area generally asking awkward questions of most of the people we could find.

All in all it’s been a very useful first day.

patterns & practices Acceptance Test Engineering Guidance

Robert blogged about the new beta release of the patterns & practices Acceptance Test Engineering Guidance document. I have had a chance to do a quick read now and I have to say I am impressed. If nothing else it gives a great comparative look at waterfall and agile methods for delivery, and a review of many types of acceptance testing.

As with many of the p&p documents it is not exhaustive in what it covers, but what it does give is an excellent and detailed starting point for you to make the decisions that are right for your project. It does not give all the answers, just most of the right questions.

Tech Ed EMEA IT 2008: Day 1 - Keynote

So, the keynote was interesting. Much of the content I had seen before, but there were some demos that were interesting and a few snippets that made me take note.

For example, I had not understood that the acquisition of Kidaro will enable interaction between applications running within a virtual machine and the host desktop in ways that are not currently achievable. That the technology will ship as part of a new Desktop Optimisation Pack was news. I believe the technology is named MED-V - Microsoft Enterprise Desktop Virtualisation.

SoftGrid was also mentioned as a solid way to achieve application virtualisation - a technology that I have not previously had chance to play with, but which is most definitely on my To Do list - I can think of a few specific practical uses for us. One of the 'announcements' of the keynote was the RTM of Application Virtualisation 4.5 (the solution formerly known as SoftGrid, I believe). Critically, the team behind application virtualisation are working on virtualising server applications. That has big implications for simplifying the deployment of new virtualised solutions and the stack of differencing disks and other VHDs needed.

Also of note - Server 2008 R2 includes the ability to live migrate virtual machines. What I did not know until today was that Server 2008 R2 M3 is available for download. I can feel some testing coming on...

On the subject of virtualisation, the release of System Center Virtual Machine Manager including support for Hyper-V was also 'announced'. I believe we've been running that for about a week now and I am pretty impressed with it (we're currently migrating our Virtual Server 2005 VMs to Hyper-V - I'll post about that experience another time).

What was new to me was the idea of using M - the modelling language launched as part of Oslo - to create models of systems which can then be provisioned using SCVMM. For the creation of development and test environments that sounds cool!

All of this is part of a concerted (if a little low-key, I thought) push to position Microsoft as the cost effective (read, cheaper!) solution for virtualisation and virtualisation management.

A couple of enviro-quickies:

  • Microsoft is the largest commercial purchaser of servers in the world and is bringing a new datacenter on-stream roughly once per quarter.
  • Their new DC in Quincy, WA is built next to a hydro-electric dam to ensure a clean source of energy.
  • The upcoming Dublin, Ireland DC will use natural air cooling, not air-con (and I'd love to hear more about that).

Announcement quickies:

  • SCOM 2007 R2 beta will be available for download at the end of November.
  • Centro - Essential Business Server - will be 'announced' on November 12th.
  • Identity Lifecycle Manager '2' RC is now available.

A key new feature in Server 2008 R2 is the availability of ASP.Net on Server Core. That has big implications for SharePoint and you can bet I will be talking to the guys from Microsoft about that one later!

Also interesting were a few new Server 2008 R2 features:

  • DirectAccess - devices can connect securely over the internet without requiring a VPN. We currently use ISA Server but there are limitations. This might be handy...
  • Bitlocker to Go - encryption for USB drives (and other removable storage, I assume). Definitely interested in that one.
  • BranchCache - branch office caching solution for data. Sounds like WAN acceleration a la Riverbed to me, and the demo did nothing to change that view. Does this mean the caching server has to be the gateway for the WAN? What does it support in terms of applications, protocols, etc.? Another one to discuss during the week.