Tech Ed EMEA 2008 IT – Day 2
My first session of the day was about project Gemini. Following the brief demonstration in the keynote, I was really looking forward to a better look at Gemini, its background and how it will help information workers and IT professionals; I wasn’t disappointed. Cristian Petculescu, the Principal Architect for Gemini, is an excellent speaker and gave a very good presentation. I was once again blown away by the features demonstrated. Unfortunately, due to limitations imposed upon him, Cristian couldn’t demo the client part of Gemini; instead he ran a video, stopping it frequently so we could get a better look at the interface and so he could explain what was happening. We did, however, get a demo of the SharePoint administration interface, which is something I’m now itching to play with. Unfortunately, Gemini won’t be released until SQL Server ‘Kilimanjaro’ ships, sometime in the first half of 2010, though a beta should be available at some point in 2009.
Gemini is an Excel add-in that gives information workers access to data stored on remote systems (Gemini, it appears, can load data from just about anywhere, including web feeds). It lets them continue working in the way they do today (load the data into Excel, then start manipulating it), but with a much larger data set than they have typically worked with until now, while at the same time providing a useful set of tools for working with the data. The demonstration during the session showed 20 million rows of data being manipulated; the demo in the keynote showed 100 million rows. All of this manipulation is done in memory, making it very quick indeed; in fact, operations on these huge data sets were almost instantaneous. Once the data has been imported and manipulated, the resulting model can be shared with co-workers via SharePoint (with auto-refresh options set for the data). There are limits on the data set that can be managed; specifically, SharePoint has a 2GB memory limit for items imported into it. The 20 million row data set mentioned above resulted in a 1.2GB file being uploaded to SharePoint, and Cristian stated that the 2GB limit will typically cap a data set at between 100 million and 200 million rows.
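To get a feel for why in-memory manipulation can seem almost instantaneous even at these row counts, here’s a minimal sketch of the general technique as I understand it (a single pass over contiguous, memory-resident columns). The engine behind Gemini isn’t public, so the column names and numbers below are my own invention, not anything from the session:

```python
# Sketch only: columnar, in-memory aggregation over 20 million rows.
# The data and column names are hypothetical; this is not Gemini's engine,
# just an illustration of why memory-resident scans feel instantaneous.
import time
import numpy as np

ROWS = 20_000_000

rng = np.random.default_rng(0)
amounts = rng.random(ROWS) * 100.0        # hypothetical sales amounts
regions = rng.integers(0, 10, size=ROWS)  # hypothetical region ids 0-9

start = time.perf_counter()
# Group-by-region sum in a single pass over the in-memory columns.
totals = np.bincount(regions, weights=amounts)
elapsed = time.perf_counter() - start

print(f"Aggregated {ROWS:,} rows in {elapsed:.3f}s")
for region_id, total in enumerate(totals):
    print(f"region {region_id}: {total:,.2f}")
```

On an ordinary laptop that aggregation completes in well under a second, because scanning a column held in RAM runs at memory speed rather than disk speed. Presumably Gemini’s engine does something far more sophisticated along the same lines, and compression would also explain how 100 to 200 million rows can fit under the 2GB limit.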
SharePoint uses Excel Services to expose the data to other information workers, with each data set sandboxed. IT can keep an eye on the reports from within SharePoint Administration, with information on sandbox size, total memory use, CPU use and response time allowing easy management of resources. There is also a very neat time-based animation showing the number of users, number of queries and memory size of all of the reports currently on the system, allowing easy identification of the resource hogs you may wish to move to PerformancePoint.
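As a rough illustration of the kind of triage that dashboard enables, here’s a hedged sketch of ranking hosted reports by resource use. The metric names echo the ones mentioned above, but the sample reports and the scoring are entirely my own invention, not the actual SharePoint Administration schema:

```python
# Sketch only: spot the "resource hog" reports from usage metrics.
# The metrics follow the session's list (users, queries, memory);
# the report data and the weighting are hypothetical.
reports = [
    {"name": "EMEA Sales",   "users": 120, "queries": 4500, "memory_mb": 1100},
    {"name": "HR Headcount", "users": 8,   "queries": 60,   "memory_mb": 300},
    {"name": "Web Traffic",  "users": 45,  "queries": 9000, "memory_mb": 1800},
]

def load_score(report):
    # Arbitrary composite: weight memory footprint and query volume heavily.
    return report["memory_mb"] * 2 + report["queries"] * 0.1 + report["users"]

# Reports at the top of the list are candidates to move to PerformancePoint.
for r in sorted(reports, key=load_score, reverse=True):
    print(f"{r['name']:<12} score={load_score(r):,.0f}")
```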
A few other things stood out during the day for me:
Gershon Levitz presented information on Threat Management Gateway (TMG), the new version of ISA Server, which will be integrated into Forefront ‘Stirling’. As part of Stirling, there should be communication between the various elements of the system to help co-ordinate the response to internal and external threats. Incoming files can be scanned, even those being transported over SSL (more about this some other time).
Martin Kearn presented information on the protocols used for SharePoint communication – this was an enlightening look at what actually goes on inside a SharePoint farm; I hadn’t realised quite how much network chatter occurred even when the farm wasn’t actively serving pages.
Alexander Nikolayev presented information on the next generation of Forefront for Exchange. A number of the features mentioned should help in the fight against spam in the future, though I was a little concerned as to whether any (or all) of the features were Exchange-only. Again, more on this at some other time.