Whilst at PDC and the VBug conference I heard a good deal of chat about the future of paying for conferences and user groups. This is in the light of all the PDC sessions being available on Channel 9 within 24 hours, and the fact that the content at the VBug conference is also available at free events like DDD.
The question boils down to this: can a person or company justify paying a good few thousand pounds, euros or dollars to fly halfway round the world when they could see the same content at home? In my previous post on the PDC I suggested it was worth it for the networking, and I still think this is so. However, I have heard an interesting slant on this from more than one person: go to the city where the conference is being held, but not to the actual conference; just take in the parties and maybe watch the content via the Internet where available.
For some people I think this might be a viable option, as long as you get the right party invites! For example, at TechEd Europe there are many community-orientated events organised outside the conference, because this is the one time most of the relevant people are in the same city. If you are in this group you may struggle to find time to attend the actual conference anyway, so this plan may be viable or even preferable. However, for the average developer I am not certain this is the case; too much of the networking happens randomly in the conference corridors and at meal tables. For this ‘outside the conference’ model to work you have to know who you want to meet and get invited to the right places/parties, i.e. you need some profile in the community.
As to the other point, whether VBug-like events will continue, I think we need to consider who they are aimed at. I had expected to see a lot of faces at the VBug conference that I see at DDD, but this was not the case. There were a few, but not a majority. Then again, I don’t see the same faces at DDD as at Alt.net. We have a number of distinct communities going on here; there is some crossover, but not that much. I think the three broad groups are:
- People who go to events (free or otherwise) during office hours – VBug attendees, and people who come to the events we host with Microsoft.
- People who will go to an event in their own time, but for whom it is a passive learning experience – like DDD on a Saturday, or listening to a speaker at a user group
- People who want to discuss what they do either in a user group over a beer or at an Open Spaces format conference – like Alt.net
We are never going to get all three groups merged into one. People will move from one to another and maybe attend all three, but that is their choice.
We are lucky in the UK that we have such an active and high-quality community that all three groups can be supported. It will be interesting to see if any one type prevails (judged by attendance) as time goes on; however, I do not expect to see any type disappear soon.
Since we added the new themes to our Community Server installation we have not been getting any updates on the blogs control panel as to the number of times a post has been viewed (though the aggregate views via RSS are incremented OK).
After a bit of digging it seems that we were missing the IncrementViewCount attribute on the WeblogPostData control in the post.aspx file. It should be as shown below.
<CSBlog:WeblogPostData Property="FormattedBody" runat="server" IncrementViewCount="true" />
If you leave this attribute out, the page renders fine but the view statistics are not updated.
In addition to the two Steve Riley presentations I saw, there were a few other items that caught my attention on day 3:
- Windows Server 2008 R2 will be 64-bit only. Yes, that’s right: there will be no 32-bit version. WoW64 will still be available for those of you needing to run x86 apps on the server, but it’s now an optional component, so there’s no need to install it if you don’t need it for anything.
- The .NET Framework (or at least a part of it) will be available on Server Core in Windows Server 2008 R2. Unfortunately it looks like there won’t be enough of it on Server Core to allow us to run SharePoint (or even, say, the WFEs only) on it. Hopefully this is something that will happen in the future. As an aside, Rik and I did mention this to someone on the Server 2008 team and someone from the SharePoint team; hopefully it will make its way up the chain.
- Hyper-V v2 looks cool. Many improvements, including support for many more CPUs than v1. The ability to add VHDs to a running system without rebooting was also mentioned; a very useful feature in my estimation.
- Remote Desktop Services will support multiple monitors in Server 2008 R2 – at last!
- R2 power management appears to be getting an overhaul as well; core parking (the ability to entirely switch off a processor core, or an entire socket) looks to be something that will help power consumption on systems when they are lightly loaded.
- R2 will give us an Active Directory recycle bin! Hopefully no more ‘oops… oh no!’ moments.
- Microsoft uses TS Gateway to allow remote users to access systems within the corporate network. I was very interested to hear how few machines they use (albeit in clusters) to support their users: a reasonably high-spec server (dual Xeon, 4GB memory IIRC) supported 200–300 users simultaneously.
- TS Gateway in Server 2008 R2 will allow device redirection limitations set from within TS Gateway to be enforced, even for TS clients that typically ignore these settings. In addition, consent messages can be sent to users forcing them to agree to access policies before they can access the remote systems.
So, we're on the penultimate day of TechEd EMEA and I have to say that exhaustion is starting to creep in. However, the day had a great start with sessions by Steve Riley and then Mark Russinovich.
Steve was talking about the security implications of virtualisation, and his views were stimulating. He talked in depth about what to consider when virtualising machines, and why Microsoft took the architectural approach it did for the Hyper-V stack where security was concerned. I could post more, but I would urge you to go and find the video of the session when it’s available, as Steve gave a much better delivery of the material than I ever could.
Next up was Mark Russinovich, of Sysinternals fame. I’ve been using tools produced by Sysinternals for a long time, but almost always from the standpoint of figuring out how to make apps run with the lowest possible privileges. That means Filemon and Regmon, now replaced by Process Monitor. What Mark was showing was how to use Process Explorer with some of his other tools to analyse why applications crash, and how to drill right down into crash dump files to identify the offending code. It was a very cool presentation, and his delivery was both engaging and amusing. If you get the chance to see him speak, I would highly recommend it.
Following a strong recommendation from Robert, I attended a couple of talks by Steve Riley (Senior Security Strategist for Microsoft). The first presentation was titled ‘21st Century Networking: throw away your medieval gateways’; the second was ‘Do these ten things or get 0wn3d!’. Steve has to be one of the best speakers I have ever seen. In each case he wandered round the auditorium keeping everyone entertained while at the same time getting across a very important message. I cannot recommend his talks strongly enough. If you get the chance to hear him speak (I would say on any subject, but frankly it will be security, probably with some politics thrown in), do so! If you’re at TechEd EMEA IT, Steve still has sessions on Thursday.
The last session of the day was just incredible. A surfer dude with boundless energy wandered around the audience in shorts, cracking jokes and telling stories, every single one relating in some way to his point. Steve Riley is a fantastic presenter, and his session – ‘Do these ten things now or else get 0wned’ – was a great session on security. Sadly, I don’t think it’s repeated, or I would urge you all to attend the next viewing. If you have the chance to see Steve speak, grab it with both hands – especially if you are involved in any way with security or IT management.
You can find the slides for my VBug sessions on the Black Marble web site.
I hope those of you who attended found them useful.
My first session of the day was about project Gemini. Following the brief demonstration in the keynote, I was really looking forward to a better look at Gemini, its background and how it will help information workers and IT professionals; I wasn’t disappointed. Cristian Petculescu, the Principal Architect for Gemini, is an excellent speaker and gave a very good presentation. I was once again blown away by the features demonstrated. Unfortunately, due to limitations imposed upon him, Cristian couldn’t give a live demo of the client part of Gemini, instead running a video which he stopped frequently so we could get a better look at the interface and so he could explain what was happening. We did, however, get a demo of the SharePoint administration interface, which is something I’m now itching to play with. Unfortunately, Gemini will be released with SQL Server ‘Kilimanjaro’ sometime in the first half of 2010, though a beta should be available sometime in 2009.
Gemini is an Excel add-in giving information workers access to data stored on remote systems (Gemini, it appears, can load data from just about anywhere, including web feeds). It allows them to continue working the way they currently do (load the data into Excel, then start manipulating it), but with a much larger data set than they have typically been able to work with until now, while at the same time providing a useful set of tools for working with the data. The demonstration during the session showed 20 million rows of data being manipulated; the demo in the keynote showed 100 million rows. All of this manipulation is done in memory, making it very quick indeed – the demonstrations on these huge data sets were almost instantaneous. Once the data has been imported and manipulated, the model developed can be shared with co-workers via SharePoint (with auto-refresh options set for the data). There are limits on the data set that can be managed; specifically, SharePoint has a 2GB memory limit for items imported into it. The 20 million row data set mentioned above resulted in a 1.2GB file being uploaded to SharePoint. Cristian stated that the 2GB limit will typically cap the data set size at between 100 million and 200 million rows.
SharePoint uses Excel Services to expose the data to other information workers, with each data set sandboxed. IT can keep an eye on the reports from within SharePoint administration, with information on sandbox size, total memory use, CPU use and response time allowing easy management of resources. There is also a very neat time-based animation showing the number of users, number of queries and memory size of all of the reports currently on the system, allowing easy identification of the resource hogs you may wish to move to PerformancePoint.
A few other things stood out during the day for me:
Gershon Levitz presented information on Threat Management Gateway (TMG), the new version of ISA Server, which will be integrated into Forefront ‘Stirling’. As part of Stirling, there should be communication between the various elements of the system to help co-ordinate the response to internal and external threats. Incoming files can be scanned, even those being transported over SSL (more about this some other time).
Martin Kearn presented information on the protocols used for SharePoint communication. This was an enlightening look at what actually goes on inside a SharePoint farm; I hadn’t realised quite how much network chatter occurs even when the farm isn’t actively serving pages.
Alexander Nikolayev presented information on the next generation of Forefront for Exchange. A number of the features mentioned should help in the fight against spam in the future, though I was a little concerned as to whether any (or all) of the features were Exchange-only. Again, more on this some other time.
OK, MED-V is cool! Sadly, cool though it is, it’s not something we’ll use at BM, but in my previous lives doing large-organisation IT, MED-V would have been a killer.
In a nutshell, it is this: create a Virtual PC image with your legacy OS and legacy app, then deploy that VPC to your users’ desktops so they can run the legacy app – but without needing to start the VPC and juggle two desktops.
That's right - MED-V apps appear in the host OS Start Menu and fire up windows which, although using the appearance of the guest OS, are hosted straight on the desktop. Not only that, but they get task bar entries, and even tray icons!
It's really well thought out - admins create the VPCs, publish them into a server infrastructure and publish the images and apps to users. The system takes care of versioning for the images and pushes them out to users which reduces the amount of data transferred.
You can allow roaming users to work remotely as well, and do clever things like setting a time limit after which the virtual apps won’t work, because the user needs to connect to the main system to get updates to the guest OS.
It's great. It's also not out yet. Beta 1 is expected Q1 2009, although they are looking for early access users. Release is projected for H1 2009. If you're a big organisation and migration to Vista is a pain, MED-V may be for you, although it's only available to SA customers, as far as I can tell.
The snags (there are always some, right?): the host OS is Vista SP1 or XP SP2/SP3, 32-bit only; the guest OS is Windows XP or Windows 2000 only.
It was a great session, and you definitely want to find out more about this.
We were in early today, looking forward to a session on SharePoint with Bill English. Sadly, that was cancelled, so Andy and I sat in on the Server 2008 R2 overview session presented by Iain McDonald. That was very interesting, and we learned a bit more about BranchCache. It doesn’t look like it will replace WAN accelerators like Riverbed, because it doesn’t appear to function at their low level; however, it does a similar thing at the file level. The client requests a file from the remote server, which replies with hashes instead of the data. The client PC then requests the blocks matching those hashes from the local cache, improving performance. The cache itself is built on request, so it does not need to be pre-populated (which is good). I think WAN accelerators have nothing to fear from this, but for smaller organisations, or ones which aren’t able to put accelerators in (perhaps because their servers are hosted, for example), BranchCache looks like a very promising technology.
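Based on that description, the exchange can be sketched roughly as follows. This is a toy Python illustration of hash-based retrieval, not the actual BranchCache wire protocol; the block size, function names and classes are all my own invention:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # arbitrary block size, purely for illustration


def block_hashes(data):
    """Server side: split a file into blocks and return their hashes.
    These small hashes are what the server sends instead of the file."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


class BranchClient:
    """Client side: satisfy hashes from the local branch cache,
    fetching only the missing blocks over the WAN."""

    def __init__(self, fetch_remote):
        self.cache = {}                 # hash -> block data, built on demand
        self.fetch_remote = fetch_remote  # fallback call to the remote server

    def get_file(self, hashes):
        blocks = []
        for h in hashes:
            if h not in self.cache:     # miss: this block crosses the WAN once
                self.cache[h] = self.fetch_remote(h)
            blocks.append(self.cache[h])
        return b"".join(blocks)
```

The point of the design is visible even in the sketch: the first request populates the cache as a side effect (no pre-population needed), and a second request for the same file never touches the remote server at all.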
Something I saw and got excited about is DHCP failover. We don’t suffer much with DHCP outages, but because the only way to sync up two DHCP servers is to export and import their configuration, it’s very hard to build resilient services. DHCP failover should solve that, and it looks good.
Also, more on the .NET-on-Server-Core front. The key ‘takeaway’ is that it is a subset of .NET 2.0, .NET 3.0 (WCF and WF, but not WPF) and .NET 3.5 (the WF additions and LINQ). That makes sense – why include elements related to the GUI? However, a subset obviously means compatibility pitfalls, and I am still very interested to see where this goes.
We spoke to a few guys on the IIS stand yesterday about SharePoint on IIS7. I need to talk to the SharePoint guys about the same thing. The IIS chaps were optimistic that what I wanted to do would work, but no effort had been put into testing the scenario as yet. As far as I am concerned, at the very least I want to be able to run my WFE servers on Server Core for security reasons. I’d really like to be able to deploy the app server roles to Core as well, which falls in line with a single-purpose, virtualised server strategy.
I'm writing this as I wait for the MED-V session to start. The brief intro to this given during the Windows 7 session made it sound exciting and I really hope to come away from this feeling energised. Whilst it's been a solid conference so far, there's not been much to give me a buzz - perhaps this is it. I'll take notes and try to post my thoughts later.