It was almost exactly one year ago that we signed off on beta 2 of Windows HPC Server 2008 and today we made available the Community Technical Preview (CTP) of Windows HPC Server 2008 R2. Based on Windows Server 2008 R2, the CTP is a preliminary release, allowing an early preview of planned functionality for HPC Server 2008 R2. 
In our labs, we’re up and running on Windows Server 2008 R2. We regularly run stress tests on Dan Reed’s 1000-node cluster in Microsoft Research, tease out distributed-admin issues with virtual machines that turn our own 540-node cluster into a 2000-node super-cluster, and use solid state drives to identify database scalability issues. Later this year we’ll do some runs on huge (2000+ node) supercomputers.
HPC is becoming part of mainstream computing, and that’s a nice way of saying it can’t fail. We have customers in production and they’ve provided us with a list of great features for our upcoming release. Some of them told us they wanted to use our SOA programming model in mission-critical environments. With this CTP we provide the first of many mission-critical features: disconnect/reconnect for SOA workloads. Say you submit a million Monte Carlo simulations to the cluster. In the past you would receive the calculations as they were completed, one by one. With our CTP you could submit from your laptop, disconnect, go home, play with the dog, sleep soundly, and come back in the morning to collect the complete set of results. We’ll have additional mission-critical features in our future pre-releases.
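The submit-disconnect-reconnect pattern above can be sketched in plain Python. This is a toy stand-in, not the actual HPC Server SOA API: the "cluster" simply persists results to a file so a client can pick up the complete set later, and `run_simulation` is a hypothetical Monte Carlo trial (estimating pi) used only for illustration.

```python
import json
import os
import random
import tempfile

def run_simulation(seed):
    """One hypothetical Monte Carlo trial: estimate pi from 10,000 points."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(10_000)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / 10_000

def submit(task_seeds, results_path):
    """Stand-in for the cluster side: run every task and persist the
    results so a disconnected client can collect them later."""
    results = {str(seed): run_simulation(seed) for seed in task_seeds}
    with open(results_path, "w") as f:
        json.dump(results, f)

def reconnect(results_path):
    """Stand-in for the client side: come back in the morning and
    fetch the complete result set in one go."""
    with open(results_path) as f:
        return json.load(f)

# Submit, "go home", then reconnect and collect everything at once.
path = os.path.join(tempfile.gettempdir(), "mc_results.json")
submit(range(8), path)
estimates = reconnect(path)
print(len(estimates))  # 8 completed simulations waiting for us
```

The point of the sketch is only the shape of the workflow: the client does not have to stay connected while results trickle in one by one.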
HPC Server 2008 allowed people who weren’t supercomputing geniuses to set up and run a cluster, easily and quickly. Our CTP includes a number of enhancements to our distributed management infrastructure, including the ability to create custom heat maps and use third-party reporting tools.
Using a cluster should be as easy as using any other network resource. Just like you can connect to a network printer and print jobs, you should be able to connect to a cluster, submit a computationally challenging job, and get the results. Our improved job scheduler includes a number of customer requested features including job progress and pre/post tasks. Pre/post tasks are cool because you can use them to set up and tear down compute nodes. Say you’re doing a bunch of genomic searches. With pre-tasks you could stage parts of the genomic database across your compute nodes, run your searches in parallel, and then clean up when you’re finished. Hey, it’s a simple map/reduce system.
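The genomic-search example above really is a simple map/reduce, and the shape of it can be sketched in a few lines of Python. Everything here is a hypothetical stand-in (local directories play the role of compute nodes, and the "database" is four short sequences), but the pre-task / search / post-task structure mirrors the workflow described above.

```python
import os
import shutil
import tempfile

def pre_task(node_dir, shard):
    """Pre-task: stage this node's slice of the database locally."""
    os.makedirs(node_dir, exist_ok=True)
    with open(os.path.join(node_dir, "shard.txt"), "w") as f:
        f.write("\n".join(shard))

def search_task(node_dir, query):
    """Map step: each node searches only its locally staged shard."""
    with open(os.path.join(node_dir, "shard.txt")) as f:
        return [seq for seq in f.read().splitlines() if query in seq]

def post_task(node_dir):
    """Post-task: tear down the node, removing the staged data."""
    shutil.rmtree(node_dir)

database = ["GATTACA", "CCGGAA", "ATTAGC", "TTGACA"]
shards = [database[0::2], database[1::2]]   # split across 2 "nodes"
root = tempfile.mkdtemp()
hits = []
for i, shard in enumerate(shards):
    node = os.path.join(root, f"node{i}")
    pre_task(node, shard)                    # stage
    hits += search_task(node, "TTA")         # search (in parallel on a real cluster)
    post_task(node)                          # clean up
print(sorted(hits))                          # sequences containing "TTA"
```

On a real cluster the loop body would run concurrently on separate nodes; the pre/post tasks are what make each node self-contained.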
Finally, we’re about to ship a new toolpack and it has our favorite new tool: Lizard. Lizard is a tuning wizard for the Linpack benchmark (get the name???). If you’re a Top500 geek you’ll love this tool. Using a set of parametric sweeps we identify the best input parameters for Linpack, helping you get a killer Linpack benchmark score. Oh, and it’s called Lizard. Did we mention the name? Hmmm… What could we call a GPU tuning wizard…
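A parametric sweep like the one Lizard performs boils down to trying parameter combinations and keeping the best score. The sketch below is a toy: `N` and `NB` are real Linpack (HPL) input parameters, but `run_linpack` here is a made-up scoring function standing in for an actual benchmark run whose output you would parse for GFLOPS.

```python
from itertools import product

def run_linpack(n, nb):
    """Stand-in for an actual Linpack run: returns a made-up GFLOPS
    score. A real sweep would launch HPL and parse its output.
    Toy model: bigger problem sizes help, and NB=128 is the sweet spot."""
    return n / 1000 * (100 - abs(nb - 128))

def tune(n_values, nb_values):
    """Parametric sweep: try every (N, NB) combination, keep the best."""
    return max(product(n_values, nb_values),
               key=lambda params: run_linpack(*params))

best = tune([20000, 40000, 80000], [64, 128, 256])
print(best)  # -> (80000, 128) under this toy model
```

A real tuner sweeps more dimensions (HPL also takes a process grid P x Q, among others) and spends real wall-clock time per point, which is exactly why automating the sweep is worth a tool.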
Back to the CTP. HPC is all about scale, performance, and applications. With this CTP we start with scale, and this Fall we’ll demonstrate performance as well as some exciting improvements to help application developers, from the Python/Excel users to MPI programmers and everyone in between.
More information on the CTP is at:
Thank you!
Ryan Waite
Product Unit Manager, Windows HPC Server