· Scalability and performance. We’ve continued to improve scalability, regularly testing on the 1,000-node cluster at Microsoft Research, and we plan to pursue Top500 runs that demonstrate much greater scalability. We also know customers want to make use of spare processing cycles as part of their overall HPC infrastructures. Windows HPC Server 2008 R2 Beta 2 now integrates with workstations running Windows 7, enabling organizations to use them as cluster compute nodes.
· Simplified parallelism. HPC starts with parallel code, so my team is particularly excited about next week’s Visual Studio 2010 launch. Windows HPC Server 2008 R2 empowers parallel development, providing a platform for both traditional (batch-based) and service-oriented (interactive) HPC applications. And Visual Studio 2010 helps developers create, debug, and trace HPC applications using already-familiar tools.
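To make the idea concrete, here is a minimal, hypothetical sketch of the kind of data-parallel loop these tools are aimed at. It uses standard C++ threads rather than the Visual Studio 2010-era parallel runtimes themselves; the function name and structure are illustrative only. The work of summing a vector is split into per-thread chunks, and the partial results are combined at the end:

```cpp
#include <algorithm>
#include <numeric>
#include <thread>
#include <vector>

// Illustrative sketch (not a product API): split a summation across
// worker threads, each handling a contiguous chunk of the input.
double parallel_sum(const std::vector<double>& data, unsigned nthreads) {
    std::vector<double> partial(nthreads, 0.0);  // one slot per thread, no sharing
    std::vector<std::thread> workers;
    std::size_t chunk = (data.size() + nthreads - 1) / nthreads;

    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            std::size_t begin = t * chunk;
            std::size_t end = std::min(data.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i)
                partial[t] += data[i];  // each thread writes only its own slot
        });
    }
    for (auto& w : workers) w.join();

    // Combine the per-thread partial sums serially.
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}
```

Even in a toy like this, the per-thread state and the join point are exactly where parallel debugging and tracing tools earn their keep.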
· Excel integration and ease of use. Whenever we speak with scientists, engineers, and analysts about their HPC needs, we hear how they rely on Microsoft Excel for computations and how they’d love to scale those computations to run in parallel on a cluster. We’ve responded with HPC Services for Excel 2010. Top systems integrators and consultants like Wipro, Infusion, and Grid Dynamics are now ready to help customers deploy and take advantage of HPC Services for Excel 2010.
· Interoperability options. We’ve heard from customers that “rip and replace” often isn’t a viable option for building out their clusters. So we have started collaborating with industry-leading HPC management companies like Adaptive Computing, Clustercorp, and Platform Computing to enable hybrid options where Windows HPC Server and Linux work together. Whether it’s a dual-boot or a dynamic cluster, hybrid options help organizations get more out of their HPC investments and provide broader access to HPC resources.