
Penguin brings cloud computing to HPC

CIOL Bureau

SAN FRANCISCO, USA: Penguin Computing, a provider of high-performance computing solutions, announced the immediate availability of “Penguin on Demand” – or POD – a new service that delivers, for the first time, a complete high-performance computing (HPC) solution in the cloud.


POD extends the concept of cloud computing by making compute resources optimized specifically for HPC available on demand. POD is targeted at researchers, scientists and engineers who require surge capacity for time-critical analyses, and at organizations that need HPC capabilities without the expense and effort of acquiring HPC clusters.

Charles Wuischpard, CEO, Penguin Computing, said: “The most popular cloud infrastructures today, such as Amazon EC2, are not optimized for the high performance parallel computing often required in the research and simulation sciences. POD delivers immediate access to high density HPC computing, a resource that is difficult or impossible for many users to utilize in a timely and cost-effective way. We believe POD will promote new and faster innovation by enabling access to scalable high performance computing to a much broader market.”

POD provides a computing infrastructure of highly optimized Linux clusters with specialized hardware interconnects and software configurations tuned specifically for HPC. Rather than using machine virtualization, as is typical in traditional cloud computing, POD gives users access to a server’s full resources at once, maximizing performance and I/O for massive HPC workloads.

Comprising high-density Xeon-based compute nodes coupled with high-speed storage, POD provides a persistent compute environment that is managed from a head node and executes jobs directly on the compute nodes’ physical cores. Both GigE and high-performance DDR InfiniBand network fabrics are available. POD customers also get access to state-of-the-art GPU supercomputing with NVIDIA Tesla processor technology. Jobs typically run over a localized network topology to maximize inter-process communication bandwidth and minimize latency.
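The workflow described above resembles a conventional batch-scheduled cluster rather than a virtualized cloud: jobs are submitted from a head node and land directly on physical cores. As a purely illustrative sketch (the article does not name POD’s scheduler; the PBS-style directives, queue name, node geometry and program names below are assumptions), a job submission might look like:

```shell
#!/bin/bash
# Hypothetical PBS-style batch script for an MPI job on an HPC cluster.
# All directives, the queue name, and the node counts are illustrative
# assumptions, not documented POD settings.
#PBS -N cfd_simulation
#PBS -l nodes=8:ppn=8        # 8 physical nodes, 8 cores each -- no VM layer
#PBS -l walltime=04:00:00
#PBS -q batch

cd "$PBS_O_WORKDIR"

# Launch 64 MPI ranks directly on the allocated nodes' physical cores,
# communicating over the cluster's high-speed interconnect.
mpirun -np 64 ./solver input.dat
```

Such a script would typically be submitted from the head node with a command like `qsub job.sh`; the scheduler then places the MPI ranks across the allocated nodes without any virtualization layer in between.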
