While price and performance have typically been the parameters for evaluating, selecting, and operating HPC storage systems, productivity is increasingly the more important measure. In essence, productivity means getting the best insights and answers in the shortest possible wall-clock time, within an envelope of other factors such as cost and when the result is needed.
Productivity takes into account not only processing speed, the ability to keep CPUs fed with data, and fast data transfers from storage to compute resources, but also reliability, availability, speed to insight, and the completion of the greatest number of compute jobs in a given time. One additional productivity criterion to consider is how long a system can remain useful without excessive downtime and maintenance. Essentially, the discovery process needs high-performance systems that can run the most compute and data-analysis jobs over many years without excessive downtime or maintenance.
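To make those criteria concrete, here is a minimal illustrative sketch, not from the source, of a toy productivity measure in Python that weighs job throughput by availability. The function name and the simple multiplicative weighting are assumptions chosen purely for illustration:

```python
# Illustrative sketch only: a toy productivity metric that weighs job
# throughput (completed jobs per hour) by system availability.
# The function name and multiplicative weighting are assumptions.

def productivity(jobs_completed: int, wall_clock_hours: float,
                 downtime_hours: float) -> float:
    """Completed jobs per hour, discounted by the fraction of time the
    system was actually usable."""
    availability = (wall_clock_hours - downtime_hours) / wall_clock_hours
    return (jobs_completed / wall_clock_hours) * availability

# Example: 10,000 jobs over a 720-hour month with 2 hours of downtime.
print(f"{productivity(10_000, 720.0, 2.0):.2f} availability-weighted jobs/hour")
```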
NCSA Blue Waters: The HPC high productivity poster child
A good example that embodies what is needed in the most demanding discovery environments is the National Center for Supercomputing Applications (NCSA) Blue Waters system. The Blue Waters team partnered with supercomputing company Cray for compute and data storage to help achieve its high-productivity goals.
From the start, those managing the Blue Waters program focused on productivity. Its mission is to enable science and engineering that cannot be done otherwise, and to greatly improve time to insight. Examples of work that was not even feasible before Blue Waters include:
- all-atom simulations of viruses using over 100 million atoms
- simulation of the first billion years of the universe's evolution and galaxy formation after the big bang, which required all the memory on the Blue Waters compute system
- production of high-resolution digital elevation models of one-third of the Earth in less than two years
Some teams report factor-of-ten or greater increases in productivity, while others report an effectively infinite increase because they can now do things that were simply not possible before Blue Waters.
Beyond raw compute power, the system had to have a tightly integrated, high-performance storage system that could support the compute data demands. The entire compute and storage infrastructure had to be flexible, optimized, and highly reliable to avoid diminishing the quality of service.
One great challenge with computing systems used for scientific discovery is sustaining productivity over the years. Each year, the modeling, simulation, and analysis get more complex and more granular, and make use of vastly more data.
According to Bill Kramer, Principal Investigator and Director of the Blue Waters Project, “Despite its scale, Blue Waters is incredibly reliable. Over the last project year, Blue Waters had only one unscheduled system-wide interruption, due to a campus-wide power issue, and has a 99.7% scheduled uptime. Mean Time Between System-wide Interrupts has been four to eight months for the past several years. Even more important (knock on wood somewhere, please) is the node and disk reliability. On a daily basis, the individual node failure rate is below 1.5 nodes/day (0.006% of all the nodes) and the drive failure rate continues below 0.43 drives/day (0.0026% of all the disks). What that means is Blue Waters is running better now than it did in the first couple of years of operations, and there is no indication that will change in the next couple of years.”
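As a quick sanity check, not from the article itself, the quoted daily failure rates and percentages together imply the approximate fleet sizes, which a few lines of Python make explicit:

```python
# Back-of-the-envelope check on the quoted reliability figures.
# 1.5 nodes/day is stated to be 0.006% of all nodes;
# 0.43 drives/day is stated to be 0.0026% of all disks.
nodes_failing_per_day = 1.5
node_daily_rate = 0.006 / 100            # 0.006% as a fraction
total_nodes = nodes_failing_per_day / node_daily_rate     # ~25,000 nodes

drives_failing_per_day = 0.43
drive_daily_rate = 0.0026 / 100          # 0.0026% as a fraction
total_drives = drives_failing_per_day / drive_daily_rate  # ~16,500 drives

print(f"Implied node count:  ~{total_nodes:,.0f}")
print(f"Implied drive count: ~{total_drives:,.0f}")
```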
Supercomputing systems must scale in compute power as well as storage capacity and performance. In every field, new lab equipment, satellites, telescopes, and sensors produce higher-resolution, finer-grained data. In the life sciences, next-generation sequencers have become faster (producing more data in a given time) and much less expensive to operate (allowing many more sequences to be completed in a given time). Storage systems have had to keep pace with this data-volume growth, and data-analysis systems have had to scale in performance.
Blue Waters has kept pace with the demands of the scientific research community. Today, Blue Waters continues to be a discovery mainstay and workhorse system for the open research community at NCSA. Since March 2013, Blue Waters has delivered over 26 billion core-hours to scientific research. A key to this success has been the ability to scale the system's storage bandwidth and metadata performance to accommodate the enormous amounts of data being analyzed.
“The storage system requirements on Blue Waters are extremely ambitious. The 1.1 TB/s sustained I/O bandwidth for Blue Waters is still unmatched by any other production open system in the world today. Combined with 36 racks of ClusterStor™ storage (36+ PB raw) and over 250 PB of near-line storage, Blue Waters is still the most data-capable system in the HPC world,” explained Kramer.
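To put those numbers in perspective, a back-of-the-envelope calculation (an illustration, with decimal units assumed for the PB-to-TB conversion) shows how long a full sweep of the file system would take at the quoted sustained bandwidth:

```python
# Quick arithmetic on the quoted storage figures: how long would it take to
# read or write the entire ~36 PB file system at the sustained 1.1 TB/s?
raw_capacity_tb = 36_000        # 36 PB expressed in TB (decimal units assumed)
sustained_bw_tb_per_s = 1.1     # quoted sustained I/O bandwidth

seconds = raw_capacity_tb / sustained_bw_tb_per_s
print(f"Full sweep of the file system: ~{seconds / 3600:.1f} hours")  # ~9.1 hours
```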
The key to HPC productivity: Find the right technology partner
Building a fast and highly productive system for discovery requires the integration of many advanced technologies. For leading-edge systems, that means incorporating solutions that are new to the market. Most organizations simply do not have across-the-board expertise in every element, nor the in-house skill set to evaluate, select, and integrate the components and then optimize and maintain the system's performance.
That’s where an experienced technology partner can help. Cray designed, architected, and built the Blue Waters extreme scale system and maintains and improves the system to keep it at the leading edge of productive computing.
The system's ClusterStor storage is a good example of how Blue Waters remains a high performer and a productive resource: the system's productivity stays high because of the ClusterStor storage's performance and resiliency. Furthermore, Blue Waters with its ClusterStor storage is a leading enabler of converging big data and extreme-scale computing on the same system, running workloads that range from very large modeling and simulation to extremely data-intensive analysis and machine/deep learning.
“The ClusterStor organization has had the privilege to partner with the Blue Waters team for the past six years. As a result, we have carried the lessons learned forward into the latest ClusterStor storage solutions,” according to Don Grabski, who has been with the ClusterStor product management team for seven years. The ClusterStor L300 and L300N lines of parallel file system storage represent the latest HPC storage solutions, and this storage is a perfect match for the most demanding discovery workloads. ClusterStor parallel file systems balance the value equation by delivering the performance, speed, scalability, data protection, and availability to match an organization's requirements and budget.
The systems deliver enterprise-level performance with more capacity, fewer drives, less need for IT support, and faster data access. Furthermore, ClusterStor technology optimizes performance, productivity, and system availability, accelerating time to insight.
The current ClusterStor product line includes:
- ClusterStor L300 Storage System, which is an all-HDD Lustre® solution. It achieves performance requirements with the lowest number of HDDs, enclosures, and racks by maximizing the performance of each storage device.
- ClusterStor L300N Storage System, which is a hybrid SSD/HDD solution with flash-accelerated NXD software that redirects I/O to the appropriate storage medium (see the sketch after this list). It delivers cost-effective, consistent performance on mixed I/O workloads while shielding the application, file system, and users from complexity through transparent flash acceleration.
- ClusterStor L300F Storage System, which is a scalable storage unit that provides the opportunity to add a flash storage pool, creating a truly hybrid system. The L300F is designed and optimized to overcome the latency of rotating media, the remaining IOPS bottleneck for Lustre®.
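The article does not describe NXD's actual redirection policy, but the general idea behind flash-accelerated hybrid tiering can be sketched in a few lines of Python. The 64 KiB threshold, the sequentiality test, and all names below are illustrative assumptions, not Cray's implementation:

```python
# Minimal sketch of flash-accelerated I/O redirection: the general idea behind
# hybrid SSD/HDD tiering. The threshold, heuristic, and names are assumptions.
from dataclasses import dataclass

SMALL_IO_THRESHOLD = 64 * 1024  # hypothetical cutoff: 64 KiB

@dataclass
class IORequest:
    offset: int           # starting byte offset of this request
    length: int           # request size in bytes
    last_offset_end: int  # end offset of the previous request in this stream

def route(req: IORequest) -> str:
    """Send small or non-sequential I/O to flash; stream large
    sequential I/O straight to spinning disk."""
    sequential = req.offset == req.last_offset_end
    if req.length < SMALL_IO_THRESHOLD or not sequential:
        return "SSD"   # flash absorbs the latency-sensitive, random portion
    return "HDD"       # rotating media excel at large sequential transfers

# Example: a 4 KiB random write lands on flash; a 1 MiB sequential write on disk.
print(route(IORequest(offset=10_000, length=4_096, last_offset_end=0)))  # SSD
print(route(IORequest(offset=0, length=1_048_576, last_offset_end=0)))   # HDD
```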
The ClusterStor line offers engineered HPC storage solution features including:
- Integrated software and hardware solution
- Test validation
- Management
- Support automation
Taking these features together, ClusterStor storage systems enable the fast time to results and sustained performance needed in today's most demanding discovery environments. The systems also scale to meet future demands by offering a way to seamlessly increase capacity without impacting performance or availability.
To learn more about increasing HPC productivity, visit:
https://www.cray.com/products/storage/clusterstor