Berkeley Releases Cloud Computing Study

By Nicole Hemsoth

February 12, 2009

Researchers at the Reliable Adaptive Distributed Systems Laboratory (RAD Lab) at UC Berkeley have released a 23-page white paper, Above the Clouds [PDF], that provides an in-depth analysis of the emerging cloud computing model. The paper is one of the first academic treatises on the subject to offer a critical profile of the cloud computing landscape today.

We asked two of the paper’s authors, David Patterson, Professor in Computer Science at UC Berkeley, and Armando Fox, Adjunct Associate Professor at UC Berkeley’s RAD Lab, to elaborate on the findings and offer their perspective on how the cloud will impact high performance computing.

HPCwire: Cloud computing has come to mean a variety of things. For the purpose of our discussion here, how would you define it?

David Patterson: Cloud computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a cloud. When a cloud is made available in a pay-as-you-go manner to the general public, we call it a “public cloud”; the service being sold is utility computing. We use the term “private cloud” to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, cloud computing is the sum of SaaS and utility computing, but does not include private clouds.

We don’t use terms such as “X as a service” (XaaS); values of X we have seen include infrastructure, hardware, and platform, but we were unable to agree, even among ourselves, what the precise differences among them might be.

Armando Fox: The key ingredient is having tremendous computing resources instantly available on tap, with no advance arrangements needed and pay-as-you-go billing. Especially relevant is the fact that once you release unused resources, you no longer have to pay for them. This property of “elasticity” shifts many risks from the users of the equipment to the provider of the equipment, creating new economic models that can change the way that startups, researchers, and even established enterprises think about IT spending.
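To make that concrete, here is a minimal sketch of the elasticity economics, assuming a hypothetical $0.10 per server-hour rate and a made-up demand trace; it compares provisioning in-house for peak demand with paying only for the server-hours actually used:

```python
# A minimal sketch of elasticity economics: provisioning in-house for
# peak demand vs. paying per server-hour and releasing idle machines.
# The demand trace and the $0.10/server-hour rate are assumptions.

hourly_demand = [2, 2, 3, 10, 40, 12, 4, 2]   # servers needed, hour by hour
rate = 0.10                                    # $/server-hour (hypothetical)

peak_provisioned = max(hourly_demand) * len(hourly_demand) * rate
elastic = sum(hourly_demand) * rate

print(f"peak-provisioned: ${peak_provisioned:.2f}")  # idle machines still cost money
print(f"elastic:          ${elastic:.2f}")           # released machines cost nothing
```

The gap between the two figures is exactly the risk that elasticity shifts from the user to the provider.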

HPCwire: Cloud computing is arguably the biggest paradigm shift in IT since the PC. Although similar concepts like utility computing and grid computing have been around for some time, they never attained widespread commercial success. What pieces of technology have come together to make cloud computing viable today?

Fox: While there are many technical factors, we believe the most important is the existence of extremely large datacenters built from tens of thousands of commodity computers. It also turns out that capitalizing a datacenter at this scale brings cost advantages of a factor of five to seven compared to, say, a medium-sized enterprise datacenter of hundreds of computers. And the huge growth of the Internet drove companies such as Google, Amazon, and eBay to build such datacenters, to develop infrastructure software for them, such as the Google File System or Amazon Dynamo, and to develop the operational expertise to armor them against the hostile environment of the public Internet.

Patterson: These technical advances were matched by a business model that offers three key features: 1) the illusion of infinite computing resources available on demand; 2) the elimination of an up-front commitment by cloud users, thereby allowing companies to start small; and 3) the ability to pay for computing resources on a short-term basis as needed and to release them when unneeded. Past efforts at utility computing failed because one or two of these three critical characteristics were missing. For example, Intel Computing Services in 2000-2001 required negotiating a contract and committing to longer-term use, rather than paying by the hour.

Grid computing, alas, created protocols that offered shared computation and storage over long distances, but it did not lead to a software environment that grew beyond the HPC community.

HPCwire: There are some prominent people in the industry like Richard Stallman — quoted in the paper — who portray cloud services as marketing hype and who are wary of becoming dependent on cloud and service providers. Is this just resistance to new paradigms or do people like Stallman have a valid point?

Fox: While we believe that cloud computing is definitely more than just “marketing hype,” we agree that the uncertainty of having one’s data and applications “locked in the cloud” may be a potential obstacle to cloud adoption. As we describe in the paper, cloud offerings differ in the level of management and functionality offered in the cloud. For example, Amazon’s offering relies heavily on the appeal of a robust open-source software ecosystem and provides relatively little in the way of “built-in” functionality. Microsoft Azure, by contrast, allows deployed applications to run in a managed .NET environment and make use of the .NET framework and libraries, which makes those applications (and potentially the data they manage) more difficult to move to another cloud provider that might not offer .NET.

Patterson: We think there is a potential danger to business continuity if you are dependent on a single cloud computing provider. We argue that such concerns can be addressed by standardizing APIs so that multiple providers can offer the same service, so that cloud computing users can move their application if a provider offers poor service or goes out of business.

The obvious fear is that this would lead to a “race-to-the-bottom” and would flatten the profits of cloud computing providers. We offer two arguments to allay this fear. First, the quality of a service matters as well as the price, so customers will not necessarily jump to the lowest cost service. Some Internet service providers today cost a factor of ten more than others because they are more dependable and offer extra services to improve usability. Second, standardization of APIs enables a new usage model in which the same software infrastructure can be used in a local datacenter and in a public cloud. Such an option could enable “surge computing,” in which the public cloud is used to capture the extra tasks that cannot be easily run in the datacenter (or private cloud) due to temporarily heavy workloads. We think surge computing could significantly expand the size of the cloud computing market.
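As an illustration of the standardized-API idea behind surge computing, here is a hedged sketch in Python; the Cloud interface and both provider classes are hypothetical stand-ins, not any real vendor's API:

```python
# A hedged sketch of the standardized-API idea behind surge computing:
# if private and public clouds expose the same interface, overflow work
# can spill into the public cloud. The Cloud interface and class names
# are hypothetical illustrations, not any real provider's API.

from abc import ABC, abstractmethod

class Cloud(ABC):
    @abstractmethod
    def capacity(self) -> int: ...

    @abstractmethod
    def run(self, task: str) -> None: ...

class PrivateCloud(Cloud):
    def __init__(self, servers: int):
        self.servers = servers

    def capacity(self) -> int:
        return self.servers

    def run(self, task: str) -> None:
        print(f"private datacenter: {task}")

class PublicCloud(Cloud):
    def capacity(self) -> int:
        return 10**6                      # "infinite" for practical purposes

    def run(self, task: str) -> None:
        print(f"public cloud (pay-as-you-go): {task}")

def surge_schedule(tasks, private: Cloud, public: Cloud) -> None:
    """Fill the private datacenter first; overflow goes to the public cloud."""
    for i, task in enumerate(tasks):
        target = private if i < private.capacity() else public
        target.run(task)

surge_schedule([f"job-{i}" for i in range(5)], PrivateCloud(servers=3), PublicCloud())
```

Because both providers satisfy the same interface, the scheduler never needs to know which one is running a given task, which is precisely what a standardized API would buy cloud users.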

HPCwire: The paper lists ten obstacles to cloud computing. Can you point to one or two that seem the most important overall, and also for high performance computing in particular?

Fox: It was really hard to rank-order these, and even the order in the paper is only a partial order. But we all agreed that cloud computing needs standardized APIs that work across cloud vendors. This would help address two obstacles at once, namely maintaining high availability and preventing data lock-in. As for technical obstacles, we observed that, just as in the past, the cost of long-haul network bandwidth is falling more slowly than all other hardware costs, so we would like to see novel ways for cloud providers to address this high cost of data transfer, such as allowing customers to FedEx a box of disks directly to the cloud datacenter.
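A back-of-the-envelope calculation shows why shipping disks can beat the wide-area network; the dataset size, link speed, and delivery time below are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope: moving 10 TB into the cloud over a WAN link
# vs. overnighting a box of disks. All numbers here are illustrative
# assumptions, not figures from the paper.

data_tb = 10                        # dataset size, terabytes
wan_mbps = 20                       # sustained WAN throughput, megabits/s
shipping_hours = 24                 # overnight courier

data_bits = data_tb * 1e12 * 8      # terabytes -> bits
wan_hours = data_bits / (wan_mbps * 1e6) / 3600

print(f"WAN transfer:   {wan_hours:,.0f} hours (~{wan_hours/24:.0f} days)")
print(f"Shipping disks: {shipping_hours} hours")
```

At these rates the courier delivers roughly 46 times sooner, which is the intuition behind mailing disks to the datacenter.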

For HPC, we think some basic software infrastructure, such as gang scheduling for clouds, would help a lot; but in general, the HPC community has not had to go through the process of re-architecting its software that the Web community went through in the '90s. We think there are plenty of opportunities for innovation if HPC steps up to the plate, and an early demonstration would go a long way toward jump-starting that area. We’re discussing some possibilities at the Berkeley Par Lab, just upstairs from the RAD Lab.

HPCwire: The paper also describes some new application opportunities. Can you outline these and talk about why they are particularly suitable for cloud computing?

Fox: A major new area is allowing desktop apps to extend seamlessly into the cloud; for example, the popular analysis packages MATLAB and Mathematica both support this now. Also, because of the “cost associativity” of the cloud — using 1,000 computers for an hour costs the same as using one computer for 1,000 hours — it is great for apps that parallelize well, like document conversion, photo or video rendering, and so on. Of course, because of the relatively high cost of data transfer, the key is applications for which a lot of computing can be done on each byte transferred into the cloud — an observation made by Jim Gray in 2003 — and for which the latency to transfer that data is small compared to the time during which the data will remain “useful” in the cloud.
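Here is the cost-associativity arithmetic in miniature, with a hypothetical machine-hour price:

```python
# Cost associativity under pure pay-as-you-go pricing: 1,000 machines
# for one hour costs exactly what one machine costs for 1,000 hours.
# The $0.10/machine-hour rate is a hypothetical assumption.

rate = 0.10                                  # $/machine-hour (hypothetical)

serial_cost = 1 * 1000 * rate                # 1 machine x 1,000 hours
parallel_cost = 1000 * 1 * rate              # 1,000 machines x 1 hour

assert serial_cost == parallel_cost          # same bill...
print(f"both cost ${serial_cost:.2f}; the parallel run finishes 1,000x sooner")
```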

We also see the cloud supporting surge computing, where a private datacenter can temporarily overflow into a public cloud to support unexpected surges in workload.

HPCwire: Where do you think cloud computing will fit into the HPC application space?

Patterson: If technical issues like gang scheduling of VMs and higher network bandwidth within the datacenter are addressed, we think many users of HPC applications would love to take advantage of the cloud’s new cost associativity: no extra charge for using 20 times as many computers to get your results back in 1/20th the time. We’re conditioned to buying a set of computers and then trying to keep them uniformly busy. This elasticity of resources, without paying a premium for large scale, is unprecedented, so it will take a while for clever people to exploit this opportunity.

When HPC users don’t have to pay the costs of operating their computers — someone else pays for the building space, electricity, air conditioning, and so on — they may conclude that on average they can get their work done for less than commercial cloud computing, but that seems more like bad accounting than good science.

HPCwire: How does future hardware and software need to be built to take advantage of the cloud model?

Fox: For software, one key approach is focusing on horizontal scalability — the ability to accommodate more users by adding more servers. At the level of storage systems and databases, this remains elusive, as evidenced by the sheer variety of offerings such as Google AppEngine’s MegaStore, Amazon’s S3 and SimpleDB, and other scalable storage services. Also, taking advantage of elasticity means that software must be able to adapt automatically to unexpected workload changes, machine failures, and eventually even whole-datacenter outages. Looking at the spectrum of clouds today, Amazon doesn’t provide any built-in service like this (though third parties such as RightScale are stepping in to fill that gap) but allows developers to architect anything they want; whereas Google AppEngine severely constrains the software architecture of your app, but in return you get a lot of that automatic management for free.
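As a rough illustration of that kind of automatic adaptation, here is a minimal autoscaling sketch; the per-server capacity, headroom factor, and workload trace are all illustrative assumptions:

```python
# A minimal sketch of automatic elasticity: pick a server count that
# keeps per-server load below a threshold, growing and shrinking with
# the workload. Thresholds and the trace are illustrative assumptions.

import math

def desired_servers(load_rps: float, per_server_rps: float = 100.0,
                    headroom: float = 0.8) -> int:
    """Servers needed to keep each below ~per_server_rps, with 20% headroom."""
    return max(1, math.ceil(load_rps / (per_server_rps * headroom)))

for load in [150, 400, 1200, 300, 80]:       # requests/sec over time
    print(f"load={load:5d} req/s -> {desired_servers(load)} servers")
```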

Patterson: Hardware systems should be designed at the scale of a container (at least a dozen racks), which will be the minimum purchase size. Cost of operation will match performance and cost of purchase in importance, rewarding energy proportionality, which puts idle portions of the memory, disk, and network into low-power mode. Processors should work well with VMs; flash memory should be added to the memory hierarchy; and LAN switches and WAN routers must improve in bandwidth and cost.
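To illustrate energy proportionality, here is a small sketch comparing an ideal, proportional server with one that draws roughly half its peak power at idle; the wattage figures are illustrative assumptions:

```python
# Energy proportionality in one picture: an ideal server draws power in
# proportion to utilization, while a typical server of this era drew
# roughly half its peak power even when idle (figures are illustrative).

PEAK_WATTS = 300.0
IDLE_FRACTION = 0.5                  # idle draw as a share of peak (assumption)

def typical_watts(util: float) -> float:
    return PEAK_WATTS * (IDLE_FRACTION + (1 - IDLE_FRACTION) * util)

def proportional_watts(util: float) -> float:
    return PEAK_WATTS * util

for util in (0.0, 0.25, 0.5, 1.0):
    print(f"util={util:4.2f}: typical={typical_watts(util):5.1f} W, "
          f"proportional={proportional_watts(util):5.1f} W")
```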

-----

For more discussion of Berkeley’s cloud computing research, go to the Above the Clouds Web site.
