MAKING HPC HISTORY: THE MANNHEIM SUPERCOMPUTER CONFERENCE

June 9, 2000

by Uwe Harms

Mannheim, GERMANY — The big picture:

In 1979, the Max-Planck-Gesellschaft's IPP installed the first Cray in Germany, in Garching. Some years later, in 1981/1985, industrial early adopters ran their CDC Cyber 205, Fujitsu or Cray vector supercomputers. The time was ripe for an “old-fashioned” triumvirate – Hans Meuer, Lutz Richter and Hans-Martin Wacker – to start the Mannheim Supercomputer Seminar series in 1986. At that time, fewer than 200 supercomputers – millions of dollars’ worth – were in use worldwide. No one believed at the first seminar that we would be celebrating the conference’s 15th anniversary in 2000. Hans Meuer deserves special credit for navigating this uniquely German supercomputer ship through dangerous shoals of financial challenges, competitors, and divisive arguments to become the premier European event that it is today.

Such a long time-frame allows us to reflect on the significant changes in technology, architecture and performance that have occurred – true “supercomputer archaeology”, if you will. A penetrating look back reveals a host of compelling new ideas and start-up vendors. We see names like Alliant, BBN, Control Data Corporation, Convex, Cray Computer, Cray Research, Denelcor, Digital with its Vector Facility, Dressler – and who remembers Dressler today? – Floating Point Systems, Intel with the iPSC and Paragon, IBM with its Vector Facility, iP-Systems, Kendall Square, MasPar, nCUBE, Parsytec, Suprenum, Thinking Machines, and all the others.

What became of them? Some gave up entirely, others left the scene and looked for new markets, several were taken over by big hardware vendors, and some declared bankruptcy. Others re-emerged as major players after spending years in the background – like Burton Smith with Tera, which is now going back to its Cray roots as Cray Inc. Yet many of their ideas survived and were subsequently adopted by others – for example VLIW (Very Long Instruction Word) by Intel/Hewlett-Packard in the IA-64 series: Merced, now Itanium.

At a certain inflection point – as Andrew Grove of Intel called it – the list of supercomputers became enormous, and characterizing it accurately became fraught with complexity. One vital issue echoed repeatedly: just what is a supercomputer, anyway? The situation world-wide burgeoned to thousands of machines – this was a primary impetus for the birth of the TOP500 list in 1993.

Other issues were very personal. Evening gatherings in the catacombs – in German, Katakomben – of the University, a venue Hans Meuer has changed today, and over several years in the Sonne in Neidenstein allowed a get-together, permitted a tremendous cross-pollination of ideas, and often led to fruitful international friendships. For a lot of the “old” participants this is like a family meeting. We have witnessed so many personal changes in our colleagues – and in myself – over these fifteen years. Many of the participants – especially from the vendor sector – have moved on or have been moved into other firms and businesses. Others resigned and changed their careers, or retired.

When preparing this talk, I went into my cellar and looked for the old proceedings. The first ones were indeed printed in a rather crude fashion! However, their careful perusal offers us an unparalleled view of supercomputing history.

The first event took place on June 20 and 21 – a Friday and Saturday – in 1986. About 100 computing “freaks” came together, and three of them have stayed around since that time: Kristin Mierzowski, Wolfgang Bez and Wolfgang Gentzsch. A look into the list of participants shows the changes that happened during that time frame. For example, Wolfgang Bez, then with Control Data, is now working independently for NEC; Gerhard Holzner, selling Japanese – Fujitsu – vectors for Amdahl back then, sailed to Cray when Amdahl closed its high-end business and is today offering Japanese supers again, but this time for NEC. Another interesting case is Wolfgang Nagel: in 1986 he gave a talk as a scientific employee at Research Center Juelich with the title “Accessing vector computers by operating system, compiler and standard application software.” Fifteen years later he re-entered the arena as a Professor and head of the Center for High-Performance Computing at Technical University Dresden. By the way, Research Center Juelich has always been a good source for interesting talks and speakers!

Every single year Hans Meuer has presented an overview of the current supercomputer situation in his opening remarks. What did he say in 1986, ten years after the first Cray 1? At that time, about 200 supercomputers were installed world-wide: 65% Cray, 15% Control Data and 20% Japanese vendors, 15% Fujitsu (sold by Amdahl and Siemens in Europe). In Germany, one found 15 supercomputers, all of them vector computers: 7 Crays, 4 Cyber 205s, 3 Fujitsu VPs and one Hitachi IAP. In 1986 Hans Meuer expected a major push by IBM’s Vector Facility, probably resulting in the sale of several hundred machines world-wide – as IBM claimed.

Here are Meuer’s personal 1986 predictions on development until 1990:

“Lots of IBM VF deliveries. Control Data would ship the first ETA that year. And the Cray 3 was expected in 1988 with a cycle time of 1 nanosecond. In the low-end market, Meuer expected Floating Point Systems with its array processors and the new T-series – having up to 16,384 processors (262 GFlop/s) – and minisupercomputers like the Convex C1 (20 MFlop/s) and the Alliant FX/8 with up to 8 computational elements (95 MFlop/s peak using 32-bit words).”

He also mentioned iP-Systems from Karlsruhe with its TX2, a tree architecture, and the Suprenum project; both projects failed. And with incredible prescience, he observed, “All the newcomers in this market have to prove that they can deliver a marketable product. Industrial customers cannot bear the burden of a mayfly; one should think of the troublesome way of Denelcor.”

After an overview of the current architectures and trends given by Lutz Richter, University of Zurich, Hans-Martin Wacker, head of IT at DLR (German Aerospace Center), analysed the situation after the announcement of the IBM 3090 VF. There he started his unforgettable series of economic calculations and comparisons of supercomputers.

In 1986, Wacker compared the IBM VF, the Cray X-MP 2 and the VP200 according to the degree of vectorisation and costs:

Computer          Costs           Scalar perf.   Vector perf.

IBM 3090-200 VF   4 Mio DM/year   15 MFlop/s     60 MFlop/s
Cray X-MP 2       8 Mio DM/year   15 MFlop/s     240 MFlop/s
VP200             6 Mio DM/year   7.5 MFlop/s    240 MFlop/s

The vector performance was measured with the DLR benchmark.

This meant that up to a degree of vectorisation of 88%, the IBM 3090 was at that time the most economical solution. In the range of 88% to 97% Cray was ahead, and from 97% on it was the Fujitsu. He concluded that one must take the IBM VF seriously in the supercomputing arena, especially as an entry-level model.
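To see how such crossover points arise, here is a minimal sketch – my own reconstruction, not Wacker's original calculation – that combines each machine's scalar and vector rates in the usual Amdahl fashion and compares performance per million DM per year. With the 1986 figures from the table above it reproduces crossovers near 88% and 97%:

machines = {
    # name: (cost in Mio DM/year, scalar MFlop/s, vector MFlop/s) -- 1986 figures from the table above
    "IBM 3090-200 VF": (4.0, 15.0, 60.0),
    "Cray X-MP 2":     (8.0, 15.0, 240.0),
    "VP200":           (6.0, 7.5, 240.0),
}

def effective_mflops(scalar, vector, alpha):
    # Effective rate for a workload with vectorisation degree alpha (0..1),
    # assuming the scalar and vector parts simply share the run time.
    return 1.0 / ((1.0 - alpha) / scalar + alpha / vector)

for percent in range(80, 101):
    alpha = percent / 100.0
    value = {name: effective_mflops(s, v, alpha) / cost   # MFlop/s per Mio DM per year
             for name, (cost, s, v) in machines.items()}
    best = max(value, key=value.get)
    print(f"{percent:3d}% vectorised -> most economical: {best}")

Running this, the IBM 3090 wins up to 88% vectorisation, the Cray from 89% to 96%, and the VP200 from 97% upwards – consistent with Wacker's conclusion.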

In 1987 he presented a revised edition, as the vendors had reduced their prices:

Computer           1986        1987

Cray X-MP          30 Mio DM   20 Mio DM
IBM 3090-200 VF    15 Mio DM   10 Mio DM
Fujitsu VP200      15 Mio DM   10 Mio DM

This resulted in revised economic figures; specifically, the VP200 was now superior to the others at a degree of vectorisation of 80%. That year he computed the economics based on Jack Dongarra’s Linpack benchmark, n=100 and n=300. A lot of participants afterwards discussed the usefulness of running a supercomputer to solve a small linear system of dimension 100 or 300. I have really missed Hans-Martin Wacker the last few years: now he cannot attack me when I smile about his economic observations, as he did five years ago.

Now he calculates his economics on an industrial basis: he became one of the two General Managers of debis Systemhaus Solutions for Research, a Joint Adventure – oh, pardon, a Joint Venture – of debis Systemhaus, a subsidiary of DaimlerChrysler and Deutsche Telekom, and DLR. They hope that this outsourcing will reduce DLR’s IT spending.

Guest star in 1986 was Raul Mendez – as they said in a very popular German television show: switch the lights off, the spotlight comes on, and here he is, Raul! He was the only American who had benchmarked both US and Japanese vector computers. Hans Meuer had great expectations of Raul, but put himself into a rather tight situation during his seminar – thus quickly acquiring the grey hair that you see now! The day before the seminar, Raul suddenly stood in the doorway, wearing Texas boots and brandishing a shoe box full of slides – typed, handwritten and in chaotic order. Hans and his team selected 25 to 30, but these were readable only from the first row of the lecture hall, so a student had to redo them during a night shift.

The expected importance of the IBM VF at that time was highlighted in a talk by Professor Martin Bürkle of the Regional Computer Center of the University of Kaiserslautern. He compared the VF with a VP100/200 in detail, although at the time the proceedings were printed there was no VF in Europe. He concluded that the machines addressed different market segments.

The other talks covered application-specific areas like numerical simulation in aerodynamics, exploration, and finite element computations with the package PERMAS. The concluding section concerned networking and supercomputers, and the Cray 2 at Stuttgart as a challenge for industry. Its conclusions are just as true for small, medium and big enterprises 15 years later.

The first Supercomputer seminars took place right in the University – in lecture halls, I should say – some meters away from this auditorium. They were conducted with wooden chairs, an overhead projector and powerful slides, without colorful PowerPoint and laptops. Then, as now, the seminars represented a close community of vendors, users and computer centres. Looking at that old list of participants, some laughed at the fact that Siemens seemed to sponsor the event – as they filled 10% of the auditorium!

Some years before, Hans had conducted a seminar with Denelcor, followed by a wonderful evening event in Heidelberg focusing on medieval times – Burton, you remember? Following these experiences, Hans Meuer successfully started his evening event series. In those early days we visited wineries nearby in the Pfalz, long famous for its fabulous food and spirits. Many new friendships were forged there, and a network of supercomputer fans gradually emerged. In the following years some of us, like me, even got a bottle to take away – to Munich. When we left the bus at the Holiday Inn, a group of 15 fans exchanged opinions with the intensity of theological debate: Cray against the Japanese, vectors against minisupercomputers, and so on.

We opened our bottles and inaugurated an open-ended session – birds of a night feather! I think it was about half past three, or much later, that morning when we finally went to bed in our hotels. The next morning Professor Wolfgang Gentzsch, Fachhochschule Regensburg – still feeling some of the previous night’s spirituous effects – was slated to give a talk benchmarking and comparing the minisupers Convex C-1, Alliant FX/8 and SCS 40. To prepare, he drank lots of water, started in alphabetical order with Alliant – and closed his talk with Alliant! His unique presentation certainly made a deep impression on all of us. I think that might be the reason why he now mostly confines his talks to the first day…

Highlights 1987

The second event was interesting because of the Daimler Benz benchmark. Here Hans Meuer gained some interesting experience in decision making and industrial efficiency. That year he invited Sidney Fernbach and Jack Dongarra from America and seven other speakers from Germany. From this experience, Hans derived an effort equation: the effort expended in getting Fernbach and Dongarra as speakers, together with their printable talks for the proceedings, was half of that required for the Daimler speakers and equal to that for the seven others. He published the ratio in the 1987 proceedings:

Effort in time for speaker and printed talk:

(Fernbach + Dongarra) : (Haase + Heib, Daimler Benz) : (7 other speakers) = 1 : 2 : 1

This clearly demonstrates and underlines the work overload of industrial representatives. Haase and Heib wanted to disclose the secrets of the Daimler Benz benchmark and the wall-clock times they measured. Today, however, rumours still persist that they said in the discussion that the benchmark was never the foundation of that purchasing decision. This might open new vistas for heads of computer centres in academia and research, too.

Supercomputer 1988 and my daily work at IABG (Industrieanlagen-Betriebsgesellschaft)

In 1988, the supercomputer seminar suddenly influenced our daily work at IABG. A colleague and I listened to one of our customers, Professor Hirschel of Messerschmitt-Bölkow-Blohm (now Dasa), Ottobrunn, who reported on whether the supercomputers of the day had sufficient power to support aircraft design. We proudly listened to his recitation of the performance factors of our VP200 at IABG, also in Ottobrunn, compared to an MBB IBM 3090: a factor of 25 with the NSFLEX code and 17 with the EUFLEX code, but only 3.4 with the HISSS code. Back home, we discussed this topic with the responsible engineers and soon succeeded in gaining a factor of 10 – and with some new data structures even a factor of 30. This incident clearly demonstrated how dramatic improvements can be achieved with vectorisation.

A perennial highlight of the Supercomputer Seminars has been the debates between so-called supercomputer Protestants and Catholics, moderated by Hans Meuer. Two of them were truly unforgettable because of the partisans’ attitudes and intense personal involvement. The first, in 1989, addressed vectors against MPPs – or, I should say, Professor Willi Schönauer (Computer Center, University of Karlsruhe) with “Why I Like Vector Computers” going head-to-head against Professor Ulrich Trottenberg (GMD – Research Center for Information Technology, Sankt Augustin) with “Parallel Computing is the Future”.

Willi Schönauer clearly underscored the benefits for the user and the computer center: easy programming, short turnaround, throughput for the center. I still recall his ringing assertions: Message passing systems can easily be built, but they cannot be programmed. Shared memory systems can easily be programmed, but they cannot be built. MIMD shifts the problems to the user. Only a monoprocessor SIMD multiparallel pipeline computer such as the CPVC is a user-friendly architecture.

CPVC was his proposal for a Continuous Pipe Vector Computer.

The contrary view, presented by Ulrich Trottenberg, one of the fathers of Suprenum, was equally uncompromising: Only parallel computers offer unlimited (in principle!) high performance. The limits of traditional supercomputers are obvious. Parallel computers based on off-the-shelf technology can be developed and produced cheaply, while costly supercomputer technology is entirely avoidable. Vector performance can be integrated into the processors of a parallel system so as to utilize the computational experience gained with vectors. All vectorisable tasks can be parallelised in the trivial SIMD way, whereas many tasks are MIMD-parallelisable but not vectorisable. Typical tasks of scientific supercomputing come close to the theoretical optimal speedup of a parallel computer. Parallel computers can be programmed more easily and with more clarity than vector computers – and everybody who has done it will confirm it! Vectorisation goes against the grain of numerical analysis: it is poor, inefficient numerics which knows only vectors and matrices.

Perhaps he directed these marketing statements to the German Ministry of Research which funded his MPP Suprenum project…

In 1988, Meuer listed vector computers with a peak performance per processor of up to 500 MFlop/s. Today’s machines show a vector performance of up to 9.4 GFlop/s with the VPP5000 – a factor of 19 – along with a dramatic cost reduction in that area through CMOS technology. Now MPPs, like vector machines, play their role in a market niche. Clusters of workstations and moderate parallelisation are the current trends. As Ulrich Trottenberg mentioned in March this year during the TTN (Technology Transfer Node) event at GMD, the key issue now is parallelised application software for industry on low-cost workstation clusters.

Another memorable debate took place in 1993: parallel computers versus workstation clusters. Here the opponents were Professor Reinhard Ahlrichs, Chemistry at Universität Karlsruhe, and Professor Andreas Reuter, at that time with the University of Stuttgart. Six years earlier Reinhard Ahlrichs had given a talk on the prospects and limitations of theoretical chemistry on vector computers. In the meantime, he and his group had ported the program TURBOMOLE, with 100,000 lines of code, onto a workstation cluster. His arguments: It is impossible to port the code onto an MPP. Such a machine, based on off-the-shelf processors, would require 3 to 5 years of development, while the performance of workstations would improve by a factor of 10 to 100 in that timeframe. 10 to 100 workstations can readily be utilized at 50% efficiency. Problems of message passing and data transfer for more than 1000 nodes are intractable.

Reuter stated that workstation clusters cannot replace MPP systems because: the ratio of non-local memory accesses to instruction execution must be kept low – and this is not the case with clusters. Virtual shared memory lowers the burden of message passing. Applications like data mining have I/O in the Gigabyte to Terabyte range, and this is not efficiently realisable on clusters. The problems of fault tolerance and availability of clusters and MPPs remain unsolved.

Now clusters are “in”, and we have a dedicated session on experiences with clusters on Saturday. In one of the talks tomorrow, Paderborn and Fujitsu Siemens will present the SCI-based hpcLine. Wolfgang Dreyer will report on the poor man’s Alpha cluster, and Thomas Lippert from the University of Wuppertal on their Compaq-Linux-Alpha cluster. Compaq is thus returning to its Digital roots: workstation clusters based on Alphas with a high-performance interconnect from Myrinet or Quadrics, intended to replace MPPs like the Cray T3E.

Top500 in 1993

The same year, Meuer started his fabulously popular Top500 list in Mannheim together with Jack Dongarra and Erich Strohmaier. Rumour has it that the notion was born from a discussion during a wine- and beer-centric evening event at one of the Supercomputer Conferences. Based on the Linpack benchmark, the 500 fastest computers enter the list, with results proffered by the vendors. I was proud – as an independent consultant – that the Fujitsu VP200, installed in 1985 at IABG and outsourced in 1990 to debis Systemhaus, entered the first list at rank 500, as I supported this machine for several years. Of course, a few attempted to best their rivals through various tricks, like using the Strassen algorithm, which needs fewer operations, while still dividing the run time into the operation count of Gaussian elimination. This resulted in Linpack performances that exceeded peak performance, as observed in the vector computer arena.

Over the years, this list has provided us with a wonderful overview, although it has begun to take on a life of its own – one never anticipated by those who initiated it. Self-styled marketing experts from different vendors have grown intoxicated with the power of figures. So today we see machines on the list from the commercial sector that will never solve a linear equation. I have personally observed the remarkable potency of the Top500 at a Hewlett-Packard users meeting. Frank Baetke, HP – one of the “old” guys in supercomputing – asked his customers: for whom is this list important? The German centres immediately chimed in: for us, as the DFG, the German Research Foundation, looks at it.
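The arithmetic behind that trick is easy to see. The Linpack rate reported for the list is the operation count of standard Gaussian elimination – roughly 2/3·n³ + 2·n² floating-point operations – divided by the measured run time, regardless of how the solution was actually obtained. A small sketch with hypothetical numbers (the problem size, peak rate and Strassen savings below are illustrative assumptions, not measurements):

def linpack_flops(n):
    # Nominal operation count credited to a Linpack run: 2/3*n^3 + 2*n^2.
    return 2.0 / 3.0 * n**3 + 2.0 * n**2

n = 10_000                  # problem size (hypothetical)
peak_gflops = 2.0           # machine peak (hypothetical)
strassen_fraction = 0.7     # assume Strassen needs only ~70% of the Gauss operations

# Suppose the machine executes its actual, Strassen-reduced work at 90% of peak:
actual_flops = strassen_fraction * linpack_flops(n)
time_s = actual_flops / (0.9 * peak_gflops * 1e9)

# The reported rate still divides the full Gauss operation count by the run time:
reported_gflops = linpack_flops(n) / time_s / 1e9
print(f"peak: {peak_gflops:.2f} GFlop/s, reported Linpack: {reported_gflops:.2f} GFlop/s")
# reported = 0.9 * peak / 0.7, i.e. about 1.3 times peak -- "faster than the hardware".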

Mannheim and HPCN Europe 1994, Conference and Exhibition

In 1994 Hans Meuer watched a crisis unfold for his event: competition from his own friends, Wolfgang Gentzsch and myself. We discussed with Royal Dutch Fair – whose 1993 Supercomputer conference and exhibition in Utrecht had failed – a new start in Munich with our personal support. Wolfgang would be Chairman and I the local organiser. But I think that, like classic German tanks, we rolled right over Royal Dutch Fair – along with Bob Hertzberger and his HPCN Europe. With our personal effort, we had a wonderful and successful show in Munich. But Hans likes fair play – we told him we did not intend to destroy his event – and he participated in HPCN Europe. Meuer won that year too and had the highest number of participants since 1986. Soon the Dutch Mafia obtained the leadership of HPCN Europe and made bad mistakes – like inviting two vendor representatives for keynote talks, provoking the competitors, and holding the exhibition in a garage. So in 1997 Hans became number one, as he was application oriented. HPCN Europe became just another scientific HPC conference.

Supercomputer moving through Mannheim

Until 1991, the event took place in the University. That year we had to move to the Hotel Wartburg; an inspection had found the chandeliers in the Auditorium Maximum to be unsafe. As I mentioned five years ago, a major academic deficit concerning a very human problem was also solved: in the Wartburg the toilet paper and towels are nice and soft, instead of hard and rough as in the university. The uni-paper is simply not adequate for the industrial managers, who pay high conference fees. This evening I wanted to propose a real-life test – go to the toilet downstairs and check the quality of the paper – but Hans has suddenly moved the event to the Rosengarten. Probably the Wartburg migration, the rising number of participants, and the requirement of having an exhibition for hardware and software vendors were the primary reasons for moving from the “old” walls of the university to the Congress Center Rosengarten. This and the new facilities give the Supercomputer Seminar a new flavour, a more professional atmosphere and some exhibition space.

Keynote Speakers

All through the years Hans Meuer has succeeded in attracting interesting keynote speakers, and the full list is quite impressive. In 1986, Raul Mendez took center stage, as I noted earlier, and Sidney Fernbach followed in 1987. In the following years there were Kenichi Miura (Fujitsu, Kawasaki), Hugh Walsh (IBM Kingston) and Dennis Duke (Florida State University). Then he invited Enrico Clementi (Chemistry, IBM). In 1990 Ulrich Seiffert from Volkswagen, now in its top management, discussed HPC usage in the automotive industry. The topic of microprocessors as the basic technology of coming computers was beautifully covered by Professor Färber from Munich. In 1993 Steve Nelson presented the Cray view of designing MPPs; the next year the German architect Wolfgang Giloi described the parallel computer development at GMD, from Suprenum to Manna and Meta.

Christopher Johnson and Steven Parker, University of Utah, presented simulation and visualisation in medicine. Horst Körner, DLR Braunschweig, described the same technologies as they are used in aircraft design. In 1997, David Burridge of ECMWF – the European Centre for Medium-Range Weather Forecasts – discussed the European weather. In 1998, the others – without me – listened to Larry Smarr. Last year we heard Gordon Bell, now with Microsoft, on “The Next Ten Years of Supercomputing”. This year we listened to one of the most renowned specialists in the parallel and vector computer field, Gene Amdahl. His terrible law is still at work and puts a big burden on the programmer to achieve a higher degree of vectorisation or parallelisation. We often discussed this law, or the modified Gustafson version – especially Hans-Martin Wacker in his economic examinations.
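For reference – in their standard textbook forms, not as quoted in any of the proceedings – the two laws in question can be written as:

\[
S_{\text{Amdahl}}(N) \;=\; \frac{1}{(1-p) + p/N},
\qquad
S_{\text{Gustafson}}(N) \;=\; (1-p) + p\,N,
\]

where p is the vectorisable or parallelisable fraction of the work and N is the number of processors (or the vector speedup). Amdahl's form caps the speedup of a fixed problem at 1/(1-p), which is exactly the burden on the programmer mentioned above, while Gustafson's form assumes the problem size grows with the machine.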

Some very personal observations and gags

To conclude, let me point out that wine and beer, food and sweets are waiting for you and me. And I also want to describe some humorous events. Tomorrow we will have our usual 15-minute vendor presentations. As always, Hans Meuer will use his hour-glass to stop speakers. On one prior occasion, a vendor, reading from a huge sheaf of slides, started too slowly and then was abruptly stopped – much to his chagrin. During another presentation, a young and dynamic salesman – he is now older but no less dynamic! – described in detail how he stealthily entered the Research Center Jülich – like a thief – to benchmark his competitor’s computer. Although this account was really meant as a gag, Professor Friedel Hossfeld, head of that computer center, saw all eyes turning towards him and noticed an increasingly profound silence. So he stood up, walked to the stage and declared: “Everybody who has scientific reasons and professional and technical interest will find open doors and ears in my center for minor benchmarking – he need not climb over the fence!”

The results of another humorous event travelled to Japan and back to Germany. As all the participants have since left their vendors, I can now tell the story. Siemens representatives announced the Fujitsu VP50 to VP400 EX series. As I mentioned earlier, Amdahl sold the same machines in Germany, and Amdahl was a Fujitsu daughter company. This caused the Amdahl representative to say: “We announce a product only when our mother, Fujitsu, does so, and when it is tested and available.” This prompted the Siemens managers to send an official complaint about this behaviour across the Pacific to Fujitsu in Japan. The result was an email sledgehammer back to the first Amdahl salesman.

Now I want to thank Hans Meuer for 15 years of interesting Supercomputer entertainment, especially in the discussions he chaired. As you have now retired and work as president of Prometeus ( http://www.prometeus.de ), I want to say that I love your homepage, as you use the Primeur green, the colour of our virtual magazine on the Web. Your topics include much fun, but I would still like to see you connected to one of the private television channels, so you can finally take your place as the TV star of supercomputing. Perhaps you can ask Ad Emmen, as he lives in Almere, Netherlands. There Endemol, the big TV entertainment company, has its headquarters, and with Ad’s personal support you can become the star of the new series “Big Brother in Supercomputing”.

I wish you good luck with your enterprise and many years of success with your Supercomputer Seminar in Mannheim.

There are some others I want to thank for their work in the background over the last 15 years. They are responsible for everything that works – like acoustics and electricity, registration, and solving many unforeseen problems. I personally want to applaud the students, the photographers, and especially the ladies: Mrs. Sheedy, Mrs. Babietz, Mrs. Joeck, Mrs. Pippert and Mrs. Schnell. Nor will I forget Herbert Schneider, the “good soul” of the event. I believe he has been involved since the beginning, organising all the multimedia – acoustics, audio, video, and the critical PowerPoint presentations on different laptops.

Additionally, great thanks to Alan Beck from Tabor Griffin Communications and to Ron Elliott, who was responsible for the analysts at IBM and has now retired. Both looked over this talk and polished my English with great enthusiasm.

Thank you all! I wish you a wonderful evening replete with heated discussions and provocative talks during the next few days.

——- Uwe Harms is a supercomputing consultant and owner of Harms-Supercomputing-Consulting in Munich, Germany.
