University of Wuppertal Research Team Uses Jülich’s HPC Resources to Reduce AI Uncertainty

February 9, 2024

Feb. 9, 2024 — As artificial intelligence enters new corners of society, academic researchers are hard at work making sure that the applications interacting with our day-to-day routines are ready for whatever life throws at them. A team at the University of Wuppertal uses supercomputing resources at the Jülich Supercomputing Centre to make AI training more efficient, improving problem-solving capabilities for autonomous driving and other complex systems in the process.

In the last decade, artificial intelligence (AI) has gone from science fiction fodder and early experiments in computer language processing to a key technology remaking industries from transportation and energy to media and the arts. While AI applications have quickly made their mark on many aspects of life, perhaps no other use case has excited and worried policymakers, industry experts, and the public like the idea of self-driving cars.

Computer scientists understand how to program a vehicle to drive under ideal, predictable conditions: a train running along a fixed track, or even a car driving around a closed course. Much more difficult, however, is preparing those vehicles to react to new information or objects that they have not “seen” before. For researchers like Dr. Matthias Rottmann at the University of Wuppertal, training AI-governed systems to make the right choice when they face the unknown requires the help of high-performance computing (HPC) resources.

“If you want to train an AI application for situations where you don’t have real, organic data, or you want to use less data to train an application, you are going to have to pay your tab with a lot of computing power,” Rottmann said. He and his team have been using the GPU-equipped JUWELS Booster supercomputer at the Jülich Supercomputing Centre (JSC)—one of the three centers that comprise the Gauss Centre for Supercomputing (GCS)—to work on improving computational decision making that can be applied to real-world applications like autonomous driving.

Think for Yourself

Artificial intelligence applications often learn how to perform a task by being shown millions or billions of images, word sets, minutes of video footage, or other inputs. Researchers ask the program to distinguish one object from another, pick out anomalies in a situation, or find a pattern in a large, unstructured dataset. When training an application in this manner, researchers give the program negative feedback for each wrong answer, helping to sharpen the application’s ability to make the right choice in the future. While this approach is effective, it can become prohibitively expensive in time and resources when training an application for an extremely large set of possibilities.
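
To make that feedback loop concrete, here is a minimal sketch of supervised training in Python with PyTorch. The model, data, and sizes are illustrative stand-ins, not the Wuppertal team’s actual setup: the loss term acts as the “negative feedback” for wrong answers, and each optimizer step nudges the model toward better choices.

```python
# Minimal sketch of supervised training: the model sees labelled examples
# and receives a loss penalty ("negative feedback") for every wrong answer.
# All names and sizes here are hypothetical, for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()  # penalizes confident wrong answers most

# Toy stand-ins for a labelled dataset (e.g., image features and class labels).
inputs = torch.randn(256, 16)
labels = torch.randint(0, 2, (256,))

for epoch in range(10):
    optimizer.zero_grad()
    logits = model(inputs)
    loss = loss_fn(logits, labels)  # the "negative feedback" signal
    loss.backward()                 # propagate feedback through the model
    optimizer.step()                # adjust weights to answer better next time
```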

A visualization of an active learning method for the semantic segmentation of camera images of street scenes. Left: an uncertainty quantification (red represents high uncertainty for a given AI, green low uncertainty). Right: image regions selected for labelling based on the uncertainty quantification, which are then presented to the AI so that it learns from them. Image credit: Matthias Rottmann.

Among other research focuses, Rottmann’s team works on improving a different machine learning approach, known as active learning, in which the application flags objects it is unsure of rather than waiting for negative feedback. For Rottmann, improving active learning ultimately addresses two of his primary research motivations: how to lower the computational demands of training an AI model and how to advance AI research more efficiently. “With regular, supervised AI training, the application is almost like someone watching TV with labels in the subtitles—they are consuming many thousands of labelled images passing by. While I am motivated to see advancements in autonomous driving and other applications, my research interest is more fundamental than that: why does AI need to see 10,000 images of cats and dogs before it is able to distinguish them?”

“In active learning, we train models over many iterations, with the model flagging objects it is unsure of, requesting feedback, and acquiring a bit more information that can be used for training in the process,” he said. “For this work, I know that I need to run thousands of these experiments, so I need HPC.”
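
As a rough sketch of the active learning idea, not the team’s implementation, the snippet below scores an unlabelled pool by predictive entropy and requests labels only for the examples the model is most unsure about. The toy “model” and all sizes are assumptions made for illustration.

```python
# Hypothetical active-learning query step: score unlabelled data by
# uncertainty and send only the most uncertain examples to an annotator.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3))          # stand-in for a trained classifier

def model_probs(x):
    logits = x @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

pool = rng.standard_normal((1000, 8))    # large pool of unlabelled data
probs = model_probs(pool)

# Predictive entropy as the uncertainty score: high where the model is unsure.
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

# Request labels ("feedback") only for the most uncertain examples,
# then retrain on the newly labelled data and repeat over many iterations.
query_idx = np.argsort(entropy)[-32:]
print("examples to send to the annotator:", query_idx)
```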

Using JUWELS, the team was able to use active learning to improve an application’s performance during semantic segmentation—the process by which an AI application labels every pixel of a training image according to the object it thinks that pixel belongs to. The team has a preprint under review and presented its findings at the 18th International Joint Conference on Computer Vision, Imaging, and Computer Graphics Theory and Applications in 2023.
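
In the same spirit as the figure above, a minimal sketch of uncertainty-based region selection for semantic segmentation might look like the following. Per-pixel entropy plays the role of the red/green uncertainty map; the image shape, patch size, and selection rule are illustrative assumptions, not the published method.

```python
# Hypothetical region selection for segmentation: compute a per-pixel
# uncertainty map, then pick the most uncertain patches for labelling.
import numpy as np

rng = np.random.default_rng(1)
H, W, C = 64, 64, 5                       # image size and number of classes

logits = rng.standard_normal((H, W, C))   # stand-in for a segmentation net's output
e = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = e / e.sum(axis=-1, keepdims=True)

# Per-pixel predictive entropy: the uncertainty heat map ("red" = high).
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)

# Average entropy over 8x8 patches and pick the most uncertain regions.
patches = entropy.reshape(H // 8, 8, W // 8, 8).mean(axis=(1, 3))
to_label = np.argsort(patches.ravel())[-4:]   # four most uncertain regions
print("patch indices selected for labelling:", to_label)
```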

For Rottmann and his team, ushering in the AI revolution in a safe, transparent, and inclusive manner requires a groundswell of academics working on complex, foundational challenges that come with adopting AI into these important—and occasionally dangerous—societal spaces. While many large, foundational AI models are owned by private companies, Rottmann believes that an equitable AI future will be balanced out by open-source initiatives that ensure these technologies are available for everyone.

Rottmann and his team are now focusing more on the key component for successful AI programming: data. “I am moving more toward studying data-centric AI,” Rottmann said. “Selecting data that matters for a given application, improving data quality by using AI to detect label errors during training, and finding new sources of data as AI trainers face more challenges in sourcing new, high-quality data. I also want to work more with these foundational models and explore new applications for my methodology outside the realm of autonomous driving tasks.”
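
One common way to use AI to detect label errors, sketched below purely for illustration and not necessarily Rottmann’s method, is to flag samples where a trained model confidently disagrees with the provided label. The model outputs and threshold here are hypothetical.

```python
# Generic label-error detection sketch: suspect a label when the model's
# confident prediction contradicts it. Illustrative values throughout.
import numpy as np

rng = np.random.default_rng(2)
n, C = 500, 4
probs = rng.dirichlet(np.ones(C) * 0.3, size=n)   # stand-in model outputs
labels = rng.integers(0, C, size=n)               # provided (possibly noisy) labels

pred = probs.argmax(axis=1)
confidence = probs.max(axis=1)

# Suspected label errors: confident predictions that contradict the label.
suspects = np.where((pred != labels) & (confidence > 0.9))[0]
print(f"{suspects.size} samples flagged for label review")
```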


Source: Eric Gedenk, Gauss Centre for Supercomputing
