Artificial Intelligence Learns to Judge Mass of Galaxy Clusters

November 2, 2022

For cosmologists trying to study the formation of the Universe, knowing the mass of everything is critical. But the need to estimate the mass of dark matter, which can’t be observed directly, limits their accuracy. A team of scientists led by researchers at Carnegie Mellon University (CMU) has trained artificial intelligence (AI) on data from simulated clusters of galaxies, in which the composition of all the components is known. The AI went on to predict a mass for the real-world Coma Cluster of galaxies that agrees with estimates from earlier, more labor-intensive attempts. The result offers the possibility of faster, more accurate assessment of the masses of galaxy clusters.

The Coma Cluster contains more than 1,000 galaxies. Scientists have long been frustrated by large uncertainties in its mass. Credit: PSC.

Today, cosmologists are wrestling with how galaxy clusters form and persist. The hundreds to thousands of galaxies these vast structures contain appear to be moving too fast for their collective gravity to keep them together. Even when scientists take into account mysterious dark matter — which is impossible to detect directly despite making up 85 percent of the matter in the Universe — the uncertainties in cluster mass estimates remain larger than scientists are comfortable with.

Because the galaxies in a cluster revolve around its center of mass, scientists can tell how much mass the cluster contains from how fast those galaxies are moving. Galaxies moving away from us are slightly redshifted: much like the lower tone of a train pulling away, their light is shifted a bit toward the red. Light from galaxies moving toward us is, in the same way, shifted a bit toward the blue. Measuring the difference between the two shows how fast the galaxies are wheeling around, and higher speeds mean there has to be more mass holding the cluster together. But the need to estimate the contributions of the (invisible) dark matter, hot ionized gases, and visible galaxies introduces large uncertainties. Scientists also haven’t yet worked out the three-dimensional structures of the clusters, which further limits their confidence that they understand what’s going on.
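To make the idea concrete, here is a minimal sketch, in Python, of how a “faster galaxies mean more mass” estimate can work. The redshifts, the assumed radius, and the simple virial-style formula M ≈ σ²R/G are illustrative placeholders, not the team’s actual measurements or method:

```python
import numpy as np

C_KM_S = 299_792.458   # speed of light, km/s
G = 4.301e-9           # gravitational constant, Mpc * (km/s)^2 per solar mass

# Hypothetical redshifts for a handful of member galaxies, plus the cluster's mean redshift.
galaxy_redshifts = np.array([0.0235, 0.0228, 0.0241, 0.0230, 0.0244, 0.0226])
cluster_redshift = 0.0231

# Line-of-sight velocity of each galaxy relative to the cluster center (low-redshift approximation).
v_los = C_KM_S * (galaxy_redshifts - cluster_redshift) / (1.0 + cluster_redshift)

# Velocity dispersion: how fast the galaxies are wheeling around, on average.
sigma_v = np.std(v_los, ddof=1)

# Crude dynamical mass, M ~ sigma^2 * R / G, for an assumed characteristic radius.
radius_mpc = 1.5       # assumed radius in megaparsecs (illustrative)
mass_estimate = sigma_v**2 * radius_mpc / G

print(f"velocity dispersion ~ {sigma_v:.0f} km/s")
print(f"dynamical mass ~ {mass_estimate:.2e} solar masses")
```

In practice, the hard part is everything this sketch leaves out: accounting for the dark matter, the hot gas, and the projection effects described above.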

Matthew Ho, a graduate student working in Hy Trac’s group at the McWilliams Center for Cosmology at CMU, wanted to know whether AI could determine the mass of the Coma Cluster, a huge array of galaxies about 321 million light-years from Earth. An AI approach, he reasoned, would allow the mass of galaxy clusters to be estimated much more quickly than the painstaking surveys of the past. Just as importantly, it offered a way around the uncertainties — as well as, potentially, the biases that humans inevitably introduce with their initial assumptions.

“Galaxy clusters are exactly what they sound like … groups of hundreds to thousands of galaxies that all seem to be in an equilibrium orbit around each other,” Ho explained. “But realistically, the amount of matter in the individual galaxies isn’t enough to … keep them all in orbit … Understanding their distribution in space and time is very important for us to constrain models of cosmology.”

To tackle the Coma Cluster problem, Ho would use a powerful AI tool called deep learning. This type of AI works by first feeding the computer data in which the right answer has been labeled by humans. Because the computer is so much faster than humans, it can learn by trial and error how to connect the data with the correct answers. Initially, it creates a series of interconnected “layers” that represent different aspects of the data. It then adjusts the connections between those layers until its answers match the human-supplied labels. Once it does that, scientists test the AI against data it has never seen: examples whose answers are withheld from the model but known to the researchers. Once it gives correct answers in this testing phase, it’s ready to work on data for which humans don’t already have the answers.
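As a rough illustration of that train-then-test workflow, the sketch below uses PyTorch with made-up cluster summaries and labels. The team’s real model and inputs were far more sophisticated, so every number and layer choice here is a placeholder:

```python
import torch
from torch import nn

# Hypothetical training set: each row crudely summarizes one simulated cluster
# (velocity dispersion in km/s, number of member galaxies); the label is the
# cluster's known log10 mass. Real inputs would be far richer than this.
features = torch.tensor([[850.0, 420.0], [620.0, 180.0], [1020.0, 760.0], [540.0, 120.0]])
log_masses = torch.tensor([[14.8], [14.3], [15.1], [14.1]])

mean, std = features.mean(dim=0), features.std(dim=0)
inputs = (features - mean) / std                  # normalize so training behaves well

# A small stack of interconnected "layers," as described above.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Training: repeatedly adjust the connections until predictions match the labels.
for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), log_masses)
    loss.backward()
    optimizer.step()

# Testing: predict for a cluster the model was not trained on. The researchers
# still know its true mass, so they can check whether the answer is right.
held_out = (torch.tensor([[900.0, 500.0]]) - mean) / std
print(model(held_out).item())
```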

Constructing an accurate training data set, then, is key to getting good results. This is particularly the case when we know that the real data have issues, such as the ones that limit the cluster mass measurements. So Ho processed earlier simulations of galaxy clusters on the National Science Foundation-funded Bridges-2 supercomputer, as well as on Vera, to build his training data. By using artificial galaxy clusters whose composition was completely known, he could be sure that the computer was working with accurate data.
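A toy version of that idea, building clusters whose mass is known by construction so that every training example comes pre-labeled, might look like the following. The scaling relation and catalog sizes are invented for illustration and bear no relation to the simulations actually processed on Bridges-2 and Vera:

```python
import numpy as np

rng = np.random.default_rng(0)

def mock_cluster(log_mass):
    """Build one artificial cluster whose true mass is known by construction."""
    # Toy scaling: more massive clusters get larger velocity dispersions.
    sigma = 1000.0 * 10 ** (0.33 * (log_mass - 15.0))   # km/s, illustrative only
    n_galaxies = rng.integers(50, 500)                   # random number of member galaxies
    velocities = rng.normal(0.0, sigma, n_galaxies)      # line-of-sight velocities
    return {"velocities": velocities, "label_log_mass": log_mass}

# Label every sample with the mass it was built from -- no uncertain measurement needed.
catalog = [mock_cluster(m) for m in rng.uniform(13.5, 15.5, 1000)]

# Hold some clusters back so the trained model can later be tested on unseen examples.
train, test = catalog[:800], catalog[800:]
print(len(train), "training clusters,", len(test), "test clusters")
```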

Creating accurate artificial galaxy clusters, though, was a tall order, given how many “particles” the simulation had to include. In all, the simulation would begin with hundreds of gigabytes of data, enough to fill dozens if not hundreds of laptops. Then it would have to carry out computations on that data, further ballooning the volume of information being juggled.

Bridges-2’s Big Data capabilities made it an ideal fit for the problem. With large-memory nodes offering 512 GB and 4,000 GB of RAM, it could hold all of the data in a single node, greatly speeding the largest simulation-processing tasks by cutting out the time needed for communication between nodes. Along with processing on Vera, this allowed Ho to create a clean training data set that his AI program, also running on Vera, used to learn how to judge galaxy cluster mass. In previous work the team had also used Bridges-2’s advanced GPU nodes, which are well suited to the many parallel computations that AI requires.

When let loose on real-world data from the Coma Cluster, the AI produced results that agreed with previous, human-guided estimates of the galaxy cluster’s mass. Because the computer had started with none of the assumptions the humans had, the agreement lent credence to those earlier attempts to remove observational biases. It also gave Ho confidence that the computer was giving a correct answer, not just one that agreed with the earlier studies. More importantly, it suggests that the AI is capable of producing similarly reliable results when given data for other real galaxy clusters. The scientists published their results in the journal Nature Astronomy in June 2022.


Source: Ken Chiacchia, PSC
