Years of Supercomputer Research Probed the Mysteries of the Muon Particle

By Oliver Peckham

May 8, 2020

Muons (elementary subatomic particles, similar to electrons but far heavier) are at the center of a mystery that challenges our very understanding of the universe. Two decades ago, muon measurements taken at Brookhaven National Laboratory produced a troubling disagreement between the Standard Model (the commonly accepted physical foundations of the universe) and real-world measurements. In the intervening years, researchers have failed to conclusively resolve the discrepancy. Now, a collaborative research team has returned to the muon problem, leveraging Argonne National Laboratory’s Mira supercomputer in an attempt to pin down the elusive physics behind the muon’s behavior.

The culprit is the muon’s “magnetic moment” – how it behaves when it interacts with a magnetic field. The Standard Model predicts the muon’s magnetic moment with extraordinary precision, but the value measured in the Brookhaven experiment came out slightly larger than that prediction – a small gap, yet one too persistent to ignore. “If you account for uncertainties in both the calculations and the measurements, we can’t tell if this is a real discrepancy or just a statistical fluctuation,” said Thomas Blum, a physicist at the University of Connecticut and co-author of the paper, in an interview with Brookhaven’s Christina Nunez. “So both experimentalists and theorists are trying to improve the sharpness of their results.”
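
For concreteness, the quantity at stake is the muon’s anomalous magnetic moment, conventionally written a_μ. Below is a minimal worked statement of the definition along with approximate pre-2020 numbers; the figures reflect the published Brookhaven measurement and the Standard Model consensus as best recalled here, and should be treated as illustrative rather than authoritative.

```latex
% The anomalous magnetic moment measures how far the muon's gyromagnetic
% ratio g deviates from the Dirac-equation value of exactly 2:
a_\mu \equiv \frac{g_\mu - 2}{2}

% Approximate values, in units of 10^{-11} (illustrative):
%   Brookhaven E821 measurement:  a_mu(exp) ~ 116 592 089(63)
%   Standard Model prediction:    a_mu(SM)  ~ 116 591 810(43)
% The gap is minute in absolute terms, yet several standard deviations wide:
\Delta a_\mu = a_\mu^{\mathrm{exp}} - a_\mu^{\mathrm{SM}}
             \approx 279(76) \times 10^{-11} \quad (\approx 3.7\sigma)
```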

The researchers focused on the “hadronic contributions” to the muon’s magnetic moment – effects of short-lived virtual particles that interact through the strong force (as distinct from the weak, electromagnetic, and gravitational forces) – which produce the largest uncertainties in the theoretical prediction. The team tackled these uncertainties by simulating the strong force from first principles using quantum chromodynamics (QCD).

“To do the calculation, we simulate the quantum field in a small cubic box that contains the light-by-light scattering process we are interested in,” said Luchang Jin, also a co-author and University of Connecticut physicist. “We can easily end up with millions of points in time and space in the simulation.”
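
Jin’s “millions of points” follow directly from chopping the box into a four-dimensional grid, as lattice QCD does. Below is a minimal sketch of the arithmetic; the lattice extents are hypothetical and chosen only to show the scaling, not taken from the team’s production runs.

```python
# Why a modest-looking box explodes into millions of points: lattice QCD
# replaces continuous spacetime with a 4D grid of sites. The extents below
# are hypothetical, chosen only to illustrate the scaling.

L = 48   # lattice sites along each spatial direction (x, y, z)
T = 96   # lattice sites along the time direction

sites = L**3 * T
print(f"{sites:,} spacetime points")  # -> 10,616,832 points

# Each site also carries quark and gluon degrees of freedom (color, spin,
# Lorentz indices), so the number of variables is larger still.
```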

To process these millions of points, the researchers turned to Mira, Argonne’s IBM Blue Gene/Q supercomputer built on 16-core, 1.6 GHz Power BQC processors. Mira, which was rated at 8.6 Linpack petaflops, was decommissioned at the end of 2019. “Mira was ideally suited for this work,” said James Osborn, a computational scientist with Argonne’s Computational Science division. “With nearly 50,000 nodes connected by a very fast network, our massively parallel system enabled the team to run large simulations very efficiently.”
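
Osborn’s point about massive parallelism is easiest to see in how lattice codes are typically distributed: each node owns one sub-box of the global grid and exchanges only the faces of that sub-box with its neighbors. Here is a rough sketch under assumed numbers – a hypothetical global lattice and a node grid sized to Mira’s 49,152 nodes – not the layout of the actual production runs.

```python
# Domain-decomposition sketch: divide a global 4D lattice evenly across a
# grid of compute nodes. All numbers are illustrative assumptions.

global_lattice = (64, 64, 64, 192)   # hypothetical global extents (x, y, z, t)
node_grid      = (16, 16, 16, 12)    # 16*16*16*12 = 49,152 nodes (Mira's count)

# Each dimension must divide evenly so every node gets an identical sub-box.
assert all(g % n == 0 for g, n in zip(global_lattice, node_grid))

local_box = tuple(g // n for g, n in zip(global_lattice, node_grid))
sites_per_node = 1
for extent in local_box:
    sites_per_node *= extent

print("local sub-box per node:", local_box)  # (4, 4, 4, 16)
print("sites per node:", sites_per_node)     # 1024
```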

This work went on for four years, after which, at long last, the team produced the first-ever result for the notoriously difficult hadronic light-by-light scattering contribution. Alas, it was not the result they were hoping for.

“For a long time, many people thought this contribution, because it was so challenging, would explain the discrepancy,” Blum said. “But we found previous estimates were not far off, and that the real value cannot explain the discrepancy. As far as we know, the discrepancy still stands. We are waiting to see whether the results together point to new physics, or whether the current Standard Model is still the best theory we have to explain nature.”

Of course, the researchers working on the muon problem know that they’re in it for the long haul. Already, work is underway at Fermi National Accelerator Laboratory to reduce experimental uncertainty by a factor of four.

“Physicists have been trying to understand the anomalous magnetic moment of the muon by comparing precise theoretical calculations and accurate experiments since the 1940s,” said Taku Izubuchi, a physicist at Brookhaven who co-authored the paper. “This sequence of work has led to many discoveries in particle physics and continues to expand the limits of our knowledge and capabilities in both theory and experiment.”

Header image: An illustration of the hadronic light-by-light scattering process with Mira in the background. Image courtesy of Luchang Jin.
