FUNDAMENTAL PHYSICS

Putting Quantum Electrodynamics to the Test

FEATURED
Toichiro Kinoshita, Cornell University

SAC TEAM
Bob Sinkovits
Bob Leary
SDSC

Quantum electrodynamics (QED) is a theory of physics formulated more than 50 years ago that describes the fundamental building blocks of matter starting from only a few observable quantities, such as particle mass and electron charge. All predictions of QED have been found to agree with experiments so far, but using NPACI's Blue Horizon in a Strategic Applications Collaboration (SAC), Toichiro Kinoshita, a physics researcher and professor emeritus at Cornell University, is putting the theory of QED to an even more stringent test.

"Unlike any previous theories, QED has no obvious limitation," Kinoshita said. "This is the first time that such a claim can be made. Since no theory is likely to be absolutely true, however, physicists have been trying to discover where the limitations of QED might be found."

In QED, a cloud of virtual particles surrounds the electron and affects its magnetic properties; the strength of the electron's magnetism is its magnetic moment. The shift in the magnetic moment caused by this cloud of virtual particles is precisely calculable and is the simplest quantity that can be computed in QED. (QED describes the electromagnetic force; the Standard Model incorporates the strong and weak nuclear forces as well.) Thus, the magnetic moment of a free electron provides the most precise test of QED within the Standard Model.

Early experimental work showed that the magnetic moment of the electron deviates slightly, by approximately 0.1%, from the value predicted by the Dirac equation. This deviation is known as the anomalous magnetic moment; QED predicts its size, and the agreement between the theoretical and experimental values establishes a strong bound on the validity of the theory. Kinoshita is now pushing the test of the anomalous magnetic moment to one part in a billion or better.

MIXED PRECISION COMPUTATIONS

The key computational aspect of the project is the evaluation of approximately 200 ten-dimensional Feynman integrals. They are evaluated numerically using Monte Carlo techniques--specifically, with the standard routine VEGAS. The integrand is evaluated at many random points over the domain, and these evaluations are combined in a weighted sum to give an estimate of both the integral and its statistical error. The integrand itself is a sum of many terms, which must be added carefully to control floating-point errors in the summation. Often this requires 128-bit precision arithmetic, which is approximately 30 times slower than the standard 64-bit precision used in most scientific computations.
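
In outline, the Monte Carlo step works as sketched below. The sketch is only an illustration: the production calculation uses the adaptive VEGAS routine rather than plain sampling, and the simple test integrand here is a stand-in for the far more complicated Feynman integrands.

    import math
    import random

    def mc_integrate(f, dim, n_samples, seed=12345):
        """Plain Monte Carlo estimate of the integral of f over the unit hypercube.

        Returns (estimate, statistical error). VEGAS refines this basic scheme with
        adaptive importance sampling, but the error estimate is obtained the same
        way: from the spread of the sampled function values.
        """
        rng = random.Random(seed)
        total = 0.0
        total_sq = 0.0
        for _ in range(n_samples):
            x = [rng.random() for _ in range(dim)]
            fx = f(x)
            total += fx
            total_sq += fx * fx
        mean = total / n_samples
        variance = max(total_sq / n_samples - mean * mean, 0.0)
        return mean, math.sqrt(variance / n_samples)   # standard error ~ sigma / sqrt(N)

    # Stand-in integrand: a smooth 10-dimensional function, not a Feynman integrand.
    value, error = mc_integrate(lambda x: math.exp(-sum(xi * xi for xi in x)),
                                dim=10, n_samples=100_000)
    print(f"integral ~ {value:.5f} +/- {error:.5f}")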

Bob Sinkovits, a physicist and the SAC program coordinator at SDSC, developed a new version of the integration routines that uses a mix of 64- and 128-bit precision. He devised a way to evaluate a condition number that governs whether 128-bit precision is necessary at any selected point in the domain, and found that 90% to 95% of the 128-bit floating-point operations could thus be avoided.
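The article does not spell out the form of Sinkovits's condition-number test, but the general strategy can be sketched as follows, assuming the test compares the sum of the terms' magnitudes with the magnitude of their sum (a standard measure of cancellation). Python's decimal module stands in here for true 128-bit hardware arithmetic.

    from decimal import Decimal, getcontext

    getcontext().prec = 34   # about 34 significant digits, comparable to 128-bit floats

    def sum_mixed_precision(terms, threshold=1e8):
        """Sum the terms of an integrand, using high precision only when needed.

        The ratio sum(|t|) / |sum(t)| measures how much cancellation occurs among
        the terms; if it is large, the fast 64-bit sum may have lost too many
        digits, so the sum is redone in (much slower) extended precision.
        """
        fast_sum = sum(terms)                       # ordinary 64-bit arithmetic
        abs_sum = sum(abs(t) for t in terms)
        condition = abs_sum / abs(fast_sum) if fast_sum != 0.0 else float("inf")
        if condition < threshold:
            return fast_sum                         # the 64-bit result is trustworthy
        # Heavy cancellation: recompute in extended precision.  (A production code
        # would evaluate the terms themselves in 128-bit arithmetic at this point.)
        return float(sum(Decimal(repr(t)) for t in terms))

    # Terms that nearly cancel: plain 64-bit summation returns 4.0, while the
    # mixed-precision routine recovers 3.14159.
    print(sum_mixed_precision([1e16, 3.14159, -1e16]))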

"Although this does not completely eliminate the use of 128-bit arithmetic, it has accelerated my program [which evaluates more than 100 huge integrals] significantly," said Kinoshita, who has used 400,000 CPU hours to date. The SAC team has observed speedups of factors of eight to 12 using mixed precision instead of uniform 128-bit evaluation, while still maintaining the required accuracy.


QUASI-MONTE CARLO

The team is also pursuing further performance gains using quasi-Monte Carlo techniques, according to Bob Leary, an applied mathematician and senior staff scientist at SDSC. These techniques use low-discrepancy sequences, rather than pseudo-random numbers, to sample the integrands. The sequences fill the domain much more uniformly and thus accelerate convergence. Leary has tested the idea on the most difficult of the integrals and obtained results of equivalent accuracy with fewer sample points.
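The particular low-discrepancy sequence the team used is not named here; the sketch below uses a Halton sequence, one common choice (Sobol or scrambled sequences are alternatives), to show how quasi-Monte Carlo sampling differs from pseudo-random sampling.

    import math

    def halton(index, base):
        """index-th element (index >= 1) of the van der Corput sequence in the given base."""
        result, f = 0.0, 1.0
        while index > 0:
            f /= base
            result += f * (index % base)
            index //= base
        return result

    # One prime base per dimension gives a 10-dimensional Halton sequence.
    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

    def halton_point(index, primes=PRIMES):
        """A low-discrepancy point in the unit hypercube."""
        return [halton(index, p) for p in primes]

    def qmc_integrate(f, n_samples):
        """Quasi-Monte Carlo estimate: sample the integrand at Halton points instead
        of pseudo-random points; the more uniform coverage typically reaches a given
        accuracy with fewer samples."""
        return sum(f(halton_point(i)) for i in range(1, n_samples + 1)) / n_samples

    print(qmc_integrate(lambda x: math.exp(-sum(xi * xi for xi in x)), 100_000))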

However, there are still some difficulties related to efficient parallel generation of the low-discrepancy sequences, whereas parallel generation of pseudo-random sequences is well developed. "The ultimate goal is to develop faster general solutions that can be used on modern parallel architectures for researchers who rely on Monte Carlo techniques," Leary said.
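One simple way to distribute such a sequence across processors (not necessarily the approach the team is pursuing) exploits the fact that the i-th point depends only on its index i: each processor can generate a contiguous block of indices independently, and the partial sums are combined at the end.

    def worker_indices(rank, n_workers, n_total):
        """Contiguous block of sequence indices (1-based) handled by one worker.

        Because the i-th point of a Halton or similar sequence depends only on the
        index i, each processor can generate its block with no communication, and
        the blocks together reproduce exactly the serial sequence.
        """
        per_worker = n_total // n_workers
        start = rank * per_worker + 1
        stop = n_total + 1 if rank == n_workers - 1 else start + per_worker
        return range(start, stop)

    # Example: 10 sample points split over 4 workers.
    for rank in range(4):
        print(rank, list(worker_indices(rank, 4, 10)))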

"The accuracy of my calculation has been improving in step with the improved performance of computers," Kinoshita said. "Only now has the speed of computation reached the point where my problem can be handled with reasonable precision. My current goal is to improve the value to the point where the residual uncertainty is smaller than that of the forthcoming experimental value and also smaller than the expected value of the next measurement." --EN *
