Quantum Chromodynamics with MILC
PARTICIPANTS
Claude Bernard, Washington University
Tom DeGrand, University of Colorado
Carleton DeTar and Craig McNeile, University of Utah
Steven Gottlieb, Indiana University
Urs M. Heller, Florida State University
James Hetrick, University of the Pacific
Kari Rummukainen, Nordic Institute for Theoretical Physics
Robert L. Sugar, UC Santa Barbara
Doug Toussaint and Kostas Orginos, University of Arizona
SAC TEAM
Sharon Brunett and Mahesh Rajan, Caltech
Dominic Holland and Dmitry Pekurovsky, SDSC
In the standard model of high-energy physics, the strong forces binding the atomic nucleus are described by the theory of quantum chromodynamics (QCD). "These forces are so strong that the fundamental entities in the theory, quarks and gluons, can't be directly observed in the laboratory," said UC Santa Barbara physics professor Bob Sugar. "We can only observe them bound together as protons and neutrons, the building blocks of the atomic nucleus, and as a host of short-lived particles produced in high-energy accelerator collisions." Very large-scale supercomputer calculations are the key to testing the standard model against the results of particle physics experiments. Since it uses more than a million CPU-hours of computer time a year, the MILC QCD code package is an ongoing target for optimization on several computer systems in an NPACI Strategic Applications Collaboration.
Sugar is a member of the MILC Collaboration, a Department of Energy Grand Challenge Application nuclear physics research group that has members at nine institutions in the United States and Europe. "Our work continues a broad research program in lattice QCD that has been going on for more than seven years," he said. "One of the major objectives of QCD is to calculate the masses and other basic properties of strongly interacting particles, which would be a major test of the theory. Another is to understand the effects of strong interactions on weak interaction processes; such calculations have direct relevance to high-energy accelerator experiments now in progress and will provide important tests of the standard model at some of its most vulnerable points. A third objective is to understand the properties of strongly interacting matter at high temperatures, which is important for interpreting experiments being carried out at the recently opened Relativistic Heavy Ion Collider."
Although QCD is widely accepted as the theory of strong interactions, it is notorious among physicists for the difficulty of extracting predictions from it, since it cannot be solved by analytic mathematical methods. "One way to study QCD is with computer simulations," said Doug Toussaint of the Physics Department at the University of Arizona. "We can make accurate predictions in QCD using calculations based on lattice gauge theory, in which time and the three dimensions of space are represented by a four-dimensional lattice. The more points in the lattice, the greater the accuracy. We need to evaluate a multidimensional integral with a large number of variables over this lattice." For a 4-D lattice 32 points on a side, for example, the integral would have approximately 34 million variables.

The only practical method for evaluating such large integrals is Monte Carlo importance sampling. Evaluating the integral over the lattice involves repeated inversions of matrices with millions of rows and columns; MILC uses the conjugate gradient technique for these inversions. "The problem is a natural one for parallel machines, since similar computations must be done at each site of the lattice, and the lattice sites are simply divided among the processors," Sugar said.

MILC stands for MIMD Lattice Computation; MIMD indicates that the lattice simulation code runs on computers that execute several streams of instructions in parallel on different pieces of data. MILC is written in 90,000 lines of C and assembly code and uses the Message Passing Interface (MPI) library for parallelization. The code package requires gigabytes of main memory and tens of gigabytes of mass storage to execute, and it currently consumes slightly more than a million CPU-hours per year, a quantity expected to grow as more resources become available.
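As a rough, self-contained illustration of the conjugate gradient technique mentioned above (not MILC's actual code, whose matrix is the sparse lattice Dirac operator distributed across processors), the C sketch below solves a tiny dense symmetric positive-definite system. The size N, the test matrix, and the function names are placeholders invented for the example.

    /* Minimal conjugate-gradient sketch in C: an illustration of the
     * technique, not MILC source.  A tiny dense symmetric positive-definite
     * system stands in for the lattice Dirac operator so the example
     * runs as-is. */
    #include <stdio.h>
    #include <math.h>

    #define N 4                       /* illustration only */

    static void matvec(double A[N][N], const double x[N], double y[N]) {
        for (int i = 0; i < N; i++) {
            y[i] = 0.0;
            for (int j = 0; j < N; j++)
                y[i] += A[i][j] * x[j];
        }
    }

    static double dot(const double a[N], const double b[N]) {
        double s = 0.0;               /* in a parallel code this would be a global sum */
        for (int i = 0; i < N; i++)
            s += a[i] * b[i];
        return s;
    }

    int main(void) {
        double A[N][N] = {{4,1,0,0}, {1,3,1,0}, {0,1,3,1}, {0,0,1,2}};
        double b[N] = {1, 2, 3, 4};
        double x[N] = {0}, r[N], p[N], Ap[N];

        /* r = b - A x,  p = r */
        matvec(A, x, Ap);
        for (int i = 0; i < N; i++) { r[i] = b[i] - Ap[i]; p[i] = r[i]; }
        double rr = dot(r, r);

        for (int iter = 0; iter < 100 && sqrt(rr) > 1e-12; iter++) {
            matvec(A, p, Ap);
            double alpha = rr / dot(p, Ap);          /* step length          */
            for (int i = 0; i < N; i++) {
                x[i] += alpha * p[i];                /* update solution      */
                r[i] -= alpha * Ap[i];               /* update residual      */
            }
            double rr_new = dot(r, r);
            double beta = rr_new / rr;               /* new search direction */
            for (int i = 0; i < N; i++)
                p[i] = r[i] + beta * p[i];
            rr = rr_new;
        }

        for (int i = 0; i < N; i++)
            printf("x[%d] = %f\n", i, x[i]);
        return 0;
    }

In a distributed run of the kind described here, the same loop structure would carry over, but each dot product would become a global sum across processors and each matrix application would require exchanging boundary lattice sites with neighboring processes.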
"With computing demands like those, this code was an obvious candidate for tuning on the new Hewlett-Packard and IBM SP systems, at Caltech and SDSC, respectively," said Jay Boisseau, SDSC associate director for Scientific Computing. MILC's performance was also evaluated and improved on the Cray T3E and IBM SP machines. Efforts to speed up the code concentrated on single-processor optimization.
"The MILC team's production simulation runs, with lattice dimensions of 20x20x20x64, benefit from the HP X-Class system's good job turnaround," said Caltech's Sharon Brunett. "Early tests on the new HP V2500 show approximately 30% improvement over the older X-Class. The Caltech/HP team's analysis of this mature and well-optimized code indicates potential for future performance enhancement, possibly by tuning the communications and memory access patterns to take advantage of the distributed shared memory architecture of the V2500 and future HP SMP systems." "Prefetching and/or cache alignment were the main optimizations on the Cray T3E, SDSC's older IBM SP, and Blue Horizon," said SDSC's Dmitry Pekurovsky. "And one can gain an additional speedup by interchanging the looping order in matrix inversions. This allows reuse of data already in cache, instead of having to reload them each time through the loop. This modification, however, is not appropriate for some QCD problems. It requires increased memory usage and may not always be feasible on a system with relatively limited memory." According to Boisseau, the optimizations typically yielded improvements in execution speed of about 15 percent. "This was a very mature code, so this is a significant performance increase. When you apply that improvement to the million CPU-hours that MILC gets per year, 150,000 hours is a significant saving." --MG * |