
SDSC Helps UCSD Astrophysicists Apply New Technique to Modeling Turbulence

Published 02/13/2006

Using the Enzo cosmology code, the researchers showed for the first time that adaptive mesh refinement can be an efficient tool to model supersonic turbulence, which occurs in such important phenomena as star formation. This volume rendering of the gas density in a slice through the computational domain shows the V- and U-shaped shocklets, or "Mach cones," characteristic of the supersonic turbulence found in interstellar gas. Image: A. Kritsuk et al., UCSD.


by Paul Tooby, SDSC Senior Science Writer

Just as an athlete in one sport can benefit from cross-training in another, scientists in one field can sometimes apply a method developed in their own area to yield unexpected advances in a different discipline. This cross-fertilization is just what astrophysicists Alexei Kritsuk, Michael Norman, and Paolo Padoan of the Center for Astrophysics and Space Sciences (CASS) at UC San Diego have done. Norman and Padoan are also in the UCSD Department of Physics.

In the February 10 online issue of the Astrophysical Journal Letters, they describe the first application of the adaptive mesh refinement technique to modeling the important physical problem of turbulence. "We were able to use the Enzo code, developed for cosmological simulations of the early Universe, in an entirely new regime -- to model supersonic turbulence, the sort that prevails in molecular clouds throughout our own Milky Way galaxy and in many other situations," said Norman. The research relied on large-scale simulations performed on the DataStar supercomputer at the San Diego Supercomputer Center (SDSC) at UCSD.

A major challenge in modeling turbulent flows is that important interactions take place across the full range of scales, from the smallest vortices up to the size of the entire flow, and the simulation must capture all of them. Computational models work by dividing the simulated region into cells or boxes. The smaller the boxes, the higher the resolution and the more detail the model captures, and thus the greater its scientific realism. At the same time, increasing the number of boxes also increases the computational demands of the model. Even with today's largest supercomputers, scientists are limited in how many boxes they can use, and thus in how realistically they can model complex, real-world turbulence, especially on the scale of star formation.
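To make that trade-off concrete, the short Python sketch below (illustrative only, and not taken from the researchers' code) tallies the cell count and a rough memory footprint for a uniform three-dimensional grid at several resolutions; each doubling of the resolution multiplies the number of cells, and the cost, by a factor of eight.

    # Illustrative sketch: cost of a uniform 3-D grid as resolution grows.
    # Assumes roughly 5 double-precision fields per cell (density, energy,
    # three velocity components) -- a simplification, not Enzo's actual layout.
    for cells_per_side in (256, 512, 1024, 2048):
        total_cells = cells_per_side ** 3
        memory_gb = total_cells * 5 * 8 / 1e9  # 8 bytes per double-precision value
        print(f"{cells_per_side}^3 grid: {total_cells:.2e} cells, ~{memory_gb:,.0f} GB")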

Now, in a significant step forward, Kritsuk, Norman, and Padoan have found a way to model the active, high-density regions of the turbulence at high resolution, while conserving computer resources by modeling the less-active regions at lower resolution. It's like using the zoom on a camera to focus in closely on the regions that are more active, while zooming out to a broader view of the regions where the turbulence is less intense.

"The innovation here is that we've done what many turbulence experts thought was nonsense," said Norman. "The conventional thinking is that since turbulence fills the entire volume, therefore it would make no sense to use a spatially adaptive method to model it, because it would simply refine the grid to zoom in to the highest resolution everywhere."

What the UCSD researchers realized is that the important structures in turbulence are not spread uniformly throughout the volume, but are spatially localized at any given instant. An instant later they have moved somewhere else, and the scientists showed that the adaptive mesh refinement method used in Enzo is capable of tracking these flow features as they move.

The payoff is that the Enzo code can capture the essential science by zooming in to the highest resolution only where it is needed, on the key flow features, while saving valuable computational resources by modeling less-active regions at coarser resolution.
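The toy Python sketch below illustrates the idea in miniature (it is not Enzo's actual refinement machinery): cells of a mock density field that exceed a hypothetical threshold are flagged for coverage by finer subgrids, and because the dense structures in supersonic turbulence are spatially localized, only a small fraction of the volume ends up flagged.

    import numpy as np

    # Toy illustration of density-based refinement flagging -- an assumed
    # criterion for illustration, not the one used in the paper.
    rng = np.random.default_rng(seed=0)
    density = rng.lognormal(mean=0.0, sigma=2.0, size=(64, 64, 64))  # mock density field

    refine_threshold = 4.0 * density.mean()  # hypothetical threshold
    flagged = density > refine_threshold     # cells that would get finer subgrids

    print(f"Cells flagged for refinement: {flagged.mean():.1%} of the volume")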

The large-scale computations for the simulations were done on SDSC's 15.6-teraflop DataStar supercomputer. In running the Enzo code, the researchers used 64 eight-way nodes of DataStar, a total of 512 processors, and ran the simulations for about 100,000 processor-hours. The calculations are also data-intensive, producing about four terabytes of data, nearly half the size of the printed collection of the Library of Congress. "SDSC data-oriented resources were extremely useful for this research," said Kritsuk.
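For a sense of scale, a quick back-of-the-envelope calculation (assuming the roughly 100,000 processor-hours were spread evenly across all 512 processors, which the article does not state) translates those figures into wall-clock time:

    # Back-of-the-envelope check of the run figures quoted above.
    processors = 64 * 8                     # 64 eight-way DataStar nodes
    processor_hours = 100_000               # approximate total quoted
    wall_clock_hours = processor_hours / processors
    print(f"{processors} processors for ~{wall_clock_hours:.0f} wall-clock hours "
          f"(about {wall_clock_hours / 24:.1f} days)")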

The Enzo code, developed by Norman and others including Robert Harkness of SDSC, is an advanced, 3-D time-dependent code for computational astrophysics, capable of simulating the evolution of the Universe from the beginning using first principles. These simulations help scientists test theories against observations and provide new insights into cosmology, galaxy formation, star formation, and high-energy astrophysics.

In their simulations, the researchers found that the properties of the supersonic turbulence in the adaptive mesh refinement runs agreed surprisingly well with those from simulations performed on uniform grids, even though only a fraction of the volume was covered by the higher-resolution subgrids. The scientists also found that nested supersonic flow structures, the "Mach cones" and U-shaped shockwaves or shocklets, dominate the dynamics of the flows, and they explored the fractal, or self-similar, characteristics of these structures, their signature in the statistical properties of the turbulence, and their role in the overall flow dynamics.

Reference
A. Kritsuk, M. Norman, and P. Padoan, Astrophysical Journal Letters, Vol. 638, No. 1, pp. L25-L28, 2006 February 10.