Published 11/04/2010
The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, will provide expertise to a multi-year technology investment program to develop the next generation of extreme scale supercomputers.
The project is part of the Ubiquitous High Performance Computing (UHPC) program, run by the Defense Advanced Research Projects Agency (DARPA), part of the U.S. Department of Defense. Intel Corporation leads one of the winning teams in the program, and is working closely with SDSC researchers on applications.
The first two phases of the project extend into 2014, at which point a full system design and simulation are expected. Phases 3 and 4 of the project, which have not yet been awarded, are expected to result in a full prototype system sometime in 2018.
During the first phases of the award, SDSC's Performance Modeling and Characterization (PMaC) laboratory will assist the Intel-DARPA project by analyzing and mapping strategic applications to run efficiently on Intel hardware. Applications of interest include rapid processing of real-time sensor data, establishing complex connectivity relationships within graphs (think of determining "six degrees of Kevin Bacon" relationships on Facebook), and complex strategy planning.
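To give a concrete sense of the graph-connectivity problems mentioned above, the following sketch (illustrative only, not code from the project; the names and toy data are hypothetical) uses a breadth-first search to compute the "degrees of separation" between two people in a small friendship graph.

```python
from collections import deque

def degrees_of_separation(graph, start, target):
    """Breadth-first search over an adjacency-list graph; returns the
    minimum number of hops between start and target, or None if they
    are not connected."""
    if start == target:
        return 0
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        for friend in graph.get(person, ()):
            if friend == target:
                return dist + 1
            if friend not in visited:
                visited.add(friend)
                queue.append((friend, dist + 1))
    return None  # no connection found

# Hypothetical toy data: who knows whom.
friends = {
    "Alice": ["Bob"],
    "Bob": ["Alice", "Carol"],
    "Carol": ["Bob", "Kevin Bacon"],
    "Kevin Bacon": ["Carol"],
}
print(degrees_of_separation(friends, "Alice", "Kevin Bacon"))  # prints 3
```

At extreme scale the same question is asked of graphs with billions of vertices, where the cost of moving the graph data through the memory hierarchy, rather than the traversal itself, dominates performance.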
Energy consumption at extreme scales is one of the formidable challenges to be taken on by the Intel team. Today's top supercomputers operate at the petascale level, meaning they can perform one thousand trillion calculations per second. The next level is exascale, or computing speeds of one million trillion calculations per second, one thousand times faster than today's machines.
According to Intel, the project will focus on new circuit topologies, new chip and system architectures, and new programming techniques to reduce the amount of energy required per computation by two to three orders of magnitude. In other words, such extreme scale systems will have to consume roughly one-hundredth to one-thousandth as much energy per computation as today's most efficient computing systems.
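To put that reduction in perspective, here is a back-of-the-envelope sketch of the power an exascale machine would draw at different energy-per-operation levels. The one-nanojoule-per-operation starting point is an assumed, illustrative value, not a figure reported by Intel or DARPA; only the exascale rate and the 100x-1,000x reduction come from the text above.

```python
# Back-of-the-envelope power estimate for an exascale machine.
# ASSUMPTION: energy per operation today is taken as ~1 nanojoule purely
# for illustration; real values vary widely across systems.
OPS_PER_SECOND = 1e18          # exascale: one million trillion calculations/s
ENERGY_PER_OP_TODAY = 1e-9     # joules per operation (assumed, illustrative)

for reduction in (1, 100, 1000):
    energy_per_op = ENERGY_PER_OP_TODAY / reduction
    power_watts = OPS_PER_SECOND * energy_per_op   # watts = joules per second
    print(f"{reduction:>5}x less energy/op -> {power_watts / 1e6:,.0f} MW")
# With no reduction the machine draws on the order of 1,000 MW;
# a 100x to 1,000x reduction brings that down to roughly 10 MW to 1 MW.
```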
"We are working to build an integrated hardware/software stack that can manage data movement with extreme efficiency," said Allan Snavely, associate director of SDSC and head of the supercomputer center's PMaC lab. "The Intel team includes leading experts in low-power device design, optimizing compilers, expressive program languages, and high-performance applications, which is PMaC's special expertise."
According to Snavely, all these areas must work in a coordinated fashion to ensure that not even one bit of information is moved further up or down the memory hierarchy than necessary.
"Today's crude and simplistic memory cache and prefetch policies won't work at the exascale level because of the tremendous energy costs associated with that motion," he said. "Today it takes a nano joule (a billionth of a joule, a joule being the amount of energy needed to produce one watt of power for one second) to move a byte even a short distance. Multiply that byte into an exabyte (one quintillion bytes) and one would need a nuclear plant's worth of instantaneous power to move it based on today's technology."
Intel's other partners for the project include top computer science and engineering faculty at the University of Delaware and the University of Illinois at Urbana-Champaign, as well as top industrial researchers at Reservoir Labs and ET International.
DARPA's UHPC program directly addresses major priorities expressed by President Obama's "Strategy for American Innovation", according to a DARPA release issued earlier this month. These priorities include exascale supercomputing as a 21st century "Grand Challenge", energy-efficient computing, and worker productivity. The resulting UHPC capabilities will provide at least 50 times greater energy, computing, and productivity efficiency, which will slash the time needed to design and develop complex computing applications.
About SDSC
As an organized research unit of UC San Diego, SDSC is a national leader in creating and providing cyberinfrastructure for data-intensive research. Cyberinfrastructure refers to an accessible and integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC plans to build the high-performance computing community's first flash memory-based supercomputer system, named Gordon, which is set to enter operation in 2011. SDSC is a founding member of TeraGrid, the nation's largest open-access scientific discovery infrastructure. The mission of SDSC's PMaC lab is to bring scientific rigor to the prediction and understanding of factors affecting the performance of current and projected high-performance computing platforms.
Media Contacts:
Jan Zverina, SDSC Communications
858 534-5111 or jzverina@sdsc.edu
Warren R. Froelich, SDSC Communications
858 822-3622 or froelich@sdsc.edu
San Diego Supercomputer Center (SDSC):
http://www.sdsc.edu/
SDSC's PMaC Lab:
http://www.sdsc.edu/pmac/index.html
UC San Diego:
http://www.ucsd.edu/
DARPA:
http://www.darpa.mil/