
To view the full Online article for any item below, substitute the issue number given at the end of the item (for example, v3.22) for the "x" in www.npaci.edu/online/v3.x

SDSC Acquires 64-Processor Sun HPC 10000 Platform for NPACI Allocations and Strategic Collaborations

In a continuing collaboration with Sun Microsystems, SDSC recently installed a 64-processor Sun HPC 10000 Server (popularly known as the StarFire) running the Solaris Operating Environment. The Sun HPC 10000 will be used for both high-performance computing allocations for researchers across the country and strategic collaborations to simulate magnetic recording materials and the behavior of neurons.

The new HPC 10000 will be configured with 64 400-MHz processors, 64 gigabytes of memory, and 800 GB of disk storage in a Sun StorEdge A5200, for a peak performance exceeding 50 gigaflops. Fifty percent of the machine's time will be available to scientists through the NPACI allocations process.
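As a rough check of that figure, the peak rate follows from the processor count and clock speed if each processor can complete two floating-point operations per cycle--an assumption typical of processors of this generation, since the article gives only the 400-MHz clock and the 50-gigaflop total. A minimal sketch in C:

    #include <stdio.h>

    int main(void)
    {
        /* Assumption: two floating-point operations per cycle per
           processor; the article states only the 400-MHz clock rate
           and the 50-gigaflop total. */
        const int    processors      = 64;
        const double clock_mhz       = 400.0;
        const double flops_per_cycle = 2.0;

        double peak_gflops = processors * clock_mhz * flops_per_cycle / 1000.0;
        printf("Peak: %.1f gigaflops\n", peak_gflops);   /* prints 51.2 */
        return 0;
    }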

The HPC 10000 will also be used for large-scale simulations through NPACI's Strategic Application Collaboration program. One project will focus on the fundamental physics codes related to magnetic recording materials developed by Neal Bertram and the Center for Magnetic Recording Research at UC San Diego. A second project will use the HPC 10000 for the GENESIS neuron simulation code developed by James Bower at Caltech. (v3.22)


SDSC HPC Systems Group Installs 1-TB File System; Teraflops System Enters Friendly User Testing Phase

The HPC Systems group has completed the first installation of a user file system that is greater than one terabyte in capacity. The file system, attached to NPACI's interim IBM SP system with 28 two-processor nodes, is in testing and evaluation mode and is configured using IBM's General Parallel File System.

The evaluation file system is just more than 1.19 terabytes--1.19 times 2^40 bytes, or more than 1,200 gigabytes--made up of 18 RAID arrays using 18-gigabyte, 10,000-rpm drives. The full teraflops system will have a file system greater than 3 terabytes to support large computations.
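The conversion behind those numbers is straightforward; the short C sketch below, which assumes the article's gigabytes are binary gigabytes (2^30 bytes), reproduces the figure of just over 1,200 GB.

    #include <stdio.h>

    int main(void)
    {
        /* 1.19 terabytes, taking 1 TB = 2^40 bytes and 1 GB = 2^30
           bytes, as in the article. */
        const double tb = 1.19;
        double bytes = tb * 1099511627776.0;     /* 2^40 */
        double gb    = bytes / 1073741824.0;     /* 2^30 */
        printf("%.2f TB = %.0f bytes = %.1f GB\n", tb, bytes, gb);
        /* prints roughly 1218.6 GB -- "more than 1,200 gigabytes" */
        return 0;
    }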

As the countdown to installation of an IBM SP teraflops system at SDSC continues, a smaller system, provided by IBM, has entered the friendly user testing phase. Friendly use is an important pre-production stage during which selected users have early access to a new system. The Scientific Computing Group at SDSC works with these users, adjusting system configurations as problems arise while the users refine their codes for the new machine.

The full teraflops system is expected to arrive by the end of this year. The NPACI teraflops system will be the largest SP installation with IBM's next-generation hardware.

The NPACI teraflops configuration will include 1,152 Power3 processors in 144 eight-processor IBM SMP High Nodes as compute nodes and another 24 Power3 processors in 12 two-processor SMP High Nodes as service nodes. All nodes will have 4 GB of memory. The Power3 processors in the NPACI system will run at 222 MHz, for a peak processor performance of 888 megaflops. In total, the compute nodes will deliver a peak performance of 1.02 teraflops and memory of 576 GB.
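Those totals follow directly from the per-node figures. The sketch below assumes, as the stated numbers imply (222 MHz times 4 equals 888 megaflops), that each POWER3 processor performs four floating-point operations per cycle; the article itself does not spell out the per-cycle rate.

    #include <stdio.h>

    int main(void)
    {
        /* Compute-node totals implied by the configuration described
           above.  The four-flops-per-cycle rate is an inference from
           222 MHz x 4 = 888 megaflops, not stated in the article. */
        const int    compute_nodes   = 144;
        const int    procs_per_node  = 8;
        const double mflops_per_proc = 222.0 * 4.0;   /* 888 */
        const double gb_per_node     = 4.0;

        int    processors  = compute_nodes * procs_per_node;       /* 1,152 */
        double peak_tflops = processors * mflops_per_proc / 1.0e6; /* ~1.02 */
        double memory_gb   = compute_nodes * gb_per_node;          /* 576 */

        printf("%d processors, %.2f teraflops peak, %.0f GB of memory\n",
               processors, peak_tflops, memory_gb);
        return 0;
    }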

To obtain maximum performance, users will need to program for the POWER3 chip using a methodology that combines distributed- and shared-memory programming. MPI will be used to program for distributed memory, and OpenMP or threads will be used for the machine's shared-memory nature. (v3.19)
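As an illustration only--not one of the NPACI codes--the following minimal C sketch shows the hybrid style described above: MPI distributes work across processes (typically one per node), while OpenMP threads share each node's memory. It assumes an MPI library and an OpenMP-capable compiler.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each MPI process owns a slice of the work; OpenMP threads
           split that slice across the node's processors. */
        const int n = 1000000;
        double local_sum = 0.0;
        int i;

        #pragma omp parallel for reduction(+:local_sum)
        for (i = 0; i < n; i++)
            local_sum += 1.0 / (double)(rank * n + i + 1);

        /* Distributed-memory step: combine the per-process results. */
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Global sum across %d processes: %f\n", size, global_sum);

        MPI_Finalize();
        return 0;
    }

With a typical MPI installation, a command along the lines of mpicc with the compiler's OpenMP flag builds the program; the number of threads per process is then set through the OMP_NUM_THREADS environment variable.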


NSF Award to Build Experimental Technology Grid at the University of Tennessee

The National Science Foundation (NSF) has awarded $2 million over five years to a large group of researchers at the University of Tennessee, Knoxville, for the creation of an experimental technology grid on the Knoxville campus.

The purpose of this infrastructure, called a "computational power grid" by analogy with the electrical power grid, is to support leading-edge research on technologies and applications for high-performance distributed computing and information systems.

The project, called the Scalable Intracampus Research Grid (SInRG), will deploy an infrastructure that mirrors, within the boundaries of the Knoxville campus, both the underlying technologies and the interdisciplinary research collaborations that are characteristic of the national technology grid that the U.S. research community is now developing.

The national technology grid is now growing out of the convergent efforts of NSF's Partnerships for Advanced Computational Infrastructure program and of several other government agencies, including NASA, DoD, and DOE.

While the SInRG infrastructure will become a node on this national grid at some point, its primary purpose is to provide a technological and organizational microcosm in which key challenges underlying grid-based computing can be attacked with better communication and control than wide-area environments usually permit. (v3.22)


NCEAS, SDSC, and University of New Mexico to Develop Ecological Informatics Technologies

The National Center for Ecological Analysis and Synthesis (NCEAS)--the unique national think tank for ecologists, based at the University of California, Santa Barbara--has received nearly $4 million in grants. The research will be conducted with participants from the University of New Mexico and SDSC.

The funds will allow the Center to contribute to generic information management solutions that will advance the field of ecology. Additionally, critical information about the environment will be made accessible to resource managers and policy makers.

The larger of the two grants, awarded by the National Science Foundation in the area of Knowledge and Distributed Intelligence, is for $2.9 million over three years. The grant will allow for the development of a computerized knowledge network with tools for exploring complex data sets.

"Information on biocomplexity is voluminous and complex, but currently is scattered in many places and formats," said Jim Reichman, director of NCEAS. "The research advances in information science that we propose will provide an accessible infrastructure for identifying, integrating, managing, and ultimately synthesizing the nation's existing ecological and biodiversity information resources." (v3.21)


SDSC to Serve 250-GB Chinese-Language Digital Library for Pacific Rim Digital Library Alliance

At the request of the UC San Diego Libraries, SDSC will soon be serving approximately 250 GB for a Chinese-language digital library as a component of the California Digital Library and as a service for the Pacific Rim Digital Library Alliance (PRDLA).

The UC San Diego Libraries have taken the lead in building a Chinese Digital Library for the University of California campuses and for the Pacific Rim Digital Library Alliance. Within the next 12 months, the Chinese Digital Library will provide a mirror site for two different major databases in Beijing and Hong Kong, possibly creating the largest Chinese digital library site in the world. Additional digital data will also be made available.

The Chinese Digital Library collaboration is led by Phyllis Mirsky, deputy university librarian, Bruce Miller, associate university librarian, and Karl Lo, librarian for the International Relations and Pacific Studies Library. The project charts new territory that requires the integration of varied technologies across operating systems, character codes, proprietary software, mass data storage, and cultural boundaries. (v3.19)


University of Queensland Joins NPACI as International Affiliate in Earth Systems Science, Molecular Science, and EOT

The University of Queensland, Australia, has joined NPACI's International Affiliates to collaborate on projects in integrated environments for natural resource management, bioinformatics, and earthquake modeling.

"By participating in NPACI's International Affiliates program, both the University of Queensland and NPACI will benefit by sharing results from high-performance computing initiatives," said Bernard Minster, professor of geophysics at the Scripps Institution of Oceanography and NPACI's Earth Systems Science thrust area leader. "The long-term goal of this collaboration is to evolve a shared information architecture for the scientific community to access data sets needed for mathematical model development and data assimilation."

John Helly at SDSC and Lawrence Lau at Queensland's Advanced Computational Modelling Centre will be the principal investigators for the effort. The collaboration with the University of Queensland will have four primary components involving three research centers at the university: Integrated Environments for Natural Resource Management, Bioinformatics, Earth Systems Science, and Education courses. (v3.18)


House Science Chair Discusses IT Legislation at SDSTC Event

Legislation to establish the Networking and Information Technology Research and Development (NITRD) program topped the list of items discussed by Congressman Jim Sensenbrenner at a luncheon sponsored by the San Diego Science and Technology Council (SDSTC) and Qualcomm. The Friday, October 8, event drew representatives from the IT industry, think tanks, and regional government, as well as staff members from UCSD and SDSC.

NITRD, the Republican response to the IT2 (Information Technology for the Twenty-First Century) initiative proposed in the Administration's FY00 budget, would authorize $4,768.7 million for six federal agencies participating in the High-Performance Computing and Communications, Next Generation Internet (NGI), and NITRD programs--an increase of 92 percent in IT funding for the agencies under the jurisdiction of the House Committee on Science, which Rep. Sensenbrenner chairs.

IT2 would provide $336 million in new funding for long-term research in computing and the development of a new generation of supercomputers and infrastructure for civilian applications.

After his talk at the SDSTC luncheon, Sensenbrenner received a tour of SDSC from center and NPACI leadership, including Wayne Pfeiffer, Reagan Moore, Margaret Simmons, Ann Redelfs, Bernard Pailthorpe, Phil Bourne, and Mark Sheddon. (v3.21)
