Published 02/11/2004
Researchers at the San Diego Supercomputer Center (SDSC) have released Version 3.1.0 of the Rocks cluster toolkit to support three processor families: Intel's 32-bit Pentium and 64-bit Itanium CPUs, and AMD's 64-bit Opteron.
Rocks gives users turn-key software installation and update capabilities for Linux clusters. This user-friendly method of maintaining and administering cluster computers scales from small machines to some of the fastest on the Top500 List of supercomputers. Rocks 3.1.0 ("Matterhorn") is the public version of the software package with which personnel from SDSC and Sun Microsystems built a fully-functional, grid-enabled cluster with 128 processor nodes in less than two hours on the opening night of the SC2003 conference in Phoenix, Arizona last November.
Rocks 3.1.0 is a co-release for clusters based on x86 (Pentium, Athlon, and others), Itanium2 (IA-64), and Opteron (x86-64) processors. The software is freely available for download and can be burned onto a bootable CD set for x86 and x86-64 or a single DVD for Itanium2. Versions for all processor families are available at http://www.rocksclusters.org/.
"Support for these three key CPU families enables scientists and engineers to build clusters using the best CPU and system for theirparticular workloads," said Philip Papadopoulos, program director for SDSC's Grid and Cluster Computing group. "Because this latest version of Rocks is built from a common code base, users will have the same robust system software no matter which of these processors they choose."
Version 3.1.0 builds on previous releases and incorporates maintenance fixes and the security patches available at release time. It ships an expanding suite of de facto standard cluster and grid tools, including Sun Grid Engine (SGE), MPICH for Ethernet and Myrinet, High-Performance Linpack, ATLAS BLAS, and the National Science Foundation Middleware Initiative's suite of grid tools such as Globus and Condor.
This latest version of Rocks enhances the "roll" mechanism (introduced in Version 3.0.0), which enables additional software packages to be included in the configuration. These optional "Roll CDs" extend the system by integrating seamlessly and automatically into the management and packaging mechanisms used by the base software. To the user, rolls appear to be part of the original CD distribution. A number of extension rolls are freely available, including rolls for HPC, Sun Grid Engine, Grid (based on NMI R4), Java, Condor, and Ninf-G.
An important enhancement of this capability with Rocks 3.1.0 is that new rolls can be created or updated independently of the core toolkit. This enables user communities and application teams to add domain-specific software packages, define a particular grid configuration, or simply modify any of the default configuration or package settings.
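To make the roll idea concrete, the following minimal Python sketch models a roll as a self-contained bundle of packages and configuration overrides that merges into a base distribution. The Roll and Distribution classes and their fields are illustrative assumptions only, not the actual Rocks implementation or its roll format.

    from dataclasses import dataclass, field

    @dataclass
    class Roll:
        """A hypothetical roll: extra packages plus configuration overrides."""
        name: str
        packages: set[str] = field(default_factory=set)
        config: dict[str, str] = field(default_factory=dict)

    @dataclass
    class Distribution:
        """A hypothetical base distribution that rolls extend."""
        packages: set[str] = field(default_factory=set)
        config: dict[str, str] = field(default_factory=dict)

        def apply_roll(self, roll: Roll) -> None:
            # Packages from the roll become indistinguishable from base packages,
            # and roll settings override the matching defaults in the base.
            self.packages |= roll.packages
            self.config.update(roll.config)

    if __name__ == "__main__":
        base = Distribution(packages={"kernel", "openssh"},
                            config={"scheduler": "none"})
        sge_roll = Roll(name="sge",
                        packages={"sge-execd", "sge-qmaster"},
                        config={"scheduler": "sge"})
        base.apply_roll(sge_roll)
        print(sorted(base.packages))
        print(base.config)

In the actual toolkit, rolls carry software packages and installation configuration rather than Python objects; the sketch only captures the merge-into-the-base behavior that makes rolls look like part of the original distribution.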
"Rolls are significant because they allow the definition of reproducible customizations to further enhance specific working configurations," said Mason Katz, group leader for the Rocks software development effort.
Rocks 3.1.0 also adds cluster and grid functionality to a standard Linux distribution without specific kernel hooks. This approach allows the software to handle the natural evolution of Linux updates. It enhances the Linux cluster environment with features that allow users to start, monitor, and control processes on cluster nodes from the cluster's front-end computer while supporting standard Linux interfaces and tools. The result is a stable, extensible, production environment that appeals to both end users and software developers, and provides a supported platform for the deployment of advanced clustering applications.
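As an illustration of the front-end fan-out pattern described above, and not the Rocks tooling itself, the following minimal Python sketch runs one command on a set of compute nodes over ssh and collects their output. The node names and the 30-second timeout are assumptions made for the example.

    import subprocess

    # Hypothetical node names; a real front end would read these from the
    # cluster's node database rather than hard-coding them.
    NODES = ["compute-0-0", "compute-0-1", "compute-0-2"]

    def fan_out(command: str) -> dict[str, str]:
        """Run `command` on each node over ssh and collect the output per node."""
        results = {}
        for node in NODES:
            try:
                proc = subprocess.run(["ssh", node, command],
                                      capture_output=True, text=True, timeout=30)
                results[node] = proc.stdout.strip() or proc.stderr.strip()
            except subprocess.TimeoutExpired:
                results[node] = "timed out"
        return results

    if __name__ == "__main__":
        for node, output in fan_out("uptime").items():
            print(f"{node}: {output}")

A production cluster front end would typically parallelize these calls and tie them into monitoring, but the control path from a single front-end computer to every node is the same.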
Beginning with the Rocks 3.1.0 release, the new 411 Secure Information Service replaces the Network Information Service (NIS) as the default method of distributing /etc/passwd and other login files. (NIS remains available as an option.) 411 operates at the file level; rather than relying on RPC, it distributes the files themselves over HTTP. 411 also uses RSA public-key cryptography to protect the files' contents. Its central task is to securely maintain critical login/password files on the worker nodes of a cluster, which it does by implementing a file-based distributed database with weak consistency semantics. The design goals of 411 include scalability, security, low latency when changes occur, and resilience to failures.
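The following minimal Python sketch, using the third-party cryptography package, illustrates the general idea behind 411's use of RSA: a login file is protected with public-key cryptography so that the ciphertext can safely be served to worker nodes over plain HTTP. The hybrid wrap-a-session-key scheme, the key handling, and the function names are assumptions for illustration only and do not reflect 411's actual implementation or on-the-wire format.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # RSA-OAEP padding used to wrap the per-file session key.
    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Demo key pair generated on the fly; a real deployment would distribute
    # long-lived keys at install time instead.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    def publish(plaintext: bytes) -> tuple[bytes, bytes]:
        """Master side: encrypt a login file so it can be served over plain HTTP."""
        session_key = Fernet.generate_key()
        ciphertext = Fernet(session_key).encrypt(plaintext)
        wrapped_key = public_key.encrypt(session_key, OAEP)
        return wrapped_key, ciphertext

    def retrieve(wrapped_key: bytes, ciphertext: bytes) -> bytes:
        """Node side: unwrap the session key and recover the file contents."""
        session_key = private_key.decrypt(wrapped_key, OAEP)
        return Fernet(session_key).decrypt(ciphertext)

    if __name__ == "__main__":
        passwd = b"alice:x:1000:1000::/home/alice:/bin/bash\n"
        wrapped, blob = publish(passwd)
        assert retrieve(wrapped, blob) == passwd
        print("login file recovered intact")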
Rocks is developed and maintained by the Grid and Cluster Computing Group at SDSC and by partners at the University of California, Berkeley, Singapore Computing Systems in Singapore, and a number of individual open-source software developers. Rocks development is funded by the National Science Foundation through the National Partnership for Advanced Computational Infrastructure (NPACI).
Rocks 3.1.0 is derived from Red Hat's publicly available source packages (SRPMS) used in portions of its Enterprise Linux 3.0 product line. All SRPMs have been recompiled to enable redistribution, and all updates available as of the Rocks release date have been pre-applied. Rocks-specific software and standard cluster and grid community software are then added to create a complete clustering toolkit. All Rocks source code is available in a public CVS (Concurrent Versions System) repository at http://cvs.rocksclusters.org/.
About SDSC
The mission of the San Diego Supercomputer Center (SDSC) is to innovate, develop and deploy technology to advance science. SDSC is involved in an extensive set of collaborations and activities at the intersection of technology and science whose purpose is to enable and facilitate the next generation of scientific advances. Founded in 1985 and primarily funded by the National Science Foundation (NSF), SDSC is an organized research unit of the University of California, San Diego. With a staff of more than 400 scientists, software developers, and support personnel, SDSC is an international leader in data management, grid computing, biosciences, geosciences, and visualization. For more information, visit http://www.sdsc.edu/.
Technical Contact: Mason J. Katz, SDSC, 858-822-3651, mjk@sdsc.edu