For over 35 years, SDSC has led the way in deploying and supporting cutting-edge high performance computing systems for a wide range of users, from the campus to the national research community. From the earliest Cray systems to today’s data-intensive systems, SDSC has focused on providing innovative architectures designed to keep pace with the changing needs of science and engineering.
Whether you’re a researcher looking to expand computing beyond your lab or a business seeking a competitive advantage, SDSC’s HPC experts will guide you in selecting the right resource, reducing time to solution and taking your science to the next level.
Take a look at what SDSC has to offer and let us help you discover your computing potential.
| System | Performance | Key Features |
|---|---|---|
| Expanse | 5 Pflop/s peak; 93,184 CPU cores; 208 NVIDIA GPUs; 220 TB total DRAM; 810 TB total NVMe | 728 standard compute nodes; 52 GPU nodes; 4 large-memory nodes; entire system organized as 13 SDSC Scalable Compute Units (SSCUs), each consisting of 56 standard nodes and four GPU nodes connected with 100 Gb/s HDR InfiniBand |
| TSCC | 80+ Tflop/s | General computing nodes; GPU nodes; 10 GbE interconnect (QDR InfiniBand optional); Lustre-based parallel file system |
| Comet | 2.76 Pflop/s peak; 48,784 CPU cores; 288 NVIDIA GPUs; 247 TB total memory; 634 TB total flash memory | 1,944 standard compute nodes; 72 GPU nodes, including 36 P100 nodes (4 NVIDIA P100 GPUs each; dual socket, 14 cores/socket; 128 GB DDR4 DRAM; 150 GB/s memory bandwidth; 400 GB flash memory); 4 large-memory nodes; FDR InfiniBand interconnect; 7.6 PB Lustre-based parallel file system; high-performance virtualization |
Trial Accounts give users rapid access to Expanse for the purpose of evaluating it for their research. This can be a useful step in assessing the value of the system, allowing potential users to compile, run, and do initial benchmarking of their application before submitting a larger Startup or Research allocation request. Trial Accounts provide 1,000 core-hours, and requests are fulfilled within one working day.
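As a concrete first step, many trial users start with a short sanity check before benchmarking their actual application. The sketch below is a minimal MPI "hello world" in Python; it is an illustration rather than anything from SDSC's documentation, and it assumes the mpi4py package is available in the user's environment.

```python
# hello_mpi.py -- a minimal MPI sanity check for a trial account (sketch).
# Assumes mpi4py is installed; it is not part of Python's standard library.
from mpi4py import MPI

comm = MPI.COMM_WORLD            # communicator spanning all ranks in the job
rank = comm.Get_rank()           # this process's rank (0..size-1)
size = comm.Get_size()           # total number of ranks
node = MPI.Get_processor_name()  # hostname of the node running this rank

print(f"Hello from rank {rank} of {size} on {node}")
```

Under Expanse's Slurm scheduler, a script like this would typically be launched with something like `srun python hello_mpi.py` from inside a batch job; consult the Expanse User Guide for the actual partition names, module commands, and account details.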
Visit the Expanse page for full details of the system.
Visit the TSCC Home page for all the details.
The TSCC User Guide has complete information on accessing and running jobs on TSCC.
Comet is a petascale supercomputer designed to transform advanced scientific computing by expanding access and capacity among traditional as well as non-traditional research domains. The result of a $21.6 million National Science Foundation award, Comet is capable of an overall peak performance of 2.76 petaflops, or 2.76 quadrillion floating-point operations per second.
Within its first 18 months of operation, Comet soared past its goal of serving 10,000 unique users across a diverse range of science disciplines. Comet was designed to meet the needs of what has been called the 'long tail' of science: the idea that the many modest-scale, computationally based research projects together represent a tremendous amount of research that can yield scientific advances and discovery. During the project, SDSC doubled the number of GPU nodes on Comet, making it one of the largest providers of GPU resources in the NSF's XSEDE (Extreme Science and Engineering Discovery Environment) program, which comprises the most advanced collection of integrated digital resources and services in the world.
“Comet is all about providing high-performance computing to a much larger research community – what we call ‘HPC for the 99 percent’ – and serving as a gateway to discovery,” said Mike Norman, the project’s principal investigator. “Comet meets the needs of underserved researchers in domains that have not traditionally relied on supercomputers to help solve problems, as opposed to the way such systems have historically been used.”
Comet is a popular solution for emerging research within the 'long tail', supporting modest-scale users across the entire spectrum of NSF communities while also welcoming non-traditional HPC research communities such as genomics, the social sciences, and economics.
Comet is a Dell-integrated cluster built on Intel’s Xeon® Processor E5-2600 v3 family, with two processors per node and 12 cores per processor running at 2.5 GHz. Each compute node has 128 GB (gigabytes) of traditional DRAM and 320 GB of local flash memory. Because Comet is designed to optimize capacity for modest-scale jobs, each rack of 72 nodes (1,728 cores) has a full-bisection FDR InfiniBand interconnect from Mellanox, with 4:1 oversubscription across the racks. There are 27 such racks, totaling 1,944 nodes or 46,656 cores.
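The quoted figures are easy to cross-check. The short Python sketch below simply re-derives the per-rack and system-wide core counts from the socket, node, and rack counts given above; all numbers come from the text.

```python
# Cross-check of Comet's core counts, using only figures quoted above.
cores_per_node = 2 * 12   # two Xeon E5-2600 v3 processors, 12 cores each
nodes_per_rack = 72
racks = 27

print(cores_per_node * nodes_per_rack)           # 1728 cores per rack
print(racks * nodes_per_rack)                    # 1944 standard compute nodes
print(racks * nodes_per_rack * cores_per_node)   # 46656 standard-node cores
```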
In addition, Comet has four large-memory nodes, each with four 16-core sockets and 1.5 TB of memory, as well as 72 GPU (graphics processing unit) nodes, each with four NVIDIA GPUs. The GPU and large-memory nodes target specific applications such as visualization, molecular dynamics simulations, and de novo genome assembly.
Comet users have access to 7.6 PB of storage on SDSC’s substantially upgraded Data Oasis parallel file system. The system is configured with 100 Gbps (gigabits per second) connectivity to Internet2 and ESnet, allowing users to rapidly move data to SDSC for analysis and data sharing, and to return data to their institutions for local use. Comet was the first XSEDE production system to support high-performance virtualization at the multi-node cluster level. Its use of Single Root I/O Virtualization (SR-IOV) means researchers can use their own software environment, as they do with cloud computing, while achieving the high performance they expect from a supercomputer.
Comet succeeded SDSC's Gordon as a key resource within XSEDE; before that, it replaced Trestles, which entered production in 2011 and provided researchers with significant computing capability while increasing computational productivity.