For over 35 years, SDSC has led the way in deploying and supporting cutting-edge high performance computing systems for a wide range of users, from the campus to the national research community. From the earliest Cray systems to today’s data-intensive systems, SDSC has focused on providing innovative architectures designed to keep pace with the changing needs of science and engineering.
Whether you’re a researcher looking to expand computing beyond your lab or a business seeking a competitive advantage, SDSC’s HPC experts will guide you in selecting the right resource, reducing your time to solution and taking your science to the next level.
Take a look at what SDSC has to offer and let us help you discover your computing potential.
| System | Performance | Key Features |
|---|---|---|
| Expanse | 5 Pflop/s peak; 93,184 CPU cores; 208 NVIDIA GPUs; 220 TB total DRAM; 810 TB total NVMe | Standard compute nodes (728), GPU nodes (52), and large-memory nodes (4); entire system organized as 13 SDSC Scalable Compute Units (SSCUs), each consisting of 56 standard nodes and four GPU nodes connected with 100 Gb/s HDR InfiniBand |
| TSCC | 80+ Tflop/s | General computing nodes and GPU nodes; 10 GbE interconnect (QDR InfiniBand optional); Lustre-based parallel file system |
Trial Accounts give users rapid access to Expanse for the purpose of evaluating it for their research. This can be a useful step in assessing the value of the system, allowing potential users to compile, run, and do initial benchmarking of their applications before submitting a larger Startup or Research allocation request. Trial Accounts provide 1,000 core-hours, and requests are fulfilled within one working day.
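As a minimal sketch, the kind of MPI "sanity check" a trial user might compile and run before benchmarking their own application is shown below. The compiler wrapper and build line are illustrative assumptions, not Expanse-specific guarantees; consult the Expanse User Guide for the exact modules and job submission details.

```c
/*
 * Minimal MPI check: each rank reports which node it landed on, a quick
 * way to confirm the toolchain works and a job spans the expected cores.
 *
 * Illustrative build (wrapper name is an assumption; load the MPI
 * environment documented in the Expanse User Guide first):
 *   mpicc -O2 hello_mpi.c -o hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0, name_len = 0;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &name_len);

    /* One line per rank: rank id, total ranks, and host node name. */
    printf("rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```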
Visit the Expanse page for full details of the system.
Visit the TSCC Home page for all the details.
The TSCC User Guide has complete information on accessing and running jobs on TSCC.