Systems

SDSC offers high-performance computing and innovative architectures designed to keep pace with the changing needs of science and technology research.


Expanse

Expanse supports SDSC's vision of “Computing without Boundaries” by increasing the capacity and performance of batch-oriented and science gateway computing. Its capabilities advance research across domains that increasingly depend on heterogeneous, distributed resources composed into integrated and highly usable cyberinfrastructure. Expanse is also an allocations resource for the U.S. National Science Foundation’s ACCESS program.

About Expanse

National Research Platform

The National Research Platform (NRP)—the set of programs, facilities and policies designed for distributed growth and expansion—is a partnership of more than 50 institutions, led by researchers and cyberinfrastructure professionals at UC San Diego and supported in part by awards from the U.S. National Science Foundation. NRP’s primary computation, storage and network resource is Nautilus, a distributed cluster of roughly 300 nodes that hosts many data transfer nodes for network testing, including ones used by the Open Science Grid.

About NRP

Triton Shared Computing Cluster

The Triton Shared Computing Cluster (TSCC) is a critical, affordable resource for UC San Diego researchers—as well as other University of California teams—providing high-performance computing services to support modeling, simulation and data analytics. Researchers from other academic institutions and industries can also participate in this program, which supports a broad range of research computing workloads including traditional HPC, HTC and emerging big data pipelines.

About TSCC

SDSC Cloud

SDSC Cloud—one of the first large-scale academic deployments of cloud storage in the world—enables researchers on a budget to preserve and share data, especially the massive datasets so prevalent in this era of data-intensive research and computing. UC San Diego campus users, members of the UC community and UC affiliates are eligible to join the hundreds of users who already benefit from the 3 petabytes of raw space, which is organized into object-based cloud storage by OpenStack’s Swift platform.

About SDSC Cloud

Sherlock Secure Cloud

Built on top of public cloud platforms, including AWS and Microsoft Azure, Sherlock Cloud orchestrates the necessary microservices to provide fit-for-purpose solutions that support big data, analytics and data science use cases while meeting strict compliance requirements (HIPAA, FISMA, FERPA). Sherlock's services are designed for customers looking for turnkey technology solutions with a hands-off approach to the intricacies of public cloud platforms.

About Sherlock

Voyager

Voyager is an innovative AI system designed for science and engineering (S&E) research at scale. It supports S&E research that depends on AI and deep learning for experimental and/or computational work. An NSF ACCESS resource, Voyager features Habana Labs’ Gaudi training and first-generation inference processors, along with a high-performance, low-latency 400 gigabit-per-second interconnect from Arista. It can handle extremely large datasets using well-established deep learning frameworks such as PyTorch, MXNet and TensorFlow.

About Voyager

Cosmos

Cosmos is a testbed high-performance computing system featuring AMD's MI300A accelerated processing units (APUs), which integrate a CPU and GPU on a single chip. Through its incremental programming approach, Cosmos aims to democratize access to accelerated computing, making this powerful hardware more accessible to diverse research fields and easier to apply across applications such as AI, astrophysics, genomics and materials science.

About Cosmos