Project Profile

AI Architecture

Funded by the U.S. National Science Foundation, DeltaAI has been designed from the ground up to maximize the output of Artificial Intelligence and Machine Learning (AI/ML) research.

More Than a New Supercomputer

Built on state-of-the-art hardware, DeltaAI will enable scientists and researchers to address the world's most challenging problems by accelerating complex AI/ML and high-performance computing applications that operate on terabytes of data. DeltaAI triples NCSA's AI-focused computing capacity and greatly expands the capacity available within the NSF-funded advanced computing ecosystem, while modern web-based interfaces will make the system more accessible to the growing range of research domains employing AI methods. DeltaAI is a computing and data resource funded by the NSF through its Advanced Computing Systems and Services (ACSS) program, offering GPU-accelerated computing alongside access to CPU resources and storage.

ACCESSIBLE BY DESIGN

The DeltaAI team strives to create greater accessibility in all aspects of DeltaAI’s operation by:

  • following NCSA’s mission to spread these advances throughout the broader research computing and data ecosystem and working to create best practices for accessibility in HPC
  • leveraging the work done on Delta to work with developers to create an HPC environment accessible to individuals with disabilities
  • installing user-friendly interface applications like Open OnDemand

DeltaAI MIGHT BE RIGHT FOR YOU IF:

  • you need access to advanced NVIDIA Grace Hopper GPUs for your accelerated code
  • you’re applying AI/ML methods in your research and need the compute resources to make it happen

To apply for a DeltaAI allocation, please visit the DeltaAI allocations page.

DeltaAI Offers:

  • 456 NVIDIA Grace Hopper GH200 GPUs
  • 200 Gb/s HPE Slingshot network fabric
  • Two Lustre file systems (HDD- and NVMe-based, respectively) shared with Delta, supporting both block and small-file I/O
  • Access to project space on the Taiga Lustre-based center-wide project file system
  • Home directories provisioned on the Harbor VAST-based center-wide home directory system
  • 114 compute nodes, each consisting of:
    • 4 NVIDIA GH200 Grace Hopper superchips
    • 4 Slingshot 11 network connections (one per superchip)
    • One 3.5 TB NVMe drive
  • Each NVIDIA GH200 superchip pairs one H100 GPU with a 72-core Grace Arm CPU
  • Each H100 GPU has 96 GB of HBM3 memory
  • Each Grace CPU has 120 GB of LPDDR5X memory
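The figures above are internally consistent. As a purely illustrative sketch, the per-node specifications roll up to the system totals like this (all constants taken from the list above):

```python
# Roll up DeltaAI's published per-node specs to system-wide totals.
NODES = 114            # compute nodes
GPUS_PER_NODE = 4      # one GH200 superchip (one H100 GPU) per connection

total_gpus = NODES * GPUS_PER_NODE
print(f"GH200 GPUs: {total_gpus}")  # 456, matching the listed total

# Per-superchip memory: 96 GB HBM3 on the H100, 120 GB LPDDR5X on the Grace CPU
HBM3_PER_GPU_GB = 96
LPDDR5X_PER_CPU_GB = 120

total_hbm3_tb = total_gpus * HBM3_PER_GPU_GB / 1000
total_lpddr5x_tb = total_gpus * LPDDR5X_PER_CPU_GB / 1000
print(f"Aggregate HBM3:   {total_hbm3_tb:.3f} TB")
print(f"Aggregate LPDDR5X: {total_lpddr5x_tb:.2f} TB")
```

In other words, 114 nodes × 4 superchips yields the 456 GH200 GPUs listed above, for roughly 43.8 TB of aggregate HBM3 and 54.7 TB of aggregate LPDDR5X memory across the system.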