Project Profile

The New Frontier Initiative’s Hydro system is a compute cluster that supports research and development in national security and preparedness, as well as research in other domains. The cluster combines a current OS and software stack, up to 512 GB of memory per node, 100 Gb/s WAN bandwidth, and direct access to two Lustre-based parallel filesystems (/home and /projects).

System Overview

The system comprises 70 nodes that together provide 944 Intel Sandy Bridge cores, 256 AMD Interlagos cores, and 560 AMD Rome and Milan cores, over 27 TB of aggregate system memory, and 18 NVIDIA A100 GPUs with 80 GB of memory each. All nodes are connected to 4 PB of Lustre-based parallel storage across two filesystems.

FDR InfiniBand connects the storage, while 40 GbE and 100 GbE links connect the nodes to the internet. Both the InfiniBand and Ethernet networks can be used for MPI communication.
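
As a quick check that an MPI job is actually spanning nodes and exercising the interconnect, a minimal MPI hello-world along the lines of the C sketch below can be used. The mpicc wrapper and srun launch shown in the comments are typical of an MPI-plus-SLURM environment, but the exact compiler wrappers and module names on Hydro are assumptions here.

    /* hello_mpi.c: minimal MPI check where each rank reports its node.
       Build (assumed wrapper):  mpicc hello_mpi.c -o hello_mpi
       Launch under SLURM (assumed):  srun -N 2 -n 4 ./hello_mpi */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char node[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(node, &len);

        /* Seeing ranks spread across different node names confirms the
           job left a single node and is using the cluster fabric. */
        printf("rank %d of %d on %s\n", rank, size, node);

        MPI_Finalize();
        return 0;
    }

Whether the ranks communicate over InfiniBand or Ethernet is usually governed by the MPI library's transport-selection settings rather than by the program itself.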

The software environment includes:

  • RHEL 8 OS
  • SLURM job scheduler
  • Singularity container support
  • NVIDIA CUDA 11.7 GPU toolkit (a device-query sketch follows this list)
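
Since the GPU nodes carry A100s and the CUDA 11.7 toolkit, a short device query in C is a convenient way to confirm the GPUs are visible from within a job. This is a sketch only; the nvcc invocation in the comment assumes the CUDA toolkit is already on your PATH (e.g., via an environment module).

    /* query_gpus.c: list visible CUDA devices and their memory.
       Build (assumed):  nvcc query_gpus.c -o query_gpus */
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceCount: %s\n",
                    cudaGetErrorString(err));
            return 1;
        }
        printf("%d CUDA device(s) visible\n", count);
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            /* On Hydro's GPU nodes this should report A100s with
               roughly 80 GB of device memory each. */
            printf("  device %d: %s, %.1f GiB\n", i, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }

With 18 A100s spread across 9 nodes, a job holding a full GPU node should report two devices.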

Hardware specifications:

  • 70 total nodes
  • CPU: Sandy Bridge, Interlagos, Rome, Milan
  • Mem: 256-512 GB per node
  • GPU: 18 NVIDIA A100 (across 9 nodes)
  • 40-100 Gb Ethernet to WAN
  • FDR InfiniBand
  • 4 PB of Lustre-based storage
  • 2 login nodes

Support and Documentation

Hydro documentation can be found here. For technical support, email help+hydro@ncsa.illinois.edu.
