Computing Systems and Services

Campus Champion Allocation

A Campus Champion is an employee of, or affiliated with, a college, university, or other research institution, whose role includes helping the institution’s researchers, educators, and scholars (faculty, postdocs, graduate students, undergraduates, and professionals) with computing-intensive and data-intensive research, education, scholarship, and creative activity, including helping them use advanced digital capabilities to improve, grow, and accelerate that work.

Details and description

Type

HPC

Access

Allocation awarded by your Campus Champion

Allocation period

Open continuously

Hardware/storage

Allows access to the XSEDE ecosystem


Delta Illinois

Delta is a computing and data resource that balances cutting-edge graphics processor and CPU architectures that will shape the future of advanced research computing. Made possible by the National Science Foundation, Delta will at launch be the most performant GPU computing resource in NSF’s portfolio. University of Illinois researchers can receive allocations on Delta.

Details and description

Type

HPC

Access

Allocation awarded by NCSA

Allocation period

Delta biannual allocation

Hardware/storage

    • 124 CPU nodes
    • 100 quad A100 GPU nodes
    • 100 quad A40 GPU nodes
    • Five eight-way A100 GPU nodes
    • One MI100 GPU node
    • Eight utility nodes providing login access, data transfer capability, and other services
    • 100 Gb/s HPE Slingshot network fabric
    • 7 PB of disk-based Lustre storage
    • 3 PB of flash-based storage for data-intensive workloads, to be deployed in fall 2021

Delta XSEDE

Delta is a computing and data resource that balances cutting-edge graphics processor and CPU architectures that will shape the future of advanced research computing. Made possible by the National Science Foundation, Delta will at launch be the most performant GPU computing resource in NSF’s portfolio. Most Delta allocations are awarded through XSEDE.

Details and description

Type

HPC

Access

Allocation awarded by XSEDE

Allocation period

XSEDE quarterly allocation

Hardware/storage

    • 124 CPU nodes
    • 100 quad A100 GPU nodes
    • 100 quad A40 GPU nodes
    • Five eight-way A100 GPU nodes
    • One MI100 GPU node
    • Eight utility nodes providing login access, data transfer capability, and other services
    • 100 Gb/s HPE Slingshot network fabric
    • 7 PB of disk-based Lustre storage
    • 3 PB of flash-based storage for data-intensive workloads, to be deployed in fall 2021

Granite

Granite is NCSA’s tape archive system, closely integrated with Taiga, giving users a place to store and easily access longer-term archival datasets. The tape system can be accessed directly via tools such as scp, Globus, and S3. Data written to Granite is replicated to two tapes, providing mirrored protection against tape failure. Granite is suited to infrequently accessed data, disaster recovery copies, and archival datasets.
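
As a rough illustration, the S3-style access path can be scripted; the sketch below uses Python’s boto3 package to upload an archive tarball. The endpoint URL, bucket name, and credentials are placeholders rather than actual Granite values; consult NCSA’s Granite documentation for the real parameters.

    # Hypothetical sketch of S3-style access to a tape archive such as Granite.
    # The endpoint URL, bucket name, and credentials are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://granite-s3.example.edu",  # placeholder endpoint
        aws_access_key_id="YOUR_ACCESS_KEY",            # issued by NCSA
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    # Upload a dataset tarball for long-term archive, then verify it landed.
    s3.upload_file("results-2021.tar.gz", "my-archive", "results-2021.tar.gz")
    for obj in s3.list_objects_v2(Bucket="my-archive").get("Contents", []):
        print(obj["Key"], obj["Size"])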

Details and description

Type

Storage

Access

    • Internal rate: $16/TB/year
    • External rate: contact support

Allocation period

Open continuously

Hardware/storage

    • 19-frame Spectra TFinity tape library
    • 40 PB of replicated capacity on TS1140 (JAG 7) media
    • Managed by Versity’s ScoutFS/ScoutAM products

iForge

iForge is a high-performance computing cluster designed specifically for NCSA’s industry partners, featuring distinct hardware platforms designed for differing computational needs. A brief worked cost example follows the rate tables below.

Details and description

BIG-MEM

    • Haswell
    • 24 nodes
    • 24 cores each
    • 256 GB RAM
    • $0.1308 per core-hour

SKYLAKE

    • Skylake
    • 44 nodes
    • 40 cores each
    • 192 GB RAM
    • $0.0947 per core-hour

GPU

    • Skylake
    • 2 nodes
    • 40 cores, 4 NVIDIA V100 GPUs in each
    • 192 GB RAM
    • $0.1931 per core-hour
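
Because the rates above are billed per core-hour, a job’s cost is simply total cores × wall-clock hours × rate. A minimal sketch in Python; the job size is invented for illustration, and only the rates come from the tables above.

    # Estimate iForge charges from the published per-core-hour rates.
    RATES = {
        "big-mem": 0.1308,  # Haswell, 24 cores per node
        "skylake": 0.0947,  # Skylake, 40 cores per node
        "gpu":     0.1931,  # Skylake + 4x V100, 40 cores per node
    }

    def job_cost(queue, nodes, cores_per_node, hours):
        """Cost = total cores x wall-clock hours x per-core-hour rate."""
        return nodes * cores_per_node * hours * RATES[queue]

    # Example (invented job size): 4 Skylake nodes (160 cores) for 12 hours.
    print(f"${job_cost('skylake', 4, 40, 12):.2f}")  # 1,920 core-hours -> $181.82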

Illinois Campus Cluster

The Illinois Campus Cluster provides access to computing and data storage resources and frees you from the hassle of administering your own compute cluster. Any individual, research team or campus unit can invest in compute nodes or storage disks or pay a fee for on-demand use of compute cycles or storage space. 

Details and description

Type

HPC

Access

Cost to purchase nodes, storage, or usage on-demand

Allocation period

Open continuously

Hardware/storage

    • 8 nodes with: 64 GB memory, InfiniBand interconnect, 20 cores (Intel Xeon E5-2670 v2 CPUs), Tesla K40M GPU
    • 8 nodes with: 64 GB memory, InfiniBand interconnect, 20 cores (Intel Xeon E5-2670 v2 CPUs), no GPU
    • 4 nodes with: 256 GB memory, InfiniBand interconnect, 24 cores (Intel Xeon E5-2690 v3 CPUs), no GPU

Illinois HTC Program

The High Throughput Computing (HTC) Pilot program is a collaborative, volunteer effort between Research IT, Engineering IT Shared Services and NCSA. The computing systems that comprise the HTC Pilot resource are retired compute nodes from the Illinois Campus Cluster Program (ICCP) or otherwise idle workstations in Linux Workstation labs.

Details and description

Type

High Throughput Computing (HTC)

Access

Allocation awarded by University of Illinois Urbana campus

Allocation period

Open continuously

Hardware/storage

300 compute nodes, each with 12 cores (Intel Xeon X5650 @ 2.67 GHz) and 24 GB RAM. Of those, roughly two have 48 GB RAM and one has 96 GB RAM.


Radiant

Radiant is a new private cloud-computing service operated by NCSA for the benefit of NCSA and UIUC faculty and staff. Customers can purchase VMs, computing time in cores, storage of various types, and public IPs for use with their VMs.

Details and description

Type

HPC

Access

Cost varies by the Radiant resource requested

Allocation period

Open continuously

Hardware/storage

    • 140 nodes
    • 3,360 cores
    • 35 TB of memory
    • 25 GbE/100 GbE backing network
    • 185 TB usable flash capacity
    • Access to NCSA’s 10+ PB (and growing) center-wide storage infrastructure and archive

Nightingale

Nightingale is a HIPAA-compliant HPC environment created to support UIUC researchers and collaborators who analyze sensitive data, such as electronic protected health information (ePHI).

Details and description

Type

HIPAA-compliant HPC

Access

Cost varies by resource requested

Batch Computing

    • 16 dual 64-core AMD systems with 1 TB of RAM
    • 2 dual-A100 compute nodes with 32-core AMDs and 512 GB of RAM

Interactive Compute Nodes

    • 4 interactive compute/login nodes with dual 64-core AMDs and 512 GB of RAM
    • 6 interactive nodes with 1 A100, dual 32-core AMDs with 256GB RAM
    • 5 interactive nodes with 1 A40 with dual 32-core AMDs and 512GB RAM


Allocation period

Open continuously


Research IT Software Collaborative Services

Research IT Software Collaborative Services provides hands-on programming support for performance analysis, software optimization, efficient use of accelerators, I/O optimization, data analytics, visualization, and the use of research computing resources by science gateways and workflows.

Details and description

Type

Support

Access

Allocation awarded by campus Research IT

Allocation period

Open continuously


Taiga

Taiga is NCSA’s global file system, able to integrate with all non-HIPAA environments in the National Petascale Computation Facility. Built with Scalable Storage Units (SSUs) specified by NCSA engineers with DDN, it provides a center-wide, single-namespace file system available across multiple platforms at NCSA. This allows researchers to access their data on multiple systems simultaneously, improving their ability to run science pipelines across batch, cloud, and container resources. Taiga is also well integrated with the Granite tape archive, so users can readily stage data out to their tape allocation for long-term cold storage.
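
The stage-out to Granite described above can be scripted; one way, assuming both systems are exposed as Globus endpoints, is the globus-sdk Python package. The endpoint UUIDs, paths, and token below are placeholders, and this is a sketch of the general Globus transfer pattern, not NCSA’s documented procedure.

    # Hypothetical sketch: stage a directory from Taiga out to Granite tape
    # via Globus. Endpoint UUIDs, paths, and the access token are placeholders.
    import globus_sdk

    TAIGA = "TAIGA-ENDPOINT-UUID"      # placeholder
    GRANITE = "GRANITE-ENDPOINT-UUID"  # placeholder

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer("TRANSFER-TOKEN")
    )

    task = globus_sdk.TransferData(tc, TAIGA, GRANITE,
                                   label="stage cold data to tape")
    task.add_item("/taiga/mylab/run42/", "/archive/mylab/run42/", recursive=True)
    print("Globus task id:", tc.submit_transfer(task)["task_id"])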

Details and description

Type

Storage

Access

    • Internal rate: $32/TB/year
    • External rate: contact support

Allocation period

Open continuously

Hardware/storage

    • 10 PB of hybrid NVMe/HDD storage based on two Taiga SSUs
    • Backed by HDR InfiniBand
    • Running DDN’s Lustre EXAScaler appliance
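
Since both Taiga and Granite bill per TB per year, an annual storage budget is a one-line calculation. In the sketch below the capacities are invented for illustration; only the rates are the internal rates listed above and under Granite.

    # Annual storage cost at the internal rates:
    # Taiga $32/TB/year, Granite $16/TB/year.
    taiga_tb, granite_tb = 50, 200          # invented example capacities
    cost = taiga_tb * 32 + granite_tb * 16  # 1,600 + 3,200
    print(f"${cost}/year")                  # $4800/year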

XSEDE Startup Allocation

Startup allocations, along with Trial allocations, are among the fastest ways to gain access to and start using XSEDE-allocated resources. We recommend that all new XSEDE users begin by requesting a Startup allocation.

Details and description

Type

HPC

Access

Allocation awarded to new users by XSEDE

Allocation period

Open continuously

Hardware/storage

Allows access to the XSEDE ecosystem


XSEDE Education Allocation

These allocations support academic courses or training activities with specific start and end dates. Instructors may request a single resource or a combination of resources. Education requests have the same allocation size limits as Startup requests; per-resource limits are listed in the Startup Limits table. As with Startup requests, Education requests are limited to no more than three separate computational resources, unless the abstract explicitly justifies the need for each resource to the reviewers’ satisfaction.

Details and description

Type

HPC

Access

Allocation awarded to instructors by XSEDE

Allocation period

Open continuously

Hardware/storage

Allows access to the XSEDE ecosystem


XSEDE Research Allocation

The XSEDE ecosystem encompasses a broad portfolio of resources operated by members of the XSEDE Service Provider Forum. These resources include multi-core and many-core high-performance computing (HPC) systems, distributed high-throughput computing (HTC) environments, visualization and data analysis systems, large-memory systems, data storage, and cloud systems. These resources provide unique services for Science Gateways. Some of these resources are made available to the user community through a central XSEDE-managed allocations process, while many other resources operated by Forum members are linked to other parts of the ecosystem.

Details and description

Type

HPC

Access

Allocation awarded by XSEDE

Allocation period

Open continuously

Hardware/storage

Allows access to the XSEDE ecosystem

