A Campus Champion is an employee of, or affiliated with, a college or university (or other institution engaged in research) whose role includes helping their institution's researchers, educators, and scholars (faculty, postdocs, graduate students, undergraduates, and professionals) with computing-intensive and data-intensive research, education, scholarship, and/or creative activity, including but not limited to helping them use advanced digital capabilities to improve, grow, and/or accelerate these achievements.
HPC
Allocation awarded by your Campus Champion
Open continuously
Allows access to the XSEDE ecosystem
Delta is a computing and data resource that balances cutting-edge graphics processor and CPU architectures to shape the future of advanced research computing. Made possible by the National Science Foundation, Delta is the most performant GPU computing resource in NSF's portfolio. University of Illinois researchers can receive allocations on Delta.
Allocation awarded by NCSA
Available continuously
A computing and data resource that balances cutting-edge graphics processor and CPU architectures to shape the future of advanced research computing. Made possible by the National Science Foundation, Delta is the most performant GPU computing resource in NSF's portfolio. Most Delta allocations are awarded through ACCESS.
Allocation awarded by ACCESS
ACCESS quarterly allocation (Select tiers are available continuously)
Granite is NCSA’s Tape Archive system, closely integrated with Taiga, to provide users with a place to store, and easily access, longer-term archive datasets. Access to this tape system is available directly via tools such as scp, Globus, and S3. Data written to Granite is replicated to two tapes for mirrored protection in case of tape failure. Granite can be used for storage of infrequently accessed data, disaster recovery, archive datasets, etc.
Storage
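Since Granite access is available directly through tools like scp and S3, a transfer might look like the following sketch. The hostname, bucket name, and endpoint URL below are placeholders, not documented values; consult NCSA's Granite documentation for the real endpoints.

```shell
# Copy an archive to Granite over scp (hostname is a placeholder).
scp archive.tar.gz myuser@granite.example.edu:/archive/myproject/

# Or push the same file to Granite's S3 interface with the AWS CLI,
# pointing at a placeholder endpoint URL and bucket.
aws s3 cp archive.tar.gz s3://my-archive-bucket/ \
    --endpoint-url https://granite-s3.example.edu
```

Because data written to Granite lands on tape, transfers like these suit infrequently accessed archive datasets rather than working data.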
HAL is an efficient purpose-built system for distributed deep learning. It combines NVIDIA GPUs, a high-speed interconnect, and high-performance NVMe SSD-based storage to provide a reliable and robust platform for developing and training deep neural networks. The system is funded by the NSF MRI program to provide University of Illinois researchers with a computational resource for machine learning needs. University of Illinois researchers can obtain an allocation on HAL.
HPC-AI
Allocation awarded by CAII
On rolling basis, for the duration of the project
HOLL-I is a new service at NCSA that offers public access to an extreme scale machine learning capability, to complement other available resources at NCSA such as Delta and HAL. Leveraging the power of a Cerebras CS-2 Wafer Scale Engine, and with access to NCSA's shared project storage on Taiga, HOLL-I is capable of performing large machine-learning tasks in short order. HOLL-I's unique architecture offers higher-speed processing than anything currently available on campus.
Extreme Scale Machine Learning
Allocations through NCSA managed service fees (NCSA Director has a discretionary allocation)
Service fees cover usage during the next quarter, charged by actual usage
Hydro is a Unix-based cluster computing resource designed with a focus on supporting research and development related to national security and preparedness, as well as research in other domains. Hydro is made available by the New Frontiers Initiative (NFI). It combines NVIDIA GPUs, a high-speed interconnect, and high-performance NVMe SSD-based storage to provide a reliable and robust platform. Hydro is available to allocated NFI projects and Illinois Computes projects.
Allocations through NFI and Illinois Computes projects
2 Login and 42 Compute nodes: 384 GB of memory per node, 40 Gb/s WAN bandwidth
The Illinois Campus Cluster provides access to computing and data storage resources and frees you from the hassle of administering your own compute cluster. Any individual, research team, or campus unit can invest in compute nodes or storage disks, or pay a fee for on-demand use of compute cycles or storage space. Hardware and storage can be customized to the specific needs of individual research teams. The resources listed below are what NCSA is able to allocate, though the system is much larger.
Cost to purchase nodes, storage, or usage on-demand
The High Throughput Computing (HTC) Pilot program is a collaborative, volunteer effort between Research IT, Engineering IT Shared Services and NCSA. The computing systems that comprise the HTC Pilot resource are retired compute nodes from the Illinois Campus Cluster Program (ICCP) or otherwise idle workstations in Linux Workstation labs.
High Throughput Computing (HTC)
Allocation awarded by University of Illinois Urbana campus
300 compute nodes with 12-core Intel Xeon X5650 CPUs @ 2.67 GHz and 24 GB of RAM. Of those, ~2 have 48 GB of RAM and ~1 has 96 GB of RAM
Nightingale is a high-performance compute cluster for sensitive data. It accommodates projects requiring extra security, such as compliance with HIPAA and CUI policies. It is available for a fee to University of Illinois faculty, staff, students and their collaborators through desktop access and encrypted laptop access. NCSA experts manage the complex requirements surrounding sensitive data, taking the burden off the user so they can focus on their research.
HPC for sensitive data
Cost varies by resource request. See Nightingale Overview and Costs for more detail
Radiant is a new private cloud-computing service operated by NCSA for the benefit of NCSA and UIUC faculty and staff. Customers can purchase VMs, computing time in cores, storage of various types and public IPs for use with their VMs.
Cost varies by the Radiant resource requested
Hands-on programming support for performance analysis, software optimization, efficient use of accelerators, I/O optimization, data analytics, visualization, and the use of research computing resources by science gateways and workflows.
Support
Allocation awarded by campus Research IT
Taiga is NCSA's Global File System, able to integrate with all non-HIPAA environments in the National Petascale Computation Facility. Built with SSUs (Scalable Storage Units) spec'd by NCSA engineers with DDN, it provides a center-wide, single-namespace file system available across multiple platforms at NCSA. This allows researchers to access their data on multiple systems simultaneously, improving their ability to run science pipelines across batch, cloud, and container resources. Taiga is also well integrated with the Granite Tape Archive, allowing users to readily stage data out to their tape allocation for long-term, cold storage.
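One way to stage data from Taiga out to the Granite tape archive is with the Globus CLI's `globus transfer` command. The endpoint UUIDs and paths below are placeholders, not real collection IDs; the actual Taiga and Granite collections would need to be looked up in the Globus web app.

```shell
# Placeholder collection UUIDs -- substitute the real Taiga and
# Granite collection IDs from the Globus web app.
TAIGA_EP="aaaaaaaa-0000-0000-0000-000000000000"
GRANITE_EP="bbbbbbbb-0000-0000-0000-000000000000"

# Submit an asynchronous transfer of one archive file from Taiga
# to the tape archive; Globus handles retries and verification.
globus transfer "$TAIGA_EP:/taiga/myproject/results.tar" \
                "$GRANITE_EP:/archive/myproject/results.tar" \
                --label "stage results to tape"
```

Packing many small files into a single tar archive before transferring, as sketched above, is generally kinder to tape systems than moving the files individually.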
vForge is a high-performance batch computing cluster built on NCSA’s Radiant cloud computing environment and Taiga center-wide storage system. vForge provides both CPU and GPU nodes and can be dynamically scaled to meet changing computational demands.
Note: vForge is a cloud-based cluster and resources can change dynamically based on demand.
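As a batch computing cluster, vForge runs work submitted through a job scheduler. The source does not name the scheduler, so the following is only a minimal sketch assuming a Slurm-style system; the partition name and resource limits are placeholders.

```shell
#!/bin/bash
# Minimal batch-job sketch, assuming a Slurm-style scheduler
# (an assumption -- vForge's actual scheduler is not specified here).
#SBATCH --job-name=example
#SBATCH --partition=cpu        # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:30:00

# Replace with the real workload.
srun hostname
```

A script like this would typically be submitted with `sbatch job.sh` and monitored with `squeue`; because vForge scales dynamically, queued jobs may trigger additional cloud capacity rather than waiting for fixed nodes.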