Offered by National Center for Supercomputing Applications (NCSA)
How to gain access:
To request an allocation, visit https://wiki.ncsa.illinois.edu/display/USSPPRT/NCSA+Allocations
Who can use it: Faculty and Staff
Nightingale is a high-performance compute cluster for sensitive data. It accommodates projects requiring extra security, such as compliance with HIPAA and Controlled Unclassified Information (CUI) policies. It is available for a fee to University of Illinois faculty, staff, and students, and to their collaborators, through desktop access and encrypted laptop access. NCSA experts manage the complex requirements surrounding sensitive data, taking that burden off users so they can focus on their research.
Nightingale provides standard batch computing as well as allocations on interactive nodes for GPU or CPU usage. The storage system is mounted on all nodes and serves both long-term storage and parallel high-performance I/O. Slurm manages the batch workload; GPUs are allocated only as whole devices (MIG is not used). Users are not able to add their own nodes to the system.
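Since Slurm hands out whole GPUs rather than MIG slices, a batch job that needs a GPU requests one full device. The following is a minimal sketch of such a submission script; the partition name, resource sizes, and time limit are illustrative placeholders, not Nightingale's actual values:

```shell
#!/bin/bash
# Hypothetical Slurm batch script -- partition name and resource
# amounts are placeholders; check the Nightingale docs for real values.
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpuA100      # placeholder partition name
#SBATCH --gres=gpu:1             # request one whole GPU (no MIG slices)
#SBATCH --cpus-per-task=16
#SBATCH --mem=64g
#SBATCH --time=01:00:00

# List the GPU(s) assigned to this job
nvidia-smi -L
```

Submit with `sbatch script.sh` and check status with `squeue -u $USER`.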
Interactive Compute Nodes:
- 4 interactive compute/login nodes, each with dual 64-core AMD CPUs and 512 GB of RAM
- 6 interactive nodes, each with 1 NVIDIA A100 GPU, dual 32-core AMD CPUs, and 256 GB of RAM
- 5 interactive nodes, each with 1 NVIDIA A40 GPU, dual 32-core AMD CPUs, and 512 GB of RAM
Batch Compute System:
- 16 batch nodes, each with dual 64-core AMD CPUs and 1 TB of RAM
- 2 batch nodes, each with dual NVIDIA A100 GPUs, 32-core AMD CPUs, and 512 GB of RAM
- 880 TB of high-speed parallel Lustre-based storage
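Because the shared storage is Lustre-based, striping large files across multiple storage targets can improve parallel I/O performance. A minimal sketch using the standard Lustre `lfs` utility follows; the directory path and stripe count are illustrative assumptions, not site recommendations:

```shell
# Hypothetical example: the path and stripe count are placeholders.
# Stripe new files in this directory across 4 storage targets (OSTs).
lfs setstripe -c 4 /path/to/project/large_data

# Verify the striping layout that new files will inherit.
lfs getstripe /path/to/project/large_data
```

Striping mainly benefits large files read or written in parallel; small files are usually better left at the default stripe count.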
Cost varies by resource request. See Nightingale Overview and Costs for more details.
How much storage is available?
Storage: 880 TB of high-speed parallel Lustre-based storage