Hardware

The cluster has one login node and several compute nodes with a mix of NVIDIA, AMD, and Intel GPUs. The Partition column indicates which Slurm partition each node belongs to; see Running jobs for how to target a specific partition or node.

| Hostname | CPU | Memory | Storage | GPUs (and how to allocate them) | Partition |
| --- | --- | --- | --- | --- | --- |
| login | 1x AMD EPYC 9124 (16C/32T) | 128 GB | 880 GB NVMe SSD | -- | login |
| gpu-nvidia-h100 | 2x Intel Xeon Gold 6548N Sapphire Rapids (32C/64T) | 2 TB | 15 TB NVMe SSD | 4x NVIDIA H100 (96 GB)<br>`--gres gpu:nvidia:1` | compute |
| gpu-nvidia-h200-1/2/3 | 2x AMD EPYC 9555 Turin (64C/128T) | 2.3 TB | 7 TB NVMe SSD | 8x NVIDIA H200 (144 GB)<br>`--gres gpu:nvidia:1` | compute |
| gpu-nvidia | 2x Intel Xeon Gold 6438Y+ Sapphire Rapids (32C/64T) | 1 TB | 880 GB NVMe SSD | 2x NVIDIA L40S (48 GB)<br>`--gres gpu:nvidia:1` | develop |
| gpu-amd | 2x Intel Xeon Gold 6438Y+ Sapphire Rapids (32C/64T) | 256 GB | 880 GB NVMe SSD | 2x AMD MI210 (64 GB)<br>`--gres gpu:amd:1` | develop |
| gpu-intel-pvc | 2x Intel Xeon Platinum 8480+ Sapphire Rapids (56C/112T) | 512 GB | 7 TB NVMe SSD | 2x Intel Max 1100 (48 GB)<br>`--gres gpu:intel:1` | develop |
| gpu-intel | 1x AMD Ryzen 9 7900X3D (12C/24T) | 128 GB | 916 GB NVMe SSD | 2x Intel A770 (16 GB)<br>`--gres gpu:intel:1` | develop |
| rocinante | 2x AMD EPYC 7713 Milan (64C/128T) | 512 GB | 880 GB NVMe SSD | 1x NVIDIA A100 (40 GB), 2x NVIDIA A2 (16 GB), 1x NVIDIA P100 (16 GB), 3x AMD MI50 (16 GB), 1x Intel A770 (16 GB)<br>`--gres gpu:nvidia:1`, `gpu:amd:1`, or `gpu:intel:1` | develop |
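The `--gres` flags above are passed to `sbatch` or `srun` together with the partition. As an illustrative sketch only (job name, time limit, and script contents are placeholders, not taken from this page), a batch script requesting one NVIDIA GPU on the compute partition might look like:

```shell
#!/bin/bash
#SBATCH --partition=compute       # partition from the table above
#SBATCH --gres=gpu:nvidia:1       # one NVIDIA GPU, per the table's allocation flag
#SBATCH --time=00:30:00           # placeholder time limit
#SBATCH --job-name=gpu-test       # placeholder job name

# List the GPUs Slurm has made visible to this job
nvidia-smi -L
```

An interactive session on a develop node follows the same pattern, e.g. `srun --partition=develop --gres=gpu:amd:1 --pty bash`. Slurm accepts both the `--gres gpu:nvidia:1` form shown in the table and the `--gres=gpu:nvidia:1` form.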

Note

Access to the develop partition is granted on request. Open an "Access develop partition" ticket to be added.