Zephyr and Phoenix Cluster Configurations
The TRACC Zephyr cluster consists of:
- 88 compute nodes in a queue, each with:
  - 2 16-core AMD Opteron 6273 (Interlagos) CPUs, 2.3 GHz
  - 1 TB of local scratch space
  - 32 GB of RAM
  - 1 InfiniBand QDR interconnect
- Two additional queues: one with two nodes that have 64 GB of RAM each, and one with two nodes that have 128 GB of RAM each (an example job script appears below)
- 2 login nodes, each with:
  - 2 16-core AMD Opteron 6273 (Interlagos) CPUs, 2.3 GHz
  - 2 TB of RAID1 local scratch space
  - 32 GB of RAM
  - 1 InfiniBand QDR interconnect
- A high-performance Lustre-based file system, consisting of:
  - 1 I/O node
  - 1 dual-redundant storage controller
  - 48 3 TB hard drives in a RAID6 configuration
  - 120 TB of formatted user capacity
- 2 administrative nodes
- 1 applications node for user development
- 1 statistics-gathering node
- 5 SuperMicro Gigabit Ethernet switches, connected to the Argonne network with two 10 Gigabit Ethernet fiber links
- A QLogic InfiniBand switch
All nodes are connected to one another with both Gigabit Ethernet and InfiniBand.
All nodes run CentOS Linux 6.x.
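For reference, below is a minimal sketch of a batch job targeting one of the large-memory queues. It assumes a PBS/Torque-style scheduler, and the queue name `bigmem64` and the executable `my_application` are hypothetical placeholders not documented on this page; check `qstat -q` on a login node for the actual queue names.

```bash
#!/bin/bash
# Sketch of a batch job for the two-node 64 GB queue on Zephyr.
# The scheduler (PBS/Torque) and the queue name "bigmem64" are assumptions,
# not documented on this page; substitute the real queue name before use.
#PBS -N bigmem-example
#PBS -q bigmem64              # hypothetical name of the two-node 64 GB queue
#PBS -l nodes=2:ppn=32        # 2 nodes x 32 cores (2 sixteen-core Opteron 6273 CPUs)
#PBS -l walltime=02:00:00
#PBS -j oe

cd "$PBS_O_WORKDIR"
mpirun -np 64 ./my_application   # 64 MPI ranks = 2 nodes x 32 cores each
```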
The TRACC Phoenix cluster consists of:
- 128 compute nodes, each with:
  - 2 quad-core AMD Opteron 2378 CPUs
  - 200 GB of local scratch space
  - 8 GB of RAM
  - 1 InfiniBand DDR interconnect
- 3 login nodes, each with:
  - 2 dual-core AMD Opteron 2240 CPUs
  - 180 GB of local scratch space
  - 16 GB of RAM
  - 1 InfiniBand DDR interconnect
- A high-performance GPFS file system, consisting of:
  - 4 I/O nodes
  - 1 DDN storage controller
  - 480 hard drives (500 GB each)
  - 180 TB of formatted capacity
- A Foundry SX 800 Gigabit Ethernet switch, connected to the Argonne network with a 10 Gigabit Ethernet fiber link
- A SilverStorm 9120 InfiniBand switch
All nodes are connected to one another with both Gigabit Ethernet and InfiniBand.
All nodes run Red Hat Enterprise Linux 4.5.
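As a usage illustration, the sketch below launches an MPI job across Phoenix compute nodes over the InfiniBand DDR fabric. It assumes Open MPI is the installed MPI stack and that a hostfile listing the allocated nodes already exists; the hostfile path and the executable `my_application` are hypothetical, and neither the MPI implementation nor these paths are specified on this page.

```bash
#!/bin/bash
# Sketch of an MPI launch on Phoenix, assuming Open MPI (not stated on this
# page). The openib/self/sm BTL list steers traffic over the InfiniBand DDR
# fabric rather than Gigabit Ethernet. 64 ranks = 8 nodes x 8 cores
# (2 quad-core Opteron CPUs per node). The hostfile path is hypothetical.
mpirun --mca btl openib,self,sm \
       -np 64 \
       -hostfile "$HOME/phoenix_hosts" \
       ./my_application
```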