HPC/Hardware Details
User nodes
- Carbon has several major hardware node types, named by hardware generation (gen1, gen2, …); the generations currently in service are listed below.
- Node characteristics:
| Node names, types | Node generation | Node extra properties | Node count | Cores per node (max. ppn) | Cores total, by type | Account charge rate | CPU model | CPUs per node | CPU nominal clock (GHz) | Mem. per node (GB) | Mem. per core (GB) | GPU model | GPUs per node | VRAM per GPU (GB) | Disk per node (GB) | Year added | Note |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Login | | | | | | | | | | | | | | | | | |
| login5…6 | gen7a | gpus=2 | 2 | 16 | 32 | 1.0 | Xeon Silver 4125 | 2 | 2.50 | 192 | 12 | Tesla V100 | 2 | 32 | 250 | 2019 | |
| Compute | | | | | | | | | | | | | | | | | |
| n421…460 | gen5 | | 40 | 16 | 640 | 1.0 | Xeon E5-2650 v4 | 2 | 2.10 | 128 | 8 | | | | 250 | 2017 | |
| n461…476 | gen6 | | 16 | 16 | 256 | 1.0 | Xeon Silver 4110 | 2 | 2.10 | 96 | 6 | | | | 1000 | 2018 | |
| n477…512 | gen6 | | 36 | 16 | 576 | 1.0 | Xeon Silver 4110 | 2 | 2.10 | 192 | 12 | | | | 1000 | 2018 | |
| n513…534 | gen7 | gpus=2 | 22 | 32 | 704 | 1.5 | Xeon Gold 6226R | 2 | 2.90 | 192 | 6 | Tesla V100S | 2 | 32 | 250 | 2020 | |
| n541…580 | gen8 | | 20 | 64 | 2560 | 1.0 | Xeon Gold 6430 | 2 | 2.10 | 1024 | 16 | | | | 420 | 2024 | |
| Total | | | 134 | | 4736 | | | | | | | | 48 | | | | |
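Two of the columns above are derived from the others: "Cores total, by type" is the node count times the cores per node, and "Mem. per core" is the memory per node divided by the cores per node. The following minimal sketch (not a site-provided tool; the NodeType class is purely illustrative) encodes two rows from the table to make those relationships explicit.

```python
# Minimal sketch (not a site-provided tool): encode two rows of the table
# above and recompute the derived columns "Cores total" and "Mem. per core".
from dataclasses import dataclass

@dataclass
class NodeType:
    names: str            # node name range, e.g. "n421…460"
    generation: str       # hardware generation label
    count: int            # number of nodes of this type
    cores_per_node: int   # max. ppn
    mem_per_node_gb: int  # installed memory per node (GB)

    @property
    def cores_total(self) -> int:
        # "Cores total, by type" = node count × cores per node
        return self.count * self.cores_per_node

    @property
    def mem_per_core_gb(self) -> float:
        # "Mem. per core" = memory per node ÷ cores per node
        return self.mem_per_node_gb / self.cores_per_node

gen5 = NodeType("n421…460", "gen5", 40, 16, 128)
gen7 = NodeType("n513…534", "gen7", 22, 32, 192)

print(gen5.cores_total, gen5.mem_per_core_gb)  # 640 8.0  (matches the table)
print(gen7.cores_total, gen7.mem_per_core_gb)  # 704 6.0  (matches the table)
```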
- All nodes are dual-socket (2 CPUs per node); cores per CPU and per node vary by generation, as shown in the table above.
- Compute time on gen1 nodes is charged at a 50% discount of walltime. Depending on the cores used and the memory throughput demanded, these nodes may be roughly on par with gen2 (at low memory throughput) or up to about 2–3 times slower.
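To illustrate how the "Account charge rate" column interacts with walltime, here is a minimal sketch. The accounting formula (walltime × cores × generation rate) and the helper name charged_core_hours are assumptions for illustration only, not the site's actual billing code.

```python
# Minimal sketch, not the site's accounting code: estimate the core-hours
# charged for a job, assuming charge = walltime (h) × cores used × the
# per-generation "Account charge rate" from the table above.
CHARGE_RATE = {
    "gen1": 0.5,   # 50% discount of walltime (see the note above)
    "gen5": 1.0,
    "gen6": 1.0,
    "gen7": 1.5,
    "gen8": 1.0,
}

def charged_core_hours(generation: str, cores: int, walltime_hours: float) -> float:
    """Hypothetical helper: core-hours billed against the account."""
    return walltime_hours * cores * CHARGE_RATE[generation]

# A 10-hour, 32-core job on a gen7 node is billed as 480 core-hours;
# the same job on a gen8 node would be billed as 320 core-hours.
print(charged_core_hours("gen7", 32, 10.0))  # 480.0
print(charged_core_hours("gen8", 32, 10.0))  # 320.0
```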
Storage
- Lustre parallel file system
- 42 TB effective (84 TB raw RAID-10; see the capacity sketch after this list)
- 2 NexSAN SATAbeast
- 160–250 GB local disk per compute node
- NFS
- for user applications and cluster management
- highly-available server based on DRBD
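On the RAID-10 figure above: mirroring keeps two copies of every block, so usable capacity is roughly half of raw capacity, which is where 42 TB effective versus 84 TB raw comes from. A tiny sketch of that arithmetic, illustrative only:

```python
# Illustrative arithmetic only: RAID-10 mirrors every block, so usable
# capacity is about half of raw (ignoring file-system overhead).
raw_tb = 84
effective_tb = raw_tb / 2
print(f"{raw_tb} TB raw -> {effective_tb:.0f} TB effective")  # 84 TB raw -> 42 TB effective
```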
Interconnect
- InfiniBand 4x DDR (Mellanox, onboard) – MPI and Lustre traffic; see the bandwidth sketch after this list
- Ethernet 1 Gb/s – node access and management
- Ethernet 10 Gb/s – crosslinks and uplink
- FibreChannel – storage backends
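For context on the "4x DDR" designation above: a DDR InfiniBand lane signals at 5 Gb/s, a 4x link aggregates four lanes, and the 8b/10b line encoding leaves 8/10 of that for data. The sketch below works through those standard InfiniBand figures; it is not a measurement of Carbon's fabric.

```python
# Back-of-the-envelope data rate for a 4x DDR InfiniBand link (standard
# InfiniBand figures, not a measurement of this cluster's fabric).
lanes = 4                     # "4x" link width
signal_gbps_per_lane = 5.0    # DDR signalling rate per lane
encoding_efficiency = 8 / 10  # 8b/10b line encoding (SDR/DDR/QDR)

raw_gbps = lanes * signal_gbps_per_lane     # 20 Gb/s on the wire
data_gbps = raw_gbps * encoding_efficiency  # 16 Gb/s of payload
print(f"{raw_gbps:.0f} Gb/s signalling, {data_gbps:.0f} Gb/s usable (~{data_gbps / 8:.0f} GB/s)")
```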
Power
- UPS units (2) – carry infrastructure nodes and network switches
- PDUs – switched and metered
- Power consumption at typical load: 118 kW