HPC/Hardware Details
User nodes
- Carbon has several major hardware node types, named gen1 through gen8.
- Node characteristics
Node names, types | Node generation | Node extra properties | Node count | Cores per node (max. ppn) | Cores total, by type | Account charge rate | CPU model | CPUs per node | CPU nominal clock (GHz) | Mem. per node (GB) | Mem. per core (GB) | GPU model | GPUs per node | VRAM per GPU (GB) | Disk per node (GB) | Year added | Note
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
**Login** | | | | | | | | | | | | | | | | |
login5…6 | gen7a | gpus=2 | 2 | 16 | 32 | 1.0 | Xeon Silver 4215 | 2 | 2.50 | 192 | 12 | Tesla V100 | 2 | 32 | 250 | 2019 |
**Compute** | | | | | | | | | | | | | | | | |
n421…460 | gen5 | | 40 | 16 | 640 | 1.0 | Xeon E5-2650 v4 | 2 | 2.10 | 128 | 8 | | | | 250 | 2017 |
n461…476 | gen6 | | 16 | 16 | 256 | 1.0 | Xeon Silver 4110 | 2 | 2.10 | 96 | 6 | | | | 1000 | 2018 |
n477…512 | gen6 | | 36 | 16 | 576 | 1.0 | Xeon Silver 4110 | 2 | 2.10 | 192 | 12 | | | | 1000 | 2018 |
n513…534 | gen7 | gpus=2 | 22 | 32 | 704 | 1.5 | Xeon Gold 6226R | 2 | 2.90 | 192 | 6 | Tesla V100S | 2 | 32 | 250 | 2020 |
n541…580 | gen8 | | 20 | 64 | 2560 | 1.0 | Xeon Gold 6430 | 2 | 2.10 | 1024 | 16 | | | | 420 | 2024 |
**Total** | | | 134 | | 4736 | | | | | | | | 48 | | | |
- Compute time on gen1 nodes is charged at 40% of actual walltime (charge rate 0.4). Depending on the cores used and the memory throughput demanded, these nodes can be roughly on par with gen2 (at low memory throughput) or up to 2–3 times slower.
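The account charge rate in the table scales consumed walltime into the core-hours debited to a project. A minimal sketch of that arithmetic, assuming charge is simply cores × hours × rate; the helper function and rate dictionary below are illustrative, not part of Carbon's tooling:

```python
# Per-generation charge rates from the table above (dimensionless multipliers).
CHARGE_RATE = {"gen1": 0.4, "gen5": 1.0, "gen6": 1.0,
               "gen7": 1.5, "gen7a": 1.0, "gen8": 1.0}

def core_hours_charged(gen: str, nodes: int, ppn: int, walltime_hours: float) -> float:
    """Core-hours debited to an account: cores used x walltime x generation rate."""
    return nodes * ppn * walltime_hours * CHARGE_RATE[gen]

# Example: a 12-hour job on 4 gen7 nodes at the full 32 cores per node
# is charged 4 * 32 * 12 * 1.5 = 2304 core-hours.
print(core_hours_charged("gen7", nodes=4, ppn=32, walltime_hours=12))  # 2304.0
```

On gen1, per the note above, the same job shape would instead be discounted to 40% of the raw core-hours.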
Storage
- Lustre (http://wiki.whamcloud.com/) parallel file system for /home and /sandbox
- ≈60 TB total
- local disk per compute node, 160–250 GB; a typical scratch-vs-shared usage pattern is sketched after this list
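The two tiers suggest the usual split: node-local disk for temporary I/O during a job, Lustre for shared input and final results. A hedged sketch of that pattern, with the paths taken from the list above; using /tmp as the node-local scratch location is an assumption:

```python
import os
import shutil
import tempfile

# Node-local disk (160-250 GB per the list above) is private to one node and fast
# for small, frequent I/O; Lustre (/home, /sandbox) is shared across nodes but
# slower for many small writes.  NOTE: /tmp as the local-disk mount is an assumption.
scratch = tempfile.mkdtemp(prefix="job_", dir="/tmp")
results_dir = os.path.expanduser("~/results")  # resides on Lustre /home

try:
    # Write working files to local scratch during the run.
    with open(os.path.join(scratch, "out.dat"), "w") as f:
        f.write("result\n")
    # Copy only the final output back to the shared file system.
    os.makedirs(results_dir, exist_ok=True)
    shutil.copy(os.path.join(scratch, "out.dat"), results_dir)
finally:
    shutil.rmtree(scratch)  # local scratch is not reachable after the job ends
```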
Interconnect
- InfiniBand 4x DDR (Mellanox, onboard) – MPI and Lustre traffic; effective bandwidth is worked out below
- Ethernet, 1 Gbit/s – node access and management
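For a sense of scale: DDR InfiniBand signals at 5 Gbit/s per lane, so a 4x link carries 20 Gbit/s raw; 8b/10b encoding leaves 16 Gbit/s, about 2 GB/s of data per direction, versus 0.125 GB/s for the 1 Gbit/s Ethernet. A quick check of that arithmetic:

```python
# 4x DDR InfiniBand: 4 lanes x 5 Gbit/s signalling, 8b/10b encoding (80% efficient).
lanes, gbit_per_lane, encoding = 4, 5.0, 0.8
ib_data_gbit = lanes * gbit_per_lane * encoding  # 16 Gbit/s per direction
print(ib_data_gbit / 8)                          # 2.0 GB/s over InfiniBand
print(1.0 / 8)                                   # 0.125 GB/s over 1 Gbit Ethernet
```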
Power
- Power consumption at typical load: ≈125 kW
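For reference, sustaining that draw year-round (an assumption; actual load varies) corresponds to roughly 1.1 GWh of energy per year:

```python
# Back-of-envelope annual energy at ~125 kW sustained (assumes continuous load).
kw, hours_per_year = 125, 24 * 365
print(kw * hours_per_year / 1e6)  # ~1.1 GWh per year
```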