HPC/Hardware Details
User nodes
- Carbon's nodes are grouped into hardware generations, named `gen1`, `gen2`, and so on.
- Node characteristics:
| Node names, types | Node generation | Node extra properties | Node count | Cores per node (max. `ppn`) | Cores total, by type | Account charge rate | CPU model | CPUs per node | CPU nominal clock (GHz) | Mem. per node (GB) | Mem. per core (GB) | GPU model | GPUs per node | VRAM per GPU (GB) | Disk per node (GB) | Year added | Note |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Login** | | | | | | | | | | | | | | | | | |
| login5…6 | gen7a | gpus=2 | 2 | 16 | 32 | 1.0 | Xeon Silver 4215 | 2 | 2.50 | 192 | 12 | Tesla V100 | 2 | 32 | 250 | 2019 | |
| **Compute** | | | | | | | | | | | | | | | | | |
| n421…460 | gen5 | | 40 | 16 | 640 | 1.0 | Xeon E5-2650 v4 | 2 | 2.10 | 128 | 8 | | | | 250 | 2017 | |
| n461…476 | gen6 | | 16 | 16 | 256 | 1.0 | Xeon Silver 4110 | 2 | 2.10 | 96 | 6 | | | | 1000 | 2018 | |
| n477…512 | gen6 | | 36 | 16 | 576 | 1.0 | Xeon Silver 4110 | 2 | 2.10 | 192 | 12 | | | | 1000 | 2018 | |
| n513…534 | gen7 | gpus=2 | 22 | 32 | 704 | 1.5 | Xeon Gold 6226R | 2 | 2.90 | 192 | 6 | Tesla V100S | 2 | 32 | 250 | 2020 | |
| n541…580 | gen8 | | 20 | 64 | 2560 | 1.0 | Xeon Gold 6430 | 2 | 2.10 | 1024 | 16 | | | | 420 | 2024 | |
| **Total** | | | 134 | | 4736 | | | | | | | | 48 | | | | |
- All nodes are dual-socket; cores per CPU and per node vary by generation (see the table above).
- Compute time on gen1 nodes is charged at a 50% discount, i.e., at half the walltime. Depending on the number of cores used and the memory throughput demanded, these nodes may in practice be roughly on par with gen2 (at low memory throughput) or up to about 2–3 times slower. For how `ppn` and node properties are requested, see the job-script sketch below.
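As a concrete illustration, here is a minimal job-script sketch tying the table's columns to a resource request. It assumes a PBS/Torque-style scheduler (suggested by the `ppn` limit and the `gpus=2` node property above) and assumes the generation names are usable as node properties; the application name `my_app` is a placeholder.

```bash
#!/bin/bash
# Minimal sketch, assuming a PBS/Torque-style scheduler and that the
# generation names from the table (gen5, gen6, ...) are node properties.
#PBS -l nodes=2:ppn=16:gen6     # 2 whole gen6 nodes, 16 cores each
#PBS -l walltime=04:00:00
# Accounting sketch: at charge rate 1.0 this job is charged
# 4 h x 32 cores = 128 core-hours; nodes with charge rate 1.5
# (gen7) would be charged 1.5x as many core-hours.
cd "$PBS_O_WORKDIR"
mpirun -np 32 ./my_app          # my_app is a placeholder application
```

For the GPU-equipped gen7 nodes, a request along the lines of `nodes=1:ppn=32:gpus=2` would match the `gpus=2` property in the table, again assuming Torque's GPU request syntax.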
Infrastructure nodes
- 2 Management nodes
- 2 Lustre MDS
- 4 Lustre OSS
- dual socket, quad core (Intel Xeon E5345, 2.33 GHz)
- pairwise failover
Storage
- Lustre parallel file system (see the command sketch after this list)
  - 42 TB effective (84 TB raw, RAID-10)
  - 2 NexSAN SATAbeast storage arrays
- 160–250 GB local disk per compute node
- NFS
  - for user applications and cluster management
  - highly available server based on DRBD
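The sketch below shows standard Lustre client commands for inspecting the file system described above. The mount point `/lustre` and the presence of the `lfs` utility on the nodes are assumptions, not facts from this page.

```bash
# Sketch, assuming the Lustre file system is mounted at /lustre
# and the standard client utility lfs is installed.
lfs df -h /lustre                         # free space per OST (served by the 4 OSS)
lfs getstripe /lustre/$USER/myfile        # show a file's stripe count and layout
lfs setstripe -c 4 /lustre/$USER/bigdir   # stripe new files in bigdir across 4 OSTs
```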
Interconnect
- InfiniBand 4x DDR (Mellanox, onboard) – MPI and Lustre (see the link check below)
- Ethernet 1 Gb/s – node access and management
- Ethernet 10 Gb/s – crosslinks and uplink
- Fibre Channel – storage backends
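A quick way to confirm the InfiniBand link state and speed from a node is the standard `ibstat` utility; whether the OFED diagnostics are installed on this cluster is an assumption. A 4x DDR link signals at 20 Gb/s.

```bash
# Sketch, assuming the standard OFED diagnostics are installed.
ibstat | grep -E 'State|Rate'   # 4x DDR should show "State: Active" and "Rate: 20"
```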
Power
- 2 UPS units – carry infrastructure nodes and network switches
- PDUs – switched and metered
- Power consumption at typical load: 118 kW