HPC/Hardware Details

[Image: HPC-Main-external.jpg]

User nodes

1U Twin node chassis (Supermicro).
[Image: HPC Compute Rack-up.png]
  • Carbon has several major hardware node types, named gen1 through gen8.
  • Node characteristics
{| class="wikitable"
! Node names, types !! Node generation !! Node extra properties !! Node count !! Cores per node (max. ppn) !! Cores total, by type !! Account charge rate !! CPU model !! CPUs per node !! CPU nominal clock (GHz) !! Mem. per node (GB) !! Mem. per core (GB) !! GPU model !! GPUs per node !! VRAM per GPU (GB) !! Disk per node (GB) !! Year added !! Note
|-
! colspan="18" | Login
|-
| login5…6 || gen7a || gpus=2 || 2 || 16 || 32 || 3.0 || Xeon Silver 4125 || 2 || 2.50 || 192 || 12 || Tesla V100 || 2 || 32 || 250 || 2019 ||
|-
! colspan="18" | Compute
|-
| n421…460 || gen5 || || 40 || 16 || 640 || 2.0 || Xeon E5-2650 v4 || 2 || 2.10 || 128 || 8 || || || || 250 || 2017 ||
|-
| n461…476 || gen6 || || 16 || 16 || 256 || 2.0 || Xeon Silver 4110 || 2 || 2.10 || 96 || 6 || || || || 1000 || 2018 ||
|-
| n477…512 || gen6 || || 36 || 16 || 576 || 2.0 || Xeon Silver 4110 || 2 || 2.10 || 192 || 12 || || || || 1000 || 2018 ||
|-
| n513…534 || gen7 || gpus=2 || 22 || 32 || 704 || 3.0 || Xeon Gold 6226R || 2 || 2.90 || 192 || 6 || Tesla V100S || 2 || 32 || 250 || 2020 ||
|-
| n541…580 || gen8 || || 20 || 64 || 2560 || 2.1 || Xeon Gold 6430 || 2 || 2.10 || 1024 || 16 || || || || 420 || 2024 ||
|-
| Total || || || 134 || || 4736 || || || || || || || || 48 || || || ||
|}
  • All nodes are dual-socket (two CPUs per node); cores per node vary by generation, as listed in the table above.
  • Compute time on gen1 nodes is charged at 40% of actual walltime. Depending on the number of cores used and the memory throughput demanded, these nodes may be roughly on par with gen2 nodes (at low memory throughput) or up to about 2–3 times slower. See below for an illustrative sketch of how the charge-rate column is applied.
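The "max. ppn" and extra-properties columns map naturally onto Torque/Moab-style resource requests, and the account charge rate scales the core-hours a job is billed. The Python sketch below puts the table's numbers together in that spirit; the qsub syntax, the gen6 property name, and the walltime × cores × charge-rate billing formula are assumptions to verify against Carbon's job-submission documentation, not a statement of how the accounting system is actually implemented.

<syntaxhighlight lang="python">
# Sketch: assemble a Torque/Moab-style node request from the table above and
# estimate the billed core-hours.  The scheduler syntax and the billing
# formula are assumptions; check Carbon's submission docs before relying on them.

def node_request(count, ppn, properties=()):
    """Build a resource string such as 'nodes=2:ppn=16:gen6'."""
    parts = [str(count), f"ppn={ppn}", *properties]
    return "nodes=" + ":".join(parts)

def billed_core_hours(walltime_h, node_count, ppn, charge_rate):
    """Assumed convention: walltime x cores used x per-generation charge rate."""
    return walltime_h * node_count * ppn * charge_rate

# Example: a 10-hour job on two 16-core gen6 nodes (charge rate 2.0 in the table).
req = node_request(2, 16, ["gen6"])
print(f"qsub -l {req},walltime=10:00:00 job.sh")        # hypothetical submission line
print(billed_core_hours(10, 2, 16, 2.0), "core-hours")  # -> 640.0
</syntaxhighlight>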

Storage

  • Lustre parallel file system for /home and /sandbox
  • 42 TB effective (84 TB raw RAID-10)
  • Local disk per compute node, 160–250 GB; a minimal staging sketch follows below.
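Because the Lustre file system is shared, heavy temporary I/O is usually better placed on the node-local disk, with only final results written back to /home. The snippet below is a minimal sketch of that staging pattern; the TMPDIR variable and the ~/results path are assumptions, not documented Carbon conventions.

<syntaxhighlight lang="python">
# Sketch: stage scratch I/O on the node-local disk, keep final results on
# Lustre (/home).  TMPDIR and the results path are assumed, not documented here.
import os
import shutil
import tempfile

lustre_results = os.path.expanduser("~/results")   # /home resides on Lustre
local_scratch = os.environ.get("TMPDIR", "/tmp")   # node-local disk (assumed)

os.makedirs(lustre_results, exist_ok=True)
with tempfile.TemporaryDirectory(dir=local_scratch) as workdir:
    scratch_file = os.path.join(workdir, "output.dat")
    with open(scratch_file, "w") as f:             # heavy intermediate I/O stays local
        f.write("intermediate data\n")
    shutil.copy(scratch_file, lustre_results)      # only the final result touches Lustre
</syntaxhighlight>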
[Image: HPC Infiniband-blue.png]

Interconnect

  • InfiniBand 4x DDR (Mellanox, onboard) – MPI and Lustre traffic; see the MPI sketch below.
  • Ethernet, 1 Gbit/s – node access and management
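MPI traffic between nodes travels over the InfiniBand fabric, while the Ethernet network handles logins and management. The minimal mpi4py sketch below only illustrates the kind of inter-node communication the InfiniBand fabric carries; it assumes the mpi4py package and an MPI launcher (e.g. mpirun) are available, which is an assumption about Carbon's installed software stack.

<syntaxhighlight lang="python">
# Minimal MPI example; inter-node messages go over the InfiniBand fabric
# described above.  Assumes mpi4py is installed (not confirmed by this page).
# Run with something like:  mpirun -np 32 python this_script.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank contributes its rank number; allreduce sums them across all ranks.
total = comm.allreduce(rank, op=MPI.SUM)
if rank == 0:
    print(f"{size} ranks participated; sum of ranks = {total}")
</syntaxhighlight>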

Power

  • Power consumption at typical load: ≈125 kW