HPC/Hardware Details



[Image: HPC-Main-external.jpg – the Carbon cluster]



User nodes

[Image: HPC Compute Rack-up.png – 1U Twin node chassis (Supermicro)]
  • 4 login nodes
  • 144 compute nodes, housed in 72 Supermicro "1U Twin" chassis
    • dual-socket, quad-core (Intel Xeon 5355 CPUs, 2.66 GHz)
    • 288 processors, 1152 cores total
  • 2 GB RAM per core, 2.3 TB RAM total (totals worked out in the sketch after this list)
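
These totals follow directly from the per-node figures. A minimal Python sketch of the arithmetic, with all counts taken from the list above:

    # Carbon compute-node totals, reproduced from the per-node figures above.
    COMPUTE_NODES = 144       # 72 Supermicro "1U Twin" chassis, 2 nodes each
    SOCKETS_PER_NODE = 2      # dual-socket boards
    CORES_PER_SOCKET = 4      # quad-core Xeon 5355
    RAM_PER_CORE_GB = 2

    processors = COMPUTE_NODES * SOCKETS_PER_NODE   # 288 CPUs
    cores = processors * CORES_PER_SOCKET           # 1152 cores
    ram_gb = cores * RAM_PER_CORE_GB                # 2304 GB, i.e. ~2.3 TB

    print(processors, cores, ram_gb)                # prints: 288 1152 2304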

Infrastructure nodes

  • 2 management nodes
  • 2 Lustre OSS (object storage servers)
  • 2 Lustre MDS (metadata servers)
  • dual-socket, quad-core (Intel Xeon E5345, 2.33 GHz)
  • pairwise failover

Storage

  • NexSAN SATAbeast
  • 30 TB raw, 22 TB effective after RAID-5
  • Lustre parallel file system
  • 160 GB local disk per compute node
  • 2 × 250 GB local disks in front-end nodes, configured as RAID 1
  • NFS
    • highly-available server based on DRBD (see the status-check sketch after this list)
    • used for cluster management
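
A quick way to sanity-check the DRBD pair behind the NFS server is to read the kernel status file. This is a minimal sketch, assuming a DRBD 8.x-style /proc/drbd; the exact field layout varies by version, so it only looks for the expected keywords:

    # Minimal health check for the DRBD-backed HA pair: report whether the
    # resource is connected and both replicas are up to date.
    def drbd_healthy(status_path="/proc/drbd"):
        try:
            with open(status_path) as f:
                status = f.read()
        except OSError:
            return False                               # DRBD module not loaded
        return ("Connected" in status                  # peers see each other
                and "Diskless" not in status           # no detached backing disk
                and status.count("UpToDate") >= 2)     # local and peer disks current

    if __name__ == "__main__":
        print("DRBD pair healthy:", drbd_healthy())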
[Image: HPC Infiniband-blue.png]

Interconnect

  • InfiniBand 4x DDR (Mellanox, onboard) – MPI and Lustre traffic; nominal bandwidth worked out below
  • Ethernet 1 Gb/s – node access and management
  • Ethernet 10 Gb/s – crosslinks and uplink
  • Fibre Channel – storage backends
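
For reference, the nominal bandwidth of a 4x DDR InfiniBand link follows from the standard link parameters (these are generic InfiniBand figures, not a Carbon-specific measurement):

    # Nominal 4x DDR InfiniBand bandwidth from standard link parameters.
    LANES = 4                    # "4x" link width
    SIGNAL_GBPS_PER_LANE = 5     # DDR signalling rate per lane
    ENCODING_EFFICIENCY = 0.8    # 8b/10b line encoding

    signal_gbps = LANES * SIGNAL_GBPS_PER_LANE      # 20 Gb/s on the wire
    payload_gbps = signal_gbps * ENCODING_EFFICIENCY   # 16 Gb/s of payload
    payload_GB_s = payload_gbps / 8                 # 2 GB/s per direction

    print(signal_gbps, payload_gbps, payload_GB_s)  # prints: 20 16.0 2.0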

Power

  • 2 UPS units – carry the infrastructure nodes and network switches
  • PDUs – switched and metered
  • Power consumption (rough annual estimate sketched below)
    • peak: 89 kW
    • full load (average): 55 kW
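
As a rough illustration of what the quoted draw figures imply, assuming the full-load average were sustained year-round (an assumption for the estimate, not a measured figure):

    # Rough annual energy estimate from the quoted power-draw figures.
    AVERAGE_KW = 55              # quoted full-load average
    PEAK_KW = 89                 # quoted peak
    HOURS_PER_YEAR = 24 * 365

    annual_mwh = AVERAGE_KW * HOURS_PER_YEAR / 1000   # ~482 MWh per year
    peak_headroom_kw = PEAK_KW - AVERAGE_KW           # 34 kW above the average

    print(round(annual_mwh), peak_headroom_kw)        # prints: 482 34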