HPC/Hardware Details





[[File:HPC-Main-external.jpg|thumb]]



== User nodes ==

1U Twin node chassis (Supermicro).
[[File:HPC Compute Rack-up.png|thumb]]
{| class="wikitable"
! Node type !! Count !! Processor !! Cores !! Clock (GHz) !! Memory (GB) !! Memory/core (GB)
|-
| login1 || 1 || Xeon X5355 || 8 || 2.67 || 16 || 2
|-
| login5/login6 || 2 || Xeon E5540 || 8 || 2.53 || 24 || 3
|-
| gen1 || 144 || Xeon X5355 || 8 || 2.67 || 16 || 2
|-
| gen2 bigmem || 38 || Xeon E5540 || 8 || 2.53 || 48 || 6
|-
| gen2 || 166 || Xeon E5540 || 8 || 2.53 || 24 || 3
|}
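The memory-per-core column follows directly from the per-node memory and core counts. Below is a minimal Python sketch that rederives it and totals the cores per node type; the dictionary is illustrative only, not part of any Carbon tooling.

<syntaxhighlight lang="python">
# Re-derives the "Memory/core" column of the table above from the listed
# per-node memory and core counts. Illustrative data structure only.
NODE_TYPES = {
    # name: (node count, processor, cores/node, clock GHz, memory GB/node)
    "login1":      (1,   "Xeon X5355", 8, 2.67, 16),
    "login5/6":    (2,   "Xeon E5540", 8, 2.53, 24),
    "gen1":        (144, "Xeon X5355", 8, 2.67, 16),
    "gen2 bigmem": (38,  "Xeon E5540", 8, 2.53, 48),
    "gen2":        (166, "Xeon E5540", 8, 2.53, 24),
}

for name, (count, cpu, cores, clock, mem_gb) in NODE_TYPES.items():
    # memory per core = node memory / cores per node
    print(f"{name:12s} {mem_gb / cores:3.0f} GB/core, "
          f"{count * cores:5d} cores across {count} nodes")
</syntaxhighlight>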

== Infrastructure nodes ==

* 2 management nodes
* 2 Lustre MDS (metadata servers)
* 4 Lustre OSS (object storage servers)
* all dual-socket, quad-core (Intel Xeon E5345, 2.33 GHz)
* paired for failover

== Storage ==

* 2 [http://www.nexsan.com/ NexSAN SATAbeast]
* 84 TB raw RAID-10; 42 TB effective (see the capacity sketch below)
* [http://www.lustre.org Lustre] parallel file system
* 160–250 GB local disk per compute node
* 2 × 250 GB local disk in frontend nodes, as RAID 1
* NFS
** highly-available server based on [http://www.drbd.org/ DRBD]
** used for cluster management
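The effective capacity is a direct consequence of the RAID-10 layout, which stores every block twice. A one-line sketch of the arithmetic, assuming no further overhead is counted:

<syntaxhighlight lang="python">
# Assumes plain RAID-10 (striped mirrors): usable capacity is half the raw
# capacity. The 84 TB figure is from the list above.
raw_tb = 84
effective_tb = raw_tb / 2      # every block is mirrored once
print(f"{effective_tb:.0f} TB effective")   # -> 42 TB, matching the figure above
</syntaxhighlight>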
[[File:HPC Infiniband-blue.png|thumb]]

== Interconnect ==

* InfiniBand 4x DDR (Mellanox, onboard) – MPI and Lustre traffic (per-link rate worked out below)
* Ethernet 1 Gb/s – node access and management
* Ethernet 10 Gb/s – crosslinks and uplink
* FibreChannel – storage backends
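For scale, the per-link data rate of 4x DDR InfiniBand can be estimated from the standard per-lane signalling rate and 8b/10b line encoding; the numbers below are generic InfiniBand DDR values, not Carbon-specific measurements.

<syntaxhighlight lang="python">
# Standard 4x DDR InfiniBand figures: 4 lanes, 5 Gbit/s signalling per lane,
# 8b/10b line encoding. Not a Carbon-specific measurement.
lanes = 4
signal_gbit_per_lane = 5.0
encoding_efficiency = 8 / 10
data_gbit = lanes * signal_gbit_per_lane * encoding_efficiency
print(f"{data_gbit:.0f} Gbit/s data rate per link "
      f"(~{data_gbit / 8:.0f} GB/s), shared by MPI and Lustre traffic")
</syntaxhighlight>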

== Power ==

* 2 UPS units – carry the infrastructure nodes and network switches
* PDUs – switched and metered
* Power consumption (rough energy figures below)
** peak: 89 kW
** full load (average): 55 kW
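As a rough point of reference, the average full-load figure translates into energy use as follows; this is simple arithmetic on the 55 kW value and assumes round-the-clock operation.

<syntaxhighlight lang="python">
# Rough energy estimate from the average full-load draw quoted above.
# Assumes continuous operation at the 55 kW average; peak draw is ignored.
avg_kw = 55
kwh_per_day = avg_kw * 24            # 1,320 kWh per day
kwh_per_year = kwh_per_day * 365     # ~482,000 kWh per year
print(f"{kwh_per_day:,} kWh/day, {kwh_per_year:,} kWh/year at full load")
</syntaxhighlight>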