HPC/Hardware Details

From CNM Wiki
== User nodes ==
[[Image:HPC Compute Node Chassis.jpg|thumb|right|200px|[http://www.supermicro.com/products/nfo/1UTwin.cfm 1U Twin] node chassis ([http://www.supermicro.com/products/chassis/1U/808/SC808T-980V.cfm Supermicro]).]]
[[Image:HPC Compute Rack-up.png|thumb|right|200px|]]
{| class="wikitable" cellpadding=8 style="text-align:center; margin: 1em auto 1em auto;"
|- style="background:#eee;"
! Features !! Count !! Processor !! Cores !! Clock<br>(GHz) !! Memory<br>(GB) !! Memory/core<br>(GB)
|- align="center"
| login1 || 1 || Xeon X5355 || 8 || 2.67 || 16 || 2
|- align="center"
| login5/login6 || 2 || Xeon E5540 || 8 || 2.53 || 24 || 3
|- align="center"
| gen1 || 144 || Xeon X5355 || 8 || 2.67 || 16 || 2
|- align="center"
| gen2 bigmem || 38 || Xeon E5540 || 8 || 2.53 || 48 || 6
|- align="center"
| gen2 || 166 || Xeon E5540 || 8 || 2.53 || 24 || 3
|}
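The table's derived column is plain arithmetic: memory/core is per-node memory divided by cores, and count × cores gives each node group's total core count (gen1 alone contributes 144 × 8 = 1152 cores). A minimal illustrative check in Python (not part of any cluster tooling):

```python
# Sanity-check the node table: per-group core totals and memory per core.
node_groups = {
    # name: (node count, cores per node, memory in GB per node)
    "login1":      (1,   8, 16),
    "login5/6":    (2,   8, 24),
    "gen1":        (144, 8, 16),
    "gen2 bigmem": (38,  8, 48),
    "gen2":        (166, 8, 24),
}

for name, (count, cores, mem_gb) in node_groups.items():
    total_cores = count * cores        # e.g. gen1: 144 * 8 = 1152
    mem_per_core = mem_gb // cores     # e.g. gen2 bigmem: 48 / 8 = 6 GB
    print(f"{name}: {total_cores} cores total, {mem_per_core} GB/core")
```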


== Infrastructure nodes ==
Revision as of 15:39, October 27, 2010

* 2 '''management nodes'''
* 2 Lustre OSS (object storage servers)
* 2 Lustre MDS (metadata servers)
* dual socket, quad-core (Intel Xeon E5345, 2.33 GHz)
* pairwise failover

== Storage ==

* NexSAN SATAbeast
* 30 TB raw RAID-5, 22 TB effective
* Lustre parallel file system
* 160 GB local disk per compute node
* 2 × 250 GB local disks in frontend nodes, as RAID-1
* NFS
** highly available server based on DRBD
** used for cluster management
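The gap between 30 TB raw and 22 TB effective comes from RAID-5 parity plus formatting and filesystem overhead. A rough sketch of the parity arithmetic (the disk count and array layout below are assumptions for illustration; the page does not state them):

```python
# RAID-5 stores one disk's worth of parity per array: n disks of size s
# yield (n - 1) * s of usable space. The layout here (two 15-disk arrays
# of 1 TB drives) is a hypothetical example, not the actual configuration.
def raid5_usable_tb(n_disks: int, disk_tb: float) -> float:
    return (n_disks - 1) * disk_tb

raw_tb = 2 * 15 * 1.0                        # 30 TB raw
after_parity = 2 * raid5_usable_tb(15, 1.0)  # 28 TB left after parity
print(raw_tb, after_parity)
```

Hot spares, decimal-versus-binary terabytes, and Lustre formatting overhead account for the further drop toward the quoted 22 TB effective.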
[[Image:HPC Infiniband-blue.png|thumb|right|200px|]]

== Interconnect ==

* InfiniBand 4x DDR (Mellanox, onboard) – MPI and Lustre traffic
* Ethernet, 1 Gb/s – node access and management
* Ethernet, 10 Gb/s – crosslinks and uplink
* Fibre Channel – storage backends

== Power ==

* UPS (2) – carries infrastructure nodes and network switches
* PDUs – switched and metered
* Power consumption
** peak: 89 kW
** full load (average): 55 kW
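For capacity planning, the consumption figures above translate directly into a daily energy budget; a trivial illustrative calculation:

```python
# Energy arithmetic from the quoted figures: sustained draw at the
# full-load average of 55 kW versus the 89 kW peak.
peak_kw = 89.0
average_kw = 55.0

daily_kwh = average_kw * 24       # 1320 kWh per day at average load
headroom_kw = peak_kw - average_kw  # 34 kW between average and peak
print(daily_kwh, headroom_kw)
```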