HPC/Hardware Details
== User nodes ==
* 4 login nodes
* 144 compute nodes, housed in 72 Supermicro "1U Twin" chassis
** dual-socket, quad-core (Intel Xeon X5355 CPUs, 2.66 GHz)
** 288 processors, 1,152 cores total
** 2 GB RAM per core, 2.3 TB RAM total
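The headline totals follow directly from the node count; a quick back-of-envelope check (illustrative only):

<syntaxhighlight lang="python">
# Sanity-check the compute-node totals quoted above.
nodes = 144            # compute nodes (72 "1U Twin" chassis, 2 nodes each)
sockets_per_node = 2   # dual socket
cores_per_socket = 4   # quad-core Xeon X5355
ram_per_core_gb = 2

processors = nodes * sockets_per_node          # 288
cores = processors * cores_per_socket          # 1152
ram_total_tb = cores * ram_per_core_gb / 1000  # 2.304 TB, quoted as 2.3 TB

print(processors, cores, round(ram_total_tb, 1))
</syntaxhighlight>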
== Infrastructure nodes ==
* 2 management nodes
* 2 Lustre OSS (object storage servers)
* 2 Lustre MDS (metadata servers)
* dual-socket, quad-core (Intel Xeon E5345 CPUs, 2.33 GHz)
* configured for pairwise failover
== Storage ==
* NexSAN SATAbeast
** 30 TB raw, 22 TB effective with RAID-5
** Lustre parallel file system
* 160 GB local disk per compute node
* 2 × 250 GB local disks in the frontend nodes, as RAID-1
* NFS
** highly available server based on DRBD
** used for cluster management
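Files on the Lustre file system can be striped across the object storage targets with the standard <code>lfs</code> client tool; a small illustrative wrapper (the path is hypothetical, and the stripe settings are only an example):

<syntaxhighlight lang="python">
import subprocess

# Illustrative only: inspect and set Lustre striping via the standard
# `lfs` client tool. The directory below is hypothetical.
target = "/lustre/scratch/example"

# Show how the directory's files are striped across the OSTs.
subprocess.run(["lfs", "getstripe", target], check=True)

# Stripe new files in this directory across all available OSTs (-c -1).
subprocess.run(["lfs", "setstripe", "-c", "-1", target], check=True)
</syntaxhighlight>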
== Interconnect ==
* InfiniBand 4x DDR (Mellanox, onboard) – MPI and Lustre
* Ethernet, 1 Gbit/s – node access and management
* Ethernet, 10 Gbit/s – crosslinks and uplink
* Fibre Channel – storage backends
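MPI traffic runs over the InfiniBand fabric. A minimal sketch of an MPI job, assuming the <code>mpi4py</code> Python bindings are available on the cluster (any MPI stack works the same way):

<syntaxhighlight lang="python">
from mpi4py import MPI  # assumes mpi4py is installed on the cluster

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's rank
size = comm.Get_size()  # total ranks, up to the 1,152 cores available

# Each rank reports in; output order is nondeterministic.
print(f"Hello from rank {rank} of {size}")
</syntaxhighlight>

Launched from a login node with, e.g., <code>mpirun -np 8 python hello.py</code>.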
== Power ==
* 2 UPS units – carry the infrastructure nodes and network switches
* PDUs – switched and metered
* Power consumption
** peak: 89 kW
** full load (average): 55 kW
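For rough planning, the quoted draw translates into energy use as follows (back-of-envelope, assuming the cluster runs at full load around the clock):

<syntaxhighlight lang="python">
# Back-of-envelope energy figures from the quoted power draw.
full_load_kw = 55
hours_per_year = 24 * 365

energy_kwh_year = full_load_kw * hours_per_year  # 481,800 kWh/year
per_node_w = full_load_kw * 1000 / 154           # ~357 W averaged over all 154 nodes
# (144 compute + 4 login + 6 infrastructure); switches and storage also
# draw power, so the true per-node figure is somewhat lower.

print(f"{energy_kwh_year:,} kWh/year, ~{per_node_w:.0f} W per node")
</syntaxhighlight>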