HPC/Hardware Details
User nodes
| Node type | Count | Processor | Cores | Clock (GHz) | Memory (GB) | Memory/core (GB) |
|---|---|---|---|---|---|---|
| login1 | 1 | Xeon X5355 | 8 | 2.67 | 16 | 2 |
| login5/login6 | 2 | Xeon E5540 | 8 | 2.53 | 24 | 3 |
| gen1 | 144 | Xeon X5355 | 8 | 2.67 | 16 | 2 |
| gen2 bigmem | 38 | Xeon E5540 | 8 | 2.53 | 48 | 6 |
| gen2 | 166 | Xeon E5540 | 8 | 2.53 | 24 | 3 |
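
The Memory/core column is simply each node's memory divided by its core count. A minimal Python sketch that recomputes it and tallies the gen1/gen2 compute partition; the figures come from the table above, while the node labels and script layout are purely illustrative:

```python
# Recompute the "Memory/core" column and tally the compute partition from the
# table above; node names and script layout are illustrative only.
nodes = [
    # (node type, node count, cores per node, memory per node in GB)
    ("login1",      1,   8, 16),
    ("login5/6",    2,   8, 24),
    ("gen1",        144, 8, 16),
    ("gen2 bigmem", 38,  8, 48),
    ("gen2",        166, 8, 24),
]

for name, count, cores, mem_gb in nodes:
    print(f"{name:12s} memory/core = {mem_gb / cores:.0f} GB")

compute = [n for n in nodes if n[0].startswith("gen")]
total_cores = sum(count * cores for _, count, cores, _ in compute)
total_mem_gb = sum(count * mem_gb for _, count, _, mem_gb in compute)
print(f"compute partition: {total_cores} cores, {total_mem_gb} GB memory")
```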
Infrastructure nodes
- 2 management nodes
- 2 Lustre MDS (metadata servers)
- 4 Lustre OSS (object storage servers)
- dual socket, quad core (Intel Xeon E5345, 2.33 GHz)
- pairwise failover
Storage
- NexSAN SATAbeast
  - 30 TB raw RAID-5, 22 TB effective (see the capacity sketch below)
- Lustre parallel file system
- 160 GB local disk per compute node
- 2 × 250 GB local disk in frontend nodes, as RAID-1
- NFS
  - highly available server based on DRBD
  - used for cluster management
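
The gap between the 30 TB raw and 22 TB effective figures is the usual price of redundancy: RAID-5 parity plus spares and filesystem overhead. A rough sketch of that relationship; the disk counts and sizes below are hypothetical and say nothing about the SATAbeast's actual array layout:

```python
# Overhead between raw and effective capacity, using the figures listed above.
raw_tb, effective_tb = 30, 22
print(f"capacity overhead: {1 - effective_tb / raw_tb:.0%}")  # ~27%

# Generic RAID-5 rule of thumb: one disk per array is consumed by parity.
# disks_per_array, disk_tb and arrays are hypothetical, not the real layout.
def raid5_usable_tb(disks_per_array: int, disk_tb: float, arrays: int = 1) -> float:
    return arrays * (disks_per_array - 1) * disk_tb

print(raid5_usable_tb(disks_per_array=10, disk_tb=1.0, arrays=3))  # 27.0 TB usable
```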
Interconnect
- InfiniBand 4x DDR (Mellanox, onboard) – MPI and Lustre
- Ethernet 1 Gb/s – node access and management
- Ethernet 10 Gb/s – crosslinks and uplink
- FibreChannel – storage backends
Power
- UPS (2) – carries infrastructure nodes and network switches
- PDUs – switched and metered
- Power consumption (rough energy estimate below)
  - peak: 89 kW
  - full load (average): 55 kW
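
For planning purposes the two power figures translate into an energy envelope. A quick back-of-the-envelope estimate; the continuous 24/7 duty cycle is an assumption for illustration, not a measured figure:

```python
# Energy envelope from the power figures above; 24/7 operation is an assumption.
peak_kw, average_kw = 89, 55

daily_kwh = average_kw * 24            # ~1,320 kWh per day at full load
yearly_mwh = daily_kwh * 365 / 1000    # ~482 MWh per year at full load
headroom = peak_kw / average_kw        # ~1.6x peak-to-average ratio

print(f"≈ {daily_kwh:,} kWh/day, ≈ {yearly_mwh:,.0f} MWh/year, "
      f"peak/average = {headroom:.1f}x")
```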