HPC/Hardware Details
User nodes
| Features | Count | Processor | Cores | Clock (GHz) | Memory (GB) | Memory/core (GB) | Disk (GB) |
|---|---|---|---|---|---|---|---|
| login1 | 1 | Xeon X5355 | 8 | 2.67 | 16 | 2 | 250 |
| login5/login6 | 2 | Xeon E5540 | 8 | 2.53 | 24 | 3 | 250 |
| gen1 | 144 | Xeon X5355 | 8 | 2.67 | 16 | 2 | 160 |
| gen2 bigmem | 38 | Xeon E5540 | 8 | 2.53 | 48 | 6 | 250 |
| gen2 | 166 | Xeon E5540 | 8 | 2.53 | 24 | 3 | 250 |
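A quick way to read the table above is in aggregate. The sketch below (with the per-class figures copied from the table) tallies the compute-node totals; the totals themselves are derived here and are not stated on this page.

```python
# Core and memory totals derived from the user-node table above; the totals
# are computed here, not quoted from the page.
nodes = {
    # name: (node count, cores per node, memory per node in GB)
    "gen1":        (144, 8, 16),
    "gen2":        (166, 8, 24),
    "gen2 bigmem": ( 38, 8, 48),
}

total_nodes  = sum(count for count, _, _ in nodes.values())
total_cores  = sum(count * cores for count, cores, _ in nodes.values())
total_mem_gb = sum(count * mem for count, _, mem in nodes.values())

print(f"compute nodes:  {total_nodes}")        # 348
print(f"compute cores:  {total_cores}")        # 2784
print(f"compute memory: {total_mem_gb} GB")    # 8112

for name, (count, cores, mem) in nodes.items():
    # reproduces the Memory/core column: 2, 3 and 6 GB per core
    print(f"{name}: {mem // cores} GB per core")
```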
Infrastructure nodes
- 2 Management nodes
- 2 Lustre MDS
- 4 Lustre OSS
- all dual-socket, quad-core (Intel Xeon E5345, 2.33 GHz)
- pairwise failover
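The pairwise failover can be pictured as one standby partner per role. The sketch below is only an illustration of that layout; the hostnames are hypothetical placeholders and are not given on this page.

```python
# Minimal sketch of the pairwise failover layout described above.
# Hostnames are hypothetical placeholders, not the cluster's real names.
failover_pairs = {
    "management":   ("mgmt1", "mgmt2"),
    "lustre-mds":   ("mds1", "mds2"),
    "lustre-oss-a": ("oss1", "oss2"),
    "lustre-oss-b": ("oss3", "oss4"),
}

def active_node(role: str, failed: set) -> str:
    """Return the node currently expected to serve a role, given failed hosts."""
    primary, secondary = failover_pairs[role]
    if primary not in failed:
        return primary
    if secondary not in failed:
        return secondary
    raise RuntimeError(f"both nodes of the {role} pair are down")

print(active_node("lustre-mds", failed={"mds1"}))  # -> mds2
```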
Storage
- Lustre parallel file system
  - 42 TB effective (84 TB raw, RAID-10; arithmetic sketched below)
  - 2 NexSAN SATAbeast storage arrays
- 160–250 GB local disk per compute node
- NFS
  - for user applications and cluster management
  - highly available server based on DRBD
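The effective Lustre capacity follows directly from the RAID-10 layout, which mirrors every block and therefore halves the usable space. A minimal sketch of that arithmetic:

```python
# RAID-10 mirrors every block, so usable space is half of the raw capacity.
raw_tb = 84          # raw capacity of the Lustre storage (RAID-10), from above
mirror_factor = 2    # RAID-10: striping over mirrored pairs

effective_tb = raw_tb / mirror_factor
print(f"effective Lustre capacity: {effective_tb:.0f} TB")  # 42 TB, as listed above
```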
Interconnect
- InfiniBand 4x DDR (Mellanox, onboard) – MPI and Lustre traffic; peak data rate worked out below
- Ethernet 1 Gb/s – node access and management
- Ethernet 10 Gb/s – crosslinks and uplink
- Fibre Channel – storage backends
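For reference, the peak data rate of a 4x DDR InfiniBand link follows from the per-lane signalling rate and the 8b/10b line encoding. The figures below are the standard ones for DDR links, not numbers quoted on this page.

```python
# Theoretical peak data rate of a 4x DDR InfiniBand link.
# Standard figures for DDR links, not numbers quoted on this page.
lanes = 4                      # "4x" link width
signal_gbps_per_lane = 5       # DDR signalling rate per lane
encoding_efficiency = 8 / 10   # 8b/10b line encoding overhead

signal_gbps = lanes * signal_gbps_per_lane     # 20 Gbit/s on the wire
data_gbps = signal_gbps * encoding_efficiency  # 16 Gbit/s of payload
print(f"peak payload rate: {data_gbps:.0f} Gbit/s = {data_gbps / 8:.0f} GB/s")
```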
Power
- UPS (2) – carries infrastructure nodes and network switches
- PDUs – switched and metered
- Power consumption at typical load: 118 kW
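To put the 118 kW figure in perspective, a rough back-of-the-envelope estimate, assuming the compute-node counts from the table above and keeping in mind that the total load also covers login and infrastructure nodes, storage, and switches:

```python
# Rough figures derived from the 118 kW typical load stated above.
# The per-node value is only an average upper bound: the total also covers
# login and infrastructure nodes, storage, and switches.
typical_load_kw = 118
compute_nodes = 144 + 166 + 38   # gen1 + gen2 + gen2 bigmem, from the table above

avg_w_per_node = typical_load_kw * 1000 / compute_nodes
annual_kwh = typical_load_kw * 24 * 365

print(f"average draw per compute node: {avg_w_per_node:.0f} W")   # ~339 W
print(f"energy per year at typical load: {annual_kwh:,.0f} kWh")  # ~1,033,680 kWh
```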