HPC/Hardware Details
User nodes
| Nodes | Count | Processor | Cores | Clock (GHz) | Memory (GB) | Memory/core (GB) |
|---|---|---|---|---|---|---|
| login1 | 1 | Xeon X5355 | 8 | 2.67 | 16 | 2 |
| login5/login6 | 2 | Xeon E5540 | 8 | 2.53 | 24 | 3 |
| gen1 | 144 | Xeon X5355 | 8 | 2.67 | 16 | 2 |
| gen2 bigmem | 38 | Xeon E5540 | 8 | 2.53 | 48 | 6 |
| gen2 | 166 | Xeon E5540 | 8 | 2.53 | 24 | 3 |
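The Memory/core column is simply Memory (GB) divided by Cores. A minimal Python sketch that reproduces those figures and derives aggregate totals for the compute partitions; the totals are computed here for illustration only and are not stated elsewhere on this page:

```python
# Per-node specs copied verbatim from the user-node table above.
nodes = {
    # name: (count, cores_per_node, memory_gb_per_node)
    "login1":        (1,   8, 16),
    "login5/login6": (2,   8, 24),
    "gen1":          (144, 8, 16),
    "gen2 bigmem":   (38,  8, 48),
    "gen2":          (166, 8, 24),
}

# Reproduce the Memory/core column.
for name, (count, cores, mem) in nodes.items():
    print(f"{name:14s} memory/core = {mem // cores} GB")

# Aggregate totals over the compute partitions (derived, not from the page).
compute = ["gen1", "gen2 bigmem", "gen2"]
total_cores = sum(nodes[n][0] * nodes[n][1] for n in compute)
total_mem = sum(nodes[n][0] * nodes[n][2] for n in compute)
print(f"compute cores: {total_cores}, compute memory: {total_mem} GB")
# -> compute cores: 2784, compute memory: 8112 GB
```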
Infrastructure nodes
- 2 Management nodes
- 2 Lustre MDS
- 4 Lustre OSS
- all dual-socket, quad-core (Intel Xeon E5345, 2.33 GHz)
- pairwise failover
Storage
- 2 NexSAN SATAbeast
- 84 TB raw in RAID-10; 42 TB effective (mirroring halves raw capacity; see the sketch after this list)
- Lustre parallel file system
- 160–250 GB local disk per compute node
- NFS
  - highly-available server based on DRBD
  - used for cluster management
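RAID-10 stripes across mirrored pairs, so every block is stored twice and usable capacity is half of raw. The arithmetic behind the 84 TB / 42 TB figures above, spelled out:

```python
# Figures from the Storage list above; this just makes the halving explicit.
raw_tb = 84          # combined raw capacity of the 2 NexSAN SATAbeast arrays
mirror_factor = 2    # RAID-10: each block written to two disks

effective_tb = raw_tb / mirror_factor
print(f"effective capacity: {effective_tb:.0f} TB")  # -> 42 TB
```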
Interconnect
- InfiniBand 4x DDR (Mellanox, onboard) – MPI and Lustre (see the bandwidth sketch after this list)
- Ethernet, 1 Gb/s – node access and management
- Ethernet, 10 Gb/s – crosslinks and uplink
- Fibre Channel – storage backends
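For context, the usable data rate of a 4x DDR InfiniBand link follows from standard figures that are not stated on this page: DDR signals at 5 Gbit/s per lane, a 4x link has 4 lanes, and 8b/10b encoding leaves 8/10 of the signaling rate for payload.

```python
# Back-of-the-envelope data rate for InfiniBand 4x DDR (standard figures,
# not taken from this page).
lanes = 4
signal_gbit_per_lane = 5.0    # DDR signaling rate per lane
encoding_efficiency = 8 / 10  # 8b/10b line code

data_gbit = lanes * signal_gbit_per_lane * encoding_efficiency
print(f"usable data rate: {data_gbit:.0f} Gbit/s (= {data_gbit / 8:.0f} GB/s)")
# -> 16 Gbit/s (= 2 GB/s), i.e. 16x the 1 Gb/s management Ethernet
```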
Power
- 2 UPS units – carry infrastructure nodes and network switches
- PDUs – switched and metered
- Power consumption (see the sketch below)
  - peak: 89 kW
  - full load (average): 55 kW
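The two consumption figures above imply the following headroom and steady-state energy draw; both values are derived here purely as an illustration:

```python
# Derived from the consumption figures above; illustration only.
peak_kw = 89
average_kw = 55  # full load, average

headroom_kw = peak_kw - average_kw
daily_kwh = average_kw * 24
print(f"peak headroom: {headroom_kw} kW "
      f"(average is {average_kw / peak_kw:.0%} of peak)")
print(f"steady-state energy: {daily_kwh} kWh/day")
# -> 34 kW headroom (average is 62% of peak), 1320 kWh/day
```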