HPC/Carbon Cluster - Overview

=== Hardware ===
[[Image:HPC Compute Rack-up.png|noframe|right|200px|]]
* 280 compute nodes, 8-16 cores/node (Xeon)
* total: 2800 cores, 8 TB RAM, 150 TB disk
* InfiniBand interconnect
* Performance 32 TFLOPS (aggregate, incl. GPU)
* More [[HPC/Hardware Details|details on a separate page]]
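
As a rough cross-check, the aggregate figures above imply about 10 cores and roughly 29 GB of RAM per node. The short Python sketch below is only a back-of-envelope calculation (actual node configurations vary between 8 and 16 cores), not an official specification.

<syntaxhighlight lang="python">
# Back-of-envelope check of the aggregate figures listed above.
# Rough averages only; actual nodes vary between 8 and 16 cores.
NODES = 280
TOTAL_CORES = 2800
TOTAL_RAM_TB = 8
TOTAL_DISK_TB = 150

print(f"average cores per node: {TOTAL_CORES / NODES:.1f}")             # ~10.0
print(f"average RAM per node:   {TOTAL_RAM_TB * 1024 / NODES:.0f} GB")  # ~29 GB
print(f"disk per core:          {TOTAL_DISK_TB * 1024 / TOTAL_CORES:.0f} GB")  # ~55 GB
</syntaxhighlight>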


=== Software ===
* [http://www.clusterresources.com/pages/products/moab-cluster-suite.php Moab Cluster Suite]
* GNU and [http://www.intel.com/support/performancetools/ Intel] compilers
* [[HPC/Applications | Applications]]
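
Batch jobs on a Moab-managed cluster are normally submitted as PBS-style scripts. The Python sketch below writes a minimal script and hands it to Moab's <code>msub</code> command; the resource request, the script name, and the executable <code>./my_app</code> are hypothetical placeholders rather than Carbon-specific settings, so check the cluster documentation for actual queue names and limits.

<syntaxhighlight lang="python">
import subprocess

# Hypothetical example only: the resource request, script name, and the
# executable ./my_app are placeholders, not Carbon-specific settings.
job_script = """#!/bin/bash
#PBS -l nodes=1:ppn=8
#PBS -l walltime=01:00:00
#PBS -N example_job
cd $PBS_O_WORKDIR
./my_app
"""

with open("example_job.sh", "w") as handle:
    handle.write(job_script)

# Moab's msub prints the new job's ID on success.
result = subprocess.run(["msub", "example_job.sh"],
                        capture_output=True, text=True, check=True)
print("Submitted job:", result.stdout.strip())
</syntaxhighlight>
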
=== Vendors ===
* [http://www.redhat.com/ Redhat]
* [http://www.intel.com/ Intel]


[[Category:HPC|Overview]]
