HPC/Carbon Cluster - Overview
Primary Usage
- Modeling and Simulation
  - CNM User Community
  - CNM Theory and Modeling group
- Real-time, on-demand data processing
  - Nanoprobe Beamline
  - E-beam lithography control
  - Other data-intensive instruments

Hardware
- 280 compute nodes, 8-16 cores/node (Intel Xeon)
- Total: 2,800 cores, 8 TB RAM, 150 TB disk (per-node averages are sketched below)
- InfiniBand interconnect
- Performance: 32 TFLOPS (aggregate, including GPUs)
- More details on the HPC/Hardware Details page
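
For orientation, the aggregate figures above work out to an average of 10 cores and roughly 29 GB of RAM per node. The short C sketch below is illustrative only (not part of the cluster software); it simply reproduces that arithmetic from the totals quoted in the list:

 /* Illustrative sketch: per-node averages derived from the aggregate
    figures listed above (280 nodes, 2,800 cores, 8 TB RAM). */
 #include <stdio.h>

 int main(void) {
     const int nodes = 280;            /* compute nodes            */
     const int total_cores = 2800;     /* aggregate core count     */
     const double total_ram_tb = 8.0;  /* aggregate RAM in TB      */

     printf("average cores per node: %d\n", total_cores / nodes);
     printf("average RAM per node:   %.1f GB\n",
            total_ram_tb * 1024.0 / nodes);
     return 0;
 }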

Software
- Red Hat Enterprise Linux 5 / CentOS 5
- Moab Cluster Suite (job scheduler)
- GNU and Intel compilers (see the MPI example below)
- Applications
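
As a concrete illustration of how the GNU or Intel compilers and the InfiniBand-connected nodes are typically used together, below is a minimal MPI "hello world" in C. This is a sketch rather than code from the cluster documentation; it assumes an MPI library with a compiler wrapper such as mpicc is installed on the system.

 /* Minimal MPI example (illustrative sketch).
    Assumed build command: mpicc hello_mpi.c -o hello_mpi */
 #include <mpi.h>
 #include <stdio.h>

 int main(int argc, char **argv) {
     int rank, size, name_len;
     char name[MPI_MAX_PROCESSOR_NAME];

     MPI_Init(&argc, &argv);                  /* start the MPI runtime     */
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process      */
     MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks     */
     MPI_Get_processor_name(name, &name_len); /* node this rank runs on    */

     printf("rank %d of %d on %s\n", rank, size, name);

     MPI_Finalize();                          /* shut down the MPI runtime */
     return 0;
 }

On a cluster managed by the Moab Cluster Suite, such a binary would normally be launched from a batch script submitted to the scheduler (for example with msub) rather than run interactively; the exact queue names and resource options depend on the site configuration.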