HPC/Carbon Cluster - Overview
Primary Usage
- Modeling and Simulation
  - CNM Theory and Modeling group
  - CNM User Community
- Real-time on-demand data processing
  - Nanoprobe Beamline
  - E-beam lithography control
  - Other data-intensive instruments
Hardware
- 350 compute nodes, 8 cores/node (Xeon 2.5-2.7 GHz)
  - total: 2800 cores, 7 TB RAM, 40 TB disk
- Infiniband interconnect
- 40 TB shared storage
- Performance: 28 TFLOPS (aggregate; see the worked estimate after this list)
- More details on a separate page
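
The aggregate figures follow directly from the per-node numbers. Below is a minimal sketch of that arithmetic, assuming 4 double-precision floating-point operations per core per cycle (typical for Xeons of that generation, but not stated on this page) and the lower 2.5 GHz clock:

# Rough sanity check of the aggregate hardware figures listed above.
nodes = 350
cores_per_node = 8
clock_hz = 2.5e9            # lower end of the quoted 2.5-2.7 GHz range
flop_per_cycle = 4          # assumed value, not taken from this page

total_cores = nodes * cores_per_node                    # 2800 cores
peak_flops = total_cores * clock_hz * flop_per_cycle    # theoretical peak

print(f"total cores: {total_cores}")
print(f"peak: {peak_flops / 1e12:.0f} TFLOPS")          # ~28 TFLOPS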
Software
- Red Hat Enterprise Linux 5 / CentOS 5
- Moab Cluster Suite
- GNU and Intel compilers
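
Jobs run on the compute nodes through the Moab scheduler rather than interactively. A minimal submission sketch follows, assuming a Torque/PBS-style resource manager behind Moab and the standard msub command; the job name, walltime, and executable are placeholders, not documented Carbon settings:

# Minimal job-submission sketch; `msub` is assumed to be on the PATH, and all
# directive values below are placeholders, not documented Carbon settings.
import subprocess
from pathlib import Path

job_script = "\n".join([
    "#!/bin/bash",
    "#PBS -N example-job",
    "#PBS -l nodes=1:ppn=8",        # one full 8-core node, matching the hardware above
    "#PBS -l walltime=01:00:00",    # placeholder walltime
    "cd $PBS_O_WORKDIR",
    "./my_simulation",              # hypothetical executable
]) + "\n"

Path("example.pbs").write_text(job_script)

# msub prints the new job ID on success.
result = subprocess.run(["msub", "example.pbs"], capture_output=True, text=True)
print(result.stdout.strip() or result.stderr.strip())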