ARROW Cluster
Introduction To ARROW
TRACC has now combined the hardware from the Phoenix and Zephyr clusters into the ARROW cluster. This consolidation allows efficient administration of TRACC cluster services with limited staff. To avoid load-balancing problems across dissimilar hardware, the different types of nodes on the ARROW cluster are partitioned into separate queues. When new hardware is installed to expand cluster resources, it will be made available via a new queue. The documentation at Using the Clusters describes procedures for using ARROW.
ARROW is arranged such that a single set of login nodes, a single file system, and a single user home directory serve all of the nodes in all of the queues.
ARROW Queues
There are currently three queues available; restrictions on who can use each queue are described below, and a sample job-submission sketch follows the list.
- batch (default, with 94 nodes, each node with 16 floating point cores available for general use)
  - 92 nodes with 32 GB of RAM
  - 2 nodes (nodes 1 and 2) with 64 GB of RAM
  - 2 nodes (nodes 3 and 4) with 128 GB of RAM
- nhtsa (with 12 nodes, each with 28 cores and 64 GB of RAM, only available to the NHTSA project)
- arrow (one new EPYC server with 64 cores, for testing by TRACC staff or by special permission of the TRACC Director)
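
The scheduler and submission syntax used on ARROW are covered in the Using the Clusters documentation and are not specified in this section. As an illustration only, the sketch below assumes a PBS/Torque-style scheduler, where a job is directed to a queue with the -q option; the queue names (batch, nhtsa, arrow) are the ones listed above, while the job name, resource request, and walltime are placeholder values.

```python
import subprocess
import tempfile

# Hypothetical job script: assumes a PBS/Torque-style scheduler in which the
# destination queue is selected with "#PBS -q" (batch, nhtsa, or arrow).
# The actual scheduler, queue syntax, and resource limits on ARROW may differ;
# consult the Using the Clusters documentation before submitting real work.
JOB_SCRIPT = """\
#!/bin/bash
#PBS -N example_job
#PBS -q batch
#PBS -l nodes=1:ppn=16
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
echo "Running on $(hostname)"
"""


def submit(script_text: str) -> str:
    """Write the job script to a temporary file and submit it with qsub."""
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script_text)
        path = f.name
    # qsub prints the new job ID on success.
    result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()


if __name__ == "__main__":
    print("Submitted job:", submit(JOB_SCRIPT))
```

To target a different queue, change the "#PBS -q" line (for example, "#PBS -q nhtsa"), keeping in mind the access restrictions listed above.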