ARROW Cluster (under construction)

From TRACC Wiki
Revision as of 23:50, January 27, 2025 by Amiot

Introduction To ARROW

TRACC has combined the original hardware from the Phoenix and Zephyr clusters into the ARROW cluster. This consolidation allows efficient administration of TRACC cluster services. To avoid load-balancing problems across heterogeneous hardware, the different types of nodes on the ARROW cluster are partitioned into separate queues. When new hardware is installed to expand cluster resources, it will be made available via a new queue. The documentation at Using the Cluster describes procedures for using ARROW.

ARROW is arranged such that a single set of 4 login nodes, a single file system, and a single user home directory serve all of the nodes in all of the queues.

Further, all nodes on ARROW are logically assigned to one of two scheduling systems. The first is Torque with the Maui scheduler (referred to simply as Torque in the rest of this documentation); the second is called capped Torque.
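Under Torque, a job is directed to a particular hardware queue at submission time with `qsub -q <queue>`. The sketch below shows a minimal Torque/PBS job script; the queue name `phoenix`, the resource values, and the executable name are illustrative assumptions, not actual ARROW queue names or settings.

```
#!/bin/bash
#PBS -q phoenix            # hypothetical queue name; use the queue matching your hardware
#PBS -l nodes=1:ppn=16     # request 1 node with 16 cores per node
#PBS -l walltime=01:00:00  # 1 hour wall-clock limit
#PBS -N example_job        # job name shown in qstat output

cd $PBS_O_WORKDIR          # start in the directory from which qsub was run
./my_program               # placeholder for the actual executable
```

Submit the script with `qsub job.pbs` and check its status with `qstat -u $USER`.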