ARROW Cluster

== Introduction To ARROW ==
TRACC has combined the original hardware from the Phoenix and Zephyr clusters into the ARROW cluster. This consolidation allows efficient administration of TRACC cluster services with limited staff. To avoid the problems of load balancing, the different types of hardware nodes on the ARROW cluster are partitioned and made available in queues. When new hardware is installed to expand cluster resources, it will be made available via a new queue. The documentation at [[Using the Cluster]] describes procedures for using ARROW.

ARROW is arranged such that there is a single set of 4 login nodes, a single file system, and a single user home directory serving all of the nodes in all of the queues.
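
To see which queues are visible to your account, you can list them from a login node. The sketch below assumes ARROW uses a PBS/Torque-style scheduler with <code>qstat</code>; this page does not name the scheduler, so treat the commands as illustrative and confirm the supported workflow against [[Using the Cluster]].

<pre>
# List all queues and their current state (PBS/Torque-style scheduler assumed).
qstat -q

# Show the full configuration of a single queue, e.g. the default batch queue.
qstat -Qf batch
</pre>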
 
== ARROW Queues ==
There are currently several queues available, some with restrictions on who can use them, as described below. Also be aware that in some queues all nodes have the same characteristics (RAM, etc.), while other queues contain nodes with differing characteristics; jobs using those queues must specify the names of the nodes to be used (see the sketch after the list below).
* batch queue (default queue)
** 95 nodes numbered n005 through n099
** 2 x AMD Opteron 6276
** 16 floating point cores per node
** 32GB of RAM per node
** available for general use

* batch128 queue
** 2 nodes numbered n001 and n002
** same design as the batch queue
** 128GB of RAM per node
** available for general use

* batch64 queue
** 2 nodes numbered n003 and n004
** same design as the batch queue
** 64GB of RAM per node
** available for general use

* nhtsa queue
** 12 nodes numbered p001 through p012
** 2 x Intel Xeon E5-2690 v4
** 28 floating point cores per node
** 64GB of RAM per node
** only available to the NHTSA project

* arrow queue
** 15 nodes numbered a001 through a015
** 1 x AMD EPYC 7702P
** 64 floating point cores per node
** 256GB of RAM per node, 512GB on nodes a001 through a003
** available for general use

* extra queue
** 12 nodes numbered a016 through a027
** 1 x AMD EPYC 7713P
** 64 floating point cores per node
** 256GB of RAM per node, 512GB on nodes a018 through a022
** available for general use
** note: this queue will likely be merged into the arrow queue in the future

* virtual queue
** 5 nodes numbered v001 through v005
** mostly for internal testing and validation; can be used as 2-core machines with 32GB of RAM
** minimal virtual hardware, not capable of running engineering applications
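
The sketch below illustrates selecting a queue and, for queues whose nodes differ (such as the arrow queue's mix of 256GB and 512GB nodes), naming specific nodes. It assumes a PBS/Torque-style batch system; the exact resource syntax is an assumption here, so verify it against [[Using the Cluster]] before submitting.

<pre>
#!/bin/bash
# Minimal job script sketch (PBS/Torque-style directives assumed, not
# confirmed by this page; see [[Using the Cluster]] for the supported form).

#PBS -N example_job
#PBS -q arrow                            # queue from the list above
#PBS -l nodes=a001:ppn=64+a002:ppn=64    # name nodes explicitly where a queue
                                         # mixes RAM sizes (a001-a003 have 512GB)
#PBS -l walltime=04:00:00

cd $PBS_O_WORKDIR
echo "Running on:"; cat $PBS_NODEFILE
# ./my_solver input.dat                  # hypothetical application launch
</pre>

For queues where every node is identical (batch, batch64, batch128, nhtsa), requesting a node count such as <code>-l nodes=2:ppn=16</code> is the more usual form than naming individual nodes.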
