ARROW Cluster under construction

From TRACC Wiki
Latest revision as of 20:56, March 11, 2025

There are currently several queues available, some with restrictions on who may use them, as described below. Also be aware that in some queues all nodes have the same characteristics (RAM, etc.), while in other queues the nodes differ. Jobs submitted to those heterogeneous queues must therefore specify the names of the nodes to be used.
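As a hypothetical sketch of what specifying node names might look like with the Torque scheduler's `qsub`: the node names `n001` and `n002` and the core counts below are placeholders, not actual ARROW node names.

```shell
# Request two specific nodes by name in a heterogeneous queue.
# "n001" and "n002" are placeholder node names; replace them with
# the actual node names for the queue you are using.
qsub -q arrow -l nodes=n001:ppn=16+n002:ppn=16 myjob.sh
```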

The "Torque Computing Queues" are scheduled by the Torque scheduler and are available by default.

The "PBS Computing Queues" become available after the environment module for the PBS scheduler is loaded:

 
 module load pbs/pbs
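Once the PBS module is loaded, jobs can be submitted to the PBS queues listed below. The following is a minimal, hypothetical job-script sketch; the queue name, resource amounts, and program name are placeholders to illustrate the usual PBS directives.

```shell
#!/bin/bash
# Hypothetical PBS job script sketch (values are placeholders).
#PBS -q workq                 # one of the PBS Computing Queues
#PBS -l select=1:ncpus=8      # one node, eight cores
#PBS -l walltime=01:00:00     # one-hour limit
cd $PBS_O_WORKDIR             # run from the submission directory
./my_program                  # placeholder executable
```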

ARROW Computing Queues

Torque Computing Queues

  • batch
  • nhtsa
  • virtual
  • arrow
  • gpu
  • extra
  • lambda
  • epyc3

PBS Computing Queues

  • workq
  • xeon28
  • virtual
  • a4000
    • GPU
  • a6000
    • GPU
Return To Main Page
Return to Setting Up Your Environment
Return to Scheduling a Computing Node(s)