HPC/Submitting and Managing Jobs/Queues and Policies
Introduction
There is one main queue and one debug queue on Carbon, defined as Torque queues. Once a job is submitted, routing decisions are made by the Moab scheduler.
In this framework, short jobs are accommodated by a daily reserved node and by backfill scheduling, i.e. "waving forward" small jobs while one or more big jobs wait for full resources to become available.
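Because backfill can only "wave forward" a job whose requested walltime fits into the current scheduling gap, it helps to request a realistic walltime rather than relying on the queue maximum. A minimal sketch using standard qsub options (the script name myjob.sh is a placeholder):

 qsub -l nodes=1:ppn=8,walltime=02:00:00 myjob.sh   # a short, accurate walltime improves backfill chances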
Default queue "batch"
The main queue on Carbon is batch and need not be specified to qsub or in the job script.
The following defaults and limits apply:
 resources_default.nodes = 1:ppn=8
 resources_default.walltime = 00:15:00     # 15 min
 resources_max.walltime = 240:00:00        # 10 days
 max_user_queuable = 2000
For appropriate ppn values, see HPC/Hardware Details.
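For example, a job script header that stays within these limits might look like the following sketch; the node count, ppn, walltime, job name, and program are illustrative only and should be adapted to the hardware (see HPC/Hardware Details) and to your application:

 #!/bin/bash
 #PBS -l nodes=1:ppn=8         # one node with 8 cores; match ppn to the node type
 #PBS -l walltime=24:00:00     # explicit walltime, well under the 240-hour maximum
 #PBS -N batch_example         # job name (hypothetical)
 #PBS -j oe                    # merge stdout and stderr (optional)
 
 cd $PBS_O_WORKDIR             # start in the directory qsub was invoked from
 ./my_program                  # placeholder for the actual application

Note that no queue needs to be requested here, since batch is the default.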
In addition, the Moab scheduler applies various per-user limits. The most straightforward are hard and soft limits on the number of concurrent jobs and CPU cores in use (about 60% of the whole machine), designed to prevent a single user from monopolizing the cluster while still permitting use of otherwise idle resources. A more advanced parameter is a cutoff on which queued jobs are considered for scheduling, based on the total number of cores they request (MAXIPROC). This ensures fair job turnover between different users without restricting throughput for large numbers of "small" jobs.
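To see how these limits affect your own jobs, the standard Moab client commands can be used; a sketch, assuming the Moab tools (showq, checkjob) are in your path, with <jobid> as a placeholder:

 showq -u $USER      # summarize your running, eligible, and blocked jobs
 checkjob <jobid>    # report why a specific job is not yet being scheduled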
See also:
Queue "debug"
For testing job processing and your job environment, use qsub -q debug on the command line, or the following in a job script:
#PBS -q debug
The debug queue accepts jobs under the following conditions:
 resources_default.nodes = 1:ppn=4
 resources_max.nodes = 2:ppn=8
 resources_default.walltime = 00:15:00
 resources_max.walltime = 01:00:00
 max_user_queuable = 3
 max_user_run = 2
in other words,
 nodes ≤ 2
 ppn ≤ 8
 walltime ≤ 1:00:00     # 1 hour
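Putting these conditions together, a small debug-queue test script might look like this sketch (the job name and command are placeholders):

 #!/bin/bash
 #PBS -q debug                 # route to the debug queue
 #PBS -l nodes=1:ppn=8         # within the nodes ≤ 2, ppn ≤ 8 limits
 #PBS -l walltime=00:30:00     # within the 1-hour maximum
 #PBS -N debug_test            # job name (hypothetical)
 
 cd $PBS_O_WORKDIR
 ./my_test_program             # placeholder for the command being tested

For quick interactive tests, qsub -I -q debug requests an interactive session subject to the same limits, assuming interactive jobs are permitted on Carbon.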