PBS-main
Under construction
We are currently using Torque and Maui as our queuing system. Jobs can be submitted from any of the five login nodes. Once a job starts, the nodes assigned to it are accessible to that user through additional ssh sessions from any other node in the system. For example, if you submit a job from login1, you can go to login2 and open an ssh session to the node that the scheduler handed out. Think of a job as a global resource allocation: it gives you exclusive access to a set of nodes that you can use as you wish until the job time expires. This applies equally to interactive and batch sessions. Any node assigned to a user is fully allocated to that user, and a job can only request full nodes; no other user can share a node that has been handed out. The queues are used to select specific CPU types for the job.
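As a sketch of that workflow, assuming Torque-style qsub resource flags and placeholder job and node names (the exact resource string and node naming on this system may differ):

```shell
# Submit a batch job from login1, requesting 2 full nodes for one hour
# (Torque-style resource syntax; adjust nodes, ppn, and walltime as needed)
qsub -l nodes=2,walltime=01:00:00 myjob.sh

# See which nodes the scheduler handed out (job ID 12345 is a placeholder)
qstat -n 12345

# From any other login node (e.g. login2), ssh straight into an allocated
# node (node name n042 is a placeholder); this works until the job expires
ssh n042
```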
We are currently adding a second scheduling and queuing system called OpenPBS. It is a newer scheduler that is installed but not activated by default. It behaves roughly like the old scheduler but is much more modern and flexible. Because it is installed but not activated, you have to turn off the standard scheduler and enable OpenPBS. This is done with the “module” command:
module load pbs/pbs
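After loading the module, you can check that the OpenPBS client tools have replaced the default ones on your PATH (a sketch; the install paths shown by these commands are site-specific):

```shell
# Confirm that qsub now resolves to the OpenPBS client rather than Torque
which qsub

# Unloading the module restores the default Torque/Maui environment
module unload pbs/pbs
```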
The “module” command modifies your environment variables to enable or disable certain software collections or components, including Python, OpenFOAM, StarCCM+, and many more. This is all configured for the current production system.
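A few common module subcommands, for reference (these match standard environment-modules usage; the module name used in the load/unload example is illustrative, and the actual names on this system may differ):

```shell
# List all software collections available on the system
module avail

# Show what is currently loaded in your environment
module list

# Load a package into your environment, and unload it when done
# ("python" here is a placeholder module name)
module load python
module unload python
```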