<!-- keep this as a security measure:
#uncomment if the subject should only be modifiable by the listed groups
# * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.CMSAdminGroup
# * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.CMSAdminGroup
#uncomment this if you want the page only be viewable by the listed groups
# * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.CMSAdminGroup,Main.CMSAdminReaderGroup
-->
---+ Slurm Batch system usage

This is an introduction to the test configuration of Slurm - a modern job scheduler for Linux clusters - at T3.

Currently t3ui07 is the single login node for Slurm. Like any User Interface node, it should be used mostly for development and small quick tests; intensive computational work belongs on the Compute Nodes. There are two types of Compute Nodes: Worker Nodes for CPU usage and GPU machines. All new hardware is equipped with 256 GB of RAM and a 10 GbE network:

| *Compute Node* | *Processor Type* | *Computing Resources: Cores/GPUs* |
| t3ui07 - login node | Intel Xeon Gold 6148 (2.40GHz) | 80 Cores |
| t3gpu0[1-2] | Intel Xeon E5-2630 v4 (2.20GHz) | 8 * !GeForce GTX 1080 Ti |
| t3wn60 | Intel Xeon Gold 6148 (2.40GHz) | 80 Cores |
| t3wn51-58 | Intel Xeon E5-2698 (2.30GHz) | 64 Cores |
| t3wn48 | AMD Opteron 6272 (2.1GHz) | 32 Cores |
| t3wn38 | Intel Xeon E5-2670 (2.6 GHz) | 16 Cores |

Access to the Compute Nodes is controlled by Slurm. Matching these computing resources, two partitions (similar to SGE queues) are implemented: *wn* and *gpu*.

Here are a few useful commands to start working with Slurm:
<pre>
sinfo    # view information about Slurm nodes and partitions
sbatch   # submit a batch script
squeue   # view information about jobs in the scheduling queue
scancel  # cancel a job
</pre>

To submit a job to the wn partition, issue: =sbatch -p wn job.sh=

One can also collect all directives in the shell script itself, as lines starting with =#SBATCH=, like in the following examples.
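As an illustration (not taken from the T3 documentation), a minimal batch script for the *wn* partition might look as follows; the job name, output file names, memory, and time limit are hypothetical values to adapt to your own job:

```shell
#!/bin/bash
#SBATCH -p wn                  # partition to submit to
#SBATCH -J testjob             # job name (hypothetical)
#SBATCH -o testjob-%j.out      # stdout file; %j expands to the job ID
#SBATCH -e testjob-%j.err      # stderr file
#SBATCH -t 0-01:00             # run-time limit, D-HH:MM
#SBATCH --mem=2000             # memory per node in MB (illustrative value)

# Payload: everything below runs on the allocated worker node.
HOST=$(hostname)
WORKDIR=$(pwd)
echo "Running on $HOST in $WORKDIR"
```

Submitting it with =sbatch job.sh= then needs no extra command-line options, since the =#SBATCH= lines carry the same directives.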
   * [[GPU Example][GPU Example]]
   * [[CPU Example][CPU Example]]
   * [[CPU Example for using multiple processors (threads) on a single physical computer][CPU Example for using multiple processors (threads) on a single physical computer]]

To start using Slurm, please ask the T3 administrators ([cms-tier3@lists.psi.ch]) to add your user_id to the Slurm accounts.

-- Main.NinaLoktionova - 2019-05-08
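For the *gpu* partition, a sketch of a batch script is shown below. It assumes the generic Slurm =--gres=gpu:N= syntax for requesting GPUs; the exact GRES names and counts depend on the site configuration, so check the linked GPU Example for the T3-specific form:

```shell
#!/bin/bash
#SBATCH -p gpu                 # GPU partition
#SBATCH --gres=gpu:1           # request one GPU (generic Slurm syntax; may differ per site)
#SBATCH -o gputest-%j.out      # stdout file; %j expands to the job ID

# Inside an allocation Slurm exposes the granted devices via
# CUDA_VISIBLE_DEVICES; outside an allocation the variable is unset.
GPUS="${CUDA_VISIBLE_DEVICES:-none}"
echo "Allocated GPU ids: $GPUS"
```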
Topic revision: r10 - 2019-10-09 - NinaLoktionova