====== Slurm on the Maris Cluster ======
  
All maris nodes have been configured to use [[http://slurm.schedmd.com/|slurm]] as a workload manager. Its use is enforced on all nodes. Direct access to any node other than the headnode `novamaris' is not allowed.
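
For instance, a typical session starts by logging in to the headnode and inspecting the state of the cluster. The prompts below are illustrative; ''sinfo'' and ''squeue'' are standard slurm client commands:
<code bash>
## all slurm commands are issued from the headnode
workstation$ ssh novamaris
## list the available partitions and their state
novamaris$ sinfo
## list your own queued and running jobs
novamaris$ squeue -u $USER
</code>
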
A snapshot of the cluster usage can be found at http://slurm.lorentz.leidenuniv.nl/ (only accessible within the IL workstations network).
  
Maris runs SLURM v17.11.12.
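
You can verify the installed version yourself from the headnode:
<code bash>
sinfo --version
</code>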
  
Suggested readings:
|notebook| 48|193044M|all| |maris0[23-28]|6|4| 1|  | notebook | all |
|computation| 1552|6578050M |400M, 3 Days |  |maris0[47-74] | 28 |  |  |  | normal | all |
|compintel| 192 |1030000M|400M, 1 Day| |maris0[76-77] |2|  |  | 3 days | normal | beenakker |
|ibintel| 96 |512000M|400M, 1 Day| |maris078 |1|  |  | 10 days | normal | beenakker |
|emergency| 384 |2773706M|all| |maris0[69-74] |6|  |  |  | normal | NOBODY |
|gpu| 56 |256000M|400M, 3 Days|2 gpu|maris075 |1|  |  |  | normal | all |
The `gpu' partition should be used only for jobs requiring GPUs. Note that GPUs must be requested from slurm explicitly, for instance using ''--gres=gpu:1''.
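
For example, a minimal sketch of a GPU job script; the time limit and application name are illustrative:
<code bash>
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1        ## request one of the node's GPUs; use gpu:2 for both
#SBATCH --time=01:00:00     ## illustrative time limit
srun ./my_gpu_app           ## hypothetical application name
</code>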
  
The `computation' and `compintel' partitions should be used for production runs. Note that the `compintel' partition is made of Intel CPUs.

The `ibintel' partition is made of nodes with InfiniBand connections to an iSCSI scratch storage system, allowing efficient I/O operations.
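
The partition for a job is chosen with the ''--partition'' option, either in the batch script or on the command line, for instance (the script names below are hypothetical):
<code bash>
## production run on the standard compute nodes
sbatch --partition=computation my_job.sh
## I/O-heavy job on the InfiniBand-connected nodes
sbatch --partition=ibintel my_io_job.sh
</code>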
  
  
  
To compile your CUDA application on maris using slurm, note that in your submission script you might have to export the libdevice library path and add the directory containing the CUDA headers to the include path, for instance
<code bash>
#!/bin/env bash
....
</code>
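
As a reference, a minimal sketch of such a submission script; the CUDA installation prefix, the partition and the file names below are assumptions and must be adapted to the actual setup on maris:
<code bash>
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
## assumed CUDA installation prefix; adjust to the toolkit available on maris
CUDA_HOME=/usr/local/cuda
## make the CUDA compiler, headers and libraries visible
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
## if nvcc cannot locate libdevice, point it there explicitly with
## -ldir $CUDA_HOME/nvvm/libdevice
nvcc -I$CUDA_HOME/include my_kernel.cu -o my_kernel   ## hypothetical source file
srun ./my_kernel
</code>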
  
In principle, to run an MPI application you could just execute it using mpirun, as shown in the session below
<code bash>
novamaris$ cat slurm_script.sh
#!/bin/env bash
....
mpirun ./my_mpi_app    ## hypothetical application name
</code>
However, __**it is highly advised that you use slurm's ''srun'' to submit a parallel job in all circumstances**__.
<code bash>
novamaris$ cat slurm_script.sh
#!/bin/env bash
....
srun ./my_mpi_app    ## hypothetical application name
</code>

At the moment maris supports only OpenMPI with slurm, so you are required to load a dedicated openmpi/slurm module to get things to work, for instance
  
<code bash>
# load openMPI
module load openmpi-slurm/2.0.2
</code>

An example of a batch script is given below:
  
<code bash>
#!/bin/env bash
##comment out lines by adding at least two `#' at the beginning
....
</code>
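
As a reference, a minimal sketch of a complete batch script; the resource requests, job name and application name below are illustrative only:
<code bash>
#!/bin/bash
#SBATCH --job-name=my_job        ## illustrative job name
#SBATCH --partition=computation  ## choose the appropriate partition
#SBATCH --ntasks=28              ## illustrative number of tasks
#SBATCH --mem-per-cpu=400M       ## illustrative memory request
#SBATCH --time=1-00:00:00        ## one day, illustrative
#SBATCH --output=job-%j.out      ## job id in the output file name
srun ./my_application            ## hypothetical application name
</code>
Submit it from `novamaris' with ''sbatch'' and monitor it with ''squeue''.
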
Consider the batch script below
  
<code bash>
$ cat slurmcp.sh
#!/bin/env bash
....
</code>
  
Slurm can be instructed to email any job state changes to a chosen email address. This is accomplished by using the ''--mail-type'' option of sbatch, for instance
<code bash>
...
#SBATCH --mail-user=myemail@address.org
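#SBATCH --mail-type=ALL    ## or a comma-separated subset of BEGIN,END,FAIL,REQUEUE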