A snapshot of the cluster usage can be found at http://slurm.lorentz.leidenuniv.nl/ (only accessible from within the IL workstations network).
  
Maris runs SLURM v17.11.12.
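The installed version can be checked from any node, for example:

<code bash>
# print the slurm version installed on the cluster
sinfo --version
</code>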
  
Suggested readings:
|notebook| 48|193044M|all| |maris0[23-28]|6|4| 1|  | notebook | all |
|computation| 1552|6578050M |400M, 3Days |  |maris0[47-74] | 28 |  |  |  | normal | all |
|compintel| 192 |1030000M|400M, 1 Day| |maris0[76-77] |2|  |  | 3 days | normal | beenakker |
|ibintel| 96 |512000M|400M, 1 Day| |maris078 |1|  |  | 10 days | normal | beenakker |
|emergency| 384 |2773706M|all| |maris0[69-74] |6|  |  |  | normal | NOBODY |
|gpu| 56 |256000M|400M, 3 Days|2 gpu|maris075 |1|  |  |  | normal | all |
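The partition limits above can also be queried directly from slurm, for example:

<code bash>
# list each partition with its CPUs per node, memory per node, time limit, gres and nodes
sinfo -o "%P %c %m %l %G %N"
</code>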
The `gpu' partition should be used only for jobs requiring GPUs. Note that GPUs must be requested from slurm explicitly, for instance using ''--gres=gpu:1''.
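As a minimal sketch (the job name and program are placeholders, not defined on this page), a batch script requesting a single GPU on the `gpu' partition could look like:

<code bash>
#!/usr/bin/env bash
#SBATCH --partition=gpu        # run on the GPU partition
#SBATCH --gres=gpu:1           # request one GPU explicitly
#SBATCH --job-name=my_gpu_job  # placeholder job name

srun ./my_gpu_program          # placeholder executable
</code>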
  
The `computation' and `compintel' partitions should be used for production runs. Note that the `compintel' partition is made of Intel CPUs.

The `ibintel' partition is made of nodes with InfiniBand connections to an iSCSI scratch storage system, allowing efficient I/O operations.
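For example (''my_job.sh'' is a placeholder script name), a production job is sent to one of these partitions with:

<code bash>
# submit a batch script to the `computation' partition
sbatch --partition=computation my_job.sh
</code>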
  
  
#!/bin/env bash
....
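# compile my_code.cu against the CUDA toolkit installed under /usr/local/cuda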
NVVMIR_LIBRARY_DIR=/usr/local/cuda/lib64/ /usr/local/cuda/bin/nvcc -I/usr/local/cuda/include my_code.cu
</code>
  
  
Slurm can be instructed to email any job state changes to a chosen email address. This is accomplished using the ''--mail-type'' option in sbatch, for instance:
<code bash>
...
#SBATCH --mail-user=myemail@address.org
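#SBATCH --mail-type=ALL        # for example, notify on all job state changes
...
</code>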