xmaris is a small computational cluster at the Lorentz Institute financed by external research grants. As such, access is granted primarily to the research groups that were awarded those grants. Other research groups wishing to use xmaris can enquire about left-over computing time by getting in touch with either
to discuss which resources can be made available for their needs. After a preliminary assessment and approval, access to xmaris is granted by the IT staff. Any technical questions should be addressed via https://helpdesk.lorentz.leidenuniv.nl or in person to
Research groups external to the Lorentz Institute are strongly encouraged to explore other HPC facilities, such as the ALICE HPC cluster of Leiden University.
Xmaris is optimised for multithreaded applications and embarrassingly parallel problems, but recent investments have improved the node interconnect to enable multiprocessing. Currently, multiprocessing is possible on maris0[78-81], which are interconnected via an InfiniBand EDR switch. Each of these nodes is capable of a practical 9.6 TFLOPS*.
* Ask support how this number was estimated.
Xmaris is the successor of the maris cluster, renamed with the prefix
x because its node deployment is automated using the xCAT software. Less formally, the
x prefix also hints at the time of year when xmaris was first made available to IL users, namely Christmas (Xmas).
xmaris runs CentOS v7.6 and consists of heterogeneous computation nodes. A list of the configured nodes and partitions on the cluster can be obtained using slurm's sinfo command.
Because xmaris features different CPU types that understand different instruction sets (see here), each slurm node is characterised by a list of
Features that, among other things, describe the type of CPUs mounted in that node.
In the example below you can see that maris077's CPUs belong to the
broadwell family. To request allocation of specific features, please see below.
sinfo -o " %n %P %t %C %z %m %f" -N -n maris077
HOSTNAMES PARTITION STATE CPUS(A/I/O/T) S:C:T MEMORY AVAIL_FEATURES
maris077 compIntel mix 10/86/0/96 4:12:2 512000 broadwell,10Gb,R830,highmem
xmaris aims to offer a stable computational environment to its users in the period Dec 2019 – Jan 2024. Within this period, the OS might be patched only with important security updates. After January 2024 all working xmaris nodes will be re-provisioned from scratch with a newer version of CentOS. At that time all scratch data disks will be reformatted.
All compute nodes have at least access to the following data partitions
An extra scratch space is available to all nodes which features infiniband cards
iSER stands for “iSCSI Extensions for RDMA”. It is an extension of the iSCSI protocol that includes RDMA (Remote Direct Memory Access) support.
Backup snapshots of
/home are taken hourly, daily, and weekly and stored in
xmaris users are strongly advised to delete their data (or at least move it to the shared data disk) from the compute nodes' scratch disks upon completion of their calculations. All data on the scratch disks may be deleted without prior notice.
The home disk
/home has a 10GB/user quota, whereas
/marisdata has a 2TB/user quota. Note that these policies might change at any time at the discretion of the cluster owners.
/clusterdata is deliberately made unavailable on xmaris, because it is no longer maintained. If you have any data on it, it is your responsibility to create backups. All data on
/clusterdata will get permanently lost in case of hardware failure.
All data on the scratch partitions is assumed to be temporary and will be deleted upon a node re-installation.
maris' homes are different from IL workstations' homes.
Usage policies are updated regularly in accordance with the needs of the cluster owners and may change at any time without notice. At the moment there is an enforced usage limit of 128 CPUs per user that does not apply to the owners. Job execution priorities are defined via a complex multi-factor algorithm whose parameters can be queried via
scontrol show config | grep -i priority
Xmaris usage is regularly monitored to prevent resource abuse.
To monitor live usage of xmaris you can either
The link above is accessible only within the IL workstations network.
Only from within the IL workstations subnet
Once you have been authorised to use xmaris, you have two ways to access its services:
Terminal access is provided via login to xmaris' headnode reachable at
marishead.lorentz.leidenuniv.nl. For connections from outside the IL network, an ssh tunnel into the IL ssh server is needed.
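Such a tunnel can be sketched as follows, assuming ssh.lorentz.leidenuniv.nl is the IL ssh server (as used elsewhere on this page) and an OpenSSH client version 7.3 or newer for the -J option:

```shell
# Hop through the IL ssh server to reach the xmaris headnode;
# replace <your_IL_username> with your own account name.
ssh -J <your_IL_username>@ssh.lorentz.leidenuniv.nl \
    <your_IL_username>@marishead.lorentz.leidenuniv.nl
```

On older clients without -J, the same effect can be obtained with -o ProxyCommand or by first logging in to the IL ssh server and from there to marishead.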
If you were a maris user before the configuration switch to xmaris, you may find that many terminal functions and programs do not work as expected. This is due to old shell initialisation scripts in your maris home directory that are still tied to the STRW sfinx environment. You can override them (after making a backup copy) by replacing their contents with the default CentOS shell initialisation scripts; for bash, for instance, these are located in
xmaris services, that is the terminal, scheduler/resource manager, jupyter notebooks, and monitoring facilities, can easily be accessed via a browser, without additional plugins, by navigating to xmaris OpenOnDemand. As with standard terminal access, xmaris OpenOnDemand is available only for connections from within the IL subnetwork. IL users who wish to access OpenOnDemand from home can instruct their browser to connect via a SOCKS proxy; for instance, open a local terminal and type
ssh -ND 7777 <your_IL_username>@ssh.lorentz.leidenuniv.nl
then in your browser settings find the tab relative to the connection type and instruct the browser to use the SOCKS proxy at
localhost:7777 to connect to the internet.
xmaris OnDemand allows you to
Currently only maris075 features GPUs. They are two
NVIDIA Tesla P100 16GB cards (cuda-compute-capability = 6.0). To request them you must use the --gres
option, for instance srun -p gpuIntel --gres=gpu:1 --pty bash -i
===== xmaris scientific software =====
xmaris uses EasyBuild to provide a build environment for its (scientific) software. Pre-installed software can be explored by means of the module spider
command. For instance, you can query the system for all modules whose name starts with 'mpi' by executing module -r spider '^mpi'. Installed software includes
|GCC| GNU Compiler Collection|
|OpenBLAS | Basic Linear Algebra Subprograms|
|LAPACK |Linear Algebra PACKage |
|ScaLAPACK | Scalable Linear Algebra PACKage|
|CUDA | Compute Unified Device Architecture|
|FFTW | Fastest Fourier Transform in the West|
|EasyBuild | Software Build and Installation Framework |
|GSL | GNU Scientific Library|
|HDF5 | Management of Extremely Large and Complex Data Collections|
|git | Distributed Version Control System |
|Java | General-Purpose Programming Language |
|Miniconda | Free Minimal Installer for Conda |
|OpenMPI | Open Source Message Passing Interface Implementation |
|Python | Programming Language |
|PyCUDA | Python wrapper to CUDA |
|Perl | Programming Language |
|R | R is a Language and Environment for Statistical Computing and Graphics |
|Singularity | Containers software |
| Tensorflow | Machine Learning Platform |
|plc | The Planck Likelihood Code |
|cobaya | A code for Bayesian analysis in Cosmology |
|Clang | C language family frontend for LLVM |
|Octave | GNU Programming language for scientific computing |
| Mathematica* | Technical computing system |
* Usage of proprietary software is discouraged.
For an up-to-date list of installed software use the module avail command.
Any pre-installed software can be loaded by means of the module load command.
Sometimes it is useful to save a list of modules you use often in a `collection'. Consider the following example
module load mod1 mod2 mod3 mod4 mod5 mod6
module save collection1
module restore collection1
# list collections
module savelist
==== Installing extra software ====
- Request it via https://helpdesk.lorentz.leidenuniv.nl
- Install it yourself
* via EasyBuild (see instructions below on how to setup your EasyBuild environment)
* using your own method
You do not have administrative rights to the cluster.
=== Installing software via EasyBuild ===
Load the EasyBuild module and define a directory in which to store the software you build with EasyBuild
module avail EasyBuild
module load EasyBuild
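The build directory can be set through EasyBuild's own environment variables; a minimal sketch, where the path under /marisdata is illustrative and should be adapted to your situation:

```shell
# Store user-built software, modules, and sources under your own directory
export EASYBUILD_PREFIX=/marisdata/$USER/easybuild
# Compile generically so the binaries run on all xmaris CPU types (see NOTE 1)
export EASYBUILD_OPTARCH=GENERIC
```

Adding these lines to your shell initialisation script makes the settings persistent across sessions.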
NOTE 1: The environment variable EASYBUILD_OPTARCH, when set to GENERIC,
instructs EasyBuild to compile software in a generic way so that it can be used on different CPUs. This is rather convenient in heterogeneous clusters such as xmaris to avoid recompiling the same software on different compute nodes. This convenience of course comes at a cost: the executables so produced will not be as efficient as executables optimised for a given CPU. For more info read here.
NOTE 2: When compiling OpenBLAS it is not sufficient to define EASYBUILD_OPTARCH
to achieve portability of the executables. Some extra steps must be taken as described in https://github.com/easybuilders/easybuild/blob/master/docs/Controlling_compiler_optimization_flags.rst. A list of targets supported by OpenBLAS can be found here.
Search a software to build, build it and make it available to your environment
eb -S ^Miniconda
eb Miniconda2-4.3.21.eb -r
module use /marisdata/<uname>/easybuild/modules/all
module use <path> will prepend <path> to your MODULEPATH. Should you want to append it instead, add the option -a. To remove <path> from MODULEPATH, execute module unuse <path>
Should you want to customise the building process of a given software please read how to implement EasyBlocks and write EasyConfig files or
contact Leonardo Lenoci (HL409b) for a quick tutorial.
===== How to run a computation =====
xmaris runs the slurm scheduler and resource manager. Computation jobs can be submitted as batch jobs or be run interactively via slurm. Any other jobs will be terminated without prior notice.
==== Batch jobs ====
To submit a batch job to slurm you must first create a shell script which contains enough instructions to request the needed resources from slurm and to execute your program. The script can be written in any interpreter known to the system. In a batch script, slurm instructions are prefixed by the interpreter comment symbol and the word SBATCH
For instance a bash batch script could be
Please consult the slurm manual for all possible options. Batch scripts are then submitted for execution via sbatch
sbatch: Submitted batch job 738279474774293
and their status [PENDING|RUNNING|FAILED|COMPLETED] checked using squeue
. You can resort to the command sstat
to display useful information about your running job, such as memory consumption etc…
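For example, a sketch of an sstat query for a running job, where <jobid> is the id reported by sbatch or squeue:

```shell
# show peak memory (MaxRSS) and average CPU time of the running job's tasks
sstat -j <jobid> --format=JobID,MaxRSS,AveCPU
```
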
ssh shell access to an executing node is automatically granted by slurm and can also be used for debugging purposes.
==== Interactive jobs ====
Interactive jobs give you a shell prompt on a compute node
srun --pty bash -i
===== Parallelism 101 =====
Inexpert users should refrain from attempting to write parallel applications without appropriate study.
A parallel job runs a calculation whose computational subtasks are run simultaneously. The underlying principle is that
large computations can be more efficient if divided into smaller ones. Note, however, that parallelism
can in fact decrease the efficiency of a poorly written code in which communication and synchronisation between the different subtasks are not handled properly.
Parallelism is usually achieved either by
* multithreading (shared memory) on multiple cores of a single node
* multiprocessing (distributed memory) on multiple nodes
==== Multithreading ====
In multithreaded programming all computation subtasks (threads
) exist within the context of a single process and share the process's resources. Threads execute independently and are assigned by the operating system to multiple CPU cores and/or multiple CPUs, effectively speeding up your calculation.
Multithreading can be achieved using libraries such as pthread
==== Multiprocessing ====
Multiprocessing usually refers to computations subdivided into tasks that run on multiple nodes. This type of programming increases the resources available to your computation (e.g. more memory) by employing several nodes at the same time. MPI (Message Passing Interface)
defines the standard (in terms of syntax and rules) for implementing multiprocessing in your codes.
MPI-enabled applications spawn multiple copies of the program, also called ranks, mapping each one of them to a processor. A computation node has usually multiple processors. The MPI interface lets you manage the allocated resources and the communication and synchronisation of the ranks
It is easy to imagine how inefficient a poorly written MPI application can be, or an MPI application running on a cluster with slow node interconnects.
==== Hybrid Applications ====
This term refers to applications that use simultaneously multiprocessing (MPI) and multithreading (OpenMP).
===== How to launch a jupyter notebook =====
To launch a jupyter notebook, log in to xmaris OnDemand, select Interactive Apps -> Jupyter Notebook
and specify the resources needed in the form provided, push Launch
and wait until the notebook has launched.
Now you can interact with your notebook (click on Connect to Jupyter
), open a shell on the executing node (click on Host >_hostname
), and analyse notebook log files for debugging purposes (click on Session ID xxxx-xxxx-xxxxx-xxxxxxx-xxx-xx
If your notebook does not launch within a few seconds, take the following actions
* Check the status of your jobs in the queue with squeue -u <username>
* Examine the notebook log files (click on Session ID xxxx-xxxx-xxxxx-xxxxxxx-xxx-xx
* If the suggestions above do not help, contact support
==== How to launch a jupyter notebook that uses GPUs ====
Repeat the steps above but make sure you select an appropriate GPU partition. Moreover, you must add an appropriate CUDA module to the field Extra modules needed
otherwise the connection to the GPUs might not work as expected. For a list of cuda modules you could type in a terminal ml spider CUDA
The form field Extra modules needed
can accept more than one module as long as the module names are separated by a space.
NOTE1: If you want your notebook directory to be different than $HOME, please do export NOTEBOOKDIR=/marisdata/$LOGNAME
in your .bashrc
NOTE2: Any form fields left empty will assume pre-programmed values. For instance, you do not need to
specify your slurm account because it will default to the account of your PI.
==== Launching jupyterlab instead of jupyter notebook ====
If you prefer the newest jupyter notebook features and interface, that is jupyterlab
, just proceed as above and after clicking on Connect to Jupyter
replace the string tree with the string lab in the URL bar of your browser. For instance, a typical jupyterlab interface URL could look like this
==== Custom jupyter kernels ====
A jupyter notebook kernel defines the notebook interpreter, such as python, R, matlab, etc…
It is possible to define custom kernels, for instance for a particular version of python including additional packages. The example below shows how to install a python v3 kernel containing some additional python packages that you will then be able to use in your notebooks.
=== Create a python v3.5 kernel with additional numpy pkg ===
You can list all already available kernels
jupyter kernelspec list
then proceed to create a new one, for instance
module load Miniconda3/4.7.10
conda create --name py35 python=3.5 # default location in $HOME/.conda/envs/py35
source activate py35
conda install ipykernel
ipython kernel install --name=py35kernel --user
Installed kernelspec py35kernel in $HOME/.local/share/jupyter/kernels/py35kernel
conda install h5py
source deactivate py35
Note that conda
is a full package manager and environment management system and as such it might perform poorly in large environments.
Launch a jupyter notebook as described above and select the newly created py35kernel as shown in the figure below
numpy will also be available, since it is installed as a dependency of h5py
Should you not need a conda environment anymore, please do not forget to clean up from time to time
# first remove the kernel
source activate py35
jupyter kernelspec list
jupyter kernelspec uninstall py35
source deactivate py35
# then delete the kernel environment
conda env remove --name py35
=== Install a mathematica (wolfram) kernel ===
==== Debugging jupyter lab/notebook sessions ====
xmaris OpenOnDemand writes jupyter sessions logs to subdirectories located in $HOME/ondemand/data/sys/dashboard/batch_connect/sys/jupyter/output/
. Before contacting the helpdesk
you are advised to analyse the contents of these subdirectories (one for each session), in particular the files called output.log
Do not forget to clean up old session logs from time to time to avoid going over your allocated quota.
===== xmaris slurm tips =====
This is not a slurm manual, you should always refer to the official documentation (see link below).
xmaris runs the scheduler and resource manager slurm v18.08.6-2. Please consult the official manual for detailed information.
The headnode (marishead) is not a compute node. Any user applications running on it will be terminated without notice.
Here we report a few useful commands and their outputs to get you started. For the impatient, have a look at the following slurm batch-script generator (NO RESPONSIBILITY assumed! Study the script before submitting it.), which is available only from within UL IPs.
==== Determine your slurm account name ====
sacctmgr show users <username>
==== Display detailed information about a node ====
sinfo -o " %n %P %t %C %z %m %f %l %L" -N -n maris077
==== Display detailed information about all nodes ====
sinfo -o " %n %P %t %C %z %m %f %G %l %L" -N
==== Launch an interactive session on a node ====
srun -w maris047 --pty bash -i
==== Display status of your jobs ====
squeue -u <username>
==== Interactive use of GPUs ====
srun -p gpuIntel --gres=gpu:1 --pty bash -i
==== Request nodes with particular features ====
srun --constraint="opteron&highmem" --pty bash -i
==== Run a multiprocessing application ====
srun -p ibIntel -N 4 -n 4 -c 1 --mem=16000 --pty bash -i
module load foss
ulimit -l unlimited
mpirun --mca btl openib,vader,self <YOUR_MPI_APP>
===== Suggested readings =====
===== Requesting help =====
Please use this helpdesk form or email support.