xmaris is a small computational cluster at the Lorentz Institute financed by external research grants. Access is primarily for those research groups who have purchased the machines, but there may well be computing time available for others. If you would like to use xmaris please get in touch with
to see what resources can be made available for your needs. You can then request access by sending an email to support@lorentz.
Lorentz Institute guests are encouraged to explore other HPC possibilities, such as the ALICE HPC cluster of the University of Leiden.
xmaris runs CentOS v7.6 and consists of heterogeneous computation nodes. A list of configured nodes and partitions on the cluster can be obtained using slurm
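For instance, a quick overview of all partitions and a node-oriented listing can be obtained with the stock sinfo options (a minimal sketch; the detailed, feature-aware query is shown further below):

sinfo
sinfo -N -l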
Because xmaris features different CPU types that understand different instruction sets (see here), each slurm node is characterised by a list of Features that, among other things, describe the type of CPUs mounted in that node:
sinfo -o " %n %P %t %C %z %m %f %l %L" -N -n maris077
HOSTNAMES PARTITION STATE CPUS(A/I/O/T) S:C:T MEMORY AVAIL_FEATURES TIMELIMIT DEFAULTTIME
maris077 compIntel idle 0/96/0/96 4:12:2 512000 broadwell,10Gb,R830 infinite 1:00:00
In the example above, maris077's CPUs belong to the broadwell family. To request allocation of specific features, please see below.
xmaris aims to offer a stable computational environment to its users in the period Dec 2019 – Jan 2024. Within this period, the OS might be patched only with important security updates. After January 2024 all working xmaris nodes will be re-provisioned from scratch with a newer version of CentOS.
All compute nodes have access to at least the following data partitions
Extra scratch space
iSER stands for “iSCSI Extensions for RDMA”. It is an extension of the iSCSI protocol that includes RDMA (Remote Direct Memory Access) support.
Backup snapshots of the /home directory are taken hourly, daily, and weekly and stored in
xmaris users are strongly advised to delete their data, if any, from the compute nodes' scratch disks (or at least move it to the shared data disk) upon completion of their calculations. All data on the scratch disks might be deleted without prior notice.
10 GB per-user quotas are enforced on the home disk. Please use /marisdata to temporarily store large data files. Note also that these policies might change at any time at the discretion of the cluster owners.
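To check how close you are to the limit you can, for instance, inspect the size of your home directory; du is always available, while the standard quota tool only reports something if user quotas are exported to it, which is an assumption here:

du -sh $HOME
quota -s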
/clusterdata is deliberately not made available on xmaris, because it is no longer maintained. If you have any data on it, it is your responsibility to create backups. All data on /clusterdata will be permanently lost in case of hardware failure.
All data on the scratch partitions is assumed to be temporary and will be deleted upon a node re-installation.
maris' homes are different from IL workstations' homes.
Usage policies are updated regularly in accordance with the needs of the cluster owners and may change without notice at any time. At the moment there is an enforced usage limit of 128 CPUs per user that does not apply to the owners. Cluster usage is regularly monitored to prevent resource abuse.
To monitor live usage of xmaris you can either
The link above is accessible only within the IL workstations network.
Once you have been authorised to use xmaris, you have two ways to access its services:
Terminal access is provided via login to xmaris' headnode, reachable at marishead.lorentz.leidenuniv.nl. For connections from outside the IL network, an ssh tunnel through the IL ssh server is needed.
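For instance, assuming a reasonably recent OpenSSH client on your machine, you can hop through the IL ssh server in a single command using the -J (jump host) option (the server name is taken from the SOCKS proxy example further below):

ssh -J <your_IL_username>@ssh.lorentz.leidenuniv.nl <your_IL_username>@marishead.lorentz.leidenuniv.nl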
If you were a maris user prior to the configuration switch to xmaris, you will find that many terminal functions and programs do not work as expected. This is due to old shell initialisation scripts in your maris home directory that are still tied to the STRW sfinx environment. You can override them (after making a backup copy) by replacing their contents with the default CentOS shell initialisation scripts, for instance for bash these are located in
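As a sketch, assuming the defaults live in /etc/skel as on a stock CentOS installation, resetting your bash initialisation files could look like this (back up first!):

cp ~/.bashrc ~/.bashrc.sfinx.bak
cp ~/.bash_profile ~/.bash_profile.sfinx.bak
cp /etc/skel/.bashrc /etc/skel/.bash_profile ~/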
xmaris services, that is terminal, scheduler/resource manager, jupyter notebooks, and monitoring facilities, can easily be accessed via a browser, without the need of additional plugins, by navigating to xmaris OpenOnDemand. Similarly to standard terminal access, xmaris OpenOnDemand is available only for connections from within the IL subnetwork. IL users who wish to access OpenOnDemand from home can instruct their browser to connect via a SOCKS proxy: for instance, open a local terminal and type
ssh -ND 7777 <your_IL_username>@ssh.lorentz.leidenuniv.nl
then, in your browser settings, find the tab relative to the connection type and instruct the browser to use the SOCKS proxy at localhost:7777 to connect to the internet.
xmaris OnDemand allows you to
Currently only maris075 features GPUs: two NVIDIA Tesla P100 16GB cards. In order to request them you must use the --gres option, for instance

srun -p gpuIntel --gres=gpu:1 --pty bash -i
xmaris uses EasyBuild to provide a build environment for its (scientific) software. Pre-installed software can be explored by means of the module spider command. For instance, you can query the system for all modules whose names start with 'mpi' by executing

module -r spider '^mpi'

Installed software includes
|GCC||GNU Compiler Collection|
|OpenBLAS||Basic Linear Algebra Subprograms|
|LAPACK||Linear Algebra PACKage|
|ScaLAPACK||Scalable Linear Algebra PACKage|
|CUDA||Compute Unified Device Architecture|
|FFTW||Fastest Fourier Transform in the West|
|EasyBuild||Software Build and Installation Framework|
|GSL||GNU Scientific Library|
|HDF5||Management of Extremely Large and Complex Data Collections|
|git||Distributed Version Control System|
|Java||General-Purpose Programming Language|
|Miniconda||Free Minimal Installer for Conda|
|OpenMPI||Open Source Message Passing Interface Implementation|
|R||R is a Language and Environment for Statistical Computing and Graphics|
|plc||The Planck Likelihood Code|
|Clang||C language family frontend for LLVM|
|Mathematica*||Technical computing system|
* Usage is discouraged because it is proprietary.
For an up-to-date list of installed software use the
Any pre-installed software can be sourced by means of the module load command.
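For example, to make the GNU Scientific Library from the table above available in your shell (loading without a version picks the default; run module spider GSL first to see which versions are actually installed):

module spider GSL
module load GSL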
Sometimes it is useful to save a list of modules you use often in a 'collection'. Consider the following example

module load mod1 mod2 mod3 mod4 mod5 mod6
module save collection1
module restore collection1
# list collections
module savelist
You do not have administrative rights to the cluster.
Load the EasyBuild module and define a directory in which to store your EasyBuild-built software

module avail EasyBuild
module load EasyBuild
mkdir /marisdata/<uname>/easybuild
export EASYBUILD_PREFIX=/marisdata/<uname>/easybuild
export EASYBUILD_OPTARCH=GENERIC
NOTE 1: The environment variable EASYBUILD_OPTARCH instructs EasyBuild to compile software in a generic way so that it can run on different CPU types. This is rather convenient on heterogeneous clusters such as xmaris because it avoids recompiling the same software on different compute nodes. This convenience of course comes at a cost: the executables so produced will not be as efficient as ones optimised for a given CPU. For more info read here.
NOTE 2: When compiling OpenBLAS it is not sufficient to set EASYBUILD_OPTARCH to GENERIC to achieve portability of the executables. Some extra steps must be taken, as described in https://github.com/easybuilders/easybuild/blob/master/docs/Controlling_compiler_optimization_flags.rst. A list of targets supported by OpenBLAS can be found here.
Search for a software package to build, build it, and make it available to your environment

eb -S ^Miniconda
eb Miniconda2-4.3.21.eb -r
module use /marisdata/<uname>/easybuild/modules/all
module use <path> will prepend <path> to your MODULEPATH. Should you want to append it instead, add the option -a. To remove <path> from your MODULEPATH, use module unuse <path>.
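For instance, with the EasyBuild prefix used above:

module use -a /marisdata/<uname>/easybuild/modules/all   # append instead of prepend
module unuse /marisdata/<uname>/easybuild/modules/all    # remove the path again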
xmaris runs the slurm scheduler and resource manager. Computation jobs can be submitted as batch jobs or run interactively. Jobs run in any other way might be terminated without prior notice.
To submit a batch job to slurm you must first create a shell script which contains the instructions to request the needed resources from slurm and to execute your program. The script can be written for any interpreter known to the system. In a batch script, slurm instructions are prefixed by the interpreter's comment symbol and the word SBATCH.
For instance a bash batch script could be

cat test.sh
#!/bin/env bash
#SBATCH --job-name=super
#SBATCH --ntasks=1
#SBATCH --mem=1000
srun hostname
Please consult the slurm manual for all possible options. Batch scripts are then submitted for execution via

sbatch test.sh
sbatch: Submitted batch job 738279474774293
and their status [PENDING|RUNNING|FAILED|COMPLETED] can be checked using squeue. You can use the command sstat to display useful information about your running jobs, such as memory consumption etc.
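For instance, for a running job you could query a few common fields (these format keys are standard sstat fields; replace <jobid> with your job's id):

sstat -j <jobid> --format=JobID,AveCPU,MaxRSS,MaxVMSize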
ssh shell access to an executing node is automatically granted by slurm and can also be used for debugging purposes.
Interactive jobs give you a shell prompt on a compute node

srun --pty bash -i
Inexpert users should refrain from attempting to program parallel applications without studying appropriately.
A parallel job runs a calculation whose computational subtasks run simultaneously. The underlying principle is that large computations can be more efficient if divided into smaller ones. Note, however, that parallelism can in fact decrease the efficiency of poorly written code in which communication and synchronisation between the different subtasks are not handled properly.
Parallelism is usually achieved either by multithreading or by multiprocessing.
In multithreaded programming all computational subtasks (threads) exist within the context of a single process and share the process' resources. Threads are able to execute independently and are assigned by the operating system to multiple CPU cores and/or multiple CPUs, effectively speeding up your calculation.
Multithreading can be achieved using libraries such as OpenMP; a batch script for such a job is sketched below.
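A minimal sketch of a slurm batch script for a multithreaded (e.g. OpenMP) job, requesting several CPUs for a single task; the program name is hypothetical:

#!/bin/env bash
#SBATCH --job-name=omp-job
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=4000
# tell the OpenMP runtime to use as many threads as CPUs were allocated
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./my_openmp_program   # hypothetical executable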
Multiprocessing usually refers to computations subdivided into tasks that run on multiple nodes. This type of programming increases the resources available to your computation (e.g. more memory) by employing several nodes at the same time.
MPI (Message Passing Interface) defines the standards (in terms of syntax and rules) to implement multiprocessing in your codes.
MPI-enabled applications spawn multiple copies of the program, also called ranks, mapping each one of them to a processor. A computation node usually has multiple processors. The MPI interface lets you manage the allocated resources as well as the communication and synchronisation of the ranks.
It is easy to imagine how inefficient a poorly written MPI application can be, or an MPI application running on a cluster with slow node interconnects.
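As a sketch, a batch script for an MPI job could look like the following; the executable name is hypothetical, and the OpenMPI module (listed in the software table above) is loaded without a version, which picks the default:

#!/bin/env bash
#SBATCH --job-name=mpi-job
#SBATCH --ntasks=16
#SBATCH --mem-per-cpu=1000
module load OpenMPI
# srun launches one copy (rank) of the program per task
srun ./my_mpi_program   # hypothetical executable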
This term refers to applications that simultaneously use multiprocessing (MPI) and multithreading (OpenMP).
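A hybrid job then combines both request types; again a sketch with hypothetical names:

#!/bin/env bash
#SBATCH --job-name=hybrid-job
#SBATCH --ntasks=4           # MPI ranks
#SBATCH --cpus-per-task=6    # OpenMP threads per rank
module load OpenMPI
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./my_hybrid_program   # hypothetical executable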
To launch a jupyter notebook, login to xmaris OnDemand, select Interactive Apps -> Jupyter Notebook, and specify the resources needed as in the figure below
Press Launch and wait until the notebook is running (green colour, see below)
Now you can interact with your notebook (click on Connect to Jupyter), open a shell on the executing node (click on Host >_hostname), and analyse notebook log files for debugging purposes (click on Session ID xxxx-xxxx-xxxxx-xxxxxxx-xxx-xx).
If you want your notebook directory to be different from $HOME, add export NOTEBOOKDIR=/marisdata/$LOGNAME to your .bashrc
If you prefer the newest jupyter notebook features and interface, that is jupyterlab, just proceed as above and, after clicking on Connect to Jupyter, replace the string tree with the string lab in the URL bar of your browser. For instance, a typical jupyterlab interface URL could look like this
A jupyter notebook kernel defines the notebook interpreter, such as python, R, matlab, etc…
It is possible to define custom kernels, for instance for a particular version of python including additional packages. The example below shows how to install a python v3 kernel containing some additional python packages that you will then be able to use in your notebooks.
You can list all already available kernels
jupyter kernelspec list
then proceed to create a new one, for instance
module load Miniconda3/4.7.10
conda create --name py35 python=3.5   # created by default in $HOME/.conda/envs/py35
source activate py35
conda install ipykernel
ipython kernel install --name=py35kernel --user
Installed kernelspec py35kernel in $HOME/.local/share/jupyter/kernels/py35kernel
conda install h5py
source deactivate
conda is a full package manager and environment management system and as such it might perform poorly in large environments.
Launch a jupyter notebook as described above and select the newly created py35kernel as shown in the figure below. numpy will also be available, since it is installed as a dependency of h5py.
Should you not need a conda environment anymore, please do not forget to clean up from time to time
# first remove the kernel
source activate py35
jupyter kernelspec list
jupyter kernelspec uninstall py35kernel
source deactivate
# then delete the kernel environment
conda env remove --name py35
xmaris OpenOnDemand writes jupyter session logs to subdirectories located in $HOME/ondemand/data/sys/dashboard/batch_connect/sys/jupyter/output/. Before contacting the helpdesk, you are advised to analyse the contents of these subdirectories (one for each session), in particular the files called
Do not forget to clean up old session logs from time to time to avoid going over your allocated quota.
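For instance, to see how much space the session logs take and to remove the logs of a single session (the session id here is a placeholder):

du -sh $HOME/ondemand/data/sys/dashboard/batch_connect/sys/jupyter/output/
rm -r $HOME/ondemand/data/sys/dashboard/batch_connect/sys/jupyter/output/<session_id>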
This is not a slurm manual; you should always refer to the official documentation (see link below).
xmaris runs the scheduler and resource manager slurm v18.08.6-2. Please consult the official manual for detailed information.
The headnode (marishead) is not a compute node. Any user applications running on it will be terminated without notice.
Here we report a few useful commands and their outputs to get you started.
# show your slurm account settings
sacctmgr show users <username>
# inspect a specific node (partition, state, CPUs, memory, features)
sinfo -o " %n %P %t %C %z %m %f %l %L" -N -n maris077
# overview of all nodes, including generic resources such as GPUs
sinfo -o " %n %P %t %C %z %m %f %G %l %L" -N
# interactive shell on a specific node
srun -w maris047 --pty bash -i
# list your queued and running jobs
squeue -u <username>
# interactive shell on a GPU node with one GPU allocated
srun -p gpuIntel --gres=gpu:1 --pty bash -i
# interactive shell on a node with specific features
srun --constraint="opteron&highmem" --pty bash -i
Please use this helpdesk form or email