====== Xmaris ======
Xmaris is a small computational cluster at the [[https://www.lorentz.leidenuniv.nl|Lorentz Institute]]
financed by external research grants. As such, its access is granted **primarily** to the
research groups who have been awarded the grants. Other research groups wishing to use xmaris can enquire whether there is any left-over computing time by getting in touch with either
:!: Research groups external to the Lorentz Institute are strongly encouraged to explore other HPC possibilities, such as the [[https://wiki.alice.universiteitleiden.nl/index.php?title=ALICE_User_Documentation_Wiki|ALICE HPC cluster]] of Leiden University.
  
Xmaris is optimised for [[https://en.wikipedia.org/wiki/Thread_(computing)#Multithreading|multithreading applications]] and [[https://en.wikipedia.org/wiki/Embarrassingly_parallel|embarrassingly parallel problems]], but there have been some recent investments to improve inter-node communication and so enable [[institute_lorentz:xmaris#parallelism_101|multiprocessing]]. Currently, multiprocessing is possible on the nodes of the ''ibIntel'' partition, which are interconnected via an **InfiniBand EDR** switch. Each one of these nodes is capable of a practical __9.6 TFLOPS__.
  
  
Xmaris is the successor of the maris cluster, renamed with the prefix ''x'' because its node deployment is automated using the [[https://www.xcat.org/|xCAT]] software. Less formally, the presence of the ''x'' prefix also suggests the time of the year when xmaris was first made available to IL users, that is Christmas (Xmas).
  
[[https://www.gnu.org/|{{https://www.gnu.org/graphics/heckert_gnu.transp.small.png?50 }}]][[https://wiki.centos.org/|{{https://wiki.centos.org/ArtWork/Brand/Logo?action=AttachFile&do=get&target=centos-logo-light.png?200 }}]] [[https://openondemand.org/|{{https://www.osc.edu/sites/default/files/OpenOnDemand_horiz_RGB.png?200  }}]] [[https://slurm.schedmd.com|{{https://slurm.schedmd.com/slurm_logo.png?60  }}]] [[https://easybuild.readthedocs.io/en/latest/|{{https://docs.easybuild.io/img/easybuild_logo.png?100  }}]]
===== Xmaris features and expected cluster lifetime =====
  
Xmaris runs CentOS v7 and, for historical reasons, consists of heterogeneous computation nodes. A list of configured nodes and partitions on the cluster can be obtained on the command line using slurm's ''sinfo''.
  
:!: Because Xmaris features different CPU types that understand different sets of instructions (see [[https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html|here]]), we have associated with each computation node a list of slurm ''Features'' that also describe the type of CPUs mounted in that node. To request allocation of specific features from the resource manager, see [[institute_lorentz:xmaris#request_nodes_with_particular_features|this example]].
  
You can display the specs and features of a single node, or of all nodes, with ''sinfo''
  
<code bash>
# specific node
sinfo -o " %n  %P %t %C %z %m %f" -N -n maris077
# all nodes
sinfo -o " %n  %P %t %C %z %m %f" -N
# all nodes (more concise)
sinfo -Nel
</code>
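For instance, to request a node that advertises a particular feature you can use slurm's ''--constraint'' option. A minimal sketch; ''broadwell'' is just an example feature name, check the ''AVAIL_FEATURES'' column of the ''sinfo'' output for the names actually configured on xmaris

<code bash>
# ask for an interactive shell on any node advertising the "broadwell" feature
srun --constraint=broadwell --pty bash -i
</code>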
  
All compute nodes have access to **//at least//** the following data partitions
  
^Mount Point^ Type ^Notes^
|/scratch | HD | **temporary**, local to each node|
|/marisdata | NetApp | 2TB/user quota, medium-term storage, remote|
|/home | NetApp | 10GB/user quota, medium-term storage, remote|
|/ilZone/home | [[institute_lorentz:irods_fair_storage|iRODS]]| 20GB/user quota, archive storage, remote|
  
Extra efficient scratch spaces are available to all nodes on the InfiniBand network (''ibIntel'')

^Mount Point^ Type^ Notes^
|/IBSSD| SSD |**DISCONTINUED**, InfiniBand/iSER((iSER stands for “iSCSI Extensions for RDMA”. It is an extension of the iSCSI protocol that includes RDMA (Remote Direct Memory Access) support. BeeGFS is a parallel filesystem. IBSSD will be discontinued by the end of 2022 in favour of PIBSSD.))|
|/PIBSSD| SSD|**temporary**, InfiniBand/BeeGFS|
  
  
Xmaris users are strongly advised to delete their data from the compute nodes' scratch disks (or at least move it to the shared data disk) upon completion of their calculations. All data on the scratch disks __might be deleted without prior notice__.
  
Note that **disk policies might change at any time at the discretion of the cluster owners**.
  
  
----
  
Please also note the following:
  * xmaris' home disk is different from your [[institute_lorentz:gnulinux_workstations|IL workstation]] or [[institute_lorentz:remote_workspace|remote workspace]] home disk.
  * The **OLD** (as in the old maris) ''/clusterdata'' is deliberately made unavailable on xmaris, because it is **no longer** maintained. If you have any data on it, **it is your responsibility** to create backups. All data on ''/clusterdata'' will get permanently lost in case of hardware failure.
    
==== Xmaris usage policies ====
  
Usage policies are updated regularly in accordance with the needs of the cluster owners and **may change at any time without notice**. At the moment there is an __enforced usage limit of 128 CPUs per user__ that does not apply to the owners. Job execution priorities are defined via a complex [[https://slurm.schedmd.com/archive/slurm-21.08.8-2/priority_multifactor.html|multi-factor algorithm]] whose parameters can be displayed on the command line via
<code bash>
scontrol show config | grep -i priority
</code>
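To see how these factors combine into the priority of your own pending jobs, slurm's ''sprio'' can be used. A sketch; the columns shown depend on the priority weights configured on the cluster

<code bash>
# long listing of the priority factors of your pending jobs
sprio -l -u $USER
</code>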
  
  * execute slurm's ''sinfo'' (this requires shell access to the cluster, see below)
  * browse to https://xmaris.lorentz.leidenuniv.nl/ganglia from any IL workstation
  
===== How to access Xmaris =====
Access to Xmaris is not granted automatically to all Lorentz Institute members. Instead, a preliminary approval must be granted to you by the cluster owners (read [[|here]]).
Once you have been authorised to use Xmaris, there are two ways to access its services:
  
  - using a web browser (**strongly advised** for the novice HPC user)
  - using an SSH client (expert users)

Both methods can provide terminal access, but connections via web browsers offer you extra services such as sftp (drag-and-drop file transfers), jupyter interactive notebooks, virtual desktops and more at the click of your mouse. **We advise** all users either unfamiliar with the GNU/Linux terminal or new to HPC to use a web browser to interact with Xmaris.
  
==== Access via an ssh client ====
  
The procedure differs depending on whether the client you connect from is inside the IL intranet or not.
  
  * When within the IL network, for instance if you are using a Lorentz Institute workstation, you have direct access to Xmaris. Open a terminal and type the command below (substitute username with your own IL username)
  
-<code> +<code bash
-ssh xmaris.lorentz.leidenuniv.nl -l <your-IL-username>+ssh xmaris.lorentz.leidenuniv.nl -l username
 </code> </code>
    
  * When outside the IL network, for instance from home or using a wireless connection, you must connect to Xmaris through our SSH server; the single command below sets up the proxy jump and opens the connection
  
-<code> +<code bash
-# First set up the tunnel in terminal window 1 +ssh -o ProxyCommand="ssh -W %h:%p username@styx.lorentz.leidenuniv.nl" username@xmaris.lorentz.leidenuniv.nl
-ssh -f <your-IL-username>@ssh.lorentz.leidenuniv.nl -L 2222:xmaris.lorentz.leidenuniv.nl:22 -N+
 </code> </code>
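For convenience you could store the jump in your ssh client configuration so that a plain ''ssh xmaris'' works from outside the IL network as well. A minimal sketch, using the same hostnames as above (substitute username with your IL username)

<code>
# ~/.ssh/config
Host xmaris
    HostName xmaris.lorentz.leidenuniv.nl
    User username
    ProxyJump username@styx.lorentz.leidenuniv.nl
</code>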
  
  
|:!: If you were a maris user prior to the configuration switch to xmaris, you might find that many terminal functions and programs do not work as expected. This is due to the presence in your xmaris home directory of old shell initialisation scripts still tied to the STRW sfinx environment. You can override them (preferably after making a backup copy) by replacing their contents with the default CentOS shell initialisation scripts; for instance, for bash these are located in ''/etc/skel/.bashrc'' and ''/etc/skel/.bash_profile''.|
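A minimal sketch of that reset for bash (the backup file names are just examples)

<code bash>
# back up the old sfinx-era init scripts first
cp ~/.bashrc ~/.bashrc.sfinx.bak
cp ~/.bash_profile ~/.bash_profile.sfinx.bak
# restore the CentOS defaults
cp /etc/skel/.bashrc /etc/skel/.bash_profile ~/
</code>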
==== Web access ====
  
Xmaris services, that is terminal, scheduler/resource manager, jupyter notebooks and monitoring facilities, can be accessed easily via a browser, without the need of additional plugins, by navigating to [[https://xmaris.lorentz.leidenuniv.nl:4433|xmaris OpenOnDemand]].
{{ :institute_lorentz:oodxmaris1.png?direct&1000 }}

Similarly to traditional shell access, Xmaris OpenOnDemand is available only for connections within the __IL intranet__. IL users who wish to access OpenOnDemand from their remote home locations could for example use the [[:vpn#lorentz_institute|IL VPN]] or instruct their browsers to SOCKS-proxy their connections via our SSH server.
Open a local terminal and type (substitute username with your IL username)

<code bash>
ssh -ND 7777 username@ssh.lorentz.leidenuniv.nl
</code>
  
then, in your browser's connection settings, instruct the browser to use the SOCKS proxy located at ''localhost:7777'' to connect to the internet. Alternatively, use the [[institute_lorentz:remote_workspace|Lorentz Institute Remote Workspace]].
  
Xmaris OnDemand allows you to
  * Submit batch jobs to the slurm scheduler/resource manager.
  * Open a terminal.
  * Launch interactive applications such as jupyter notebooks, tensorboard, virtual desktops, etc.
  * Monitor cluster usage.
  * Create and launch your very own OnDemand application (read [[https://osc.github.io/ood-documentation/master/app-development/tutorials-passenger-apps.html|here]]).
  
:!: To connect to OpenOnDemand, please do not bookmark any URL other than https://xmaris.lorentz.leidenuniv.nl:4433. Using a different URL can result in connection errors.
===== Xmaris Partitions =====

^Partition^ Number of nodes ^ Time limit ^ Notes^
|compAMD*| 6 | 15 days | |
|compAMDlong| 3 | 60 days | |
|compIntel| 2 | 5 days and 12 hours| |
|gpuIntel| 1 | 3 days and 12 hours | GPU |
|ibIntel | 8 | 7 days | InfiniBand, Multiprocessing |

*: default partition
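For example, to submit a batch job to a specific partition within its time limit (partition names as in the table above; the job script name is a placeholder)

<code bash>
# request 2 days of walltime on the ibIntel partition
sbatch -p ibIntel --time=2-00:00:00 my_job.sh
</code>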
  
===== Xmaris GPUs =====
|maris075 |gpuIntel|2 x Nvidia Tesla P100 16GB | 6.0|
  
Xmaris GPUs must be allocated using slurm's ''--gres'' option, for instance
<code bash>
srun -p gpuIntel --gres=gpu:1 --pty bash -i
</code>
===== Xmaris scientific software =====
  
|cobaya | A code for Bayesian analysis in Cosmology |
|Clang      | C language family frontend for LLVM |
|Graphviz | Graph visualization software |
|Octave     | GNU Programming language for scientific computing |
| Mathematica* | Technical computing system |
Any pre-installed software can be made available in your environment via the ''module load <module_name>'' command.
  
It is possible to save a list of modules you use often in a //module collection//, so that you can load them all with a single command
<code bash>
module load mod1 mod2 mod3 mod4 mod5 mod6
module save collection1
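# (assumed; the restore step is not shown in this excerpt)
module restore collection1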
module savelist
</code>
==== TensorFlow Notes ====
Xmaris has multiple modules that provide TensorFlow. See ''ml avail TensorFlow''.

^Module^Hardware^Partition^Additional Ops^
|TensorFlow/2.1.0-fosscuda-2019b-Python-3.7.4 | CPU, GPU |gpuIntel|TensorFlow Quantum|
|TensorFlow/1.12.0-fosscuda-2018b-Python-3.6.6 | CPU, GPU |gpuIntel| |
|TensorFlow-1.15.0-Miniconda3/4.7.10| CPU| All| |

The following example shows how you can create a tensorflow-aware jupyter notebook kernel that you can use, for instance, via the OpenOnDemand interface

<code bash>
# We use maris075 (GPU node) and load the optimised tf module
ml load TensorFlow/2.1.0-fosscuda-2019b-Python-3.7.4

# We install ipykernel, because it is necessary to run py notebooks
python -m pip install ipykernel --user

# We create a kernel called TFQuantum based on python from TensorFlow/2.1.0-fosscuda-2019b-Python-3.7.4
python -m ipykernel install --name TFQuantum --display-name "TFQuantum" --user

# We edit the kernel such that it does not execute python directly
# but via a custom wrapper script
cat $HOME/.local/share/jupyter/kernels/tfquantum/kernel.json

{
 "argv": [
  "/home/lenocil/.local/share/jupyter/kernels/tfquantum/wrapper.sh",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
 ],
 "display_name": "TFQuantum",
 "language": "python",
 "metadata": {
  "debugger": true
 }
}

# The wrapper script will call python but only after loading any
# appropriate module
cat /home/lenocil/.local/share/jupyter/kernels/tfquantum/wrapper.sh

#!/bin/env bash
ml load TensorFlow/2.1.0-fosscuda-2019b-Python-3.7.4

exec python "$@"

# DONE. tfquantum will appear in the dropdown list of kernels
# upon creating a new notebook

</code>

=== TensorFlow with Graphviz ===
<code bash>
ml load TensorFlow/2.1.0-fosscuda-2019b-Python-3.7.4
pip install --user pydot
ml load Graphviz/2.42.2-foss-2019b-Python-3.7.4
python -c "import tensorflow as tf;m = tf.keras.Model(inputs=[], outputs=[]);tf.keras.utils.plot_model(m, show_shapes=True)"
</code>

==== Installing extra software ====
  
    * via a traditional //configure/make// procedure
  
Whatever installation method you might choose, please note that you **do not have** administrative rights to the cluster.
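Without administrative rights, a //configure/make// build is typically installed under your own home directory. A generic sketch; the package name and the install prefix are placeholders

<code bash>
# unpack, build and install a package into your own userspace
tar xzf some-package.tar.gz && cd some-package
./configure --prefix=$HOME/.local
make -j "$(nproc)"
make install
</code>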
  
  
=== Installing software via EasyBuild ===

:!: See also [[:easybuild_environment|Working with EasyBuild]].
  
In order to use EasyBuild to build software, you must first set up your development environment. This is usually done by
  
  * Loading the EasyBuild module
  * Indicating a directory in which to store your EasyBuild-built software
  * Specifying EasyBuild's behaviour via EASYBUILD_* environment variables
In their simplest form, the steps outlined above can be translated into the following shell commands
  
<code bash>
  
module load EasyBuild
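# (assumed sketch -- the remainder of this block is not shown in this excerpt)
# choose where your EasyBuild-built software and modules will live
export EASYBUILD_PREFIX=$HOME/easybuild
# build portable (generic) binaries, see the note on EASYBUILD_OPTARCH below
export EASYBUILD_OPTARCH=GENERIC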
</code>
  
|:!: The environment variable ''EASYBUILD_OPTARCH'' instructs EasyBuild to compile software in a generic way so that it can be used on different CPUs. This is rather convenient in heterogeneous clusters such as xmaris because it avoids recompiling the same software on different compute nodes. This convenience comes of course at a cost: the executables so produced will not be as efficient as CPU-specific builds. For more info read [[https://easybuild.readthedocs.io/en/latest/Controlling_compiler_optimization_flags.html|here]].|
  
|:!: When compiling OpenBLAS it is not sufficient to set ''EASYBUILD_OPTARCH'' to ''GENERIC'' to achieve portability of the executables. Some extra steps must be taken as described in https://github.com/easybuilders/easybuild/blob/master/docs/Controlling_compiler_optimization_flags.rst. A list of targets supported by OpenBLAS can be found [[https://github.com/xianyi/OpenBLAS/blob/develop/TargetList.txt|here]].|
  
Then execute
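(a minimal sketch, assuming ''EASYBUILD_PREFIX'' was set to ''$HOME/easybuild'' as in the example above; adjust the path to your own setup)

<code bash>
# add your personal EasyBuild module tree to the module search path
module use $HOME/easybuild/modules/all
</code>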
to make available to the ''module'' command any of the software built in your EasyBuild userspace.
  
|:!: ''module use <path>'' will prepend <path> to your ''MODULEPATH''. Should you want to append it instead, then add the option ''-a''. To remove <path> from ''MODULEPATH'' execute ''module unuse <path>''.|
  
Should you want to customise the building process of a given software please read how to implement [[https://easybuild.readthedocs.io/en/latest/Implementing-easyblocks.html|EasyBlocks]] and write [[https://easybuild.readthedocs.io/en/latest/Writing_easyconfig_files.html|EasyConfig]] files or
<code>
> ml load Miniconda3/4.7.10
> # note that if you specify --prefix, you cannot also specify --name
> conda create [--prefix <location where there is plenty of space and you can write to>] [--name TEST]
  
# the following fails
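> # (assumed -- the failing command was elided in this excerpt)
> conda activate TEST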
  
## do this instead
> source activate TEST
> # or, if you used the --prefix option to create the env
> # source activate <location where there is plenty of space and you can write to>
> ...
> conda deactivate
=== Install a Mathematica (Wolfram) kernel ===
  
You can set it up by following these notes https://github.com/WolframResearch/WolframLanguageForJupyter or follow the steps below for a preconfigured setup.

  * Open an SSH connection to xmaris
  * Run ''/marisdata/WOLFRAM/WolframLanguageForJupyter/configure-jupyter.wls add''
  * The ''wolframlanguage'' kernel is now available among your kernels

==== Debugging jupyter lab/notebook sessions ====
  
==== Run a multiprocessing application ====
  
Xmaris supports OpenMPI in combination with slurm's ''srun'' and InfiniBand on all nodes in the ''ibIntel'' partition.
First of all, make sure that the ''max locked memory'' is set to ''unlimited'' for your account by executing
<code bash>
ulimit -l
unlimited
</code>

If that is NOT the case, please contact the IT support.

To run an MPI application see the example session below

<code bash>
# log in to the headnode and request resources
$ salloc  -N6  -n6 -p ibIntel --mem 2000
salloc: Granted job allocation 564086
salloc: Waiting for resource configuration
salloc: Nodes maris[078-083] are ready for job
# load the modules needed for the app to run
$ ml load OpenMPI/4.1.1-GCC-10.3.0 OpenBLAS/0.3.17-GCC-10.3.0
# execute the app (note that the default MPI is set to pmi2)
$ srun  ./mpi_example
Hello world!  I am process number: 5 on host maris083.lorentz.leidenuniv.nl
11.000000 -9.000000 5.000000 -9.000000 21.000000 -1.000000 5.000000 -1.000000 3.000000
Hello world!  I am process number: 4 on host maris082.lorentz.leidenuniv.nl
11.000000 -9.000000 5.000000 -9.000000 21.000000 -1.000000 5.000000 -1.000000 3.000000
Hello world!  I am process number: 2 on host maris080.lorentz.leidenuniv.nl
11.000000 -9.000000 5.000000 -9.000000 21.000000 -1.000000 5.000000 -1.000000 3.000000
Hello world!  I am process number: 1 on host maris079.lorentz.leidenuniv.nl
11.000000 -9.000000 5.000000 -9.000000 21.000000 -1.000000 5.000000 -1.000000 3.000000
Hello world!  I am process number: 3 on host maris081.lorentz.leidenuniv.nl
11.000000 -9.000000 5.000000 -9.000000 21.000000 -1.000000 5.000000 -1.000000 3.000000
Hello world!  I am process number: 0 on host maris078.lorentz.leidenuniv.nl
11.000000 -9.000000 5.000000 -9.000000 21.000000 -1.000000 5.000000 -1.000000 3.000000

</code>
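The same run could also be submitted non-interactively. A minimal batch-script sketch, using the same resources as above (''mpi_example'' is a placeholder application)

<code bash>
#!/bin/env bash
#SBATCH -p ibIntel
#SBATCH -N 6
#SBATCH -n 6
#SBATCH --mem=2000

# load the MPI toolchain and run the application with srun
ml load OpenMPI/4.1.1-GCC-10.3.0 OpenBLAS/0.3.17-GCC-10.3.0
srun ./mpi_example
</code>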
 + 
===== Suggested readings =====
  
  * https://slurm.schedmd.com/archive/slurm-21.08.8-2/
  * https://osc.github.io/ood-documentation/master/
  * https://www.gnu.org/gnu/linux-and-gnu.en.html
===== Recent Scientific Publications from maris =====
  
==== Quantum physics and Quantum computing ====
  
  * [[https://journals.aps.org/pra/pdf/10.1103/PhysRevA.100.010302|Experimental error mitigation via symmetry verification in a variational quantum eigensolver]]
  * [[https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.123.120502|Fast, High-Fidelity Conditional-Phase Gate Exploiting Leakage Interference in Weakly Anharmonic Superconducting Qubits]]
  * [[https://arxiv.org/abs/2002.07119|Leakage detection for a transmon-based surface code]]
  * [[https://journals.aps.org/prb/abstract/10.1103/PhysRevB.103.094518|Voltage staircase in a current-biased quantum-dot Josephson junction]]
  
==== Statistical, nonlinear, biological, condensed matter and soft matter physics ====
  
  * [[https://journals.aps.org/pre/abstract/10.1103/PhysRevE.98.062101|Equivalent-neighbor percolation models in two dimensions: Crossover between mean-field and short-range behavior]]
  * [[https://journals.aps.org/pre/abstract/10.1103/PhysRevE.99.062133|Revisiting the field-driven edge transition of the tricritical two-dimensional Blume-Capel model]]
  * [[https://journals.aps.org/pre/abstract/10.1103/PhysRevE.101.012118|Three-state Potts model on the centered triangular lattice]]
  * [[https://www.pnas.org/content/118/4/e2020525118|Liquid-crystal-based topological photonics]]
  
==== Particles, fields, gravitation, and cosmology ====