xmaris users are strongly advised to delete their data from the compute nodes' scratch disks (or at least move it to the shared data disk) upon completion of their calculations. All data on the scratch disks __might be deleted without prior notice__.
  
Note that **disk policies might change at any time at the discretion of the cluster owners**.
  
  
----
  
Please also note the following:

  * xmaris' home disk is different from your [[institute_lorentz:gnulinux_workstations|IL workstation]] or [[institute_lorentz:remote_workspace|remote workspace]] home disk.
  * The **OLD** (as in the old maris) ''/clusterdata'' is deliberately made unavailable on xmaris, because it is **no longer** maintained. If you have any data on it, **it is your responsibility** to create backups. All data on ''/clusterdata'' will be permanently lost in case of hardware failure.
    
==== Xmaris usage policies ====
  
Usage policies are updated regularly in accordance with the needs of the cluster owners and **may change at any time without notice**. At the moment there is an __enforced usage limit of 128 CPUs per user__ that does not apply to the owners. Job execution priorities are defined via a complex [[https://slurm.schedmd.com/archive/slurm-21.08.8-2/priority_multifactor.html|multi-factor algorithm]] whose parameters can be displayed on the command line via
<code bash>
scontrol show config | grep -i priority
</code>
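To see how these factors combine into the actual priority of your own queued jobs, slurm's ''sprio'' utility can be used, for instance

<code bash>
# long output format: one line per pending job,
# with each priority factor shown separately
sprio -l
</code>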
  
To check the status of the cluster you can either

  * execute slurm's ''sinfo'' (this requires shell access to the cluster, see below and the example after this list)
  * browse to https://xmaris.lorentz.leidenuniv.nl/ganglia from any IL workstation
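For instance, to list every node together with its partition and state:

<code bash>
# one line per node, long output format
sinfo -N -l
</code>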
===== How to access Xmaris =====
Access to Xmaris is not granted automatically to all Lorentz Institute members. Instead, a preliminary approval must be granted to you by the cluster owners (read [[|here]]).
Once you have been authorised to use Xmaris, there are two ways to access its services:
  
  - using a web browser (**strongly advised** for novice HPC users)
  - using an SSH client (expert users)
  
Both methods provide terminal access, but connections via web browsers offer you extra services such as sftp (drag-and-drop file transfers), jupyter interactive notebooks, virtual desktops and more at the click of your mouse. **We advise** all users who are either unfamiliar with the GNU/Linux terminal or new to HPC to use a web browser to interact with Xmaris.
  
==== Access via an ssh client ====
The procedure differs depending on whether the client you connect from is inside the IL intranet or not.
  
  * When within the IL network, for instance if you are using a Lorentz Institute workstation, you have direct access to Xmaris. Open a terminal and type the command below (substitute ''username'' with your own IL username)
  
<code bash>
ssh username@xmaris.lorentz.leidenuniv.nl
</code>
  
  
|:!: If you were a maris user prior to the configuration switch to xmaris, you might find that many terminal functions and programs do not work as expected. This is due to the presence in your xmaris home directory of old shell initialisation scripts still tied to the STRW sfinx environment. You can override them (preferably after making a backup copy) by replacing their contents with the default CentOS shell initialisation scripts; for bash these are located in ''/etc/skel/.bashrc'' and ''/etc/skel/.bash_profile''.|
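A minimal sketch of that reset for bash (make the backups first):

<code bash>
# back up the old sfinx-era initialisation scripts
cp ~/.bashrc ~/.bashrc.sfinx.bak
cp ~/.bash_profile ~/.bash_profile.sfinx.bak

# replace them with the CentOS defaults
cp /etc/skel/.bashrc ~/.bashrc
cp /etc/skel/.bash_profile ~/.bash_profile
</code>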
==== Web access ====
  
Xmaris services, that is the terminal, scheduler/resource manager, jupyter notebooks and monitoring facilities, can be accessed easily via a browser, without the need for additional plugins, by navigating to [[https://xmaris.lorentz.leidenuniv.nl:4433|xmaris OpenOnDemand]].
  
{{ :institute_lorentz:oodxmaris1.png?direct&1000 }}
  
Similarly to traditional shell access, Xmaris OpenOnDemand is available only for connections within the __IL intranet__. IL users who wish to access OpenOnDemand from their remote home locations can, for example, use the [[:vpn#lorentz_institute|IL VPN]] or instruct their browsers to SOCKS-proxy their connections via our SSH server.
Open a local terminal and type (substitute ''username'' with your IL username)
  
<code bash>
# start a SOCKS proxy on local port 1080 through the IL SSH server
# (the hostname placeholder below must be replaced with the IL SSH gateway)
ssh -N -D 1080 username@<IL-ssh-server>
</code>
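Then configure your browser to use a SOCKS proxy on ''localhost:1080'' (the port is illustrative and must match the ''-D'' option above) before visiting the OpenOnDemand address. Among other things, Xmaris OpenOnDemand lets you: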
  * Submit batch jobs to the slurm scheduler/resource manager.
  * Open a terminal.
  * Launch interactive applications such as jupyter notebooks, tensorboard, virtual desktops, etc.
  * Monitor cluster usage.
  * Create and launch your very own OnDemand application (read [[https://osc.github.io/ood-documentation/master/app-development/tutorials-passenger-apps.html|here]]).
|maris075 |gpuIntel|2 x Nvidia Tesla P100 16GB | 6.0|
  
Xmaris GPUs must be allocated using slurm's ''--gres'' option, for instance
<code bash>
srun -p gpuIntel --gres=gpu:1 --pty bash -i
</code>
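The equivalent request in a batch script might look as follows (the job name and time limit are illustrative):

<code bash>
#!/bin/env bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpuIntel
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00

# show the GPU(s) allocated to the job
nvidia-smi
</code>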
===== Xmaris scientific software =====

Any pre-installed software can be made available in your environment via the ''module load <module_name>'' command.
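For example (the module name is illustrative):

<code bash>
module avail          # list the software installed on the cluster
module load mod1      # make a module's programs and libraries available
module list           # show what is currently loaded
</code>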
  
It is possible to save a list of modules you use often in a //module collection// and load them all with one command
<code bash>
module load mod1 mod2 mod3 mod4 mod5 mod6
# save the currently loaded modules as a named collection
module save mycollection
# later, restore them all at once
module restore mycollection
</code>
|TensorFlow-1.15.0-Miniconda3/4.7.10| CPU| All| |
  
The following example shows how you can create a tensorflow-aware jupyter notebook kernel that you can use, for instance, via the OpenOnDemand interface.
  
<code bash>
# We use maris075 (GPU node) and load the optimised tf module
ml load TensorFlow/2.1.0-fosscuda-2019b-Python-3.7.4

# Install ipykernel, which is necessary to run python notebooks
python -m pip install ipykernel --user

# Create a kernel called TFQuantum based on the python from
# TensorFlow/2.1.0-fosscuda-2019b-Python-3.7.4
python -m ipykernel install --name TFQuantum --display-name "TFQuantum" --user

# Edit the kernel so that it does not execute python directly,
# but via a custom wrapper script; after editing it should look like this
cat $HOME/.local/share/jupyter/kernels/tfquantum/kernel.json

{
 "argv": [
  "/home/lenocil/.local/share/jupyter/kernels/tfquantum/wrapper.sh",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
 ],
 "display_name": "TFQuantum",
 "language": "python",
 "metadata": {
  "debugger": true
 }
}

# The wrapper script calls python, but only after loading the
# appropriate module. Make sure it is executable (chmod +x wrapper.sh).
cat /home/lenocil/.local/share/jupyter/kernels/tfquantum/wrapper.sh

#!/bin/env bash
ml load TensorFlow/2.1.0-fosscuda-2019b-Python-3.7.4

exec python "$@"

# DONE. TFQuantum will appear in the dropdown list of kernels
# upon creating a new notebook
</code>
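To verify that the kernel has been registered you can list the kernel specifications known to jupyter

<code bash>
# TFQuantum should appear in this list
jupyter kernelspec list
</code>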
  
  
=== TensorFlow with Graphviz ===
    * via a traditional //configure/make// procedure
  
Whichever installation method you choose, please note that you **do not have** administrative rights to the cluster.
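For the //configure/make// route, a typical non-root installation pattern is to install into your home directory (paths illustrative):

<code bash>
./configure --prefix=$HOME/.local
make -j 4
make install
# make sure the install location is in your PATH
export PATH=$HOME/.local/bin:$PATH
</code>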
  
  
</code>
  
|:!: The environment variable ''EASYBUILD_OPTARCH'' instructs EasyBuild to compile software in a generic way so that it can be used on different CPUs. This is rather convenient in heterogeneous clusters such as xmaris because it avoids recompiling the same software on different compute nodes. This convenience comes, of course, at a cost: the executables produced this way will not be as efficient as CPU-specific builds. For more info read [[https://easybuild.readthedocs.io/en/latest/Controlling_compiler_optimization_flags.html|here]].|
  
|:!: When compiling OpenBLAS it is not sufficient to set ''EASYBUILD_OPTARCH'' to ''GENERIC'' to achieve portability of the executables. Some extra steps must be taken, as described in https://github.com/easybuilders/easybuild/blob/master/docs/Controlling_compiler_optimization_flags.rst. A list of targets supported by OpenBLAS can be found [[https://github.com/xianyi/OpenBLAS/blob/develop/TargetList.txt|here]].|
  
Then execute