Page ''institute_lorentz:xmaris'', current revision 2024/02/29 14:16 by jansen (previous revision 2022/06/10 08:22 by lenocil).
Xmaris is the successor of the maris cluster, renamed with the prefix ''x''.
[[https://
===== Xmaris features and expected cluster lifetime =====
^Mount Point^ Type ^Notes^
|/scratch | HD | **temporary**, |
|/marisdata |NetApp| 2TB/user quota, medium-term storage, remote|
|/home |NetApp| 10GB/user quota, medium-term storage, remote|
|/ | | |
Extra efficient scratch spaces are available to all nodes on the infiniband network (''
^Mount Point^ Type^ Notes^
|/IBSSD| SSD |**DISCONTINUED**, InfiniBand/ |
|/PIBSSD| SSD|**temporary**, |
Xmaris users are strongly advised to delete their data from the compute nodes' scratch disks (or at least move them to the shared data disk) upon completion of their calculations. All data on the scratch disks __might be deleted without prior notice__.
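A minimal sketch of such a clean-up step. It uses temporary directories as stand-ins for the scratch and shared data disks, so the real paths on xmaris (for example under ''/scratch'' and ''/marisdata'') will differ:

```shell
# stand-ins for a job directory on /scratch and its destination on the data disk
src=$(mktemp -d)
dst=$(mktemp -d)
echo "42" > "$src/result.dat"

# copy the results to the shared data disk, then free the scratch space
cp -a "$src/." "$dst/"
rm -rf "$src"
ls "$dst"
```

In a real job script you would set ''src'' and ''dst'' to your actual scratch and data-disk directories.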
Note that **disk policies might change at any time at the discretion of the cluster owners**.
Once you have been authorised to use Xmaris, there are two ways to access its services:
  - using a web browser (**Strongly recommended**)
  - using an SSH client (Expert users)

Both methods can provide terminal access, but connections via web browsers offer you extra services such as sftp (drag-and-drop file transfers), jupyter interactive notebooks, virtual desktops and more at the click of your mouse. **We advise** all users either
==== Access via an ssh client ====
The procedure differs depending on whether the client you connect from is inside the IL intranet or not.
  * When within the IL network, for instance if you are using a Lorentz Institute workstation,
<code bash>
# the hostname below is an example; use the actual xmaris login node address
ssh your_IL_username@xmaris.lorentz.leidenuniv.nl
</code>
|:!: If you were a maris user prior to the configuration switch to xmaris, you might find that many terminal functions and programs do not work as expected. This is caused by old shell initialisation scripts in your xmaris home directory that are still tied to the STRW sfinx environment. You can override them (preferably after making a backup copy) by replacing their contents with the default CentOS shell initialisation scripts; for bash these are located in ''/ |
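A possible way to do this for bash, assuming the CentOS defaults live in the standard ''/etc/skel'' directory (the backup location is only an example):

```shell
# keep a backup of the old sfinx-era initialisation scripts
mkdir -p "$HOME/sfinx-backup"
for f in .bashrc .bash_profile; do
    if [ -f "$HOME/$f" ]; then
        cp "$HOME/$f" "$HOME/sfinx-backup/"
    fi
done

# overwrite them with the distribution defaults
for f in /etc/skel/.bashrc /etc/skel/.bash_profile; do
    if [ -f "$f" ]; then
        cp "$f" "$HOME/"
    fi
done
```

Log out and back in afterwards so the new scripts take effect.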
==== Web access ====
Xmaris services, that is terminal, scheduler/
Similarly to traditional shell access, Xmaris OpenOnDemand is available only for connections within the __IL intranet__. IL users who wish to access OpenOnDemand from their remote home locations could for example instruct their browsers to SOCKS-proxy their connections via our SSH server.
Open a local terminal and type (substitute username with your IL username)
<code bash>
# the local port (1080) and the server name are examples; configure your
# browser to use localhost:1080 as a SOCKS proxy afterwards
ssh -N -D 1080 username@ssh.lorentz.leidenuniv.nl
</code>
  * Submit batch jobs to the slurm scheduler/
  * Open a terminal.
  * Launch interactive jupyter notebooks.
  * Monitor cluster usage.
  * Create and launch your very own OnDemand application (read [[https://
^Partition^ Number nodes ^ Timelimit ^ Notes^
|compAMD*| | | |
|compAMDlong| 3 | 60 days | |
|compIntel| 2 | 5 days and 12 hours| |
|gpuIntel| 1 | 3 days and 12 hours | GPU |
|ibIntel | 8 | 7 days | InfiniBand, Multiprocessing |
*: default partition
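For instance, a batch script can target one of these partitions via slurm directives; the job name, time, and task count below are only placeholders, and the requested time must stay within the partition's timelimit:

```shell
#!/usr/bin/env bash
#SBATCH --job-name=myjob
#SBATCH --partition=compIntel   # omit to use the default partition (compAMD)
#SBATCH --time=5-12:00:00       # must not exceed the partition timelimit
#SBATCH --ntasks=1

srun hostname
```

Submit it with ''sbatch myjob.sh''.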
|maris075 |gpuIntel|2 x Nvidia Tesla P100 16GB | 6.0|
Xmaris GPUs must be allocated using slurm's ''--gres'' option, for example
<code bash>
# request one GPU on the gpuIntel partition; the GPU count and the
# command to run (nvidia-smi) are examples
srun -p gpuIntel --gres=gpu:1 nvidia-smi
</code>
===== Xmaris scientific software =====
Any pre-installed software can be made available in your environment via the ''module'' command.

It is possible to save a list of modules you use often in a //module collection//, for example
<code bash>
module load mod1 mod2 mod3 mod4 mod5 mod6
module save mycollection  # Lmod: restore later with 'module restore mycollection'
</code>
|TensorFlow-1.15.0-Miniconda3/ |
The following example shows how you can create a tensorflow-aware jupyter kernel.
<code bash>
# We use maris075 (GPU node) and load the optimised tf module
ml load TensorFlow/

# We install ipykernel, because it is necessary to run py notebooks
python -m pip install --user ipykernel

# We create a kernel called TFQuantum based on python from the TensorFlow module
python -m ipykernel install --user --name tfquantum --display-name TFQuantum

# We edit the kernel such that it does not execute python directly
# but via a custom wrapper script (the wrapper path below is illustrative)
cat $HOME/.local/share/jupyter/kernels/tfquantum/kernel.json

{
 "argv": [
  "/path/to/tfquantum-wrapper.sh",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
 ],
 "display_name": "TFQuantum",
 "language": "python",
 "metadata": {
  "debugger": true
 }
}

# The wrapper script will call python but only after loading any
# appropriate module
cat /path/to/tfquantum-wrapper.sh

#!/usr/bin/env bash
ml load TensorFlow/

exec python "$@"

# DONE. tfquantum will appear in the dropdown list of kernels
# upon creating a new notebook
</code>
=== TensorFlow with Graphviz ===
  * via a traditional //
Whatever installation method you might choose, please note that you **do not have** administrative rights to the cluster.
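For example, Python packages can usually be installed into your home directory without any administrative rights via pip's ''--user'' flag (the package name below is only an illustration):

```shell
# --user installs under ~/.local instead of the system-wide site-packages,
# so no root privileges are needed
python3 -m pip install --user --quiet setuptools
```

Make sure ''~/.local/bin'' is on your ''PATH'' if the package installs command-line tools.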
</code>
|:!: The environment variable ''
ml|here]].|
|:!: When compiling OpenBLAS it is not sufficient to define '' |
Then execute
to make available to the ''
|:!: '' |
Should you want to customise the building process of a given software, please read how to implement [[https://
===== Suggested readings =====
  * https://
  * https://
  * https://