diff --git a/mkdocs/docs/HPC/FAQ.md b/mkdocs/docs/HPC/FAQ.md
index 4ac40f1f73c4..10ee104e5478 100644
--- a/mkdocs/docs/HPC/FAQ.md
+++ b/mkdocs/docs/HPC/FAQ.md
@@ -319,8 +319,8 @@ Please send an e-mail to {{hpcinfo}} that includes:
 {% endif %}
 
-If the software is a Python package, you can manually install it in a virtual environment.
-More information can be found [here](./setting_up_python_virtual_environments.md).
+If the software is a Python package, you can manually
+[install it in a virtual environment](./setting_up_python_virtual_environments.md).
 
 Note that it is still preferred to submit a software installation request, as the software installed by the HPC team will be optimized for the HPC environment.
 This can lead to dramatic performance improvements.
diff --git a/mkdocs/docs/HPC/alphafold.md b/mkdocs/docs/HPC/alphafold.md
index 0890136f8767..b653aefa23e3 100644
--- a/mkdocs/docs/HPC/alphafold.md
+++ b/mkdocs/docs/HPC/alphafold.md
@@ -20,8 +20,8 @@ It is therefore recommended to first familiarize yourself with AlphaFold. The fo
 - VSC webpage about AlphaFold:
 - Introductory course on AlphaFold by VIB:
 - "Getting Started with AlphaFold" presentation by Kenneth Hoste (HPC-UGent)
-    - recording available [on YouTube](https://www.youtube.com/watch?v=jP9Qg1yBGcs)
-    - slides available [here (PDF)](https://www.vscentrum.be/_files/ugd/5446c2_f19a8723f7f7460ebe990c28a53e56a2.pdf?index=true)
+    - [recording available](https://www.youtube.com/watch?v=jP9Qg1yBGcs) (on YouTube)
+    - [slides available](https://www.vscentrum.be/_files/ugd/5446c2_f19a8723f7f7460ebe990c28a53e56a2.pdf?index=true) (PDF)
 - see also
@@ -130,8 +130,8 @@ Likewise for `jackhmmer`, the core count can be controlled via `$ALPHAFOLD_JACKH
 ### CPU/GPU comparison
 
-The provided timings were obtained by executing the `T1050.fasta` example, as outlined in the Alphafold [README]({{readme}}).
-For the corresponding jobscripts, they are available [here](./example-jobscripts).
+The provided timings were obtained by executing the `T1050.fasta` example, as outlined in the [Alphafold README]({{readme}}).
+The [corresponding jobscripts](#example-jobscripts) are available.
 
 Using `--db_preset=full_dbs`, the following runtime data was collected:
diff --git a/mkdocs/docs/HPC/getting_started.md b/mkdocs/docs/HPC/getting_started.md
index d43087486087..91e15ceedd9d 100644
--- a/mkdocs/docs/HPC/getting_started.md
+++ b/mkdocs/docs/HPC/getting_started.md
@@ -79,8 +79,12 @@ Make sure you can get to a shell access to the {{hpcinfra}} before proceeding wi
 Now that you can login, it is time to transfer files from your local computer to your **home directory** on the {{hpcinfra}}.
 
-Download [tensorflow_mnist.py](https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/tensorflow_mnist.py)
-and [run.sh](https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/run.sh) example scripts to your computer (from [here](https://github.com/hpcugent/vsc_user_docs/tree/main/{{exampleloc}})).
+Download the following example scripts to your computer:
+
+- [tensorflow_mnist.py](https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/tensorflow_mnist.py)
+- [run.sh](https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/run.sh)
+
+You can also find the example scripts in our git repo: [https://github.com/hpcugent/vsc_user_docs/](https://github.com/hpcugent/vsc_user_docs/tree/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist).
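+
+If you prefer the command line, you could also fetch both scripts directly with a tool like `curl`
+(shown here only as a sketch; any way of downloading the two files is fine):
+
+```shell
+# download the two example scripts into the current directory
+curl -O https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/tensorflow_mnist.py
+curl -O https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/run.sh
+```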
 
 {%- if OS == windows %}
diff --git a/mkdocs/docs/HPC/infrastructure.md b/mkdocs/docs/HPC/infrastructure.md
index 331fedb2b3f0..42d7c9a74787 100644
--- a/mkdocs/docs/HPC/infrastructure.md
+++ b/mkdocs/docs/HPC/infrastructure.md
@@ -13,8 +13,8 @@ Science and Innovation (EWI).
 Log in to the HPC-UGent Tier-2 infrastructure via [https://login.hpc.ugent.be](https://login.hpc.ugent.be) or using SSH via `login.hpc.ugent.be`.
 
-More info on using the web portal you can find [here](web_portal.md),
-and about connection with SSH [here](connecting.md).
+Read more info on [using the web portal](web_portal.md),
+and [about making a connection with SSH](connecting.md).
 
 ## Tier-2 compute clusters
diff --git a/mkdocs/docs/HPC/jupyter.md b/mkdocs/docs/HPC/jupyter.md
index 3f6776b38e7f..d24598050461 100644
--- a/mkdocs/docs/HPC/jupyter.md
+++ b/mkdocs/docs/HPC/jupyter.md
@@ -89,7 +89,9 @@ $ module load SciPy-bundle/2023.11-gfbf-2023b
 ```
 This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook
-If we use a different SciPy module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (For more info on these errors, see [here](troubleshooting.md#module-conflicts)).
+If we use a different SciPy module that uses an incompatible toolchain,
+we will get a module load conflict when trying to load it
+(for more info on these errors, consult the [troubleshooting page](troubleshooting.md#module-conflicts)).
 
 ```shell
 $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0
 ```
diff --git a/mkdocs/docs/HPC/multi_core_jobs.md b/mkdocs/docs/HPC/multi_core_jobs.md
index 00834138cbd5..518a3f0b2a4f 100644
--- a/mkdocs/docs/HPC/multi_core_jobs.md
+++ b/mkdocs/docs/HPC/multi_core_jobs.md
@@ -47,7 +47,7 @@ MPI.
 !!! warning
     Just requesting more nodes and/or cores does not mean that your job will automatically run faster.
-    You can find more about this [here](troubleshooting.md#job_does_not_run_faster).
+    This is explained on the [troubleshooting page](troubleshooting.md#job_does_not_run_faster).
 
 ## Parallel Computing with threads
diff --git a/mkdocs/docs/HPC/multi_job_submission.md b/mkdocs/docs/HPC/multi_job_submission.md
index 70239ac4d433..ddebf7561cf8 100644
--- a/mkdocs/docs/HPC/multi_job_submission.md
+++ b/mkdocs/docs/HPC/multi_job_submission.md
@@ -159,7 +159,11 @@ a parameter instance is called a work item in Worker parlance.
    ```
    module swap cluster/donphan
    ```
-   We recommend using a `module swap cluster` command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed [here](troubleshooting.md#multi-job-submissions-on-a-non-default-cluster).
+
+   We recommend using a `module swap cluster` command after submitting the jobs.
+   Additional information about this as well as more comprehensive details
+   concerning the 'Illegal instruction' error can be found
+   on [the troubleshooting page](troubleshooting.md#multi-job-submissions-on-a-non-default-cluster).
 
 ## The Worker framework: Job arrays
 [//]: # (sec:worker-framework-job-arrays)
diff --git a/mkdocs/docs/HPC/only/gent/2023/donphan-gallade.md b/mkdocs/docs/HPC/only/gent/2023/donphan-gallade.md
index 7d3aa9cf0ac1..5bd5f5853367 100644
--- a/mkdocs/docs/HPC/only/gent/2023/donphan-gallade.md
+++ b/mkdocs/docs/HPC/only/gent/2023/donphan-gallade.md
@@ -15,7 +15,7 @@ For software installation requests, please use the [request form](https://www.ug
 `donphan` is the new debug/interactive cluster.
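+To give it a try, one option is to swap your session to this cluster before submitting a job, for example:
+
+```shell
+# switch the job submission environment to the donphan cluster
+module swap cluster/donphan
+```
+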
-It replaces `slaking`, which will be retired on **Monday 22 May 2023**.
+It replaces `slaking`, which was retired on **Monday 22 May 2023**.
 It is primarily intended for interactive use: interactive shell sessions, using GUI applications through the [HPC-UGent web portal](../../../web_portal.md), etc.
@@ -135,4 +135,6 @@ a `gallade` workernode has 128 cores (so ~7.3 GiB per core on average), while a
 (so ~20.5 GiB per core on average).
 It is important to take this aspect into account when submitting jobs to `gallade`, especially when requesting
-all cores via `ppn=all`. You may need to explictly request more memory (see also [here](../../../fine_tuning_job_specifications#pbs_mem)).
+all cores via `ppn=all`.
+You may need to explicitly request more memory by
+[setting the memory parameter](../../../fine_tuning_job_specifications#pbs_mem).
diff --git a/mkdocs/docs/HPC/running_batch_jobs.md b/mkdocs/docs/HPC/running_batch_jobs.md
index 054d268979fd..c9ae045da310 100644
--- a/mkdocs/docs/HPC/running_batch_jobs.md
+++ b/mkdocs/docs/HPC/running_batch_jobs.md
@@ -833,7 +833,9 @@ The output of the various commands interacting with jobs (`qsub`,
 It is possible to submit jobs from a job to a cluster different than the one your job is running on.
 This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster (or only on the login nodes), but the jobs can be run on several clusters.
-An example of this is the `wsub` command of `worker`, see also [here](troubleshooting.md#multi-job-submissions-on-a-non-default-cluster).
+An example of this is the `wsub` command of `worker`.
+More info on these commands is in the document on [multi job submission](multi_job_submission.md)
+or on the [troubleshooting page](troubleshooting.md#multi-job-submissions-on-a-non-default-cluster).
 
 To submit jobs to the `{{othercluster}}` cluster, you can change only what is needed in your session environment to submit jobs to that particular cluster by using `module swap env/slurm/{{othercluster}}` instead of using
diff --git a/mkdocs/docs/HPC/setting_up_python_virtual_environments.md b/mkdocs/docs/HPC/setting_up_python_virtual_environments.md
index 896d3d3f22a0..a9797c0cbe53 100644
--- a/mkdocs/docs/HPC/setting_up_python_virtual_environments.md
+++ b/mkdocs/docs/HPC/setting_up_python_virtual_environments.md
@@ -363,7 +363,8 @@ $ python
 Illegal instruction (core dumped)
 ```
-we are presented with the illegal instruction error. More info on this [here](troubleshooting.md#illegal-instruction-error)
+we are presented with the illegal instruction error.
+More info on this can be found on the [troubleshooting page](troubleshooting.md#illegal-instruction-error).
 
 ### Error: GLIBC not found
diff --git a/mkdocs/docs/HPC/troubleshooting.md b/mkdocs/docs/HPC/troubleshooting.md
index f3453b194e83..a463d5dd9f11 100644
--- a/mkdocs/docs/HPC/troubleshooting.md
+++ b/mkdocs/docs/HPC/troubleshooting.md
@@ -60,7 +60,7 @@ or because the pinning is done incorrectly and several threads/processes are bei
 - **Lack of sufficient memory**: When there is not enough memory available, or not enough memory bandwidth, it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).
 
-More info on running multi-core workloads on the {{ hpcinfra }} can be found [here](multi_core_jobs.md).
+There is more info on [running multi-core workloads](multi_core_jobs.md) on the {{ hpcinfra }}.
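+
+As a quick sanity check (a sketch only, the exact setup depends on your software), you can make the
+thread count of an OpenMP-based program follow the number of cores you requested in your jobscript:
+
+```shell
+#!/bin/bash
+#PBS -l nodes=1:ppn=8
+# assumes the Torque-style $PBS_NUM_PPN variable is available;
+# my_threaded_program is a placeholder for your own OpenMP program
+export OMP_NUM_THREADS=$PBS_NUM_PPN
+./my_threaded_program
+```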
 
 ### Using multiple nodes
 
 When trying to use multiple (worker)nodes to improve the performance of your workloads, you may not see (significant) speedup.
@@ -74,7 +74,7 @@
 Actually using additional nodes is not as straightforward as merely asking for more nodes.
 Using the resources of multiple nodes is often done using a [Message Passing Interface (MPI)](https://en.wikipedia.org/wiki/Message_Passing_Interface) library.
 MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.
-An example of how you can make beneficial use of multiple nodes can be found [here](multi_core_jobs.md#parallel-computing-with-mpi).
+We have an example of [how you can make beneficial use of multiple nodes](multi_core_jobs.md#parallel-computing-with-mpi).
 
 You can also use MPI in Python, some useful packages that are also available on the HPC are:
diff --git a/mkdocs/docs/HPC/web_portal.md b/mkdocs/docs/HPC/web_portal.md
index c34e211625e2..986f075ece1c 100644
--- a/mkdocs/docs/HPC/web_portal.md
+++ b/mkdocs/docs/HPC/web_portal.md
@@ -32,11 +32,9 @@ Through this web portal, you can:
 - open a terminal session directly in your web browser;
 
-More detailed information is available below, as well as in the [Open
-OnDemand
-documentation](https://osc.github.io/ood-documentation/latest/). A
-walkthrough video is available on YouTube
-[here](https://www.youtube.com/watch?v=4-w-4wjlnPk).
+More detailed information is available below, as well as in the
+[Open OnDemand documentation](https://osc.github.io/ood-documentation/latest/).
+A [walkthrough video](https://www.youtube.com/watch?v=4-w-4wjlnPk) is available on YouTube.
 
 ## Pilot access