diff --git a/.custom_wordlist.txt b/.custom_wordlist.txt
index 978dc23..d830bc2 100644
--- a/.custom_wordlist.txt
+++ b/.custom_wordlist.txt
@@ -1,6 +1,7 @@
 HPC
 hostname
 Slurm
+slurm
 sackd
 munge
 LXD
@@ -20,3 +21,8 @@ autonomizing
 Terraform
 terraform
 Traefik
+GRES
+gres
+PCIe
+RESource
+conf
diff --git a/explanation/gpus/driver.md b/explanation/gpus/driver.md
new file mode 100644
index 0000000..69a0dcf
--- /dev/null
+++ b/explanation/gpus/driver.md
@@ -0,0 +1,10 @@
+(driver)=
+# GPU driver installation and management
+
+## Auto-install
+
+Charmed HPC installs GPU drivers when the `slurmd` charm is deployed on a compute node equipped with a supported NVIDIA GPU. Driver detection is performed via the API of [`ubuntu-drivers-common`](https://documentation.ubuntu.com/server/how-to/graphics/install-nvidia-drivers/#the-recommended-way-ubuntu-drivers-tool), a package that examines the node's hardware, determines the appropriate third-party drivers, and recommends a set of driver packages, which are then installed from the Ubuntu repositories.
+
+## Libraries used
+
+- [`ubuntu-drivers-common`](https://github.com/canonical/ubuntu-drivers-common), from GitHub.
diff --git a/explanation/gpus/index.md b/explanation/gpus/index.md
new file mode 100644
index 0000000..cb2fe71
--- /dev/null
+++ b/explanation/gpus/index.md
@@ -0,0 +1,16 @@
+(gpus)=
+# GPUs
+
+A Graphics Processing Unit (GPU) is a specialized hardware resource that was originally designed to accelerate computer graphics calculations but is now widely used for general-purpose computing across many fields. GPU-enabled workloads are supported on a Charmed HPC cluster, with the necessary driver and workload manager configuration handled automatically by the charms.
+
+- {ref}`driver`
+- {ref}`slurmconf`
+
+```{toctree}
+:titlesonly:
+:maxdepth: 1
+:hidden:
+
+Drivers <driver>
+Slurm enlistment <slurmconf>
+```
diff --git a/explanation/gpus/slurmconf.md b/explanation/gpus/slurmconf.md
new file mode 100644
index 0000000..2b6751d
--- /dev/null
+++ b/explanation/gpus/slurmconf.md
@@ -0,0 +1,42 @@
+---
+relatedlinks: "[Slurm Workload Manager - gres.conf](https://slurm.schedmd.com/gres.conf.html)"
+---
+
+(slurmconf)=
+# Slurm enlistment
+
+To allow cluster users to submit jobs requesting GPUs, detected GPUs are automatically added to the [Generic RESource (GRES) Slurm configuration](https://slurm.schedmd.com/gres.html). GRES is a Slurm feature that enables scheduling of arbitrary generic resources, including GPUs.
+
+## Device details
+
+GPU details are gathered by [`pynvml`](https://pypi.org/project/nvidia-ml-py/), the official Python bindings for the NVIDIA Management Library, which enables GPU counts, associated device files, and model names to be queried from the drivers. For compatibility with Slurm configuration files, retrieved model names are converted to lowercase and white space is replaced with underscores. "Tesla T4" becomes `tesla_t4`, for example.
+
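+The following is a minimal sketch of how such a query could be expressed with `pynvml`; it is illustrative only, and the helper name `enumerate_gpus` is not part of the charm code:
+
+```python
+import pynvml
+
+
+def enumerate_gpus() -> dict[str, list[str]]:
+    """Map Slurm-friendly GPU model names to their device files (illustrative sketch)."""
+    pynvml.nvmlInit()
+    try:
+        gpus: dict[str, list[str]] = {}
+        for index in range(pynvml.nvmlDeviceGetCount()):
+            handle = pynvml.nvmlDeviceGetHandleByIndex(index)
+            name = pynvml.nvmlDeviceGetName(handle)
+            if isinstance(name, bytes):  # older bindings return bytes
+                name = name.decode()
+            # Lowercase and replace white space, e.g. "Tesla T4" -> "tesla_t4".
+            model = "_".join(name.split()).lower()
+            # Minor number N corresponds to the /dev/nvidiaN device file.
+            minor = pynvml.nvmlDeviceGetMinorNumber(handle)
+            gpus.setdefault(model, []).append(f"/dev/nvidia{minor}")
+        return gpus
+    finally:
+        pynvml.nvmlShutdown()
+```
+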
+## Slurm configuration
+
+Each GPU-equipped node is added to the _gres.conf_ configuration file following the format defined in the [Slurm _gres.conf_ documentation](https://slurm.schedmd.com/gres.conf.html). A single _gres.conf_ is shared by all compute nodes in the cluster, using the optional `NodeName` specification to define GPU resources per node. Each line in _gres.conf_ uses the following parameters to define a GPU resource:
+
+| Parameter  | Value                                                       |
+| ---------- | ----------------------------------------------------------- |
+| `NodeName` | Node the _gres.conf_ line applies to.                       |
+| `Name`     | Name of the generic resource. Always `gpu` here.            |
+| `Type`     | GPU model name.                                             |
+| `File`     | Path of the device file(s) associated with this GPU model.  |
+
+In _slurm.conf_, if a node is GPU-equipped, its configuration line includes an additional `Gres=` element containing a comma-separated list of GPU configurations. If a node is not GPU-equipped, its configuration line does not contain `Gres=`. The format for each configuration is `<name>:<type>:<count>`, as seen in the example below.
+
+For example, a Microsoft Azure `Standard_NC24ads_A100_v4` node, equipped with an NVIDIA A100 PCIe GPU, is given a node configuration in _slurm.conf_ of:
+
+```
+NodeName=juju-e33208-1 CPUs=24 Boards=1 SocketsPerBoard=1 CoresPerSocket=24 ThreadsPerCore=1 RealMemory=221446 Gres=gpu:nvidia_a100_80gb_pcie:1 MemSpecLimit=1024
+```
+
+and the corresponding _gres.conf_ line:
+
+```
+NodeName=juju-e33208-1 Name=gpu Type=nvidia_a100_80gb_pcie File=/dev/nvidia0
+```
+
+## Libraries used
+
+- [`pynvml / nvidia-ml-py`](https://pypi.org/project/nvidia-ml-py/), from PyPI.
+
diff --git a/explanation/index.md b/explanation/index.md
index f8c3910..4f618c2 100644
--- a/explanation/index.md
+++ b/explanation/index.md
@@ -2,8 +2,7 @@
 # Explanation
 
 - {ref}`cryptography`
-
-🚧 Under construction 🚧
+- {ref}`GPUs`
 
 ```{toctree}
 :titlesonly:
@@ -11,4 +10,5 @@
 :hidden:
 
 cryptography/index
-```
\ No newline at end of file
+gpus/index
+```
diff --git a/reference/gres/index.md b/reference/gres/index.md
new file mode 100644
index 0000000..38bf573
--- /dev/null
+++ b/reference/gres/index.md
@@ -0,0 +1,11 @@
+(gres)=
+# Generic Resource (GRES) Scheduling
+
+Each line in _gres.conf_ uses the following parameters to define a GPU resource:
+
+| Parameter  | Value                                                       |
+| ---------- | ----------------------------------------------------------- |
+| `NodeName` | Node the _gres.conf_ line applies to.                       |
+| `Name`     | Name of the generic resource. Always `gpu`.                 |
+| `Type`     | GPU model name.                                             |
+| `File`     | Path of the device file(s) associated with this GPU model.  |
diff --git a/reference/index.md b/reference/index.md
index 763e92b..c3aec36 100644
--- a/reference/index.md
+++ b/reference/index.md
@@ -1,4 +1,12 @@
 (reference)=
 # Reference
 
-🚧 Under construction 🚧
+- {ref}`gres`
+
+```{toctree}
+:titlesonly:
+:maxdepth: 1
+:hidden:
+
+gres/index
+```