From e53056cbb35ab11d3f47645e6a143408c40ebf74 Mon Sep 17 00:00:00 2001 From: Carlos Eduardo Arango Gutierrez Date: Thu, 6 Jul 2023 16:42:37 +0200 Subject: [PATCH 1/2] Enhance documentation for the repo, now with GFD and nfd as sub chart Signed-off-by: Carlos Eduardo Arango Gutierrez --- README.md | 62 +++--- docs/building_and_running.md | 79 +++++++ docs/customizing.md | 284 ++++++++++++++++++++++++++ docs/deployment_via_helm.md | 384 +++++++++++++++++++++++++++++++++++ docs/gfd_cmd.md | 38 ++++ docs/gfd_labels.md | 69 +++++++ docs/quick_start.md | 191 +++++++++++++++++ 7 files changed, 1071 insertions(+), 36 deletions(-) create mode 100644 docs/building_and_running.md create mode 100644 docs/customizing.md create mode 100644 docs/deployment_via_helm.md create mode 100644 docs/gfd_cmd.md create mode 100644 docs/gfd_labels.md create mode 100644 docs/quick_start.md diff --git a/README.md b/README.md index 09aaf2355..3d33e2a3d 100644 --- a/README.md +++ b/README.md @@ -48,7 +48,6 @@ Please note that: - The NVIDIA device plugin is currently lacking - Comprehensive GPU health checking features - GPU cleanup features - - ... - Support will only be provided for the official NVIDIA device plugin (and not for forks or other variants of this plugin). @@ -1016,38 +1015,29 @@ See the [changelog](CHANGELOG.md) * You can report a bug by [filing a new issue](https://github.com/NVIDIA/k8s-device-plugin/issues/new) * You can contribute by opening a [pull request](https://help.github.com/articles/using-pull-requests/) -### Versioning - -Before v1.10 the versioning scheme of the device plugin had to match exactly the version of Kubernetes. -After the promotion of device plugins to beta this condition was was no longer required. -We quickly noticed that this versioning scheme was very confusing for users as they still expected to see -a version of the device plugin for each version of Kubernetes. - -This versioning scheme applies to the tags `v1.8`, `v1.9`, `v1.10`, `v1.11`, `v1.12`. - -We have now changed the versioning to follow [SEMVER](https://semver.org/). The -first version following this scheme has been tagged `v0.0.0`. - -Going forward, the major version of the device plugin will only change -following a change in the device plugin API itself. For example, version -`v1beta1` of the device plugin API corresponds to version `v0.x.x` of the -device plugin. If a new `v2beta2` version of the device plugin API comes out, -then the device plugin will increase its major version to `1.x.x`. - -As of now, the device plugin API for Kubernetes >= v1.10 is `v1beta1`. If you -have a version of Kubernetes >= 1.10 you can deploy any device plugin version > -`v0.0.0`. - -### Upgrading Kubernetes with the Device Plugin - -Upgrading Kubernetes when you have a device plugin deployed doesn't require you -to do any, particular changes to your workflow. The API is versioned and is -pretty stable (though it is not guaranteed to be non breaking). Starting with -Kubernetes version 1.10, you can use `v0.3.0` of the device plugin to perform -upgrades, and Kubernetes won't require you to deploy a different version of the -device plugin. Once a node comes back online after the upgrade, you will see -GPUs re-registering themselves automatically. - -Upgrading the device plugin itself is a more complex task. It is recommended to -drain GPU tasks as we cannot guarantee that GPU tasks will survive a rolling -upgrade. However we make best efforts to preserve GPU tasks during an upgrade. 
+## Documentation + +- [Quick Start](docs/quick_start.md) + * [Prerequisites](docs/quick_start.md#prerequisites) + * [Preparing your GPU Nodes](docs/quick_start.md#preparing-your-gpu-nodes) + * [Node Feature Discovery (NFD)](docs/quick_start.md#node-feature-discovery-nfd) + * [Enabling GPU Support in Kubernetes](docs/quick_start.md#enabling-gpu-support-in-kubernetes) + * [Running GPU Jobs](docs/quick_start.md#running-gpu-jobs) +- [Configuring the NVIDIA device plugin binary](docs/customizing.md) + * [As command line flags or envvars](docs/customizing.md#as-command-line-flags-or-envvars) + * [As a configuration file](docs/customizing.md#as-a-configuration-file) + * [Configuration Option Details](docs/customizing.md#configuration-option-details) + * [Shared Access to GPUs with CUDA Time-Slicing](docs/customizing.md#shared-access-to-gpus-with-cuda-time-slicing) +- [Deployment via `helm`](docs/deployment_via_helm.md) + * [Configuring the device plugin's `helm` chart](docs/deployment_via_helm.md#configuring-the-device-plugins-helm-chart) + + [Passing configuration to the plugin via a `ConfigMap`.](docs/deployment_via_helm.md#passing-configuration-to-the-plugin-via-a-configmap) + - [Single Config File Example](docs/deployment_via_helm.md#single-config-file-example) + - [Multiple Config File Example](docs/deployment_via_helm.md#multiple-config-file-example) + - [Updating Per-Node Configuration With a Node Label](docs/deployment_via_helm.md#updating-per-node-configuration-with-a-node-label) + + [Setting other helm chart values](docs/deployment_via_helm.md#setting-other-helm-chart-values) + + [Deploying with gpu-feature-discovery for automatic node labels](docs/deployment_via_helm.md#deploying-with-gpu-feature-discovery-for-automatic-node-labels) + * [Deploying via `helm install` with a direct URL to the `helm` package](docs/deployment_via_helm.md#deploying-via-helm-install-with-a-direct-url-to-the-helm-package) +- [Building and Running Locally](docs/building_and_running.md) +- [GPU Feature Discovery CMD](docs/gfd_cmd.md) +- [GPU Feature Discovery Labels](docs/gfd_labels.md) +- [Changelog](CHANGELOG.md) diff --git a/docs/building_and_running.md b/docs/building_and_running.md new file mode 100644 index 000000000..da207128d --- /dev/null +++ b/docs/building_and_running.md @@ -0,0 +1,79 @@ +## Building and Running Locally + +The next sections are focused on building the device plugin locally and running it. +It is intended purely for development and testing, and not required by most users. +It assumes you are pinning to the latest release tag (i.e. `v0.14.0`), but can +easily be modified to work with any available tag or branch. 
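+
+To pin to a different tag or branch, it is usually enough to substitute that
+reference wherever `v0.14.0` appears in the commands below. As a minimal
+sketch (the tag shown is only an example), the sources can also be checked out
+directly at a given tag before building:
+
+```shell
+# Clone the repository at a specific release tag (substitute any tag or branch)
+$ git clone --branch v0.14.0 --depth 1 https://github.com/NVIDIA/k8s-device-plugin.git
+$ cd k8s-device-plugin
+```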
+ +### With Docker + +#### Build +Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin): + +```shell +$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.14.0 +$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.14.0 nvcr.io/nvidia/k8s-device-plugin:devel +``` + +Option 2, build without cloning the repository: + +```shell +$ docker build \ + -t nvcr.io/nvidia/k8s-device-plugin:devel \ + -f deployments/container/Dockerfile.ubuntu \ + https://github.com/NVIDIA/k8s-device-plugin.git#v0.14.0 +``` + +Option 3, if you want to modify the code: + +```shell +$ git clone https://github.com/NVIDIA/k8s-device-plugin.git && cd k8s-device-plugin +$ make -f deployments/container/Makefile build-ubuntu20.04 +``` + +#### Run +Without compatibility for the `CPUManager` static policy: + +```shell +$ docker run \ + -it \ + --security-opt=no-new-privileges \ + --cap-drop=ALL \ + --network=none \ + -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins \ + nvcr.io/nvidia/k8s-device-plugin:devel +``` + +With compatibility for the `CPUManager` static policy: + +```shell +$ docker run \ + -it \ + --privileged \ + --network=none \ + -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins \ + nvcr.io/nvidia/k8s-device-plugin:devel --pass-device-specs +``` + +### Without Docker + +#### Build + + +```shell +$ make cmds +``` + +#### Run +Without compatibility for the `CPUManager` static policy: + +```shell +$ ./gpu-feature-discovery --output=$(pwd)/gfd +$ ./k8s-device-plugin +``` + +With compatibility for the `CPUManager` static policy: + +```shell +$ ./k8s-device-plugin --pass-device-specs +``` diff --git a/docs/customizing.md b/docs/customizing.md new file mode 100644 index 000000000..8e609a155 --- /dev/null +++ b/docs/customizing.md @@ -0,0 +1,284 @@ +## Configuring the NVIDIA device plugin binary + +The NVIDIA device plugin has a number of options that can be configured for it. +These options can be configured as command line flags, environment variables, +or via a config file when launching the device plugin. Here we explain what +each of these options are and how to configure them directly against the plugin +binary. The following section explains how to set these configurations when +deploying the plugin via `helm`. + +### As command line flags or envvars + +| Flag | Envvar | Default Value | +|--------------------------|-------------------------|-----------------| +| `--mig-strategy` | `$MIG_STRATEGY` | `"none"` | +| `--fail-on-init-error` | `$FAIL_ON_INIT_ERROR` | `true` | +| `--nvidia-driver-root` | `$NVIDIA_DRIVER_ROOT` | `"/"` | +| `--pass-device-specs` | `$PASS_DEVICE_SPECS` | `false` | +| `--device-list-strategy` | `$DEVICE_LIST_STRATEGY` | `"envvar"` | +| `--device-id-strategy` | `$DEVICE_ID_STRATEGY` | `"uuid"` | +| `--config-file` | `$CONFIG_FILE` | `""` | + +### As a configuration file + +```yaml +version: v1 +flags: + migStrategy: "none" + failOnInitError: true + nvidiaDriverRoot: "/" + plugin: + passDeviceSpecs: false + deviceListStrategy: "envvar" + deviceIDStrategy: "uuid" +``` + +**Note:** The configuration file has an explicit `plugin` section because it +is a shared configuration between the plugin and +[`gpu-feature-discovery`](https://github.com/NVIDIA/gpu-feature-discovery). +All options inside the `plugin` section are specific to the plugin. All +options outside of this section are shared. 
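+
+As an illustration, the three mechanisms can be exercised directly against the
+plugin binary as sketched below. The binary name matches the local build
+described in [Building and Running Locally](building_and_running.md); the
+configuration file path is an assumption chosen for the example:
+
+```shell
+# 1) As command line flags
+$ ./k8s-device-plugin --mig-strategy=single --pass-device-specs
+
+# 2) As environment variables (equivalent to the flags above)
+$ MIG_STRATEGY=single PASS_DEVICE_SPECS=true ./k8s-device-plugin
+
+# 3) As a configuration file (path is an assumption for illustration)
+$ ./k8s-device-plugin --config-file=/etc/nvidia-device-plugin/config.yaml
+```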
+ +### Configuration Option Details +**`MIG_STRATEGY`**: + the desired strategy for exposing MIG devices on GPUs that support it + + `[none | single | mixed] (default 'none')` + + The `MIG_STRATEGY` option configures the daemonset to be able to expose + Multi-Instance GPUs (MIG) on GPUs that support them. More information on what + these strategies are and how they should be used can be found in [Supporting + Multi-Instance GPUs (MIG) in + Kubernetes](https://docs.google.com/document/d/1mdgMQ8g7WmaI_XVVRrCvHPFPOMCm5LQD5JefgAh6N8g). + + **Note:** With a `MIG_STRATEGY` of mixed, you will have additional resources + available to you of the form `nvidia.com/mig-g.gb` + that you can set in your pod spec to get access to a specific MIG device. + +**`FAIL_ON_INIT_ERROR`**: + fail the plugin if an error is encountered during initialization, otherwise block indefinitely + + `(default 'true')` + + When set to true, the `FAIL_ON_INIT_ERROR` option fails the plugin if an error is + encountered during initialization. When set to false, it prints an error + message and blocks the plugin indefinitely instead of failing. Blocking + indefinitely follows legacy semantics that allow the plugin to deploy + successfully on nodes that don't have GPUs on them (and aren't supposed to have + GPUs on them) without throwing an error. In this way, you can blindly deploy a + daemonset with the plugin on all nodes in your cluster, whether they have GPUs + on them or not, without encountering an error. However, doing so means that + there is no way to detect an actual error on nodes that are supposed to have + GPUs on them. Failing if an initialization error is encountered is now the + default and should be adopted by all new deployments. + +**`NVIDIA_DRIVER_ROOT`**: + the root path for the NVIDIA driver installation + + `(default '/')` + + When the NVIDIA drivers are installed directly on the host, this should be + set to `'/'`. When installed elsewhere (e.g. via a driver container), this + should be set to the root filesystem where the drivers are installed (e.g. + `'/run/nvidia/driver'`). + + **Note:** This option is only necessary when used in conjunction with the + `$PASS_DEVICE_SPECS` option described below. It tells the plugin what prefix + to add to any device file paths passed back as part of the device specs. + +**`PASS_DEVICE_SPECS`**: + pass the paths and desired device node permissions for any NVIDIA devices + being allocated to the container + + `(default 'false')` + + This option exists for the sole purpose of allowing the device plugin to + interoperate with the `CPUManager` in Kubernetes. Setting this flag also + requires one to deploy the daemonset with elevated privileges, so only do so if + you know you need to interoperate with the `CPUManager`. + +**`DEVICE_LIST_STRATEGY`**: + the desired strategy for passing the device list to the underlying runtime + + `[envvar | volume-mounts] (default 'envvar')` + + The `DEVICE_LIST_STRATEGY` flag allows one to choose which strategy the plugin + will use to advertise the list of GPUs allocated to a container. This is + traditionally done by setting the `NVIDIA_VISIBLE_DEVICES` environment variable + as described + [here](https://github.com/NVIDIA/nvidia-container-runtime#nvidia_visible_devices). + This strategy can be selected via the (default) `envvar` option. Support has + been added to the `nvidia-container-toolkit` to also allow passing the list + of devices as a set of volume mounts instead of as an environment variable. 
+ This strategy can be selected via the `volume-mounts` option. Details for the + rationale behind this strategy can be found + [here](https://docs.google.com/document/d/1uXVF-NWZQXgP1MLb87_kMkQvidpnkNWicdpO2l9g-fw/edit#heading=h.b3ti65rojfy5). + +**`DEVICE_ID_STRATEGY`**: + the desired strategy for passing device IDs to the underlying runtime + + `[uuid | index] (default 'uuid')` + + The `DEVICE_ID_STRATEGY` flag allows one to choose which strategy the plugin will + use to pass the device ID of the GPUs allocated to a container. The device ID + has traditionally been passed as the UUID of the GPU. This flag lets a user + decide if they would like to use the UUID or the index of the GPU (as seen in + the output of `nvidia-smi`) as the identifier passed to the underlying runtime. + Passing the index may be desirable in situations where pods that have been + allocated GPUs by the plugin get restarted with different physical GPUs + attached to them. + +**`CONFIG_FILE`**: + point the plugin at a configuration file instead of relying on command line + flags or environment variables + + `(default '')` + + The order of precedence for setting each option is (1) command line flag, (2) + environment variable, (3) configuration file. In this way, one could use a + pre-defined configuration file, but then override the values set in it at + launch time. As described below, a `ConfigMap` can be used to point the + plugin at a desired configuration file when deploying via `helm`. + +### Shared Access to GPUs with CUDA Time-Slicing + +The NVIDIA device plugin allows oversubscription of GPUs through a set of +extended options in its configuration file. Under the hood, CUDA time-slicing +is used to allow workloads that land on oversubscribed GPUs to interleave with +one another. However, nothing special is done to isolate workloads that are +granted replicas from the same underlying GPU, and each workload has access to +the GPU memory and runs in the same fault-domain as of all the others (meaning +if one workload crashes, they all do). + + +These extended options can be seen below: + +```yaml +version: v1 +sharing: + timeSlicing: + renameByDefault: + failRequestsGreaterThanOne: + resources: + - name: + replicas: + ... +``` + +That is, for each named resource under `sharing.timeSlicing.resources`, a number +of replicas can now be specified for that resource type. These replicas +represent the number of shared accesses that will be granted for a GPU +represented by that resource type. + +If `renameByDefault=true`, then each resource will be advertised under the name +`.shared` instead of simply ``. + +If `failRequestsGreaterThanOne=true`, then the plugin will fail to allocate any +shared resources to a container if they request more than one. The container’s +pod will fail with an `UnexpectedAdmissionError` and need to be manually deleted, +updated, and redeployed. + +For example: + +```yaml +version: v1 +sharing: + timeSlicing: + resources: + - name: nvidia.com/gpu + replicas: 10 +``` + +If this configuration were applied to a node with 8 GPUs on it, the plugin +would now advertise 80 `nvidia.com/gpu` resources to Kubernetes instead of 8. + +```shell +$ kubectl describe node +... +Capacity: + nvidia.com/gpu: 80 +... +``` + +Likewise, if the following configuration were applied to a node, then 80 +`nvidia.com/gpu.shared` resources would be advertised to Kubernetes instead of 8 +`nvidia.com/gpu` resources. 
+ +```yaml +version: v1 +sharing: + timeSlicing: + renameByDefault: true + resources: + - name: nvidia.com/gpu + replicas: 10 + ... +``` + +```shell +$ kubectl describe node +... +Capacity: + nvidia.com/gpu.shared: 80 +... +``` + +In both cases, the plugin simply creates 10 references to each GPU and +indiscriminately hands them out to anyone that asks for them. + +If `failRequestsGreaterThanOne=true` were set in either of these +configurations and a user requested more than one `nvidia.com/gpu` or +`nvidia.com/gpu.shared` resource in their pod spec, then the container would +fail with the resulting error: + +```shell +$ kubectl describe pod gpu-pod +... +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Warning UnexpectedAdmissionError 13s kubelet Allocate failed due to rpc error: code = Unknown desc = request for 'nvidia.com/gpu: 2' too large: maximum request size for shared resources is 1, which is unexpected +... +``` + +**Note:** Unlike with "normal" GPU requests, requesting more than one shared +GPU does not imply that you will get guaranteed access to a proportional amount +of compute power. It only implies that you will get access to a GPU that is +shared by other clients (each of which has the freedom to run as many processes +on the underlying GPU as they want). Under the hood CUDA will simply give an +equal share of time to all of the GPU processes across all of the clients. The +`failRequestsGreaterThanOne` flag is meant to help users understand this +subtlety, by treating a request of `1` as an access request rather than an +exclusive resource request. Setting `failRequestsGreaterThanOne=true` is +recommended, but it is set to `false` by default to retain backwards +compatibility. + +As of now, the only supported resource available for time-slicing are +`nvidia.com/gpu` as well as any of the resource types that emerge from +configuring a node with the mixed MIG strategy. + +For example, the full set of time-sliceable resources on a T4 card would be: + +```shell +nvidia.com/gpu +``` + +And the full set of time-sliceable resources on an A100 40GB card would be: + +```shell +nvidia.com/gpu +nvidia.com/mig-1g.5gb +nvidia.com/mig-2g.10gb +nvidia.com/mig-3g.20gb +nvidia.com/mig-7g.40gb +``` + +Likewise, on an A100 80GB card, they would be: + +```shell +nvidia.com/gpu +nvidia.com/mig-1g.10gb +nvidia.com/mig-2g.20gb +nvidia.com/mig-3g.40gb +nvidia.com/mig-7g.80gb +``` \ No newline at end of file diff --git a/docs/deployment_via_helm.md b/docs/deployment_via_helm.md new file mode 100644 index 000000000..79f5e0520 --- /dev/null +++ b/docs/deployment_via_helm.md @@ -0,0 +1,384 @@ +## Deployment via `helm` + +The preferred method to deploy the `GPU Feature Discovery` and `Device Plugin` +is as a daemonset using `helm`. Instructions for installing `helm` can be +found [here](https://helm.sh/docs/intro/install/). + +Begin by setting up the plugin's `helm` repository and updating it at follows: + +```shell +$ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin +$ helm repo update +``` + +Then verify that the latest release (`v0.14.1`) of the plugin is available: +``` +$ helm search repo nvdp --devel +NAME CHART VERSION APP VERSION DESCRIPTION +nvdp/nvidia-device-plugin 0.14.0 0.14.0 A Helm chart for ... +``` + +Once this repo is updated, you can begin installing packages from it to deploy +the `gpu-feature-discovery` and `nvidia-device-plugin` helm chart. 
+
+The most basic installation command without any options is then:
+
+```shell
+helm upgrade -i nvdp nvdp/nvidia-device-plugin \
+  --namespace nvidia-device-plugin \
+  --create-namespace \
+  --version 0.14.1
+```
+
+**Note:** As of `v0.14.1`, by default helm will install `NFD`,
+`gpu-feature-discovery` and `nvidia-device-plugin` in the
+`nvidia-device-plugin` namespace. If you want to install them in a different
+namespace, you can use the `--namespace` flag. You can turn off the
+installation of `NFD`, `gpu-feature-discovery`, or the device plugin by setting
+`nfd.enabled=false`, `gpuFeatureDiscovery.enabled=false`, or
+`devicePlugin.enabled=false` respectively.
+
+**Note:** You only need to pass the `--devel` flag to `helm search repo`
+and the `--version` flag to `helm upgrade -i` if this is a pre-release
+version (e.g. `-rc.1`). Full releases will be listed without this.
+
+### Configuring the device plugin's `helm` chart
+
+The `helm` chart for the latest release of the plugin (`v0.14.0`) includes
+a number of customizable values.
+
+Prior to `v0.12.0` the most commonly used values were those that had direct
+mappings to the command line options of the plugin binary. As of `v0.12.0`, the
+preferred method to set these options is via a `ConfigMap`. The primary use
+case of the original values is then to override an option from the `ConfigMap`
+if desired. Both methods are discussed in more detail below.
+
+**Note:** The following document provides more information on the available MIG
+strategies and how they should be used: [Supporting Multi-Instance GPUs (MIG) in
+Kubernetes](https://docs.google.com/document/d/1mdgMQ8g7WmaI_XVVRrCvHPFPOMCm5LQD5JefgAh6N8g).
+
+Please take a look at the `values.yaml` files to see the full set of
+overridable parameters for the top-level `nvidia-device-plugin` chart as well
+as the `gpu-feature-discovery` and `node-feature-discovery` subcharts.
+
+The full set of values that can be set can be found
+[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.1/deployments/helm/nvidia-device-plugin/values.yaml).
+
+#### Passing configuration to the plugin via a `ConfigMap`.
+
+In general, we provide a mechanism to pass _multiple_ configuration files to
+the plugin's `helm` chart, with the ability to choose which configuration
+file should be applied to a node via a node label.
+
+In this way, a single chart can be used to deploy each component, but custom
+configurations can be applied to different nodes throughout the cluster.
+
+There are two ways to provide a `ConfigMap` for use by the plugin:
+
+  1. Via an external reference to a pre-defined `ConfigMap`
+  1. As a set of named config files to build an integrated `ConfigMap` associated with the chart
+
+These can be set via the chart values `config.name` and `config.map` respectively.
+In both cases, the value `config.default` can be set to point to one of the
+named configs in the `ConfigMap` and provide a default configuration for nodes
+that have not been customized via a node label (more on this later).
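+
+For orientation, the sketch below shows how these values might look when set
+through a values file instead of `--set`/`--set-file` flags. This is a
+non-authoritative example; the file name is hypothetical and the embedded
+config content is simply the sample used later in this document:
+
+```yaml
+# my-values.yaml (hypothetical), passed via `helm upgrade -i ... -f my-values.yaml`
+config:
+  # Option 1: reference a pre-created ConfigMap by name
+  # name: nvidia-plugin-configs
+  # Option 2: let the chart build the ConfigMap from named config files
+  default: config0
+  map:
+    config0: |-
+      version: v1
+      flags:
+        migStrategy: "none"
+```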
+ +##### Single Config File Example +As an example, create a valid config file on your local filesystem, such as the following: + +```shell +cat << EOF > /tmp/dp-example-config0.yaml +version: v1 +flags: + migStrategy: "none" + failOnInitError: true + nvidiaDriverRoot: "/" + plugin: + passDeviceSpecs: false + deviceListStrategy: envvar + deviceIDStrategy: uuid +EOF +``` + +And deploy the device plugin via helm (pointing it at this config file and giving it a name): + +```shell +$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ + --version=0.14.0 \ + --namespace nvidia-device-plugin \ + --create-namespace \ + --set-file config.map.config=/tmp/dp-example-config0.yaml +``` + +Under the hood this will deploy a `ConfigMap` associated with the plugin and put +the contents of the `dp-example-config0.yaml` file into it, using the name +`config` as its key. It will then start the plugin such that this config gets +applied when the plugin comes online. + +If you don’t want the plugin’s helm chart to create the `ConfigMap` for you, you +can also point it at a pre-created `ConfigMap` as follows: + +```shell +$ kubectl create ns nvidia-device-plugin +``` + +```shell +$ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \ + --from-file=config=/tmp/dp-example-config0.yaml +``` + +```shell +$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ + --version=0.14.0 \ + --namespace nvidia-device-plugin \ + --create-namespace \ + --set config.name=nvidia-plugin-configs +``` + +##### Multiple Config File Example + +For multiple config files, the procedure is similar. + +Create a second `config` file with the following contents: + +```shell +cat << EOF > /tmp/dp-example-config1.yaml +version: v1 +flags: + migStrategy: "mixed" # Only change from config0.yaml + failOnInitError: true + nvidiaDriverRoot: "/" + plugin: + passDeviceSpecs: false + deviceListStrategy: envvar + deviceIDStrategy: uuid +EOF +``` + +And redeploy the device plugin via helm (pointing it at both configs with a specified default). + +```shell +$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ + --version=0.14.0 \ + --namespace nvidia-device-plugin \ + --create-namespace \ + --set config.default=config0 \ + --set-file config.map.config0=/tmp/dp-example-config0.yaml \ + --set-file config.map.config1=/tmp/dp-example-config1.yaml +``` + +As before, this can also be done with a pre-created `ConfigMap` if desired: + +```shell +$ kubectl create ns nvidia-device-plugin +``` + +```shell +$ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \ + --from-file=config0=/tmp/dp-example-config0.yaml \ + --from-file=config1=/tmp/dp-example-config1.yaml +``` + +```shell +$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ + --version=0.14.0 \ + --namespace nvidia-device-plugin \ + --create-namespace \ + --set config.default=config0 \ + --set config.name=nvidia-plugin-configs +``` + +**Note:** If the `config.default` flag is not explicitly set, then a default +value will be inferred from the config if one of the config names is set to +'`default`'. If neither of these are set, then the deployment will fail unless +there is only **_one_** config provided. In the case of just a single config being +provided, it will be chosen as the default because there is no other option. + +##### Updating Per-Node Configuration With a Node Label + +With this setup, plugins on all nodes will have `config0` configured for them +by default. 
+However, the following label can be set to change which
+configuration is applied:
+
+```shell
+kubectl label nodes <node-name> --overwrite \
+    nvidia.com/device-plugin.config=<config-name>
+```
+
+For example, applying a custom config for all nodes that have T4 GPUs installed
+on them might be:
+
+```shell
+kubectl label node \
+    --overwrite \
+    --selector=nvidia.com/gpu.product=TESLA-T4 \
+    nvidia.com/device-plugin.config=t4-config
+```
+
+**Note:** This label can be applied either _before_ or _after_ the plugin is
+started to get the desired configuration applied on the node. Anytime it
+changes value, the plugin will immediately be updated to start serving the
+desired configuration. If it is set to an unknown value, it will skip
+reconfiguration. If it is ever unset, it will fall back to the default.
+
+#### Setting other helm chart values
+
+As mentioned previously, the device plugin's helm chart continues to provide
+direct values to set the configuration options of the plugin without using a
+`ConfigMap`. These should only be used to set globally applicable options
+(which should then never be embedded in the set of config files provided by the
+`ConfigMap`), or used to override these options as desired.
+
+These values are as follows:
+
+```
+  migStrategy:
+      the desired strategy for exposing MIG devices on GPUs that support it
+      [none | single | mixed] (default "none")
+  failOnInitError:
+      fail the plugin if an error is encountered during initialization, otherwise block indefinitely
+      (default 'true')
+  compatWithCPUManager:
+      run with escalated privileges to be compatible with the static CPUManager policy
+      (default 'false')
+  deviceListStrategy:
+      the desired strategy for passing the device list to the underlying runtime
+      [envvar | volume-mounts] (default "envvar")
+  deviceIDStrategy:
+      the desired strategy for passing device IDs to the underlying runtime
+      [uuid | index] (default "uuid")
+  nvidiaDriverRoot:
+      the root path for the NVIDIA driver installation (typical values are '/' or '/run/nvidia/driver')
+```
+
+**Note:** There is no value that directly maps to the `PASS_DEVICE_SPECS`
+configuration option of the plugin. Instead, a value called
+`compatWithCPUManager` is provided which acts as a proxy for this option.
+It both sets the `PASS_DEVICE_SPECS` option of the plugin to true **AND** makes
+sure that the plugin is started with elevated privileges to ensure proper
+compatibility with the `CPUManager`.
+
+Besides these custom configuration options for the plugin, other standard helm
+chart values that are commonly overridden are:
+
+```
+  legacyDaemonsetAPI:
+      use the legacy daemonset API version 'extensions/v1beta1'
+      (default 'false')
+  runtimeClassName:
+      the runtimeClassName to use, for use with clusters that have multiple runtimes. (typical value is 'nvidia')
+```
+
+Please take a look at the
+[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.0/deployments/helm/nvidia-device-plugin/values.yaml)
+file to see the full set of overridable parameters for the device plugin.
+
+Examples of setting these options include:
+
+Enabling compatibility with the `CPUManager` and running with a request for
+100ms of CPU time and a limit of 512MB of memory.
+ +```shell +$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ + --version=0.14.0 \ + --namespace nvidia-device-plugin \ + --create-namespace \ + --set compatWithCPUManager=true \ + --set resources.requests.cpu=100m \ + --set resources.limits.memory=512Mi +``` + +Using the legacy Daemonset API (only available on Kubernetes < `v1.16`): + +```shell +$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ + --version=0.14.0 \ + --namespace nvidia-device-plugin \ + --create-namespace \ + --set legacyDaemonsetAPI=true +``` + +Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy` + +```shell +$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ + --version=0.14.0 \ + --namespace nvidia-device-plugin \ + --create-namespace \ + --set compatWithCPUManager=true \ + --set migStrategy=mixed +``` + +#### Deploying with gpu-feature-discovery for automatic node labels + +As of `v0.12.0`, the device plugin's helm chart has integrated support to +deploy +[`gpu-feature-discovery`](https://github.com/NVIDIA/gpu-feature-discovery) +(GFD) as a subchart. One can use GFD to automatically generate labels for the +set of GPUs available on a node. Under the hood, it leverages Node Feature +Discovery to perform this labeling. + +To enable it, simply set `gfd.enabled=true` during helm install. + +```shell +helm upgrade -i nvdp nvdp/nvidia-device-plugin \ + --version=0.14.0 \ + --namespace nvidia-device-plugin \ + --create-namespace \ + --set gfd.enabled=true +``` + +Under the hood this will also deploy +[`node-feature-discovery`](https://github.com/kubernetes-sigs/node-feature-discovery) +(NFD) since it is a prerequisite of GFD. If you already have NFD deployed on +your cluster and do not wish for it to be pulled in by this installation, you +can disable it with `nfd.enabled=false`. + +In addition to the standard node labels applied by GFD, the following label +will also be included when deploying the plugin with the time-slicing extensions +described [above](#shared-access-to-gpus-with-cuda-time-slicing). + +```shell +nvidia.com/.replicas = +``` + +Additionally, the `nvidia.com/.product` will be modified as follows if +`renameByDefault=false`. + +```shell +nvidia.com/.product = -SHARED +``` + +Using these labels, users have a way of selecting a shared vs. non-shared GPU +in the same way they would traditionally select one GPU model over another. +That is, the `SHARED` annotation ensures that a `nodeSelector` can be used to +attract pods to nodes that have shared GPUs on them. + +Since having `renameByDefault=true` already encodes the fact that the resource is +shared on the resource name , there is no need to annotate the product +name with `SHARED`. Users can already find the shared resources they need by +simply requesting it in their pod spec. + +Note: When running with `renameByDefault=false` and `migStrategy=single` both +the MIG profile name and the new `SHARED` annotation will be appended to the +product name, e.g.: + +```shell +nvidia.com/gpu.product = A100-SXM4-40GB-MIG-1g.5gb-SHARED +``` + +### Deploying via `helm install` with a direct URL to the `helm` package + +If you prefer not to install from the `nvidia-device-plugin` `helm` repo, you can +run `helm install` directly against the tarball of the plugin's `helm` package. +The example below installs the same chart as the method above, except that +it uses a direct URL to the `helm` chart instead of via the `helm` repo. 
+ +Using the default values for the flags: + +```shell +$ helm upgrade -i nvdp \ + --namespace nvidia-device-plugin \ + --create-namespace \ + https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.14.0.tgz +``` diff --git a/docs/gfd_cmd.md b/docs/gfd_cmd.md new file mode 100644 index 000000000..254219847 --- /dev/null +++ b/docs/gfd_cmd.md @@ -0,0 +1,38 @@ +## The GFD Command line interface + +Available options: +``` +gpu-feature-discovery: +Usage: + gpu-feature-discovery [--fail-on-init-error=] [--mig-strategy=] [--oneshot | --sleep-interval=] [--no-timestamp] [--output-file= | -o ] + gpu-feature-discovery -h | --help + gpu-feature-discovery --version + +Options: + -h --help Show this help message and exit + --version Display version and exit + --oneshot Label once and exit + --no-timestamp Do not add timestamp to the labels + --fail-on-init-error= Fail if there is an error during initialization of any label sources [Default: true] + --sleep-interval= Time to sleep between labeling [Default: 60s] + --mig-strategy= Strategy to use for MIG-related labels [Default: none] + -o --output-file= Path to output file + [Default: /etc/kubernetes/node-feature-discovery/features.d/gfd] + +Arguments: + : none | single | mixed + +``` + +You can also use environment variables: + +| Env Variable | Option | Example | +| ---------------------- | -------------------- | ------- | +| GFD_FAIL_ON_INIT_ERROR | --fail-on-init-error | true | +| GFD_MIG_STRATEGY | --mig-strategy | none | +| GFD_ONESHOT | --oneshot | TRUE | +| GFD_NO_TIMESTAMP | --no-timestamp | TRUE | +| GFD_OUTPUT_FILE | --output-file | output | +| GFD_SLEEP_INTERVAL | --sleep-interval | 10s | + +Environment variables override the command line options if they conflict. \ No newline at end of file diff --git a/docs/gfd_labels.md b/docs/gfd_labels.md new file mode 100644 index 000000000..839016d1d --- /dev/null +++ b/docs/gfd_labels.md @@ -0,0 +1,69 @@ +## GPU Feature Discovery generated Labels + +This is the list of the labels generated by NVIDIA GPU Feature Discovery and +their meaning: + +| Label Name | Value Type | Meaning | Example | +| -------------------------------| ---------- | -------------------------------------------- | -------------- | +| nvidia.com/cuda.driver.major | Integer | Major of the version of NVIDIA driver | 418 | +| nvidia.com/cuda.driver.minor | Integer | Minor of the version of NVIDIA driver | 30 | +| nvidia.com/cuda.driver.rev | Integer | Revision of the version of NVIDIA driver | 40 | +| nvidia.com/cuda.runtime.major | Integer | Major of the version of CUDA | 10 | +| nvidia.com/cuda.runtime.minor | Integer | Minor of the version of CUDA | 1 | +| nvidia.com/gfd.timestamp | Integer | Timestamp of the generated labels (optional) | 1555019244 | +| nvidia.com/gpu.compute.major | Integer | Major of the compute capabilities | 3 | +| nvidia.com/gpu.compute.minor | Integer | Minor of the compute capabilities | 3 | +| nvidia.com/gpu.count | Integer | Number of GPUs | 2 | +| nvidia.com/gpu.family | String | Architecture family of the GPU | kepler | +| nvidia.com/gpu.machine | String | Machine type | DGX-1 | +| nvidia.com/gpu.memory | Integer | Memory of the GPU in Mb | 2048 | +| nvidia.com/gpu.product | String | Model of the GPU | GeForce-GT-710 | + +Depending on the MIG strategy used, the following set of labels may also be +available (or override the default values for some of the labels listed above): + +### MIG 'single' strategy + +With this strategy, the single `nvidia.com/gpu` label is overloaded to provide 
+information about MIG devices on the node, rather than full GPUs. This assumes +all GPUs on the node have been divided into identical partitions of the same +size. The example below shows info for a system with 8 full GPUs, each of which +is partitioned into 7 equal sized MIG devices (56 total). + +| Label Name | Value Type | Meaning | Example | +| ----------------------------------- | ---------- | ---------------------------------------- | ------------------------- | +| nvidia.com/mig.strategy | String | MIG strategy in use | single | +| nvidia.com/gpu.product (overridden) | String | Model of the GPU (with MIG info added) | A100-SXM4-40GB-MIG-1g.5gb | +| nvidia.com/gpu.count (overridden) | Integer | Number of MIG devices | 56 | +| nvidia.com/gpu.memory (overridden) | Integer | Memory of each MIG device in Mb | 5120 | +| nvidia.com/gpu.multiprocessors | Integer | Number of Multiprocessors for MIG device | 14 | +| nvidia.com/gpu.slices.gi | Integer | Number of GPU Instance slices | 1 | +| nvidia.com/gpu.slices.ci | Integer | Number of Compute Instance slices | 1 | +| nvidia.com/gpu.engines.copy | Integer | Number of DMA engines for MIG device | 1 | +| nvidia.com/gpu.engines.decoder | Integer | Number of decoders for MIG device | 1 | +| nvidia.com/gpu.engines.encoder | Integer | Number of encoders for MIG device | 1 | +| nvidia.com/gpu.engines.jpeg | Integer | Number of JPEG engines for MIG device | 0 | +| nvidia.com/gpu.engines.ofa | Integer | Number of OfA engines for MIG device | 0 | + +### MIG 'mixed' strategy + +With this strategy, a separate set of labels for each MIG device type is +generated. The name of each MIG device type is defines as follows: +``` +MIG_TYPE=mig-g..gb +e.g. MIG_TYPE=mig-3g.20gb +``` + +| Label Name | Value Type | Meaning | Example | +| ------------------------------------ | ---------- | ---------------------------------------- | -------------- | +| nvidia.com/mig.strategy | String | MIG strategy in use | mixed | +| nvidia.com/MIG\_TYPE.count | Integer | Number of MIG devices of this type | 2 | +| nvidia.com/MIG\_TYPE.memory | Integer | Memory of MIG device type in Mb | 10240 | +| nvidia.com/MIG\_TYPE.multiprocessors | Integer | Number of Multiprocessors for MIG device | 14 | +| nvidia.com/MIG\_TYPE.slices.ci | Integer | Number of GPU Instance slices | 1 | +| nvidia.com/MIG\_TYPE.slices.gi | Integer | Number of Compute Instance slices | 1 | +| nvidia.com/MIG\_TYPE.engines.copy | Integer | Number of DMA engines for MIG device | 1 | +| nvidia.com/MIG\_TYPE.engines.decoder | Integer | Number of decoders for MIG device | 1 | +| nvidia.com/MIG\_TYPE.engines.encoder | Integer | Number of encoders for MIG device | 1 | +| nvidia.com/MIG\_TYPE.engines.jpeg | Integer | Number of JPEG engines for MIG device | 0 | +| nvidia.com/MIG\_TYPE.engines.ofa | Integer | Number of OfA engines for MIG device | 0 | diff --git a/docs/quick_start.md b/docs/quick_start.md new file mode 100644 index 000000000..9ee23f748 --- /dev/null +++ b/docs/quick_start.md @@ -0,0 +1,191 @@ +## Quick Start + +### Prerequisites + +The list of prerequisites for running the NVIDIA GPU Feature Discovery and the Device Plugin is described below: +* NVIDIA drivers >= 384.81 +* nvidia-docker >= 2.0 || nvidia-container-toolkit >= 1.7.0 (>= 1.11.0 to use integrated GPUs on Tegra-based systems) +* nvidia-container-runtime configured as the default low-level runtime +* Kubernetes version >= 1.10 + +### Preparing your GPU Nodes + +The following steps need to be executed on all your GPU nodes. 
+This README assumes that the NVIDIA drivers and the `nvidia-container-toolkit` have been pre-installed. +It also assumes that you have configured the `nvidia-container-runtime` as the default low-level runtime to use. + +Please see: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html + +#### Example for debian-based systems with `docker` and `containerd` + +##### Install the `nvidia-container-toolkit` + +```shell +distribution=$(. /etc/os-release;echo $ID$VERSION_ID) +curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add - +curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/libnvidia-container.list + +sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit +``` + +##### Configure `docker` +When running `kubernetes` with `docker`, edit the config file which is usually +present at `/etc/docker/daemon.json` to set up `nvidia-container-runtime` as +the default low-level runtime: + +```json +{ + "default-runtime": "nvidia", + "runtimes": { + "nvidia": { + "path": "/usr/bin/nvidia-container-runtime", + "runtimeArgs": [] + } + } +} +``` + +And then restart `docker`: + +```shell +$ sudo systemctl restart docker +``` + +##### Configure `containerd` +When running `kubernetes` with `containerd`, edit the config file which is +usually present at `/etc/containerd/config.toml` to set up +`nvidia-container-runtime` as the default low-level runtime: + +``` +version = 2 +[plugins] + [plugins."io.containerd.grpc.v1.cri"] + [plugins."io.containerd.grpc.v1.cri".containerd] + default_runtime_name = "nvidia" + + [plugins."io.containerd.grpc.v1.cri".containerd.runtimes] + [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia] + privileged_without_host_devices = false + runtime_engine = "" + runtime_root = "" + runtime_type = "io.containerd.runc.v2" + [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options] + BinaryName = "/usr/bin/nvidia-container-runtime" +``` + +And then restart `containerd`: + +```shell +$ sudo systemctl restart containerd +``` + +### Node Feature Discovery (NFD) + +The first step is to make sure that [Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery) +is running on every node you want to label. NVIDIA GPU Feature Discovery use +the `local` source so be sure to mount volumes. See +https://github.com/kubernetes-sigs/node-feature-discovery for more details. + +You also need to configure the `Node Feature Discovery` to only expose vendor +IDs in the PCI source. To do so, please refer to the Node Feature Discovery +documentation. + +The following command will deploy NFD with the minimum required set of +parameters to run `gpu-feature-discovery`. + +```shell +$ kubectl apply -k https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default?ref=v0.13.2 +``` + +**Note:** This is a simple static daemonset meant to demonstrate the basic +features required of `node-feature-discovery` in order to successfully run +`gpu-feature-discovery`. Please see the instructions below for [Deployment via +`helm`](#deployment-via-helm) when deploying in a production setting. 
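+
+To sanity-check that NFD is up and labeling nodes before moving on, something
+like the following can be used. The namespace shown assumes the default NFD
+deployment above, and the exact label set depends on your NFD configuration:
+
+```shell
+# NFD workers should be running on every node you want labeled
+$ kubectl get pods -n node-feature-discovery
+
+# GPU nodes should carry PCI feature labels that include the NVIDIA vendor ID (10de)
+$ kubectl get node <gpu-node-name> -o yaml | grep 'feature.node.kubernetes.io/pci-'
+```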
+ + +### Enabling GPU Support in Kubernetes + +Once you have configured the options above on all the GPU nodes in your +cluster, you can enable GPU support by deploying the following Daemonsets: + +```shell +$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.0/deployments/static/gpu-feature-discovery-daemonset.yaml +$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.0/deployments/static/nvidia-device-plugin.yml +``` + +**Note:** This is a simple static daemonset meant to demonstrate the basic +features of the `gpu-feature-discovery` and`nvidia-device-plugin`. Please see +the instructions below for [Deployment via `helm`](deployment_via_helm.md) +when deploying the plugin in a production setting. + +### Verifying Everything Works + +With both NFD and GFD deployed and running, you should now be able to see GPU +related labels appearing on any nodes that have GPUs installed on them. + +```shell +$ kubectl get nodes -o yaml +apiVersion: v1 +items: +- apiVersion: v1 + kind: Node + metadata: + ... + + labels: + nvidia.com/cuda.driver.major: "455" + nvidia.com/cuda.driver.minor: "06" + nvidia.com/cuda.driver.rev: "" + nvidia.com/cuda.runtime.major: "11" + nvidia.com/cuda.runtime.minor: "1" + nvidia.com/gpu.compute.major: "8" + nvidia.com/gpu.compute.minor: "0" + nvidia.com/gfd.timestamp: "1594644571" + nvidia.com/gpu.count: "1" + nvidia.com/gpu.family: ampere + nvidia.com/gpu.machine: NVIDIA DGX-2H + nvidia.com/gpu.memory: "39538" + nvidia.com/gpu.product: A100-SXM4-40GB + ... +... + +``` + +### Running GPU Jobs + +With the daemonset deployed, NVIDIA GPUs can now be requested by a container +using the `nvidia.com/gpu` resource type: + +```yaml +$ cat < **WARNING:** *if you don't request GPUs when using the device plugin with NVIDIA images all +> the GPUs on the machine will be exposed inside your container.* \ No newline at end of file From 9a780e578e39596aafcf5936a5d6a321b1f29dbf Mon Sep 17 00:00:00 2001 From: Carlos Eduardo Arango Gutierrez Date: Thu, 6 Jul 2023 16:43:20 +0200 Subject: [PATCH 2/2] update gitignore file Signed-off-by: Carlos Eduardo Arango Gutierrez --- README.md | 108 ++++---- docs/building_and_running.md | 79 ------ docs/customizing.md | 284 -------------------- docs/deployment_via_helm.md | 384 --------------------------- docs/gfd_cmd.md | 38 --- docs/gfd_labels.md | 69 ----- docs/gpu-feature-discovery/README.md | 357 +++++++++++++++++++++++++ docs/quick_start.md | 191 ------------- 8 files changed, 415 insertions(+), 1095 deletions(-) delete mode 100644 docs/building_and_running.md delete mode 100644 docs/customizing.md delete mode 100644 docs/deployment_via_helm.md delete mode 100644 docs/gfd_cmd.md delete mode 100644 docs/gfd_labels.md create mode 100644 docs/gpu-feature-discovery/README.md delete mode 100644 docs/quick_start.md diff --git a/README.md b/README.md index 3d33e2a3d..788c5b262 100644 --- a/README.md +++ b/README.md @@ -25,10 +25,7 @@ - [Updating Per-Node Configuration With a Node Label](#updating-per-node-configuration-with-a-node-label) + [Setting other helm chart values](#setting-other-helm-chart-values) + [Deploying with gpu-feature-discovery for automatic node labels](#deploying-with-gpu-feature-discovery-for-automatic-node-labels) - * [Deploying via `helm install` with a direct URL to the `helm` package](#deploying-via-helm-install-with-a-direct-url-to-the-helm-package) - [Building and Running Locally](#building-and-running-locally) - [Changelog](#changelog) @@ -42,6 +39,8 @@ The NVIDIA 
device plugin for Kubernetes is a Daemonset that allows you to automa - Run GPU enabled containers in your Kubernetes cluster. This repository contains NVIDIA's official implementation of the [Kubernetes device plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/). +As of v0.15.0 this repository also holds the implementation for GPU Feature Discovery labels, +for further information on GPU Feature Discovery see [here](docs/gpu-feature-discovery/README.md). Please note that: - The NVIDIA device plugin API is beta as of Kubernetes v1.10. @@ -559,11 +558,11 @@ $ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin $ helm repo update ``` -Then verify that the latest release (`v0.14.5`) of the plugin is available: +Then verify that the latest release (`v0.15.0`) of the plugin is available: ``` $ helm search repo nvdp --devel NAME CHART VERSION APP VERSION DESCRIPTION -nvdp/nvidia-device-plugin 0.14.5 0.14.5 A Helm chart for ... +nvdp/nvidia-device-plugin 0.15.0 0.15.0 A Helm chart for ... ``` Once this repo is updated, you can begin installing packages from it to deploy @@ -574,7 +573,7 @@ The most basic installation command without any options is then: helm upgrade -i nvdp nvdp/nvidia-device-plugin \ --namespace nvidia-device-plugin \ --create-namespace \ - --version 0.14.5 + --version 0.15.0 ``` **Note:** You only need the to pass the `--devel` flag to `helm search repo` @@ -583,7 +582,7 @@ version (e.g. `-rc.1`). Full releases will be listed without this. ### Configuring the device plugin's `helm` chart -The `helm` chart for the latest release of the plugin (`v0.14.5`) includes +The `helm` chart for the latest release of the plugin (`v0.15.0`) includes a number of customizable values. Prior to `v0.12.0` the most commonly used values were those that had direct @@ -593,7 +592,7 @@ case of the original values is then to override an option from the `ConfigMap` if desired. Both methods are discussed in more detail below. The full set of values that can be set are found here: -[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.5/deployments/helm/nvidia-device-plugin/values.yaml). +[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0/deployments/helm/nvidia-device-plugin/values.yaml). #### Passing configuration to the plugin via a `ConfigMap`. @@ -632,7 +631,7 @@ EOF And deploy the device plugin via helm (pointing it at this config file and giving it a name): ``` $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.5 \ + --version=0.15.0 \ --namespace nvidia-device-plugin \ --create-namespace \ --set-file config.map.config=/tmp/dp-example-config0.yaml @@ -654,7 +653,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \ ``` ``` $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.5 \ + --version=0.15.0 \ --namespace nvidia-device-plugin \ --create-namespace \ --set config.name=nvidia-plugin-configs @@ -682,7 +681,7 @@ EOF And redeploy the device plugin via helm (pointing it at both configs with a specified default). 
``` $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.5 \ + --version=0.15.0 \ --namespace nvidia-device-plugin \ --create-namespace \ --set config.default=config0 \ @@ -701,7 +700,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \ ``` ``` $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.5 \ + --version=0.15.0 \ --namespace nvidia-device-plugin \ --create-namespace \ --set config.default=config0 \ @@ -784,7 +783,7 @@ chart values that are commonly overridden are: ``` Please take a look in the -[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.5/deployments/helm/nvidia-device-plugin/values.yaml) +[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.15.0/deployments/helm/nvidia-device-plugin/values.yaml) file to see the full set of overridable parameters for the device plugin. Examples of setting these options include: @@ -793,7 +792,7 @@ Enabling compatibility with the `CPUManager` and running with a request for 100ms of CPU time and a limit of 512MB of memory. ```shell $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.5 \ + --version=0.15.0 \ --namespace nvidia-device-plugin \ --create-namespace \ --set compatWithCPUManager=true \ @@ -804,7 +803,7 @@ $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy` ```shell $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.5 \ + --version=0.15.0 \ --namespace nvidia-device-plugin \ --create-namespace \ --set compatWithCPUManager=true \ @@ -823,7 +822,7 @@ Discovery to perform this labeling. To enable it, simply set `gfd.enabled=true` during helm install. ``` helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.5 \ + --version=0.15.0 \ --namespace nvidia-device-plugin \ --create-namespace \ --set gfd.enabled=true @@ -865,8 +864,7 @@ product name, e.g.: ``` nvidia.com/gpu.product = A100-SXM4-40GB-MIG-1g.5gb-SHARED ``` - + ## Building and Running Locally The next sections are focused on building the device plugin locally and running it. It is intended purely for development and testing, and not required by most users. -It assumes you are pinning to the latest release tag (i.e. `v0.14.5`), but can +It assumes you are pinning to the latest release tag (i.e. `v0.15.0`), but can easily be modified to work with any available tag or branch. ### With Docker @@ -944,8 +943,8 @@ easily be modified to work with any available tag or branch. 
#### Build Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin): ```shell -$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.14.5 -$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.14.5 nvcr.io/nvidia/k8s-device-plugin:devel +$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.15.0 +$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.15.0 nvcr.io/nvidia/k8s-device-plugin:devel ``` Option 2, build without cloning the repository: @@ -953,7 +952,7 @@ Option 2, build without cloning the repository: $ docker build \ -t nvcr.io/nvidia/k8s-device-plugin:devel \ -f deployments/container/Dockerfile.ubuntu \ - https://github.com/NVIDIA/k8s-device-plugin.git#v0.14.5 + https://github.com/NVIDIA/k8s-device-plugin.git#v0.15.0 ``` Option 3, if you want to modify the code: @@ -1015,29 +1014,38 @@ See the [changelog](CHANGELOG.md) * You can report a bug by [filing a new issue](https://github.com/NVIDIA/k8s-device-plugin/issues/new) * You can contribute by opening a [pull request](https://help.github.com/articles/using-pull-requests/) -## Documentation - -- [Quick Start](docs/quick_start.md) - * [Prerequisites](docs/quick_start.md#prerequisites) - * [Preparing your GPU Nodes](docs/quick_start.md#preparing-your-gpu-nodes) - * [Node Feature Discovery (NFD)](docs/quick_start.md#node-feature-discovery-nfd) - * [Enabling GPU Support in Kubernetes](docs/quick_start.md#enabling-gpu-support-in-kubernetes) - * [Running GPU Jobs](docs/quick_start.md#running-gpu-jobs) -- [Configuring the NVIDIA device plugin binary](docs/customizing.md) - * [As command line flags or envvars](docs/customizing.md#as-command-line-flags-or-envvars) - * [As a configuration file](docs/customizing.md#as-a-configuration-file) - * [Configuration Option Details](docs/customizing.md#configuration-option-details) - * [Shared Access to GPUs with CUDA Time-Slicing](docs/customizing.md#shared-access-to-gpus-with-cuda-time-slicing) -- [Deployment via `helm`](docs/deployment_via_helm.md) - * [Configuring the device plugin's `helm` chart](docs/deployment_via_helm.md#configuring-the-device-plugins-helm-chart) - + [Passing configuration to the plugin via a `ConfigMap`.](docs/deployment_via_helm.md#passing-configuration-to-the-plugin-via-a-configmap) - - [Single Config File Example](docs/deployment_via_helm.md#single-config-file-example) - - [Multiple Config File Example](docs/deployment_via_helm.md#multiple-config-file-example) - - [Updating Per-Node Configuration With a Node Label](docs/deployment_via_helm.md#updating-per-node-configuration-with-a-node-label) - + [Setting other helm chart values](docs/deployment_via_helm.md#setting-other-helm-chart-values) - + [Deploying with gpu-feature-discovery for automatic node labels](docs/deployment_via_helm.md#deploying-with-gpu-feature-discovery-for-automatic-node-labels) - * [Deploying via `helm install` with a direct URL to the `helm` package](docs/deployment_via_helm.md#deploying-via-helm-install-with-a-direct-url-to-the-helm-package) -- [Building and Running Locally](docs/building_and_running.md) -- [GPU Feature Discovery CMD](docs/gfd_cmd.md) -- [GPU Feature Discovery Labels](docs/gfd_labels.md) -- [Changelog](CHANGELOG.md) +### Versioning + +Before v1.10 the versioning scheme of the device plugin had to match exactly the version of Kubernetes. +After the promotion of device plugins to beta this condition was was no longer required. 
+We quickly noticed that this versioning scheme was very confusing for users as they still expected to see +a version of the device plugin for each version of Kubernetes. + +This versioning scheme applies to the tags `v1.8`, `v1.9`, `v1.10`, `v1.11`, `v1.12`. + +We have now changed the versioning to follow [SEMVER](https://semver.org/). The +first version following this scheme has been tagged `v0.0.0`. + +Going forward, the major version of the device plugin will only change +following a change in the device plugin API itself. For example, version +`v1beta1` of the device plugin API corresponds to version `v0.x.x` of the +device plugin. If a new `v2beta2` version of the device plugin API comes out, +then the device plugin will increase its major version to `1.x.x`. + +As of now, the device plugin API for Kubernetes >= v1.10 is `v1beta1`. If you +have a version of Kubernetes >= 1.10 you can deploy any device plugin version > +`v0.0.0`. + +### Upgrading Kubernetes with the Device Plugin + +Upgrading Kubernetes when you have a device plugin deployed doesn't require you +to do any, particular changes to your workflow. The API is versioned and is +pretty stable (though it is not guaranteed to be non breaking). Starting with +Kubernetes version 1.10, you can use `v0.3.0` of the device plugin to perform +upgrades, and Kubernetes won't require you to deploy a different version of the +device plugin. Once a node comes back online after the upgrade, you will see +GPUs re-registering themselves automatically. + +Upgrading the device plugin itself is a more complex task. It is recommended to +drain GPU tasks as we cannot guarantee that GPU tasks will survive a rolling +upgrade. However we make best efforts to preserve GPU tasks during an upgrade. diff --git a/docs/building_and_running.md b/docs/building_and_running.md deleted file mode 100644 index da207128d..000000000 --- a/docs/building_and_running.md +++ /dev/null @@ -1,79 +0,0 @@ -## Building and Running Locally - -The next sections are focused on building the device plugin locally and running it. -It is intended purely for development and testing, and not required by most users. -It assumes you are pinning to the latest release tag (i.e. `v0.14.0`), but can -easily be modified to work with any available tag or branch. 
- -### With Docker - -#### Build -Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin): - -```shell -$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.14.0 -$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.14.0 nvcr.io/nvidia/k8s-device-plugin:devel -``` - -Option 2, build without cloning the repository: - -```shell -$ docker build \ - -t nvcr.io/nvidia/k8s-device-plugin:devel \ - -f deployments/container/Dockerfile.ubuntu \ - https://github.com/NVIDIA/k8s-device-plugin.git#v0.14.0 -``` - -Option 3, if you want to modify the code: - -```shell -$ git clone https://github.com/NVIDIA/k8s-device-plugin.git && cd k8s-device-plugin -$ make -f deployments/container/Makefile build-ubuntu20.04 -``` - -#### Run -Without compatibility for the `CPUManager` static policy: - -```shell -$ docker run \ - -it \ - --security-opt=no-new-privileges \ - --cap-drop=ALL \ - --network=none \ - -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins \ - nvcr.io/nvidia/k8s-device-plugin:devel -``` - -With compatibility for the `CPUManager` static policy: - -```shell -$ docker run \ - -it \ - --privileged \ - --network=none \ - -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins \ - nvcr.io/nvidia/k8s-device-plugin:devel --pass-device-specs -``` - -### Without Docker - -#### Build - - -```shell -$ make cmds -``` - -#### Run -Without compatibility for the `CPUManager` static policy: - -```shell -$ ./gpu-feature-discovery --output=$(pwd)/gfd -$ ./k8s-device-plugin -``` - -With compatibility for the `CPUManager` static policy: - -```shell -$ ./k8s-device-plugin --pass-device-specs -``` diff --git a/docs/customizing.md b/docs/customizing.md deleted file mode 100644 index 8e609a155..000000000 --- a/docs/customizing.md +++ /dev/null @@ -1,284 +0,0 @@ -## Configuring the NVIDIA device plugin binary - -The NVIDIA device plugin has a number of options that can be configured for it. -These options can be configured as command line flags, environment variables, -or via a config file when launching the device plugin. Here we explain what -each of these options are and how to configure them directly against the plugin -binary. The following section explains how to set these configurations when -deploying the plugin via `helm`. - -### As command line flags or envvars - -| Flag | Envvar | Default Value | -|--------------------------|-------------------------|-----------------| -| `--mig-strategy` | `$MIG_STRATEGY` | `"none"` | -| `--fail-on-init-error` | `$FAIL_ON_INIT_ERROR` | `true` | -| `--nvidia-driver-root` | `$NVIDIA_DRIVER_ROOT` | `"/"` | -| `--pass-device-specs` | `$PASS_DEVICE_SPECS` | `false` | -| `--device-list-strategy` | `$DEVICE_LIST_STRATEGY` | `"envvar"` | -| `--device-id-strategy` | `$DEVICE_ID_STRATEGY` | `"uuid"` | -| `--config-file` | `$CONFIG_FILE` | `""` | - -### As a configuration file - -```yaml -version: v1 -flags: - migStrategy: "none" - failOnInitError: true - nvidiaDriverRoot: "/" - plugin: - passDeviceSpecs: false - deviceListStrategy: "envvar" - deviceIDStrategy: "uuid" -``` - -**Note:** The configuration file has an explicit `plugin` section because it -is a shared configuration between the plugin and -[`gpu-feature-discovery`](https://github.com/NVIDIA/gpu-feature-discovery). -All options inside the `plugin` section are specific to the plugin. All -options outside of this section are shared. 
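
As a rough illustration, the same options can be supplied in any of these three forms when launching the plugin binary directly. The sketch below assumes a locally built `k8s-device-plugin` binary and an arbitrary config file path:

```shell
# Equivalent ways of setting the same options (illustrative values):
$ ./k8s-device-plugin --mig-strategy=single --pass-device-specs
$ MIG_STRATEGY=single PASS_DEVICE_SPECS=true ./k8s-device-plugin
# Or point the plugin at a configuration file like the one shown above:
$ ./k8s-device-plugin --config-file=/tmp/dp-config.yaml
```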
- -### Configuration Option Details -**`MIG_STRATEGY`**: - the desired strategy for exposing MIG devices on GPUs that support it - - `[none | single | mixed] (default 'none')` - - The `MIG_STRATEGY` option configures the daemonset to be able to expose - Multi-Instance GPUs (MIG) on GPUs that support them. More information on what - these strategies are and how they should be used can be found in [Supporting - Multi-Instance GPUs (MIG) in - Kubernetes](https://docs.google.com/document/d/1mdgMQ8g7WmaI_XVVRrCvHPFPOMCm5LQD5JefgAh6N8g). - - **Note:** With a `MIG_STRATEGY` of mixed, you will have additional resources - available to you of the form `nvidia.com/mig-g.gb` - that you can set in your pod spec to get access to a specific MIG device. - -**`FAIL_ON_INIT_ERROR`**: - fail the plugin if an error is encountered during initialization, otherwise block indefinitely - - `(default 'true')` - - When set to true, the `FAIL_ON_INIT_ERROR` option fails the plugin if an error is - encountered during initialization. When set to false, it prints an error - message and blocks the plugin indefinitely instead of failing. Blocking - indefinitely follows legacy semantics that allow the plugin to deploy - successfully on nodes that don't have GPUs on them (and aren't supposed to have - GPUs on them) without throwing an error. In this way, you can blindly deploy a - daemonset with the plugin on all nodes in your cluster, whether they have GPUs - on them or not, without encountering an error. However, doing so means that - there is no way to detect an actual error on nodes that are supposed to have - GPUs on them. Failing if an initialization error is encountered is now the - default and should be adopted by all new deployments. - -**`NVIDIA_DRIVER_ROOT`**: - the root path for the NVIDIA driver installation - - `(default '/')` - - When the NVIDIA drivers are installed directly on the host, this should be - set to `'/'`. When installed elsewhere (e.g. via a driver container), this - should be set to the root filesystem where the drivers are installed (e.g. - `'/run/nvidia/driver'`). - - **Note:** This option is only necessary when used in conjunction with the - `$PASS_DEVICE_SPECS` option described below. It tells the plugin what prefix - to add to any device file paths passed back as part of the device specs. - -**`PASS_DEVICE_SPECS`**: - pass the paths and desired device node permissions for any NVIDIA devices - being allocated to the container - - `(default 'false')` - - This option exists for the sole purpose of allowing the device plugin to - interoperate with the `CPUManager` in Kubernetes. Setting this flag also - requires one to deploy the daemonset with elevated privileges, so only do so if - you know you need to interoperate with the `CPUManager`. - -**`DEVICE_LIST_STRATEGY`**: - the desired strategy for passing the device list to the underlying runtime - - `[envvar | volume-mounts] (default 'envvar')` - - The `DEVICE_LIST_STRATEGY` flag allows one to choose which strategy the plugin - will use to advertise the list of GPUs allocated to a container. This is - traditionally done by setting the `NVIDIA_VISIBLE_DEVICES` environment variable - as described - [here](https://github.com/NVIDIA/nvidia-container-runtime#nvidia_visible_devices). - This strategy can be selected via the (default) `envvar` option. Support has - been added to the `nvidia-container-toolkit` to also allow passing the list - of devices as a set of volume mounts instead of as an environment variable. 
- This strategy can be selected via the `volume-mounts` option. Details for the - rationale behind this strategy can be found - [here](https://docs.google.com/document/d/1uXVF-NWZQXgP1MLb87_kMkQvidpnkNWicdpO2l9g-fw/edit#heading=h.b3ti65rojfy5). - -**`DEVICE_ID_STRATEGY`**: - the desired strategy for passing device IDs to the underlying runtime - - `[uuid | index] (default 'uuid')` - - The `DEVICE_ID_STRATEGY` flag allows one to choose which strategy the plugin will - use to pass the device ID of the GPUs allocated to a container. The device ID - has traditionally been passed as the UUID of the GPU. This flag lets a user - decide if they would like to use the UUID or the index of the GPU (as seen in - the output of `nvidia-smi`) as the identifier passed to the underlying runtime. - Passing the index may be desirable in situations where pods that have been - allocated GPUs by the plugin get restarted with different physical GPUs - attached to them. - -**`CONFIG_FILE`**: - point the plugin at a configuration file instead of relying on command line - flags or environment variables - - `(default '')` - - The order of precedence for setting each option is (1) command line flag, (2) - environment variable, (3) configuration file. In this way, one could use a - pre-defined configuration file, but then override the values set in it at - launch time. As described below, a `ConfigMap` can be used to point the - plugin at a desired configuration file when deploying via `helm`. - -### Shared Access to GPUs with CUDA Time-Slicing - -The NVIDIA device plugin allows oversubscription of GPUs through a set of -extended options in its configuration file. Under the hood, CUDA time-slicing -is used to allow workloads that land on oversubscribed GPUs to interleave with -one another. However, nothing special is done to isolate workloads that are -granted replicas from the same underlying GPU, and each workload has access to -the GPU memory and runs in the same fault-domain as of all the others (meaning -if one workload crashes, they all do). - - -These extended options can be seen below: - -```yaml -version: v1 -sharing: - timeSlicing: - renameByDefault: - failRequestsGreaterThanOne: - resources: - - name: - replicas: - ... -``` - -That is, for each named resource under `sharing.timeSlicing.resources`, a number -of replicas can now be specified for that resource type. These replicas -represent the number of shared accesses that will be granted for a GPU -represented by that resource type. - -If `renameByDefault=true`, then each resource will be advertised under the name -`.shared` instead of simply ``. - -If `failRequestsGreaterThanOne=true`, then the plugin will fail to allocate any -shared resources to a container if they request more than one. The container’s -pod will fail with an `UnexpectedAdmissionError` and need to be manually deleted, -updated, and redeployed. - -For example: - -```yaml -version: v1 -sharing: - timeSlicing: - resources: - - name: nvidia.com/gpu - replicas: 10 -``` - -If this configuration were applied to a node with 8 GPUs on it, the plugin -would now advertise 80 `nvidia.com/gpu` resources to Kubernetes instead of 8. - -```shell -$ kubectl describe node -... -Capacity: - nvidia.com/gpu: 80 -... -``` - -Likewise, if the following configuration were applied to a node, then 80 -`nvidia.com/gpu.shared` resources would be advertised to Kubernetes instead of 8 -`nvidia.com/gpu` resources. 
- -```yaml -version: v1 -sharing: - timeSlicing: - renameByDefault: true - resources: - - name: nvidia.com/gpu - replicas: 10 - ... -``` - -```shell -$ kubectl describe node -... -Capacity: - nvidia.com/gpu.shared: 80 -... -``` - -In both cases, the plugin simply creates 10 references to each GPU and -indiscriminately hands them out to anyone that asks for them. - -If `failRequestsGreaterThanOne=true` were set in either of these -configurations and a user requested more than one `nvidia.com/gpu` or -`nvidia.com/gpu.shared` resource in their pod spec, then the container would -fail with the resulting error: - -```shell -$ kubectl describe pod gpu-pod -... -Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Warning UnexpectedAdmissionError 13s kubelet Allocate failed due to rpc error: code = Unknown desc = request for 'nvidia.com/gpu: 2' too large: maximum request size for shared resources is 1, which is unexpected -... -``` - -**Note:** Unlike with "normal" GPU requests, requesting more than one shared -GPU does not imply that you will get guaranteed access to a proportional amount -of compute power. It only implies that you will get access to a GPU that is -shared by other clients (each of which has the freedom to run as many processes -on the underlying GPU as they want). Under the hood CUDA will simply give an -equal share of time to all of the GPU processes across all of the clients. The -`failRequestsGreaterThanOne` flag is meant to help users understand this -subtlety, by treating a request of `1` as an access request rather than an -exclusive resource request. Setting `failRequestsGreaterThanOne=true` is -recommended, but it is set to `false` by default to retain backwards -compatibility. - -As of now, the only supported resource available for time-slicing are -`nvidia.com/gpu` as well as any of the resource types that emerge from -configuring a node with the mixed MIG strategy. - -For example, the full set of time-sliceable resources on a T4 card would be: - -```shell -nvidia.com/gpu -``` - -And the full set of time-sliceable resources on an A100 40GB card would be: - -```shell -nvidia.com/gpu -nvidia.com/mig-1g.5gb -nvidia.com/mig-2g.10gb -nvidia.com/mig-3g.20gb -nvidia.com/mig-7g.40gb -``` - -Likewise, on an A100 80GB card, they would be: - -```shell -nvidia.com/gpu -nvidia.com/mig-1g.10gb -nvidia.com/mig-2g.20gb -nvidia.com/mig-3g.40gb -nvidia.com/mig-7g.80gb -``` \ No newline at end of file diff --git a/docs/deployment_via_helm.md b/docs/deployment_via_helm.md deleted file mode 100644 index 79f5e0520..000000000 --- a/docs/deployment_via_helm.md +++ /dev/null @@ -1,384 +0,0 @@ -## Deployment via `helm` - -The preferred method to deploy the `GPU Feature Discovery` and `Device Plugin` -is as a daemonset using `helm`. Instructions for installing `helm` can be -found [here](https://helm.sh/docs/intro/install/). - -Begin by setting up the plugin's `helm` repository and updating it at follows: - -```shell -$ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin -$ helm repo update -``` - -Then verify that the latest release (`v0.14.1`) of the plugin is available: -``` -$ helm search repo nvdp --devel -NAME CHART VERSION APP VERSION DESCRIPTION -nvdp/nvidia-device-plugin 0.14.0 0.14.0 A Helm chart for ... -``` - -Once this repo is updated, you can begin installing packages from it to deploy -the `gpu-feature-discovery` and `nvidia-device-plugin` helm chart. 
- -The most basic installation command without any options is then: - -```shell -helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --namespace nvidia-device-plugin \ - --create-namespace \ - --version 0.14.1 -``` - -**Note:** As os `v0.14.1`, by default helm will install `NFD` , -`gpu-feature-discovery` and `nvidia-device-plugin` in the -`nvidia-device-plugin` namespace. If you want to install them in a different -namespace, you can use the `--namespace` flag. You can turn off the -installation of `NFD` and `gpu-feature-discovery` by setting -`nfd.enabled=false`, `gpuFeatureDiscovery.enabled=false` or -`devicePlugin.enabled=false` respectively. - -**Note:** You only need the to pass the `--devel` flag to `helm search repo` -and the `--version` flag to `helm upgrade -i` if this is a pre-release -version (e.g. `-rc.1`). Full releases will be listed without this. - -### Configuring the device plugin's `helm` chart - -The `helm` chart for the latest release of the plugin (`v0.14.0`) includes -a number of customizable values. - -Prior to `v0.12.0` the most commonly used values were those that had direct -mappings to the command line options of the plugin binary. As of `v0.12.0`, the -preferred method to set these options is via a `ConfigMap`. The primary use -case of the original values is then to override an option from the `ConfigMap` -if desired. Both methods are discussed in more detail below. - -**Note:** The following document provides more information on the available MIG -strategies and how they should be used [Supporting Multi-Instance GPUs (MIG) in -Kubernetes](https://docs.google.com/document/d/1mdgMQ8g7WmaI_XVVRrCvHPFPOMCm5LQD5JefgAh6N8g). - -Please take a look in the following `values.yaml` files to see the full set of -overridable parameters for both the top-level `gpu-feature-discovery` chart and -the `node-feature-discovery` subchart. - -The full set of values that can be set are found here: -[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.1/deployments/helm/nvidia-device-plugin/values.yaml). - -#### Passing configuration to the plugin via a `ConfigMap`. - -In general, we provide a mechanism to pass _multiple_ configuration files to -to the plugin's `helm` chart, with the ability to choose which configuration -file should be applied to a node via a node label. - -In this way, a single chart can be used to deploy each component, but custom -configurations can be applied to different nodes throughout the cluster. - -There are two ways to provide a `ConfigMap` for use by the plugin: - - 1. Via an external reference to a pre-defined `ConfigMap` - 1. As a set of named config files to build an integrated `ConfigMap` associated with the chart - -These can be set via the chart values `config.name` and `config.map` respectively. -In both cases, the value `config.default` can be set to point to one of the -named configs in the `ConfigMap` and provide a default configuration for nodes -that have not been customized via a node label (more on this later). 
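
These values can be passed individually with `--set`/`--set-file` flags (as shown in the examples below), or collected into a chart values file. The following is a hedged sketch only; the file name and the embedded config contents are illustrative:

```shell
cat << EOF > /tmp/dp-values.yaml
config:
  # Default config applied to nodes without an explicit node label
  default: config0
  # Alternative: reference a pre-created ConfigMap instead of building one
  # name: nvidia-plugin-configs
  map:
    config0: |-
      version: v1
      flags:
        migStrategy: "none"
EOF

helm upgrade -i nvdp nvdp/nvidia-device-plugin \
    --version=0.14.0 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    -f /tmp/dp-values.yaml
```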
- -##### Single Config File Example -As an example, create a valid config file on your local filesystem, such as the following: - -```shell -cat << EOF > /tmp/dp-example-config0.yaml -version: v1 -flags: - migStrategy: "none" - failOnInitError: true - nvidiaDriverRoot: "/" - plugin: - passDeviceSpecs: false - deviceListStrategy: envvar - deviceIDStrategy: uuid -EOF -``` - -And deploy the device plugin via helm (pointing it at this config file and giving it a name): - -```shell -$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.0 \ - --namespace nvidia-device-plugin \ - --create-namespace \ - --set-file config.map.config=/tmp/dp-example-config0.yaml -``` - -Under the hood this will deploy a `ConfigMap` associated with the plugin and put -the contents of the `dp-example-config0.yaml` file into it, using the name -`config` as its key. It will then start the plugin such that this config gets -applied when the plugin comes online. - -If you don’t want the plugin’s helm chart to create the `ConfigMap` for you, you -can also point it at a pre-created `ConfigMap` as follows: - -```shell -$ kubectl create ns nvidia-device-plugin -``` - -```shell -$ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \ - --from-file=config=/tmp/dp-example-config0.yaml -``` - -```shell -$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.0 \ - --namespace nvidia-device-plugin \ - --create-namespace \ - --set config.name=nvidia-plugin-configs -``` - -##### Multiple Config File Example - -For multiple config files, the procedure is similar. - -Create a second `config` file with the following contents: - -```shell -cat << EOF > /tmp/dp-example-config1.yaml -version: v1 -flags: - migStrategy: "mixed" # Only change from config0.yaml - failOnInitError: true - nvidiaDriverRoot: "/" - plugin: - passDeviceSpecs: false - deviceListStrategy: envvar - deviceIDStrategy: uuid -EOF -``` - -And redeploy the device plugin via helm (pointing it at both configs with a specified default). - -```shell -$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.0 \ - --namespace nvidia-device-plugin \ - --create-namespace \ - --set config.default=config0 \ - --set-file config.map.config0=/tmp/dp-example-config0.yaml \ - --set-file config.map.config1=/tmp/dp-example-config1.yaml -``` - -As before, this can also be done with a pre-created `ConfigMap` if desired: - -```shell -$ kubectl create ns nvidia-device-plugin -``` - -```shell -$ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \ - --from-file=config0=/tmp/dp-example-config0.yaml \ - --from-file=config1=/tmp/dp-example-config1.yaml -``` - -```shell -$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.0 \ - --namespace nvidia-device-plugin \ - --create-namespace \ - --set config.default=config0 \ - --set config.name=nvidia-plugin-configs -``` - -**Note:** If the `config.default` flag is not explicitly set, then a default -value will be inferred from the config if one of the config names is set to -'`default`'. If neither of these are set, then the deployment will fail unless -there is only **_one_** config provided. In the case of just a single config being -provided, it will be chosen as the default because there is no other option. - -##### Updating Per-Node Configuration With a Node Label - -With this setup, plugins on all nodes will have `config0` configured for them -by default. 
However, the following label can be set to change which -configuration is applied: - -```shell -kubectl label nodes –-overwrite \ - nvidia.com/device-plugin.config= -``` - -For example, applying a custom config for all nodes that have T4 GPUs installed -on them might be: - -```shell -kubectl label node \ - --overwrite \ - --selector=nvidia.com/gpu.product=TESLA-T4 \ - nvidia.com/device-plugin.config=t4-config -``` - -**Note:** This label can be applied either _before_ or _after_ the plugin is -started to get the desired configuration applied on the node. Anytime it -changes value, the plugin will immediately be updated to start serving the -desired configuration. If it is set to an unknown value, it will skip -reconfiguration. If it is ever unset, it will fallback to the default. - -#### Setting other helm chart values - -As mentioned previously, the device plugin's helm chart continues to provide -direct values to set the configuration options of the plugin without using a -`ConfigMap`. These should only be used to set globally applicable options -(which should then never be embedded in the set of config files provided by the -`ConfigMap`), or used to override these options as desired. - -These values are as follows: - -``` - migStrategy: - the desired strategy for exposing MIG devices on GPUs that support it - [none | single | mixed] (default "none") - failOnInitError: - fail the plugin if an error is encountered during initialization, otherwise block indefinitely - (default 'true') - compatWithCPUManager: - run with escalated privileges to be compatible with the static CPUManager policy - (default 'false') - deviceListStrategy: - the desired strategy for passing the device list to the underlying runtime - [envvar | volume-mounts] (default "envvar") - deviceIDStrategy: - the desired strategy for passing device IDs to the underlying runtime - [uuid | index] (default "uuid") - nvidiaDriverRoot: - the root path for the NVIDIA driver installation (typical values are '/' or '/run/nvidia/driver') -``` - -**Note:** There is no value that directly maps to the `PASS_DEVICE_SPECS` -configuration option of the plugin. Instead a value called -`compatWithCPUManager` is provided which acts as a proxy for this option. -It both sets the `PASS_DEVICE_SPECS` option of the plugin to true **AND** makes -sure that the plugin is started with elevated privileges to ensure proper -compatibility with the `CPUManager`. - -Besides these custom configuration options for the plugin, other standard helm -chart values that are commonly overridden are: - -``` - legacyDaemonsetAPI: - use the legacy daemonset API version 'extensions/v1beta1' - (default 'false') - runtimeClassName: - the runtimeClassName to use, for use with clusters that have multiple runtimes. (typical value is 'nvidia') -``` - -Please take a look in the -[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.0/deployments/helm/nvidia-device-plugin/values.yaml) -file to see the full set of overridable parameters for the device plugin. - -Examples of setting these options include: - -Enabling compatibility with the `CPUManager` and running with a request for -100ms of CPU time and a limit of 512MB of memory. 
- -```shell -$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.0 \ - --namespace nvidia-device-plugin \ - --create-namespace \ - --set compatWithCPUManager=true \ - --set resources.requests.cpu=100m \ - --set resources.limits.memory=512Mi -``` - -Using the legacy Daemonset API (only available on Kubernetes < `v1.16`): - -```shell -$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.0 \ - --namespace nvidia-device-plugin \ - --create-namespace \ - --set legacyDaemonsetAPI=true -``` - -Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy` - -```shell -$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.0 \ - --namespace nvidia-device-plugin \ - --create-namespace \ - --set compatWithCPUManager=true \ - --set migStrategy=mixed -``` - -#### Deploying with gpu-feature-discovery for automatic node labels - -As of `v0.12.0`, the device plugin's helm chart has integrated support to -deploy -[`gpu-feature-discovery`](https://github.com/NVIDIA/gpu-feature-discovery) -(GFD) as a subchart. One can use GFD to automatically generate labels for the -set of GPUs available on a node. Under the hood, it leverages Node Feature -Discovery to perform this labeling. - -To enable it, simply set `gfd.enabled=true` during helm install. - -```shell -helm upgrade -i nvdp nvdp/nvidia-device-plugin \ - --version=0.14.0 \ - --namespace nvidia-device-plugin \ - --create-namespace \ - --set gfd.enabled=true -``` - -Under the hood this will also deploy -[`node-feature-discovery`](https://github.com/kubernetes-sigs/node-feature-discovery) -(NFD) since it is a prerequisite of GFD. If you already have NFD deployed on -your cluster and do not wish for it to be pulled in by this installation, you -can disable it with `nfd.enabled=false`. - -In addition to the standard node labels applied by GFD, the following label -will also be included when deploying the plugin with the time-slicing extensions -described [above](#shared-access-to-gpus-with-cuda-time-slicing). - -```shell -nvidia.com/.replicas = -``` - -Additionally, the `nvidia.com/.product` will be modified as follows if -`renameByDefault=false`. - -```shell -nvidia.com/.product = -SHARED -``` - -Using these labels, users have a way of selecting a shared vs. non-shared GPU -in the same way they would traditionally select one GPU model over another. -That is, the `SHARED` annotation ensures that a `nodeSelector` can be used to -attract pods to nodes that have shared GPUs on them. - -Since having `renameByDefault=true` already encodes the fact that the resource is -shared on the resource name , there is no need to annotate the product -name with `SHARED`. Users can already find the shared resources they need by -simply requesting it in their pod spec. - -Note: When running with `renameByDefault=false` and `migStrategy=single` both -the MIG profile name and the new `SHARED` annotation will be appended to the -product name, e.g.: - -```shell -nvidia.com/gpu.product = A100-SXM4-40GB-MIG-1g.5gb-SHARED -``` - -### Deploying via `helm install` with a direct URL to the `helm` package - -If you prefer not to install from the `nvidia-device-plugin` `helm` repo, you can -run `helm install` directly against the tarball of the plugin's `helm` package. -The example below installs the same chart as the method above, except that -it uses a direct URL to the `helm` chart instead of via the `helm` repo. 
- -Using the default values for the flags: - -```shell -$ helm upgrade -i nvdp \ - --namespace nvidia-device-plugin \ - --create-namespace \ - https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.14.0.tgz -``` diff --git a/docs/gfd_cmd.md b/docs/gfd_cmd.md deleted file mode 100644 index 254219847..000000000 --- a/docs/gfd_cmd.md +++ /dev/null @@ -1,38 +0,0 @@ -## The GFD Command line interface - -Available options: -``` -gpu-feature-discovery: -Usage: - gpu-feature-discovery [--fail-on-init-error=] [--mig-strategy=] [--oneshot | --sleep-interval=] [--no-timestamp] [--output-file= | -o ] - gpu-feature-discovery -h | --help - gpu-feature-discovery --version - -Options: - -h --help Show this help message and exit - --version Display version and exit - --oneshot Label once and exit - --no-timestamp Do not add timestamp to the labels - --fail-on-init-error= Fail if there is an error during initialization of any label sources [Default: true] - --sleep-interval= Time to sleep between labeling [Default: 60s] - --mig-strategy= Strategy to use for MIG-related labels [Default: none] - -o --output-file= Path to output file - [Default: /etc/kubernetes/node-feature-discovery/features.d/gfd] - -Arguments: - : none | single | mixed - -``` - -You can also use environment variables: - -| Env Variable | Option | Example | -| ---------------------- | -------------------- | ------- | -| GFD_FAIL_ON_INIT_ERROR | --fail-on-init-error | true | -| GFD_MIG_STRATEGY | --mig-strategy | none | -| GFD_ONESHOT | --oneshot | TRUE | -| GFD_NO_TIMESTAMP | --no-timestamp | TRUE | -| GFD_OUTPUT_FILE | --output-file | output | -| GFD_SLEEP_INTERVAL | --sleep-interval | 10s | - -Environment variables override the command line options if they conflict. \ No newline at end of file diff --git a/docs/gfd_labels.md b/docs/gfd_labels.md deleted file mode 100644 index 839016d1d..000000000 --- a/docs/gfd_labels.md +++ /dev/null @@ -1,69 +0,0 @@ -## GPU Feature Discovery generated Labels - -This is the list of the labels generated by NVIDIA GPU Feature Discovery and -their meaning: - -| Label Name | Value Type | Meaning | Example | -| -------------------------------| ---------- | -------------------------------------------- | -------------- | -| nvidia.com/cuda.driver.major | Integer | Major of the version of NVIDIA driver | 418 | -| nvidia.com/cuda.driver.minor | Integer | Minor of the version of NVIDIA driver | 30 | -| nvidia.com/cuda.driver.rev | Integer | Revision of the version of NVIDIA driver | 40 | -| nvidia.com/cuda.runtime.major | Integer | Major of the version of CUDA | 10 | -| nvidia.com/cuda.runtime.minor | Integer | Minor of the version of CUDA | 1 | -| nvidia.com/gfd.timestamp | Integer | Timestamp of the generated labels (optional) | 1555019244 | -| nvidia.com/gpu.compute.major | Integer | Major of the compute capabilities | 3 | -| nvidia.com/gpu.compute.minor | Integer | Minor of the compute capabilities | 3 | -| nvidia.com/gpu.count | Integer | Number of GPUs | 2 | -| nvidia.com/gpu.family | String | Architecture family of the GPU | kepler | -| nvidia.com/gpu.machine | String | Machine type | DGX-1 | -| nvidia.com/gpu.memory | Integer | Memory of the GPU in Mb | 2048 | -| nvidia.com/gpu.product | String | Model of the GPU | GeForce-GT-710 | - -Depending on the MIG strategy used, the following set of labels may also be -available (or override the default values for some of the labels listed above): - -### MIG 'single' strategy - -With this strategy, the single `nvidia.com/gpu` label is overloaded to 
provide -information about MIG devices on the node, rather than full GPUs. This assumes -all GPUs on the node have been divided into identical partitions of the same -size. The example below shows info for a system with 8 full GPUs, each of which -is partitioned into 7 equal sized MIG devices (56 total). - -| Label Name | Value Type | Meaning | Example | -| ----------------------------------- | ---------- | ---------------------------------------- | ------------------------- | -| nvidia.com/mig.strategy | String | MIG strategy in use | single | -| nvidia.com/gpu.product (overridden) | String | Model of the GPU (with MIG info added) | A100-SXM4-40GB-MIG-1g.5gb | -| nvidia.com/gpu.count (overridden) | Integer | Number of MIG devices | 56 | -| nvidia.com/gpu.memory (overridden) | Integer | Memory of each MIG device in Mb | 5120 | -| nvidia.com/gpu.multiprocessors | Integer | Number of Multiprocessors for MIG device | 14 | -| nvidia.com/gpu.slices.gi | Integer | Number of GPU Instance slices | 1 | -| nvidia.com/gpu.slices.ci | Integer | Number of Compute Instance slices | 1 | -| nvidia.com/gpu.engines.copy | Integer | Number of DMA engines for MIG device | 1 | -| nvidia.com/gpu.engines.decoder | Integer | Number of decoders for MIG device | 1 | -| nvidia.com/gpu.engines.encoder | Integer | Number of encoders for MIG device | 1 | -| nvidia.com/gpu.engines.jpeg | Integer | Number of JPEG engines for MIG device | 0 | -| nvidia.com/gpu.engines.ofa | Integer | Number of OfA engines for MIG device | 0 | - -### MIG 'mixed' strategy - -With this strategy, a separate set of labels for each MIG device type is -generated. The name of each MIG device type is defines as follows: -``` -MIG_TYPE=mig-g..gb -e.g. MIG_TYPE=mig-3g.20gb -``` - -| Label Name | Value Type | Meaning | Example | -| ------------------------------------ | ---------- | ---------------------------------------- | -------------- | -| nvidia.com/mig.strategy | String | MIG strategy in use | mixed | -| nvidia.com/MIG\_TYPE.count | Integer | Number of MIG devices of this type | 2 | -| nvidia.com/MIG\_TYPE.memory | Integer | Memory of MIG device type in Mb | 10240 | -| nvidia.com/MIG\_TYPE.multiprocessors | Integer | Number of Multiprocessors for MIG device | 14 | -| nvidia.com/MIG\_TYPE.slices.ci | Integer | Number of GPU Instance slices | 1 | -| nvidia.com/MIG\_TYPE.slices.gi | Integer | Number of Compute Instance slices | 1 | -| nvidia.com/MIG\_TYPE.engines.copy | Integer | Number of DMA engines for MIG device | 1 | -| nvidia.com/MIG\_TYPE.engines.decoder | Integer | Number of decoders for MIG device | 1 | -| nvidia.com/MIG\_TYPE.engines.encoder | Integer | Number of encoders for MIG device | 1 | -| nvidia.com/MIG\_TYPE.engines.jpeg | Integer | Number of JPEG engines for MIG device | 0 | -| nvidia.com/MIG\_TYPE.engines.ofa | Integer | Number of OfA engines for MIG device | 0 | diff --git a/docs/gpu-feature-discovery/README.md b/docs/gpu-feature-discovery/README.md new file mode 100644 index 000000000..965740420 --- /dev/null +++ b/docs/gpu-feature-discovery/README.md @@ -0,0 +1,357 @@ +# NVIDIA GPU feature discovery + +> Migrated from https://gitlab.com/nvidia/kubernetes/gpu-feature-discovery + +## Table of Contents + +- [Overview](#overview) +- [Beta Version](#beta-version) +- [Prerequisites](#prerequisites) +- [Quick Start](#quick-start) + * [Node Feature Discovery (NFD)](#node-feature-discovery-nfd) + * [Preparing your GPU Nodes](#preparing-your-gpu-nodes) + * [Deploy NVIDIA GPU Feature Discovery 
(GFD)](#deploy-nvidia-gpu-feature-discovery-gfd) + + [Daemonset](#daemonset) + + [Job](#job) + * [Verifying Everything Works](#verifying-everything-works) +- [The GFD Command line interface](#the-gfd-command-line-interface) +- [Generated Labels](#generated-labels) + * [MIG 'single' strategy](#mig-single-strategy) + * [MIG 'mixed' strategy](#mig-mixed-strategy) +- [Deployment via `helm`](#deployment-via-helm) + + [Deploying via `helm install` with a direct URL to the `helm` package](#deploying-via-helm-install-with-a-direct-url-to-the-helm-package) +- [Building and running locally on your native machine](#building-and-running-locally-on-your-native-machine) + +## Overview + +NVIDIA GPU Feature Discovery for Kubernetes is a software component that allows +you to automatically generate labels for the set of GPUs available on a node. +It leverages the [Node Feature +Discovery](https://github.com/kubernetes-sigs/node-feature-discovery) +to perform this labeling. + +## Beta Version + +This tool should be considered beta until it reaches `v1.0.0`. As such, we may +break the API before reaching `v1.0.0`, but we will setup a deprecation policy +to ease the transition. + +## Prerequisites + +The list of prerequisites for running the NVIDIA GPU Feature Discovery is +described below: +* nvidia-docker version > 2.0 (see how to [install](https://github.com/NVIDIA/nvidia-docker) +and it's [prerequisites](https://github.com/nvidia/nvidia-docker/wiki/Installation-\(version-2.0\)#prerequisites)) +* docker configured with nvidia as the [default runtime](https://github.com/NVIDIA/nvidia-docker/wiki/Advanced-topics#default-runtime). +* Kubernetes version >= 1.10 +* NVIDIA device plugin for Kubernetes (see how to [setup](https://github.com/NVIDIA/k8s-device-plugin)) +* NFD deployed on each node you want to label with the local source configured + * When deploying GPU feature discovery with helm (as described below) we provide a way to automatically deploy NFD for you + * To deploy NFD yourself, please see https://github.com/kubernetes-sigs/node-feature-discovery + +## Quick Start + +The following assumes you have at least one node in your cluster with GPUs and +the standard NVIDIA [drivers](https://www.nvidia.com/Download/index.aspx) have +already been installed on it. + +### Node Feature Discovery (NFD) + +The first step is to make sure that [Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery) +is running on every node you want to label. NVIDIA GPU Feature Discovery use +the `local` source so be sure to mount volumes. See +https://github.com/kubernetes-sigs/node-feature-discovery for more details. + +You also need to configure the `Node Feature Discovery` to only expose vendor +IDs in the PCI source. To do so, please refer to the Node Feature Discovery +documentation. + +The following command will deploy NFD with the minimum required set of +parameters to run `gpu-feature-discovery`. + +```shell +kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.15.0/deployments/static/nfd.yaml +``` + +**Note:** This is a simple static daemonset meant to demonstrate the basic +features required of `node-feature-discovery` in order to successfully run +`gpu-feature-discovery`. Please see the instructions below for [Deployment via +`helm`](#deployment-via-helm) when deploying in a production setting. + +### Preparing your GPU Nodes + +The following steps need to be executed on all your GPU nodes. 
+This README assumes that the NVIDIA drivers and the `nvidia-container-toolkit` have been pre-installed. +It also assumes that you have configured the `nvidia-container-runtime` as the default low-level runtime to use. + +Please see: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html + +### Deploy NVIDIA GPU Feature Discovery (GFD) + +The next step is to run NVIDIA GPU Feature Discovery on each node as a Daemonset +or as a Job. + +#### Daemonset + +```shell +kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.15.0/deployments/static/gpu-feature-discovery-daemonset.yaml +``` + +**Note:** This is a simple static daemonset meant to demonstrate the basic +features required of `gpu-feature-discovery`. Please see the instructions below +for [Deployment via `helm`](#deployment-via-helm) when deploying in a +production setting. + +#### Job + +You must change the `NODE_NAME` value in the template to match the name of the +node you want to label: + +```shell +$ export NODE_NAME= +$ curl https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.15.0/deployments/static/gpu-feature-discovery-job.yaml.template \ + | sed "s/NODE_NAME/${NODE_NAME}/" > gpu-feature-discovery-job.yaml +$ kubectl apply -f gpu-feature-discovery-job.yaml +``` + +**Note:** This method should only be used for testing and not deployed in a +productions setting. + +### Verifying Everything Works + +With both NFD and GFD deployed and running, you should now be able to see GPU +related labels appearing on any nodes that have GPUs installed on them. + +``` +$ kubectl get nodes -o yaml +apiVersion: v1 +items: +- apiVersion: v1 + kind: Node + metadata: + ... + + labels: + nvidia.com/cuda.driver.major: "455" + nvidia.com/cuda.driver.minor: "06" + nvidia.com/cuda.driver.rev: "" + nvidia.com/cuda.runtime.major: "11" + nvidia.com/cuda.runtime.minor: "1" + nvidia.com/gpu.compute.major: "8" + nvidia.com/gpu.compute.minor: "0" + nvidia.com/gfd.timestamp: "1594644571" + nvidia.com/gpu.count: "1" + nvidia.com/gpu.family: ampere + nvidia.com/gpu.machine: NVIDIA DGX-2H + nvidia.com/gpu.memory: "39538" + nvidia.com/gpu.product: A100-SXM4-40GB + ... +... 
+ +``` + +## The GFD Command line interface + +Available options: +``` +gpu-feature-discovery: +Usage: + gpu-feature-discovery [--fail-on-init-error=] [--mig-strategy=] [--oneshot | --sleep-interval=] [--no-timestamp] [--output-file= | -o ] + gpu-feature-discovery -h | --help + gpu-feature-discovery --version + +Options: + -h --help Show this help message and exit + --version Display version and exit + --oneshot Label once and exit + --no-timestamp Do not add timestamp to the labels + --fail-on-init-error= Fail if there is an error during initialization of any label sources [Default: true] + --sleep-interval= Time to sleep between labeling [Default: 60s] + --mig-strategy= Strategy to use for MIG-related labels [Default: none] + -o --output-file= Path to output file + [Default: /etc/kubernetes/node-feature-discovery/features.d/gfd] + +Arguments: + : none | single | mixed + +``` + +You can also use environment variables: + +| Env Variable | Option | Example | +| ---------------------- | -------------------- | ------- | +| GFD_FAIL_ON_INIT_ERROR | --fail-on-init-error | true | +| GFD_MIG_STRATEGY | --mig-strategy | none | +| GFD_ONESHOT | --oneshot | TRUE | +| GFD_NO_TIMESTAMP | --no-timestamp | TRUE | +| GFD_OUTPUT_FILE | --output-file | output | +| GFD_SLEEP_INTERVAL | --sleep-interval | 10s | + +Environment variables override the command line options if they conflict. + +## Generated Labels + +This is the list of the labels generated by NVIDIA GPU Feature Discovery and +their meaning: + +| Label Name | Value Type | Meaning | Example | +| -------------------------------| ---------- | -------------------------------------------- | -------------- | +| nvidia.com/cuda.driver.major | Integer | Major of the version of NVIDIA driver | 418 | +| nvidia.com/cuda.driver.minor | Integer | Minor of the version of NVIDIA driver | 30 | +| nvidia.com/cuda.driver.rev | Integer | Revision of the version of NVIDIA driver | 40 | +| nvidia.com/cuda.runtime.major | Integer | Major of the version of CUDA | 10 | +| nvidia.com/cuda.runtime.minor | Integer | Minor of the version of CUDA | 1 | +| nvidia.com/gfd.timestamp | Integer | Timestamp of the generated labels (optional) | 1555019244 | +| nvidia.com/gpu.compute.major | Integer | Major of the compute capabilities | 3 | +| nvidia.com/gpu.compute.minor | Integer | Minor of the compute capabilities | 3 | +| nvidia.com/gpu.count | Integer | Number of GPUs | 2 | +| nvidia.com/gpu.family | String | Architecture family of the GPU | kepler | +| nvidia.com/gpu.machine | String | Machine type | DGX-1 | +| nvidia.com/gpu.memory | Integer | Memory of the GPU in Mb | 2048 | +| nvidia.com/gpu.product | String | Model of the GPU | GeForce-GT-710 | + +Depending on the MIG strategy used, the following set of labels may also be +available (or override the default values for some of the labels listed above): + +### MIG 'single' strategy + +With this strategy, the single `nvidia.com/gpu` label is overloaded to provide +information about MIG devices on the node, rather than full GPUs. This assumes +all GPUs on the node have been divided into identical partitions of the same +size. The example below shows info for a system with 8 full GPUs, each of which +is partitioned into 7 equal sized MIG devices (56 total). 
+ +| Label Name | Value Type | Meaning | Example | +| ----------------------------------- | ---------- | ---------------------------------------- | ------------------------- | +| nvidia.com/mig.strategy | String | MIG strategy in use | single | +| nvidia.com/gpu.product (overridden) | String | Model of the GPU (with MIG info added) | A100-SXM4-40GB-MIG-1g.5gb | +| nvidia.com/gpu.count (overridden) | Integer | Number of MIG devices | 56 | +| nvidia.com/gpu.memory (overridden) | Integer | Memory of each MIG device in Mb | 5120 | +| nvidia.com/gpu.multiprocessors | Integer | Number of Multiprocessors for MIG device | 14 | +| nvidia.com/gpu.slices.gi | Integer | Number of GPU Instance slices | 1 | +| nvidia.com/gpu.slices.ci | Integer | Number of Compute Instance slices | 1 | +| nvidia.com/gpu.engines.copy | Integer | Number of DMA engines for MIG device | 1 | +| nvidia.com/gpu.engines.decoder | Integer | Number of decoders for MIG device | 1 | +| nvidia.com/gpu.engines.encoder | Integer | Number of encoders for MIG device | 1 | +| nvidia.com/gpu.engines.jpeg | Integer | Number of JPEG engines for MIG device | 0 | +| nvidia.com/gpu.engines.ofa | Integer | Number of OfA engines for MIG device | 0 | + +### MIG 'mixed' strategy + +With this strategy, a separate set of labels for each MIG device type is +generated. The name of each MIG device type is defines as follows: +``` +MIG_TYPE=mig-g..gb +e.g. MIG_TYPE=mig-3g.20gb +``` + +| Label Name | Value Type | Meaning | Example | +| ------------------------------------ | ---------- | ---------------------------------------- | -------------- | +| nvidia.com/mig.strategy | String | MIG strategy in use | mixed | +| nvidia.com/MIG\_TYPE.count | Integer | Number of MIG devices of this type | 2 | +| nvidia.com/MIG\_TYPE.memory | Integer | Memory of MIG device type in Mb | 10240 | +| nvidia.com/MIG\_TYPE.multiprocessors | Integer | Number of Multiprocessors for MIG device | 14 | +| nvidia.com/MIG\_TYPE.slices.ci | Integer | Number of GPU Instance slices | 1 | +| nvidia.com/MIG\_TYPE.slices.gi | Integer | Number of Compute Instance slices | 1 | +| nvidia.com/MIG\_TYPE.engines.copy | Integer | Number of DMA engines for MIG device | 1 | +| nvidia.com/MIG\_TYPE.engines.decoder | Integer | Number of decoders for MIG device | 1 | +| nvidia.com/MIG\_TYPE.engines.encoder | Integer | Number of encoders for MIG device | 1 | +| nvidia.com/MIG\_TYPE.engines.jpeg | Integer | Number of JPEG engines for MIG device | 0 | +| nvidia.com/MIG\_TYPE.engines.ofa | Integer | Number of OfA engines for MIG device | 0 | + +## Deployment via `helm` + +The preferred method to deploy `gpu-feature-discovery` is as a daemonset using `helm`. +Instructions for installing `helm` can be found +[here](https://helm.sh/docs/intro/install/). + +As of `v0.15.0`, the device plugin's helm chart has integrated support to deploy +[`gpu-feature-discovery`](https://gitlab.com/nvidia/kubernetes/gpu-feature-discovery/-/tree/main) + +When gpu-feature-discovery in deploying standalone, begin by setting up the +plugin's `helm` repository and updating it at follows: + +```shell +$ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin +$ helm repo update +``` + +Then verify that the latest release (`v0.15.0`) of the plugin is available +(Note that this includes the GFD chart): + +```shell +$ helm search repo nvdp --devel +NAME CHART VERSION APP VERSION DESCRIPTION +nvdp/nvidia-device-plugin 0.15.0 0.15.0 A Helm chart for ... 
+``` + +Once this repo is updated, you can begin installing packages from it to deploy +the `gpu-feature-discovery` component in standalone mode. + +The most basic installation command without any options is then: + +``` +$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ + --version 0.15.0 \ + --namespace gpu-feature-discovery \ + --create-namespace \ + --set devicePlugin.enabled=false +``` + +Disabling auto-deployment of NFD and running with a MIG strategy of 'mixed' in +the default namespace. + +```shell +$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \ + --version=0.15.0 \ + --set allowDefaultNamespace=true \ + --set nfd.enabled=false \ + --set migStrategy=mixed \ + --set devicePlugin.enabled=false +``` + +**Note:** You only need the to pass the `--devel` flag to `helm search repo` +and the `--version` flag to `helm upgrade -i` if this is a pre-release +version (e.g. `-rc.1`). Full releases will be listed without this. + +### Deploying via `helm install` with a direct URL to the `helm` package + +If you prefer not to install from the `nvidia-device-plugin` `helm` repo, you can +run `helm install` directly against the tarball of the plugin's `helm` package. +The example below installs the same chart as the method above, except that +it uses a direct URL to the `helm` chart instead of via the `helm` repo. + +Using the default values for the flags: + +```shell +$ helm upgrade -i nvdp \ + --namespace gpu-feature-discovery \ + --set devicePlugin.enabled=false \ + --create-namespace \ + https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.15.0.tgz +``` + +## Building and running locally on your native machine + +Download the source code: + +```shell +git clone https://github.com/NVIDIA/k8s-device-plugin +``` + +Get dependies: + +```shell +make vendor +``` + +Build it: + +``` +make build +``` + +Run it: +``` +./gpu-feature-discovery --output=$(pwd)/gfd +``` diff --git a/docs/quick_start.md b/docs/quick_start.md deleted file mode 100644 index 9ee23f748..000000000 --- a/docs/quick_start.md +++ /dev/null @@ -1,191 +0,0 @@ -## Quick Start - -### Prerequisites - -The list of prerequisites for running the NVIDIA GPU Feature Discovery and the Device Plugin is described below: -* NVIDIA drivers >= 384.81 -* nvidia-docker >= 2.0 || nvidia-container-toolkit >= 1.7.0 (>= 1.11.0 to use integrated GPUs on Tegra-based systems) -* nvidia-container-runtime configured as the default low-level runtime -* Kubernetes version >= 1.10 - -### Preparing your GPU Nodes - -The following steps need to be executed on all your GPU nodes. -This README assumes that the NVIDIA drivers and the `nvidia-container-toolkit` have been pre-installed. -It also assumes that you have configured the `nvidia-container-runtime` as the default low-level runtime to use. - -Please see: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html - -#### Example for debian-based systems with `docker` and `containerd` - -##### Install the `nvidia-container-toolkit` - -```shell -distribution=$(. 
/etc/os-release;echo $ID$VERSION_ID) -curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add - -curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/libnvidia-container.list - -sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit -``` - -##### Configure `docker` -When running `kubernetes` with `docker`, edit the config file which is usually -present at `/etc/docker/daemon.json` to set up `nvidia-container-runtime` as -the default low-level runtime: - -```json -{ - "default-runtime": "nvidia", - "runtimes": { - "nvidia": { - "path": "/usr/bin/nvidia-container-runtime", - "runtimeArgs": [] - } - } -} -``` - -And then restart `docker`: - -```shell -$ sudo systemctl restart docker -``` - -##### Configure `containerd` -When running `kubernetes` with `containerd`, edit the config file which is -usually present at `/etc/containerd/config.toml` to set up -`nvidia-container-runtime` as the default low-level runtime: - -``` -version = 2 -[plugins] - [plugins."io.containerd.grpc.v1.cri"] - [plugins."io.containerd.grpc.v1.cri".containerd] - default_runtime_name = "nvidia" - - [plugins."io.containerd.grpc.v1.cri".containerd.runtimes] - [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia] - privileged_without_host_devices = false - runtime_engine = "" - runtime_root = "" - runtime_type = "io.containerd.runc.v2" - [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options] - BinaryName = "/usr/bin/nvidia-container-runtime" -``` - -And then restart `containerd`: - -```shell -$ sudo systemctl restart containerd -``` - -### Node Feature Discovery (NFD) - -The first step is to make sure that [Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery) -is running on every node you want to label. NVIDIA GPU Feature Discovery use -the `local` source so be sure to mount volumes. See -https://github.com/kubernetes-sigs/node-feature-discovery for more details. - -You also need to configure the `Node Feature Discovery` to only expose vendor -IDs in the PCI source. To do so, please refer to the Node Feature Discovery -documentation. - -The following command will deploy NFD with the minimum required set of -parameters to run `gpu-feature-discovery`. - -```shell -$ kubectl apply -k https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default?ref=v0.13.2 -``` - -**Note:** This is a simple static daemonset meant to demonstrate the basic -features required of `node-feature-discovery` in order to successfully run -`gpu-feature-discovery`. Please see the instructions below for [Deployment via -`helm`](#deployment-via-helm) when deploying in a production setting. - - -### Enabling GPU Support in Kubernetes - -Once you have configured the options above on all the GPU nodes in your -cluster, you can enable GPU support by deploying the following Daemonsets: - -```shell -$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.0/deployments/static/gpu-feature-discovery-daemonset.yaml -$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.0/deployments/static/nvidia-device-plugin.yml -``` - -**Note:** This is a simple static daemonset meant to demonstrate the basic -features of the `gpu-feature-discovery` and`nvidia-device-plugin`. Please see -the instructions below for [Deployment via `helm`](deployment_via_helm.md) -when deploying the plugin in a production setting. 
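
Before looking for labels, it can help to confirm that both daemonsets were created and that the node is advertising `nvidia.com/gpu` capacity. A minimal, hedged check (daemonset names and namespaces depend on the manifests used):

```shell
# List the daemonsets across all namespaces and check GPU capacity on a node:
$ kubectl get daemonsets -A | grep -E 'gpu-feature-discovery|nvidia-device-plugin'
$ kubectl describe node <gpu-node> | grep -i 'nvidia.com/gpu'
```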
- -### Verifying Everything Works - -With both NFD and GFD deployed and running, you should now be able to see GPU -related labels appearing on any nodes that have GPUs installed on them. - -```shell -$ kubectl get nodes -o yaml -apiVersion: v1 -items: -- apiVersion: v1 - kind: Node - metadata: - ... - - labels: - nvidia.com/cuda.driver.major: "455" - nvidia.com/cuda.driver.minor: "06" - nvidia.com/cuda.driver.rev: "" - nvidia.com/cuda.runtime.major: "11" - nvidia.com/cuda.runtime.minor: "1" - nvidia.com/gpu.compute.major: "8" - nvidia.com/gpu.compute.minor: "0" - nvidia.com/gfd.timestamp: "1594644571" - nvidia.com/gpu.count: "1" - nvidia.com/gpu.family: ampere - nvidia.com/gpu.machine: NVIDIA DGX-2H - nvidia.com/gpu.memory: "39538" - nvidia.com/gpu.product: A100-SXM4-40GB - ... -... - -``` - -### Running GPU Jobs - -With the daemonset deployed, NVIDIA GPUs can now be requested by a container -using the `nvidia.com/gpu` resource type: - -```yaml -$ cat < **WARNING:** *if you don't request GPUs when using the device plugin with NVIDIA images all -> the GPUs on the machine will be exposed inside your container.* \ No newline at end of file
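# A rough, illustrative sketch of such a request; the CUDA sample image name
# below is an assumption and can be replaced with any GPU-enabled image:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
      resources:
        limits:
          nvidia.com/gpu: 1 # requesting 1 GPU
EOF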