@@ -123,7 +123,7 @@ Once you have configured the options above on all the GPU nodes in your
cluster, you can enable GPU support by deploying the following Daemonset:

```shell
- $ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.4/nvidia-device-plugin.yml
+ $ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.5/nvidia-device-plugin.yml
```
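
Once the daemonset pods are up, each GPU node should advertise an `nvidia.com/gpu` resource. A quick way to confirm this (an illustrative check; any equivalent `kubectl` query works):

```shell
# Show the nvidia.com/gpu capacity/allocatable entries reported by the nodes.
$ kubectl describe nodes | grep -B1 -A1 'nvidia.com/gpu'
```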

**Note:** This is a simple static daemonset meant to demonstrate the basic
@@ -551,11 +551,11 @@ $ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
$ helm repo update
```

- Then verify that the latest release (`v0.14.4`) of the plugin is available:
+ Then verify that the latest release (`v0.14.5`) of the plugin is available:
```
$ helm search repo nvdp --devel
NAME                       CHART VERSION  APP VERSION  DESCRIPTION
- nvdp/nvidia-device-plugin  0.14.4         0.14.4       A Helm chart for ...
+ nvdp/nvidia-device-plugin  0.14.5         0.14.5       A Helm chart for ...
```

Once this repo is updated, you can begin installing packages from it to deploy
@@ -566,7 +566,7 @@ The most basic installation command without any options is then:
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
  --namespace nvidia-device-plugin \
  --create-namespace \
-   --version 0.14.4
+   --version 0.14.5
```
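
If the install succeeded, the plugin pods should be running on every GPU node. A quick sanity check (illustrative):

```shell
# The daemonset should show one running pod per GPU node.
$ kubectl get pods -n nvidia-device-plugin
```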

**Note:** You only need to pass the `--devel` flag to `helm search repo`
@@ -575,7 +575,7 @@ version (e.g. `<version>-rc.1`). Full releases will be listed without this.

### Configuring the device plugin's `helm` chart

- The `helm` chart for the latest release of the plugin (`v0.14.4`) includes
+ The `helm` chart for the latest release of the plugin (`v0.14.5`) includes
a number of customizable values.

Prior to `v0.12.0` the most commonly used values were those that had direct
@@ -585,7 +585,7 @@ case of the original values is then to override an option from the `ConfigMap`
if desired. Both methods are discussed in more detail below.

The full set of values that can be set is found
- [here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.4/deployments/helm/nvidia-device-plugin/values.yaml).
+ [here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.5/deployments/helm/nvidia-device-plugin/values.yaml).
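
You can also dump the chart's default values straight from the repo added earlier, which is handy for discovering what is overridable (illustrative):

```shell
# Print the chart's default values.yaml for the pinned version.
$ helm show values nvdp/nvidia-device-plugin --version 0.14.5
```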

#### Passing configuration to the plugin via a `ConfigMap`.
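
For reference, the config file passed below is a versioned YAML document understood by the plugin. A minimal sketch (the flag values here are illustrative; the full README shows the complete schema in the examples elided from this excerpt):

```shell
# Write an illustrative plugin config; adjust the flags for your cluster.
$ cat <<EOF > /tmp/dp-example-config0.yaml
version: v1
flags:
  migStrategy: none
  failOnInitError: true
EOF
```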
And deploy the device plugin via helm (pointing it at this config file and giving it a name):
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-   --version=0.14.4 \
+   --version=0.14.5 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set-file config.map.config=/tmp/dp-example-config0.yaml
```
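
Helm stores the file in a `ConfigMap` for the plugin to consume; after the upgrade you can confirm it exists (illustrative):

```shell
# The release namespace should now contain the generated config map.
$ kubectl get configmaps -n nvidia-device-plugin
```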
@@ -646,7 +646,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
```
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-   --version=0.14.4 \
+   --version=0.14.5 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set config.name=nvidia-plugin-configs
```
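
With multiple configs in play, individual nodes can be pointed at a specific one by label. A sketch, using the label name from the plugin's multiple-config convention:

```shell
# Assign the config named "config0" to a particular node.
$ kubectl label nodes <node-name> --overwrite nvidia.com/device-plugin.config=config0
```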
And redeploy the device plugin via helm (pointing it at both configs with a specified default).
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-   --version=0.14.4 \
+   --version=0.14.5 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set config.default=config0 \
@@ -693,7 +693,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
```
```
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-   --version=0.14.4 \
+   --version=0.14.5 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set config.default=config0 \
@@ -776,7 +776,7 @@ chart values that are commonly overridden are:
```

Please take a look at the
- [`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.4/deployments/helm/nvidia-device-plugin/values.yaml)
+ [`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.14.5/deployments/helm/nvidia-device-plugin/values.yaml)
file to see the full set of overridable parameters for the device plugin.

Examples of setting these options include:
@@ -785,7 +785,7 @@ Enabling compatibility with the `CPUManager` and running with a request for
100ms of CPU time and a limit of 512MB of memory.
```shell
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-   --version=0.14.4 \
+   --version=0.14.5 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set compatWithCPUManager=true \
@@ -796,7 +796,7 @@ $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`
```shell
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-   --version=0.14.4 \
+   --version=0.14.5 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set compatWithCPUManager=true \
@@ -815,7 +815,7 @@ Discovery to perform this labeling.
To enable it, simply set `gfd.enabled=true` during helm install.
```
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-   --version=0.14.4 \
+   --version=0.14.5 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set gfd.enabled=true
```
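
GFD advertises GPU attributes as `nvidia.com/*` node labels. One way to inspect what it applied (illustrative; assumes `jq` is installed):

```shell
# Show only the nvidia.com labels on a given node.
$ kubectl get node <node-name> -o json \
    | jq '.metadata.labels | with_entries(select(.key | startswith("nvidia.com")))'
```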
@@ -921,31 +921,31 @@ Using the default values for the flags:
$ helm upgrade -i nvdp \
    --namespace nvidia-device-plugin \
    --create-namespace \
-   https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.14.4.tgz
+   https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.14.5.tgz
```
-->

## Building and Running Locally

The next sections focus on building the device plugin locally and running it.
They are intended purely for development and testing, and are not required by most users.
- They assume you are pinning to the latest release tag (i.e. `v0.14.4`), but can
+ They assume you are pinning to the latest release tag (i.e. `v0.14.5`), but can
easily be modified to work with any available tag or branch.

### With Docker

#### Build
Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin):
```shell
- $ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.14.4
- $ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.14.4 nvcr.io/nvidia/k8s-device-plugin:devel
+ $ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.14.5
+ $ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.14.5 nvcr.io/nvidia/k8s-device-plugin:devel
```

Option 2, build without cloning the repository:
```shell
$ docker build \
    -t nvcr.io/nvidia/k8s-device-plugin:devel \
    -f deployments/container/Dockerfile.ubuntu \
-   https://github.com/NVIDIA/k8s-device-plugin.git#v0.14.4
+   https://github.com/NVIDIA/k8s-device-plugin.git#v0.14.5
```
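
Either way, you should end up with a local `devel`-tagged image; a quick check (illustrative):

```shell
# Confirm the devel image is present locally.
$ docker images nvcr.io/nvidia/k8s-device-plugin:devel
```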

Option 3, if you want to modify the code: