Using nvidia runtime with buildkit #122
Comments
It's because a normal `docker build` goes through the default runtime configured in `daemon.json`, while buildkit does not. In general, it is not recommended to build containers with the nvidia runtime set as the default, since images built that way can end up depending on the driver of the build machine.
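For reference, the setting being discussed is the `default-runtime` key in `/etc/docker/daemon.json`; a typical form of that config (as commonly documented, not quoted from this thread) is:

```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": { "path": "nvidia-container-runtime", "runtimeArgs": [] }
  }
}
```

The classic builder honors this for `RUN` steps; buildkit spins up its own build containers and ignores it.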
@klueska indeed, I have a […]. Is it possible to set the runtime that buildkit uses?
I have the same issue. In some cases it is useful to have the nvidia runtime available at the build stage.
Would the following be useful: moby/buildkit#1283
Anyone have a workaround or example of this? I am running into the same issue.
@chris-volley, the only workaround is to disable buildkit.
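In practice that means forcing the classic builder, which still honors the daemon's `default-runtime` (the image name below is illustrative):

```sh
# Opt out of BuildKit for this one build; the classic builder
# runs RUN steps through the runtime configured in daemon.json
DOCKER_BUILDKIT=0 docker build -t my-cuda-image .
```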
I need to use buildkit to leverage cache mounts for an nvidia-based docker build! Is there no way to do this? That can't be!?
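For anyone unfamiliar, the cache mounts in question are a BuildKit-only Dockerfile feature, which is why falling back to the classic builder hurts here; a minimal sketch (base image tag and build command are assumptions):

```dockerfile
# syntax=docker/dockerfile:1
FROM nvidia/cuda:12.2.0-devel-ubuntu22.04
COPY . /src
WORKDIR /src
# BuildKit-only: keep the ccache directory warm across builds
RUN --mount=type=cache,target=/root/.ccache \
    make -j"$(nproc)"
```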
That is my current state of work. I wrote an in-house build system around docker which selectively turns BuildKit on and off. Parts which have to be compiled against CUDA libraries and tested with GPU capabilities live in separate Dockerfiles; these are built with BuildKit turned off. Other parts have to authenticate against our servers to download data or code and want to mount secrets; these are built with BuildKit turned on. The dependency information, which specifies the build order and whether to turn BuildKit on, is stored in a central metadata index file (roughly as sketched below). Janky as hell (the number of Dockerfiles grows quickly), but I don't know any other solution. It's basically a step backward from multi-stage Dockerfiles. If someone has a better solution, please let me know. @elezar I was not able to follow the explanations in the pull request and transfer them to this use case.
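A sketch of the kind of toggle described above (the index-file format and all names here are hypothetical, not the actual system):

```sh
#!/bin/sh
# builds.index lists one image per line, in dependency order:
#   <directory> <use_buildkit: 0|1>
while read -r dir use_buildkit; do
  DOCKER_BUILDKIT="$use_buildkit" docker build -t "local/${dir}" "${dir}"
done < builds.index
```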
As a general question (before I spend more time familiarizing myself with buildkit), would configuring the buildkit workers help here? One could set up two different workers -- one for the non-GPU builds and one for the cases where the drivers are required at build time.
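The closest buildx approximation of "two workers" may be two named builder instances (builder names are made up; note that the insecure entitlement grants privilege to `RUN` steps, which is not by itself the same as going through the nvidia runtime):

```sh
# A plain builder for CPU-only images
docker buildx create --name cpu-builder --driver docker-container

# A second builder that permits privileged RUN steps
docker buildx create --name gpu-builder --driver docker-container \
  --buildkitd-flags '--allow-insecure-entitlement security.insecure'

# Pick the builder per build
docker buildx build --builder cpu-builder -t app-cpu .
```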
@Lucas-Steinmann Thanks for explaining. My situation involves a CMake project that absolutely needs CUDA for anything nontrivial it builds. So far the practical way to work around the buildkit/nvidia blockage while still leveraging ccache seems to be this rsync trick.
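The rsync trick isn't spelled out in the thread; the general shape is to keep the ccache outside the image and sync it in and out around a GPU-enabled compile step. Everything below is a hypothetical sketch, with a bind mount standing in for the rsync step:

```sh
# Build without BuildKit so the nvidia default runtime applies to RUN steps
DOCKER_BUILDKIT=0 docker build -t cuda-app .

# Re-run the compile with the GPU and a host-side ccache directory mounted,
# so object files survive across otherwise-uncached rebuilds
docker run --rm --gpus all \
  -v "$PWD/.ccache:/root/.ccache" \
  cuda-app cmake --build /src/build --parallel "$(nproc)"
```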
Was anyone able to get this working? I'm unable to find a single example of how to set the nvidia runtime with buildkit.
I tried this. The obvious problem is that the […]. If I understand correctly, one would have to create a version of […].
I figured out a workaround for this: https://stackoverflow.com/questions/59691207/docker-build-with-nvidia-runtime/77348905#77348905. I had to disable/remove buildkit.
Just want to throw in another opinion here: sometimes nvidia is absolutely needed when building dependencies. Also, generally I think it's not great that buildkit silently ignores the runtime settings. Edit: I just noticed this is the nvidia toolkit repo and not docker's, so this was a rant against docker, not nvidia, oopsie.
Is there any related issue in the Docker issue tracker?
@pktiuk there is moby/buildkit#4056, which adds CDI support to buildkit.
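On the toolkit side, CDI consumption requires a generated device spec; these `nvidia-ctk` subcommands exist in current nvidia-container-toolkit releases:

```sh
# Generate a CDI specification describing the GPUs on this host
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# List the device names a CDI-aware consumer can then request
nvidia-ctk cdi list
```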
I'm trying to have the nvidia driver available during build, which works with the default `build` command but not when using buildkit. I have this minimal `Dockerfile`:
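(The file itself did not survive in the archived thread; a minimal reconstruction, with an assumed CUDA base tag:)

```dockerfile
FROM nvidia/cuda:11.0-base
# Fails at build time unless the build container goes through the nvidia runtime
RUN nvidia-smi
```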
Which I can build as follows:
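(Reconstructed; the tag name is an assumption. With `default-runtime: nvidia` in `daemon.json`, the classic builder lets `RUN nvidia-smi` succeed:)

```sh
docker build -t test-nvidia .
```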
But when using buildkit I get: […]
Then I figured out I have to use `RUN --security=insecure` and use `docker buildx` as follows. I create the builder:
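(Reconstructed builder creation; the builder name is an assumption, the flags are the documented ones for enabling the insecure entitlement:)

```sh
docker buildx create --name insecure-builder \
  --buildkitd-flags '--allow-insecure-entitlement security.insecure'
docker buildx use insecure-builder
```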
then I build the image as follows:
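(Reconstructed; `--allow security.insecure` is what unlocks `RUN --security=insecure` at build time:)

```sh
docker buildx build --allow security.insecure -t test-nvidia --load .
```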
I know the insecure flag works when I use another command that requires privilege (i.e.: `mount --bind /dev /tmp`). This is my `daemon.json`:
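(The original config is not preserved; given that the classic build works, it presumably contained at least the nvidia runtime registration and the default-runtime setting:)

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
```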
The output of `docker info`: […]

The output of `nvidia-container-cli -k -d /dev/tty info`: […]
Looks like the builder is not using the nvidia runtime. What am I missing?