@@ -172,3 +172,178 @@ make use of an environment management system such as
Alternatively, the build environment's C++ toolchain can be downgraded using
``conda install -c conda-forge libstdcxx-ng=8.5``. Or, one can ``activate`` or
``deactivate`` Conda environments as needed for building vs. running Warp.
+
+ Using Warp in Docker
+ --------------------
+
+ Docker containers can be useful for developing and deploying applications that use Warp.
+ They provide an isolated and reproducible build environment.
+
+ In order to have Warp detect GPUs from inside a Docker container, the
+ `NVIDIA Container Toolkit <https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/index.html>`__
+ should be installed.
+ Pass the ``--gpus all`` flag to the ``docker run`` command to make all GPUs available to the container.
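+
+ For example, a quick way to verify that the toolkit is set up correctly is to run ``nvidia-smi`` inside a
+ container (the base image tag below is just an illustration; any CUDA image will do):
+
+ .. code-block:: sh
+
+    docker run --rm --gpus all nvidia/cuda:13.0.0-base-ubuntu24.04 nvidia-smi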
+
+ Building Warp from source in Docker
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ To build Warp from source in Docker, ensure that the container has either ``curl`` or ``wget`` installed.
+ Packman needs one of them to download build-time dependencies such as libmathdx and LLVM/Clang from the
+ internet.
+
+ We recommend using one of the NVIDIA CUDA images from `nvidia/cuda <https://hub.docker.com/r/nvidia/cuda>`__
+ as a base image.
+ Choose a ``devel`` flavor that matches your desired CUDA Toolkit version.
+
+ The following Dockerfile clones the Warp repository, builds Warp, and installs it into the system Python
+ environment:
+
+ .. code-block:: dockerfile
+
+    FROM nvidia/cuda:13.0.0-devel-ubuntu24.04
+
+    # Install the tools needed to fetch and build Warp
+    RUN apt-get update && apt-get install -y --no-install-recommends \
+        git \
+        git-lfs \
+        curl \
+        python3 \
+        python3-pip \
+        && rm -rf /var/lib/apt/lists/*
+
+    WORKDIR /warp
+
+    # Clone the repository, fetch LFS assets, then build and install Warp
+    RUN git clone https://github.com/NVIDIA/warp.git . && \
+        git lfs pull && \
+        python3 -m pip install --break-system-packages numpy && \
+        python3 build_lib.py && \
+        python3 -m pip install --break-system-packages .
+
+ Saving this as a file named ``Dockerfile``, we can build an image using a command like:
+
+ .. code-block:: sh
+
+    docker build -t warp-github-clone:example .
+
+ After building the image, you can test it with:
+
+ .. code-block:: sh
+
+    docker run --rm --gpus all warp-github-clone:example python3 -c "import warp as wp; wp.init()"
+
+ The ``--rm`` flag tells Docker to remove the container after the command finishes.
+ This will output something like:
+
+ .. code-block:: text
+
+    ==========
+    == CUDA ==
+    ==========
+
+    CUDA Version 13.0.0
+
+    Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+
+    This container image and its contents are governed by the NVIDIA Deep Learning Container License.
+    By pulling and using the container, you accept the terms and conditions of this license:
+    https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
+
+    A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
+
+    Warp 1.10.0.dev0 initialized:
+       CUDA Toolkit 13.0, Driver 13.0
+       Devices:
+         "cpu"    : "x86_64"
+         "cuda:0" : "NVIDIA L40S" (47 GiB, sm_89, mempool enabled)
+       Kernel cache:
+         /root/.cache/warp/1.10.0.dev0
+
+ An interactive session can be started with:
+
+ .. code-block:: sh
+
+    docker run -it --rm --gpus all warp-github-clone:example
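+
+ Note that the kernel cache shown in the output above lives inside the container, so it is lost whenever a
+ ``--rm`` container exits. A named volume can be mounted to persist it across runs (the volume name here is
+ just an example):
+
+ .. code-block:: sh
+
+    docker run -it --rm --gpus all -v warp-cache:/root/.cache/warp warp-github-clone:example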
+
+ To build a modified version of Warp from your local repository, you can use the following Dockerfile as a
+ starting point.
+ Place it at the root of your repository.
+
+ .. code-block:: dockerfile
+
+    FROM nvidia/cuda:13.0.0-devel-ubuntu24.04
+
+    # Install dependencies
+    RUN apt-get update && apt-get install -y --no-install-recommends \
+        curl \
+        python3 \
+        python3-pip \
+        && rm -rf /var/lib/apt/lists/*
+
+    # Copy only the files needed to build and install Warp
+    COPY warp /warp/warp
+    COPY deps /warp/deps
+    COPY tools/packman /warp/tools/packman
+    COPY build_lib.py build_llvm.py pyproject.toml setup.py VERSION.md /warp/
+
+    WORKDIR /warp
+
+    RUN python3 -m pip install --break-system-packages numpy && \
+        python3 build_lib.py && \
+        python3 -m pip install --break-system-packages .
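+
+ The image can then be built from the repository root in the same way as before (the tag name here is just an
+ example):
+
+ .. code-block:: sh
+
+    docker build -t warp-local:example .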
+
+ The image produced by either of the above Dockerfile examples can be quite large, because it still contains
+ various dependencies that are no longer needed once Warp has been built.
+
+ For production use, consider a multi-stage build employing both the ``devel`` and ``runtime`` CUDA container
+ images to significantly reduce the image size by excluding unnecessary build tools and development
+ dependencies from the runtime environment.
+
+ In the builder stage, we compile Warp as in the previous examples, but we also build a wheel file.
+ The runtime stage uses the lighter ``nvidia/cuda:13.0.0-runtime-ubuntu24.04`` base image and installs the
+ wheel produced by the builder stage into a Python virtual environment.
+
+ The following example also uses `uv <https://docs.astral.sh/uv/>`__ for Python package management, creating
+ virtual environments, and building the wheel file.
305
+
306
+ .. code-block :: dockerfile
307
+
308
+ # Build stage
309
+ FROM nvidia/cuda:13.0.0-devel-ubuntu24.04 AS builder
310
+
311
+ COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
312
+
313
+ RUN apt-get update && apt-get install -y --no-install-recommends \
314
+ curl \
315
+ && rm -rf /var/lib/apt/lists/*
316
+
317
+ COPY warp /warp/warp
318
+ COPY deps /warp/deps
319
+ COPY tools/packman /warp/tools/packman
320
+ COPY build_lib.py build_llvm.py pyproject.toml setup.py VERSION.md /warp/
321
+
322
+ WORKDIR /warp
323
+
324
+ RUN uv venv && \
325
+ uv pip install numpy && \
326
+ uv run --no-project build_lib.py && \
327
+ uv build --wheel --out-dir /wheels
328
+
329
+ # Runtime stage
330
+ FROM nvidia/cuda:13.0.0-runtime-ubuntu24.04
331
+
332
+ COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
333
+
334
+ RUN uv venv /opt/venv
335
+ # Use the virtual environment automatically
336
+ ENV VIRTUAL_ENV=/opt/venv
337
+ # Place entry points in the environment at the front of the path
338
+ ENV PATH="/opt/venv/bin:$PATH"
339
+
340
+ RUN uv pip install numpy
341
+
342
+ # Copy and install the wheel from builder stage
343
+ COPY --from=builder /wheels/*.whl /tmp/
344
+ RUN uv pip install /tmp/*.whl && \
345
+ rm -rf /tmp/*.whl
+
+ After building the image with ``docker build -t warp-prod:example .``, we can use ``docker image ls`` to
+ compare the image sizes.
+ ``warp-prod:example`` is about 3.18 GB, while ``warp-github-clone:example`` is 9.03 GB!
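+
+ The production image can be smoke-tested the same way as the earlier one; since ``/opt/venv/bin`` is placed
+ on the ``PATH``, ``python3`` resolves to the virtual environment's interpreter:
+
+ .. code-block:: sh
+
+    docker run --rm --gpus all warp-prod:example python3 -c "import warp as wp; wp.init()"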