CUDA-based JupyterLab Python docker stack

GPU accelerated, multi-arch (linux/amd64, linux/arm64/v8) docker images.

Images available for Python versions ≥ 3.11.1.

🔬 Check out jupyterlab/cuda/python/scipy at https://demo.cuda.jupyter.b-data.ch.

CUDA screenshot

Build chain

The same as the JupyterLab Python docker stack.

Features

The same as the JupyterLab Python docker stack plus

  • CUDA runtime, CUDA math libraries, NCCL and cuDNN
    • including development libraries and headers
  • TensorRT and TensorRT plugin libraries
    • including development libraries and headers

👉 See the CUDA Version Matrix for detailed information.
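
To confirm that the CUDA, cuDNN, NCCL and TensorRT libraries listed above are present in a built or pulled image, one option is to query the dynamic linker cache inside a throwaway container. A minimal sketch, using the IMAGE placeholder from the run examples below and a few illustrative SONAME prefixes:

docker run --rm IMAGE[:MAJOR[.MINOR[.PATCH]]] \
  bash -c "ldconfig -p | grep -E 'libcudart|libcudnn|libnccl|libnvinfer'"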

Subtags

The same as the JupyterLab Python docker stack.


Prerequisites

The same as the JupyterLab Python docker stack plus

  • NVIDIA GPU
  • NVIDIA Linux driver
  • NVIDIA Container Toolkit

ℹ️ The host running the GPU accelerated images only requires the NVIDIA driver; the CUDA toolkit does not have to be installed.

Use driver version 535 (Long Term Support Branch) with NVIDIA Data Center GPUs or select NGC-Ready NVIDIA RTX boards to ensure forward compatibility until June 2026.
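
A quick way to check which driver branch is installed on the host is nvidia-smi, for example:

nvidia-smi --query-gpu=driver_version,name --format=csv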

Install

To install the NVIDIA Container Toolkit, follow the upstream installation instructions for your platform.
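
As a rough sketch of the final steps on a Docker host (the package installation itself is distribution specific and covered by the upstream instructions): register the toolkit as a Docker runtime, restart Docker and verify GPU access with a throwaway container:

sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

docker run --rm --gpus all ubuntu nvidia-smi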

Usage

Build image (base)

latest:

cd base && docker build \
  --build-arg BASE_IMAGE=ubuntu \
  --build-arg BASE_IMAGE_TAG=22.04 \
  --build-arg BUILD_ON_IMAGE=glcr.b-data.ch/cuda/python/ver \
  --build-arg PYTHON_VERSION=3.13.1 \
  --build-arg CUDA_IMAGE_FLAVOR=devel \
  -t jupyterlab/cuda/python/base \
  -f latest.Dockerfile .

version:

cd base && docker build \
  --build-arg BASE_IMAGE=ubuntu \
  --build-arg BASE_IMAGE_TAG=22.04 \
  --build-arg BUILD_ON_IMAGE=glcr.b-data.ch/cuda/python/ver \
  --build-arg CUDA_IMAGE_FLAVOR=devel \
  -t jupyterlab/cuda/python/base:MAJOR.MINOR.PATCH \
  -f MAJOR.MINOR.PATCH.Dockerfile .

For MAJOR.MINOR.PATCH ≥ 3.11.1.
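
For illustration, building a hypothetical 3.12.8 image would substitute the placeholders as follows (assuming a 3.12.8.Dockerfile exists in the base directory):

cd base && docker build \
  --build-arg BASE_IMAGE=ubuntu \
  --build-arg BASE_IMAGE_TAG=22.04 \
  --build-arg BUILD_ON_IMAGE=glcr.b-data.ch/cuda/python/ver \
  --build-arg CUDA_IMAGE_FLAVOR=devel \
  -t jupyterlab/cuda/python/base:3.12.8 \
  -f 3.12.8.Dockerfile .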

Create home directory

Create an empty directory using docker:

docker run --rm \
  -v "${PWD}/jupyterlab-jovyan":/dummy \
  alpine chown 1000:100 /dummy

It will be bind mounted as the JupyterLab user's home directory and automatically populated.
Bind mounting a subfolder of the home directory is only possible for images with Python version ≥ 3.12.2.
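
Alternatively, given sudo rights on the host, the same directory can be created without a helper container (1000:100 being the default UID and GID of the jovyan user in the image):

mkdir -p "${PWD}/jupyterlab-jovyan"
sudo chown 1000:100 "${PWD}/jupyterlab-jovyan"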

Run container

self built:

docker run -it --rm \
  --gpus '"device=all"' \
  -p 8888:8888 \
  -u root \
  -v "${PWD}/jupyterlab-jovyan":/home/jovyan \
  -e NB_UID=$(id -u) \
  -e NB_GID=$(id -g) \
  -e CHOWN_HOME=yes \
  -e CHOWN_HOME_OPTS='-R' \
  jupyterlab/cuda/python/base[:MAJOR.MINOR.PATCH]

from the project's GitLab Container Registries:

docker run -it --rm \
  --gpus '"device=all"' \
  -p 8888:8888 \
  -u root \
  -v "${PWD}/jupyterlab-jovyan":/home/jovyan \
  -e NB_UID=$(id -u) \
  -e NB_GID=$(id -g) \
  -e CHOWN_HOME=yes \
  -e CHOWN_HOME_OPTS='-R' \
  IMAGE[:MAJOR[.MINOR[.PATCH]]]

IMAGE being one of the images published to the project's GitLab Container Registries.

The -v flag mounts the empty directory on the host (${PWD}/jupyterlab-jovyan in the command) as /home/jovyan in the container.

-e NB_UID=$(id -u) -e NB_GID=$(id -g) instructs the startup script to switch the user ID and the primary group ID of ${NB_USER} to the user ID and group ID of the user executing the command.

-e CHOWN_HOME=yes -e CHOWN_HOME_OPTS='-R' instructs the startup script to recursively change the owner and group of the ${NB_USER} home directory to the current values of ${NB_UID} and ${NB_GID}.
ℹ️ This is only required for the first run.

The server logs appear in the terminal.
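
To run the container in the background instead, start it detached and read the login URL, including the token, from the container logs; a sketch, with the container name jupyterlab-cuda chosen for illustration:

docker run -d --name jupyterlab-cuda \
  --gpus '"device=all"' \
  -p 8888:8888 \
  -v "${PWD}/jupyterlab-jovyan":/home/jovyan \
  IMAGE[:MAJOR[.MINOR[.PATCH]]]

docker logs jupyterlab-cuda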

Using Podman (rootless mode, 3.11.6+)

Create an empty home directory:

mkdir "${PWD}/jupyterlab-root"

Use the following command to run the container as root:

podman run -it --rm \
  --device 'nvidia.com/gpu=all' \
  -p 8888:8888 \
  -u root \
  -v "${PWD}/jupyterlab-root":/home/root \
  -e NB_USER=root \
  -e NB_UID=0 \
  -e NB_GID=0 \
  -e NOTEBOOK_ARGS="--allow-root" \
  IMAGE[:MAJOR[.MINOR[.PATCH]]]
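
The --device 'nvidia.com/gpu=all' option relies on a CDI specification for the GPUs. If none has been generated yet, the NVIDIA Container Toolkit can create one; a sketch using the upstream default location:

sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
nvidia-ctk cdi list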

Using Docker Desktop

Creating a home directory might not be required. Also, a command as simple as

docker run -it --rm \
  --gpus '"device=all"' \
  -p 8888:8888 \
  -v "${PWD}/jupyterlab-jovyan":/home/jovyan \
  IMAGE[:MAJOR[.MINOR[.PATCH]]]

might be sufficient.

Similar projects

What makes this project different:

  1. Multi-arch: linux/amd64, linux/arm64/v8
  2. Derived from nvidia/cuda:12.6.3-devel-ubuntu22.04
    • including development libraries and headers
  3. TensorRT and TensorRT plugin libraries
    • including development libraries and headers
  4. IDE: code-server next to JupyterLab
  5. Just Python – no Conda / Mamba

See CUDA Notes for tweaks, settings, etc.