61 changes: 13 additions & 48 deletions applications/object_detection_torch/Dockerfile
@@ -15,15 +15,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.


############################################################
# Base image
############################################################

ARG BASE_IMAGE
ARG GPU_TYPE

FROM ${BASE_IMAGE} as base
FROM ${BASE_IMAGE}

ARG DEBIAN_FRONTEND=noninteractive

@@ -38,47 +31,19 @@ COPY utilities /tmp/scripts/utilities/
RUN chmod +x /tmp/scripts/holohub
RUN /tmp/scripts/holohub setup && rm -rf /var/lib/apt/lists/*

FROM base as torch

# Install libraries
ARG GPU_TYPE
RUN if [ "${GPU_TYPE}" = "igpu" ]; then \
PYTORCH_WHEEL_VERSION="2.8.0"; \
TORCHVISION_WHEEL_VERSION="0.23.0"; \
INDEX_URL="https://pypi.jetson-ai-lab.io/jp6/cu129"; \
else \
CUDA_MAJOR=$(nvcc --version | grep -o "release [0-9]*" | awk '{print $2}'); \
if [ "$CUDA_MAJOR" = "13" ]; then \
# Reflect observed Torchvision nightly wheel dependencies for CUDA 13
if [ $(uname -m) = "aarch64" ]; then \
PYTORCH_WHEEL_VERSION="2.9.0"; \
TORCHVISION_WHEEL_VERSION="0.24.0"; \
INDEX_URL="https://download.pytorch.org/whl/test/cu130"; \
else \
PYTORCH_WHEEL_VERSION="2.9.0.dev20250829+cu130"; \
TORCHVISION_WHEEL_VERSION="0.24.0.dev20250829"; \
INDEX_URL="https://download.pytorch.org/whl/nightly/cu130"; \
fi; \
else \
PYTORCH_WHEEL_VERSION="2.8.0+cu129"; \
TORCHVISION_WHEEL_VERSION="0.23.0+cu129"; \
INDEX_URL="https://download.pytorch.org/whl/"; \
fi; \
fi; \
echo "Installing torch==${PYTORCH_WHEEL_VERSION} from $INDEX_URL"; \
python3 -m pip install --force-reinstall torch==${PYTORCH_WHEEL_VERSION} torchvision==${TORCHVISION_WHEEL_VERSION} torchaudio --index-url $INDEX_URL; \
if ! find /usr/local/lib/python3.12/dist-packages/torch -name libtorch_cuda.so | grep -q .; then \
echo "libtorch_cuda.so not found, torch installation failed"; \
exit 1; \
fi

RUN rm -rf /opt/libtorch/* && \
mkdir -p /opt/libtorch && \
LIBTORCH_PATH=$(python3 -c "import torch; print(torch.__path__[0])") && \
ln -sf "${LIBTORCH_PATH}/lib" /opt/libtorch/lib


# Set up Holoscan SDK container libtorch to be found with ldconfig for app C++ build and runtime
RUN echo $(python3 -c "import torch; print(torch.__path__[0])")/lib > /etc/ld.so.conf.d/libtorch.conf \
&& ldconfig \
&& ldconfig -p | grep -q "libtorch.so"
Comment on lines 35 to 37
logic: this code assumes PyTorch is already installed in the base image, but the torchvision installation happens later (lines 41-49). If PyTorch isn't pre-installed, or if there is a version dependency between the two, this will fail with an ImportError.

Suggested change
RUN echo $(python3 -c "import torch; print(torch.__path__[0])")/lib > /etc/ld.so.conf.d/libtorch.conf \
&& ldconfig \
&& ldconfig -p | grep -q "libtorch.so"
# Verify PyTorch is available in base image
RUN python3 -c "import torch; print(f'PyTorch {torch.__version__} found')"
# Set up Holoscan SDK container libtorch to be found with ldconfig for app C++ build and runtime
RUN echo $(python3 -c "import torch; print(torch.__path__[0])")/lib > /etc/ld.so.conf.d/libtorch.conf \
&& ldconfig \
&& ldconfig -p | grep -q "libtorch.so"

Is PyTorch guaranteed to be pre-installed in the hsdk 3.10 base image, and are there any version compatibility requirements between PyTorch and torchvision?
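On the compatibility question: torch and torchvision releases follow a known pairing convention (torch 2.N ships alongside torchvision 0.(N+15), e.g. torch 2.8 with torchvision 0.23, torch 2.9 with 0.24). A minimal shell sketch of such a check, treating the pairing rule as an observed convention rather than an official guarantee:

```shell
# Sketch: check that a torch minor version and a torchvision minor version
# follow the usual pairing (torch 2.N <-> torchvision 0.(N+15)).
# The +15 offset is an observed release convention, not a documented contract.
check_torch_pairing() {
    torch_minor="$1"       # e.g. "8" for torch 2.8
    vision_minor="$2"      # e.g. "23" for torchvision 0.23
    expected=$((torch_minor + 15))
    if [ "$vision_minor" -eq "$expected" ]; then
        echo "compatible"
    else
        echo "mismatch: torch 2.${torch_minor} expects torchvision 0.${expected}"
    fi
}

check_torch_pairing 8 23   # torch 2.8 / torchvision 0.23
check_torch_pairing 9 24   # torch 2.9 / torchvision 0.24
```

A check like this could run in the Dockerfile after both packages are installed, failing the build early on a mismatched pair.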


ARG GPU_TYPE
ARG CUDA_MAJOR
RUN INDEX_URL=""; \
if [ "${GPU_TYPE}" = "igpu" ]; then \
INDEX_URL="https://pypi.jetson-ai-lab.io/jp6/cu126"; \
elif [ "${CUDA_MAJOR}" = "12" ]; then \
INDEX_URL="https://download.pytorch.org/whl/cu129"; \
elif [ "${CUDA_MAJOR}" = "13" ]; then \
INDEX_URL="https://download.pytorch.org/whl/cu130"; \
Comment on lines +43 to +47
⚠️ Potential issue | 🟡 Minor

Use cu126 for CUDA 12 instead of cu129 to ensure broader compatibility.

The current mapping of cu129 for CUDA 12 is problematic because cu129 is specifically for CUDA 12.9, while cu126 is compatible with LibCUDA 12.0 and above. Mapping CUDA_MAJOR="12" to cu129 unconditionally will fail for CUDA 12.0 through 12.8. PyTorch versions 2.7.x support cu118, cu126, and cu128, but not cu129, making cu129 unsuitable as a blanket CUDA 12 wheel. Consider using cu126 as the baseline for CUDA 12.x or implement more granular minor version checks to select the appropriate wheel variant (cu126, cu128, or cu129).

The cu130 mapping for CUDA 13 is correct.

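The minor-version detection the review suggests can be sketched as a small shell helper. The cu126/cu128/cu129/cu130 cutoffs below are assumptions based on the wheel tags mentioned in the review; verify them against the indexes PyTorch actually publishes before adopting:

```shell
# Sketch: map a CUDA major.minor version to a wheel index tag, instead of
# mapping all of CUDA 12.x to cu129 (which would fail for 12.0-12.8).
# Assumed cutoffs: 12.x<8 -> cu126, 12.8 -> cu128, 12.9+ -> cu129, 13.x -> cu130.
select_wheel_tag() {
    cuda_major="$1"
    cuda_minor="$2"
    if [ "$cuda_major" = "13" ]; then
        echo "cu130"
    elif [ "$cuda_major" = "12" ]; then
        if [ "$cuda_minor" -ge 9 ]; then
            echo "cu129"
        elif [ "$cuda_minor" -ge 8 ]; then
            echo "cu128"
        else
            echo "cu126"
        fi
    else
        echo "unsupported CUDA version ${cuda_major}.${cuda_minor}" >&2
        return 1
    fi
}

select_wheel_tag 12 6   # older 12.x driver stacks
select_wheel_tag 13 0   # CUDA 13
```

Inside the Dockerfile, the minor version could come from `nvcc --version` the same way the removed code extracted the major version.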

fi \
&& python3 -m pip install torchvision --no-cache-dir --index-url ${INDEX_URL}
Comment on lines +41 to +49

⚠️ Potential issue | 🔴 Critical

Handle the case when INDEX_URL is empty.

If GPU_TYPE is not "igpu" and CUDA_MAJOR is neither "12" nor "13", INDEX_URL will remain an empty string. The subsequent pip install with --index-url "" may fail or exhibit unpredictable behavior.

Apply this diff to add a fallback or validation:

 ARG GPU_TYPE
 ARG CUDA_MAJOR
 RUN INDEX_URL=""; \
     if [ "${GPU_TYPE}" = "igpu" ]; then \
         INDEX_URL="https://pypi.jetson-ai-lab.io/jp6/cu126"; \
     elif [ "${CUDA_MAJOR}" = "12" ]; then \
         INDEX_URL="https://download.pytorch.org/whl/cu129"; \
     elif [ "${CUDA_MAJOR}" = "13" ]; then \
         INDEX_URL="https://download.pytorch.org/whl/cu130"; \
+    else \
+        echo "Error: Unsupported GPU_TYPE (${GPU_TYPE}) or CUDA_MAJOR (${CUDA_MAJOR})" && exit 1; \
     fi \
     && python3 -m pip install torchvision --no-cache-dir --index-url ${INDEX_URL}

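The alternative fix, omitting `--index-url` entirely when no GPU/CUDA-specific index was selected, can be sketched like this (the fallback-to-default-PyPI behavior is an assumption about the desired policy, not something the PR specifies):

```shell
# Sketch: pass --index-url only when INDEX_URL is non-empty, so an unmatched
# GPU_TYPE / CUDA_MAJOR combination falls back to the default PyPI index
# instead of invoking pip with --index-url "".
build_pip_cmd() {
    index_url="$1"
    cmd="python3 -m pip install torchvision --no-cache-dir"
    if [ -n "$index_url" ]; then
        cmd="$cmd --index-url $index_url"
    fi
    echo "$cmd"
}

build_pip_cmd ""                                        # default PyPI index
build_pip_cmd "https://download.pytorch.org/whl/cu130"  # CUDA 13 wheel index
```

Whether to fall back silently or fail the build (as the suggested diff above does) is a policy choice; failing fast is safer when the default PyPI wheels would lack the right CUDA support.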

Comment on lines +41 to +49

logic: if INDEX_URL is empty (e.g., GPU_TYPE is not "igpu" and CUDA_MAJOR is not "12" or "13"), pip will attempt to install from the default PyPI index, which may not have the correct CUDA version of torchvision. Consider adding an else clause or validation.

Suggested change
RUN INDEX_URL=""; \
if [ "${GPU_TYPE}" = "igpu" ]; then \
INDEX_URL="https://pypi.jetson-ai-lab.io/jp6/cu126"; \
elif [ "${CUDA_MAJOR}" = "12" ]; then \
INDEX_URL="https://download.pytorch.org/whl/cu129"; \
elif [ "${CUDA_MAJOR}" = "13" ]; then \
INDEX_URL="https://download.pytorch.org/whl/cu130"; \
fi \
&& python3 -m pip install torchvision --no-cache-dir --index-url ${INDEX_URL}
ARG GPU_TYPE
ARG CUDA_MAJOR
RUN INDEX_URL=""; \
if [ "${GPU_TYPE}" = "igpu" ]; then \
INDEX_URL="https://pypi.jetson-ai-lab.io/jp6/cu126"; \
elif [ "${CUDA_MAJOR}" = "12" ]; then \
INDEX_URL="https://download.pytorch.org/whl/cu129"; \
elif [ "${CUDA_MAJOR}" = "13" ]; then \
INDEX_URL="https://download.pytorch.org/whl/cu130"; \
else \
echo "Error: Unsupported GPU_TYPE=${GPU_TYPE} or CUDA_MAJOR=${CUDA_MAJOR}"; \
exit 1; \
fi \
&& python3 -m pip install torchvision --no-cache-dir --index-url ${INDEX_URL}