50 changes: 19 additions & 31 deletions Dockerfiles/pcluster-amazonlinux-2/Dockerfile
@@ -110,19 +110,29 @@ RUN curl -sOL https://efa-installer.amazonaws.com/aws-efa-installer-${EFA_INSTAL

# Bootstrap spack compiler installation into the eventual installation tree
# Defined in spack/share/spack/gitlab/cloud_pipelines/configs/config.yaml
# For x86: Add oneapi compiler into container. The stack can pick up the compilers by using ${SPACK_ROOT}/etc/spack/compilers.yaml
# [email protected] is the last version compatible with AL2 glibc 2.26
ARG SPACK_ROOT="/bootstrap-compilers/spack"
ARG SPACK_TAG="develop-2025-05-18"
ARG TARGETARCH
COPY Dockerfiles/pcluster-amazonlinux-2/${TARGETARCH}/packages.yaml /root/.spack/packages.yaml
RUN mkdir -p $(dirname "${SPACK_ROOT}") \
&& git clone --depth=1 -b ${SPACK_TAG} https://github.com/spack/spack "${SPACK_ROOT}" \
&& cp "${SPACK_ROOT}/share/spack/gitlab/cloud_pipelines/configs/config.yaml" "${SPACK_ROOT}/etc/spack/config.yaml" \
&& cp "${SPACK_ROOT}/share/spack/gitlab/cloud_pipelines/configs/config.yaml" "/bootstrap/cloud_pipelines-config.yaml" \
&& . "${SPACK_ROOT}/share/spack/setup-env.sh" \
&& spack compiler add \
&& spack install gcc@12 \
&& spack buildcache create -u /bootstrap-gcc-cache $(spack find --format '/{hash}') \
&& rm -rf $(dirname "${SPACK_ROOT}") /root/.spack
RUN mkdir -p $(dirname "${SPACK_ROOT}"); \
git clone --depth=1 -b ${SPACK_TAG} https://github.com/spack/spack "${SPACK_ROOT}"; \
cp "${SPACK_ROOT}/share/spack/gitlab/cloud_pipelines/configs/config.yaml" "${SPACK_ROOT}/etc/spack/config.yaml"; \
cp "${SPACK_ROOT}/share/spack/gitlab/cloud_pipelines/configs/config.yaml" "/bootstrap/cloud_pipelines-config.yaml"; \
. "${SPACK_ROOT}/share/spack/setup-env.sh"; \
spack compiler add; \
spack install gcc@12; \
if [[ "amd64" == "${TARGETARCH}" ]]; then \
spack install [email protected]; \
rm -rf $(find $(spack find --format '{prefix}' intel-oneapi-compilers) -mindepth 1 -type d -name 'conda_channel'); \
rm -rf $(find $(spack find --format '{prefix}' intel-oneapi-compilers) -mindepth 1 -type d -name 'debugger'); \
rm -rf $(find $(spack find --format '{prefix}' intel-oneapi-compilers) -mindepth 1 -type d -name '*32*'); \
rm -rf $(find $(spack find --format '{prefix}' intel-oneapi-compilers) -mindepth 1 -type d -name 'oclfpga'); \
fi; \
spack buildcache create --unsigned --private /bootstrap-gcc-cache $(spack find --format '/{hash}'); \
rm -rf $(dirname "${SPACK_ROOT}") /root/.spack


# Sign the buildcache
RUN --mount=type=secret,id=bootstrap_gcc_key \
@@ -138,28 +138,6 @@ RUN --mount=type=secret,id=bootstrap_gcc_key
&& spack gpg publish --rebuild-index -m bootstrap-gcc-cache ${secretkey_fingerprint} \
&& rm -rf $(dirname "${SPACK_ROOT}") /root/.spack

# Add oneapi compiler into container. The stack can pick up the compilers by using ${SPACK_ROOT}/etc/spack/compilers.yaml
# [email protected] is the last version compatible with AL2 glibc 2.26
COPY Dockerfiles/pcluster-amazonlinux-2/${TARGETARCH}/packages.yaml /root/.spack/packages.yaml
RUN if [[ "amd64" == "${TARGETARCH}" ]]; then mkdir -p $(dirname "${SPACK_ROOT}") \
&& git clone --depth=1 -b ${SPACK_TAG} https://github.com/spack/spack.git "${SPACK_ROOT}" \
&& cp "${SPACK_ROOT}/share/spack/gitlab/cloud_pipelines/configs/config.yaml" "${SPACK_ROOT}/etc/spack/config.yaml" \
&& cp "${SPACK_ROOT}/share/spack/gitlab/cloud_pipelines/configs/config.yaml" "/bootstrap/cloud_pipelines-config.yaml" \
&& . "${SPACK_ROOT}/share/spack/setup-env.sh" \
&& spack load gcc@12 \
&& spack compiler add --scope site \
&& cd "${SPACK_ROOT}" \
&& spack install [email protected] \
&& . "$(spack location -i intel-oneapi-compilers)"/setvars.sh \
&& spack compiler add --scope site \
&& rm -rf $(find $(spack find --format '{prefix}' intel-oneapi-compilers) -mindepth 1 -type d -name 'conda_channel') \
&& rm -rf $(find $(spack find --format '{prefix}' intel-oneapi-compilers) -mindepth 1 -type d -name 'debugger') \
&& rm -rf $(find $(spack find --format '{prefix}' intel-oneapi-compilers) -mindepth 1 -type d -name '*32*') \
&& rm -rf $(find $(spack find --format '{prefix}' intel-oneapi-compilers) -mindepth 1 -type d -name 'oclfpga') \
&& rm -rf /opt/intel /tmp/* \
&& spack clean -a \
&& rm -rf /root/.spack ; fi

ENV PATH=/bootstrap/runner/view/bin:$PATH \
NVIDIA_VISIBLE_DEVICES=all \
NVIDIA_DRIVER_CAPABILITIES=compute,utility \
27 changes: 27 additions & 0 deletions Dockerfiles/pcluster-amazonlinux-2/Dockerfile.test.al2
@@ -0,0 +1,27 @@
FROM public.ecr.aws/amazonlinux/amazonlinux:2

RUN yum update -y \
&& yum install -y \
git python3 tar unzip gcc gcc-c++ gcc-gfortran make patch xz which \
&& yum clean all \
&& rm -rf /var/cache/yum/*

ARG SPACK_TAG=develop
RUN git clone --depth=1 -b ${SPACK_TAG} https://github.com/spack/spack /spack

ARG TEST_CACHE_TAR=cache.tar
COPY ${TEST_CACHE_TAR} /cache.tar
RUN mkdir /cache \
&& tar -C /cache -xvf /cache.tar \
&& rm /cache.tar

RUN ls /spack/share/spack/setup-env.sh
RUN . /spack/share/spack/setup-env.sh \
&& spack mirror add --signed boot /cache \
&& spack buildcache keys --install --trust --force

RUN source /spack/share/spack/setup-env.sh \
&& spack buildcache list -L \
&& spack install --use-buildcache package:only gcc@12

CMD ["/bin/bash"]
27 changes: 27 additions & 0 deletions Dockerfiles/pcluster-amazonlinux-2/Dockerfile.test.ubuntu
@@ -0,0 +1,27 @@
FROM public.ecr.aws/ubuntu/ubuntu:24.04

SHELL ["/bin/bash", "-c"]

RUN apt-get update && apt-get install -y git g++ make cmake unzip python3 python3-pip bash

ARG SPACK_TAG=develop
RUN git clone --depth=1 -b ${SPACK_TAG} https://github.com/spack/spack /spack

ARG TEST_CACHE_TAR=cache.tar
COPY ${TEST_CACHE_TAR} /cache.tar
RUN mkdir /cache \
&& tar -C /cache -xvf /cache.tar \
&& rm /cache.tar

RUN ls /spack/share/spack/setup-env.sh
RUN . /spack/share/spack/setup-env.sh \
&& spack mirror add --signed boot /cache \
&& spack buildcache keys --install --trust --force

RUN source /spack/share/spack/setup-env.sh \
&& spack buildcache list -L \
&& spack install --use-buildcache package:only gcc@12

CMD ["/bin/bash"]
80 changes: 80 additions & 0 deletions Dockerfiles/pcluster-amazonlinux-2/Readme.md
@@ -0,0 +1,80 @@
# Stack: pcluster-amazonlinux-2

This directory contains the Dockerfiles used to build the containers that spack
CI uses to build and publish the pcluster-amazonlinux-2 stack.

One advantage of using Amazon Linux 2 is its comparatively old glibc (2.26):
binaries linked against it have a good chance of running on other Linux
distributions, since those almost universally ship a newer, backward-compatible
glibc.
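
As an illustrative aside (not part of the build process), you can inspect what a
binary actually demands from glibc; a binary produced on AL2 should reference no
symbol version newer than `GLIBC_2.26`. Here `/bin/ls` is just an example target:

```shell
# Print the running system's glibc version.
getconf GNU_LIBC_VERSION
# List the glibc symbol versions a binary references; on an AL2-built
# binary, nothing newer than GLIBC_2.26 should appear.
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -Vu
```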

# Testing the build process locally

You can locally test the entire process, including key signing, with an
existing gpg2 key, or you can generate a throwaway testing key. The fingerprint
is field 10 of the `fpr` record in `--with-colons` output:

```
gpg2 --gen-key
fingerprint=$(gpg -K --with-colons | awk -F: '/^fpr/ {print $10; exit}')
export bootstrap_gcc_key=$(gpg --armor --export-secret-keys "$fingerprint")
```

With the secret key exported in the environment, you can build in an
environment very similar to the one the GitHub runners use:

```
docker build --secret id=bootstrap_gcc_key --output type=image,name=offline_test:latest --file Dockerfiles/pcluster-amazonlinux-2/Dockerfile .
```

# Multiplatform Docker Build

If necessary, you can test multi-platform builds with a command like:

```
docker buildx build --file Dockerfiles/pcluster-amazonlinux-2/Dockerfile --platform linux/arm64 -t multi-platform .
```

# Testing the images locally

Additionally, a few Docker test files are provided to confirm the format and
compatibility of the resulting image and buildcache. First, extract the
buildcache from the test container:

```
id=$(docker create -t offline_test)
docker cp $id:/bootstrap-gcc-cache /tmp/test-cache
docker rm -v $id
tar -C /tmp/test-cache -cf cache.tar .
rm -rf /tmp/test-cache
```
Then build the test images:
```
docker build --file Dockerfiles/pcluster-amazonlinux-2/Dockerfile.test.al2 .
docker build --file Dockerfiles/pcluster-amazonlinux-2/Dockerfile.test.ubuntu .
```

The included tests add the buildcache and install the compiler. You may wish to
use this as a starting point to compile a particular package.
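
For example (a sketch; the `cache-test` tag and the choice of `zlib` are
arbitrary), you could tag the AL2 test image and then install an extra package
inside it against the cached `gcc@12`:

```shell
# Build the AL2 test image with a tag we can refer to (name is arbitrary).
docker build -t cache-test -f Dockerfiles/pcluster-amazonlinux-2/Dockerfile.test.al2 .

# In a one-off container, register the cached gcc@12 as a compiler and
# build an extra package (zlib here) with it.
docker run --rm cache-test bash -c '
  . /spack/share/spack/setup-env.sh
  spack compiler add "$(spack location -i gcc@12)"
  spack install zlib %gcc@12
'
```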

# Testing the whole stack in spack CI

After CI in the spack/gitlab-runners repo builds the images, they are published
to the ghcr.io container registry. You can list the pcluster-amazonlinux-2
images by visiting
https://github.com/spack/gitlab-runners/pkgs/container/pcluster-amazonlinux-2
and searching for the image tagged with your PR number (e.g. pr-62).

Next, create a PR to spack/spack that points the CI at this image. For the
example of PR #62, the image is
`ghcr.io/spack/pcluster-amazonlinux-2:pr-62`.

As of May 2025, these are the three relevant locations to change beneath `spack/share/spack/gitlab/cloud_pipelines`:

- [.gitlab-ci.yaml](https://github.com/spack/spack/blob/c7e6018acfbb82972e78de68f81ae400b26048c2/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml#L731) - In `.aws-pcluster-generate/image`
- [stacks/aws-pcluster-neoverse_v1/spack.yaml](https://github.com/spack/spack/blob/c7e6018acfbb82972e78de68f81ae400b26048c2/share/spack/gitlab/cloud_pipelines/stacks/aws-pcluster-neoverse_v1/spack.yaml#L20) - In `spack/ci/pipeline-gen/build-job/image`
- [stacks/aws-pcluster-x86_64_v4/spack.yaml](https://github.com/spack/spack/blob/c7e6018acfbb82972e78de68f81ae400b26048c2/share/spack/gitlab/cloud_pipelines/stacks/aws-pcluster-x86_64_v4/spack.yaml#L25) - In `spack/ci/pipeline-gen/build-job/image`

In all locations change the image to point to your tagged PR image.
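
One way to edit all three in bulk (a sketch: GNU `sed` is assumed, the command
is run from the spack repo root, and the `pr-62` tag is an example):

```shell
# Replace any existing pcluster-amazonlinux-2 image tag with the PR image
# in every file under the cloud_pipelines directory.
old='ghcr.io/spack/pcluster-amazonlinux-2:[A-Za-z0-9._-]*'
new='ghcr.io/spack/pcluster-amazonlinux-2:pr-62'
grep -rl 'pcluster-amazonlinux-2' share/spack/gitlab/cloud_pipelines \
  | xargs sed -i "s|${old}|${new}|g"
```

Review the resulting diff before committing, since the three files are the only
places that should change.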

# Tagging the final result

TODO: After everything passes, merge the PR, tag the next nightly, and point
the stacks towards the tagged image.