
Conversation

@jakewatson-bristol
Collaborator

add a dev container with kind for a local k8s cluster

also adds `just` for running useful commands; once in the dev container, run `just` and it will list the available commands

`just rebuild-cluster` will rebuild the local kind cluster (including generating a new kubeconfig)
`just pulumi-init` will help people get Pulumi set up (local login, creating a stack, and helping the user update the config for their needs)

let me know your thoughts
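
For reference, a rough sketch of what those just recipes might look like, based only on this description (the cluster name, kubeconfig path, and stack name are made-up placeholders, not the PR's actual contents):

# list the available recipes when `just` is run with no arguments
default:
    @just --list

# tear down and recreate the local kind cluster, regenerating the kubeconfig
rebuild-cluster:
    kind delete cluster --name fridge-dev || true
    kind create cluster --name fridge-dev --kubeconfig ./kubeconfig/kubeconfig.yaml

# bootstrap Pulumi: local backend login, create a stack, then edit the stack config by hand
pulumi-init:
    pulumi login --local
    pulumi stack init dev
    @echo "Now update Pulumi.dev.yaml for your environment"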

@craddm
Contributor

craddm commented Jul 30, 2025

It failed on the postCreateCommand step for me:

[8948 ms] Start: Run in container: /bin/sh -c bash .devcontainer/post-create.sh
================================================================================
>>> Setting up K3s development configuration
================================================================================
cp: cannot stat '/kubeconfig/kubeconfig.yaml': No such file or directory

[9242 ms] Stop (294 ms): Run in container: /bin/sh -c bash .devcontainer/post-create.sh
[9242 ms] postCreateCommand from devcontainer.json failed with exit code 1. Skipping any further user-provided commands.
Done. Press any key to close the terminal.
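
For what it's worth, one way the post-create script could tolerate the kubeconfig not being there yet (a sketch only; the source path comes from the error above, while the destination and timeout are assumptions):

# wait briefly for the cluster container to publish its kubeconfig, then copy it;
# warn and continue instead of failing the whole postCreateCommand
for _ in $(seq 1 30); do
    [ -f /kubeconfig/kubeconfig.yaml ] && break
    sleep 2
done
if [ -f /kubeconfig/kubeconfig.yaml ]; then
    mkdir -p "$HOME/.kube"                               # assumed destination
    cp /kubeconfig/kubeconfig.yaml "$HOME/.kube/config"
else
    echo "kubeconfig not found yet; rebuild the cluster once the container is up" >&2
fi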

@awalford16
Collaborator

Worth adding helm to the container?

@craddm
Contributor

craddm commented Aug 8, 2025

@awalford16 I'd agree with that; but I also know @jakewatson-bristol has done some more work on this that isn't reflected in the PR at the moment because of some merging problems

Contributor


We should delete this file

Comment on lines 36 to 43
# install KInD
RUN if [ "$(uname -m)" = "x86_64" ]; then \
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.29.0/kind-linux-amd64; \
elif [ "$(uname -m)" = "aarch64" ]; then \
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.29.0/kind-linux-arm64; \
fi && \
chmod +x ./kind && \
mv ./kind /usr/local/bin/kind
Contributor


Since we're not using Kind any more, we can delete this
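
For reference, the k3d equivalent would presumably look something like the sketch below; the release URL pattern matches the k3d-io/k3d GitHub releases, but the pinned version here is only illustrative, and the chmod/mv tail already appears in the diff just after this:

# install k3d (version illustrative; pin whatever release is current)
RUN if [ "$(uname -m)" = "x86_64" ]; then \
curl -LO https://github.com/k3d-io/k3d/releases/download/v5.7.4/k3d-linux-amd64; \
elif [ "$(uname -m)" = "aarch64" ]; then \
curl -LO https://github.com/k3d-io/k3d/releases/download/v5.7.4/k3d-linux-arm64; \
fi && \
chmod +x k3d-linux-* && \
mv k3d-linux-* /usr/local/bin/k3d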

chmod +x k3d-linux-* && \
mv k3d-linux-* /usr/local/bin/k3d

# Istall k9s
Contributor


Suggested change
# Istall k9s
# Install k9s

tar xzvf hubble-linux-${HUBBLE_ARCH}.tar.gz -C /usr/local/bin && \
rm hubble-linux-*.tar.gz hubble-linux-*.tar.gz.sha256sum

# Install ArgoCD CLI
Contributor


Suggested change
# Install ArgoCD CLI
# Install Argo Workflows CLI


# Install ArgoCD CLI
ARG ARGO_OS="linux"
RUN curl -sLO "https://github.com/argoproj/argo-workflows/releases/download/v3.7.0/argo-$ARGO_OS-amd64.gz"
Contributor


We define the version as a constant at the start of the script, but don't use it here

Suggested change
RUN curl -sLO "https://github.com/argoproj/argo-workflows/releases/download/v3.7.0/argo-$ARGO_OS-amd64.gz"
RUN curl -sLO "https://github.com/argoproj/argo-workflows/releases/download/$ARGO_VERSION/argo-$ARGO_OS-amd64.gz"


# Application versions

ARG ARGO_VERSION='3.0.0'
Contributor


Suggested change
ARG ARGO_VERSION='3.0.0'
ARG ARGO_VERSION='3.7.0'
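
Pulling the two suggestions together, the install step might end up looking roughly like this (a sketch, not the actual Dockerfile; note that the GitHub release tag carries a leading "v", so either the ARG or the URL needs to account for it):

# Application versions (release tags are prefixed with "v")
ARG ARGO_VERSION='v3.7.0'
ARG ARGO_OS="linux"

# Install Argo Workflows CLI using the pinned version
RUN curl -sLO "https://github.com/argoproj/argo-workflows/releases/download/${ARGO_VERSION}/argo-${ARGO_OS}-amd64.gz" && \
gunzip "argo-${ARGO_OS}-amd64.gz" && \
chmod +x "argo-${ARGO_OS}-amd64" && \
mv "argo-${ARGO_OS}-amd64" /usr/local/bin/argo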

Comment on lines 14 to 51
# links:
# - k3s-server:kubernetes.default.svc.cluster.local
# networks:
# - devcontainer_network
# k3s-server:
# image: "rancher/k3s:${K3S_VERSION:-latest}"
# command:
# - server
# - "--flannel-backend=none" # Disable flannel (we use cilium)
# - "--disable=traefik" # Disable traefik ingress controller
# tmpfs:
# - /run
# - /var/run
# ulimits:
# nproc: 65535
# nofile:
# soft: 65535
# hard: 65535
# privileged: true
# restart: always
# environment:
# - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
# - K3S_KUBECONFIG_MODE=666
# volumes:
# - k3s-server:/var/lib/rancher/k3s
# # This is just so that we get the kubeconfig file out
# - k8s-config:/output
# expose:
# - 6443 # Kubernetes API Server
# - 80 # Ingress controller port 80 - not used at the moment
# - 443 # Ingress controller port 443 - not used at the moment
# - 2746 # Argo-workflows UI

# ports:
# - 6443:6443 # Kubernetes API Server
# # - 80:80 # Ingress controller port 80
# # - 443:443 # Ingress controller port 443
# - 2746:2746 # Argo-workflows UI
Contributor


Can we get rid of all these commented lines?

print_message "DevContainer setup complete! 🎉"
echo "You can now use the following commands:"
echo "run 'source ~/.zshrc'"
echo "run 'k3d cluster create --config k3d-default.yaml && /workspace/infra/fridge/k3d-default.yaml && /workspace/scripts/Install-Cillium.sh' to create a new cluster"
Contributor


Note: this doesn't work as there is no k3d-default.yaml.

Not sure what options were configured in the k3d-default.yaml - do we just need to disable Traefik or is there more?


ah yes I forgot to track the file in git

does this also maybe make more sense to be moved into .devcontainer?
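
For what it's worth, a minimal guess at what k3d-default.yaml could contain, based only on the k3s flags in the commented-out compose service above (Traefik disabled, Flannel disabled in favour of Cilium); the cluster name and config schema version are assumptions:

apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: fridge-dev   # placeholder name
servers: 1
options:
  k3s:
    extraArgs:
      - arg: "--disable=traefik"        # no Traefik ingress controller
        nodeFilters:
          - "server:*"
      - arg: "--flannel-backend=none"   # Cilium replaces Flannel
        nodeFilters:
          - "server:*"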

fzf \
python3.12-venv

# install yq
Contributor


Why do we need yq?


useful for parsing the JSON output that a lot of CLI tools can emit

I do also realise that this can just be installed via apt, so I'll at least change it to that
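
As a concrete example of the kind of thing it gets used for (assuming the jq-style yq that apt ships; the kubeconfig path is just illustrative):

# pull the API server address out of a kubeconfig
yq -r '.clusters[0].cluster.server' /kubeconfig/kubeconfig.yaml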

"workspaceFolder": "/workspace",
"service": "devcontainer",
"features": {
"ghcr.io/devcontainers-extra/features/kubectl-asdf": {
Contributor


We could keep it as kubectl-asdf or switch to the kubectl-helm-minikube one (appreciating that it installs minikube, which we don't need); if we don't switch, we should install Helm manually in the Dockerfile
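
If we go the manual route, one option is the official installer script; a sketch (the script supports pinning a release via DESIRED_VERSION if we want to fix the version):

# Install Helm via the official get-helm-3 script
RUN curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash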

Contributor


I'd say delete this file or move it into the .devcontainer folder; but it shouldn't be here

Contributor


Same comment here; happy for these to go in the .devcontainer folder

Contributor


Again, I'd rather see this in the .devcontainer folder. I could see it in the future going into an infra/local folder (like the infra/aks/ folder, which is meant to build the cluster on AKS)

Contributor


Let's add installation of Helm if we're not doing it using a devcontainer feature

@craddm
Contributor

craddm commented Oct 14, 2025

Closing this, as it proved unworkable for the development path we took

@craddm craddm closed this Oct 14, 2025