✨ Add Tilt setup (#16)
Adding Tilt setup for local development

Signed-off-by: janiskemper <[email protected]>
janiskemper committed Oct 5, 2023
1 parent 37d6f44 commit 4431f3e
Showing 9 changed files with 444 additions and 31 deletions.
12 changes: 12 additions & 0 deletions .envrc.sample
@@ -0,0 +1,12 @@
export KUBECONFIG=$PWD/.mgt-cluster-kubeconfig.yaml
export K8S_VERSION=1-27
export GIT_PROVIDER_B64=Z2l0aHVi
export GIT_ACCESS_TOKEN_B64=mybase64encodedtoken
export GIT_ORG_NAME_B64=U292ZXJlaWduQ2xvdWRTdGFjaw==
export GIT_REPOSITORY_NAME_B64=Y2x1c3Rlci1zdGFja3M=
export EXP_CLUSTER_RESOURCE_SET=true
export EXP_MACHINE_POOL=true
export CLUSTER_TOPOLOGY=true
export EXP_RUNTIME_SDK=true
export EXP_MACHINE_SET_PREFLIGHT_CHECKS=true
export CLUSTER_NAME=test-dfkhje
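The `*_B64` variables above hold base64-encoded values. As a sketch (assuming GNU coreutils `base64`; note that `mybase64encodedtoken` in the sample is a placeholder you must replace with your own encoded token), they can be generated like this:

```shell
# Encode values without a trailing newline (-n), since the controller
# decodes them verbatim.
echo -n "github" | base64                 # -> Z2l0aHVi
echo -n "SovereignCloudStack" | base64    # -> U292ZXJlaWduQ2xvdWRTdGFjaw==
echo -n "cluster-stacks" | base64         # -> Y2x1c3Rlci1zdGFja3M=
echo -n "my-personal-access-token" | base64
```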
18 changes: 18 additions & 0 deletions Makefile
@@ -95,6 +95,12 @@ kustomize: $(KUSTOMIZE) ## Build a local copy of kustomize
$(KUSTOMIZE): # Build kustomize from tools folder.
go install sigs.k8s.io/kustomize/kustomize/[email protected]

TILT := $(abspath $(TOOLS_BIN_DIR)/tilt)
tilt: $(TILT) ## Build a local copy of tilt
$(TILT):
@mkdir -p $(TOOLS_BIN_DIR)
MINIMUM_TILT_VERSION=0.33.3 hack/ensure-tilt.sh

ENVSUBST := $(abspath $(TOOLS_BIN_DIR)/envsubst)
envsubst: $(ENVSUBST) ## Build a local copy of envsubst
$(ENVSUBST): # Build envsubst from tools folder.
@@ -105,6 +111,11 @@ setup-envtest: $(SETUP_ENVTEST) ## Build a local copy of setup-envtest
$(SETUP_ENVTEST): # Build setup-envtest from tools folder.
go install sigs.k8s.io/controller-runtime/tools/[email protected]

CTLPTL := $(abspath $(TOOLS_BIN_DIR)/ctlptl)
ctlptl: $(CTLPTL) ## Build a local copy of ctlptl
$(CTLPTL):
go install github.com/tilt-dev/ctlptl/cmd/[email protected]

CLUSTERCTL := $(abspath $(TOOLS_BIN_DIR)/clusterctl)
clusterctl: $(CLUSTERCTL) ## Build a local copy of clusterctl
$(CLUSTERCTL):
@@ -466,3 +477,10 @@ modules: generate-modules ## Update go.mod & go.sum
.PHONY: builder-image-push
builder-image-push: ## Build $(CONTROLLER_SHORT)-builder to a new version. For more information see README.
BUILDER_IMAGE=$(BUILDER_IMAGE) ./hack/upgrade-builder-image.sh

.PHONY: create-workload-cluster-docker
create-workload-cluster-docker: $(ENVSUBST) $(KUBECTL)
cat .cluster.yaml | $(ENVSUBST) - | $(KUBECTL) apply -f -

.PHONY: tilt-up
tilt-up: env-vars-for-wl-cluster $(ENVSUBST) $(KUBECTL) $(KUSTOMIZE) $(TILT) cluster ## Start a mgt-cluster & Tilt. Installs the CRDs and deploys the controllers
EXP_CLUSTER_RESOURCE_SET=true $(TILT) up --port=10351
44 changes: 43 additions & 1 deletion README.md
@@ -22,4 +22,46 @@ The first two are handled by this operator here. The node images, on the other h

## Implementing a provider integration

Further information and documentation on how to implement a provider integration will follow soon.

## Developing Cluster Stack Operator

Developing the operator is straightforward. First, install the base requirements: Docker and Go. Second, configure your environment variables. Then you can start developing with the local kind cluster and the Tilt UI, which creates a workload cluster that is already pre-configured.

## Setting up Tilt

1. Install Docker and Go. We assume you are running on Linux.
2. Create an `.envrc` file and specify the values you need. See `.envrc.sample` for details.
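One possible workflow for step 2 (using direnv is an assumption on our part, not a requirement of the repository) is:

```shell
# Copy the sample and adjust the values for your setup.
cp .envrc.sample .envrc
# With direnv installed, allow the file so it is loaded automatically:
direnv allow .
# Without direnv, load the variables into the current shell instead:
source .envrc
```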

## Developing with Tilt

<p align="center">
<img alt="tilt" src="./docs/pics/tilt.png" width=800px/>
</p>

Operator development requires a lot of iteration, and the “build, tag, push, update deployment” workflow can be very tedious. Tilt makes this process much simpler by watching for updates and automatically building and deploying them. To build a kind cluster and to start Tilt, run:

```shell
make tilt-up
```
> To access the Tilt UI, go to: `http://localhost:10351` (the port set via `--port` in the `tilt-up` target)

Make sure that everything in the UI is green. If not, for example if the clusterstack has not been synced, you can trigger the Tilt workflow again. For the clusterstack button this may be necessary, because the resource cannot be applied right after the cluster starts up and therefore fails at first; Tilt unfortunately does not include a waiting period.

If everything is green, you can check the clusterstack that has been deployed. A tool like k9s is handy for looking at the management cluster and its custom resources.
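For a quick look without k9s, plain kubectl works as well; the resource names below match the CRDs this commit deploys (`clusterstacks` and `clusterstackreleases`):

```shell
# List the ClusterStack and ClusterStackRelease resources in all namespaces.
kubectl get clusterstacks.clusterstack.x-k8s.io -A
kubectl get clusterstackreleases.clusterstack.x-k8s.io -A
```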

Once your clusterstack reports that it is ready, you can deploy a workload cluster. You can do this through the Tilt UI by pressing the "Create Workload Cluster" button in the top right corner. This triggers the `make create-workload-cluster-docker` target, which uses the environment variables and the cluster-template.
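The button runs the same target you can invoke yourself from a shell; afterwards you can watch the cluster come up (assuming `NAMESPACE` is set to `cluster`, the namespace the Tiltfile creates):

```shell
# Equivalent to pressing "Create Workload Cluster" in the Tilt UI.
make create-workload-cluster-docker
# Watch the Cluster resource until it is provisioned.
kubectl get clusters -n cluster -w
```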

If you change some code, Tilt triggers on save and automatically updates the container of the operator.

If you want to change something in your ClusterStack or Cluster custom resources, you can have a look at `.cluster.yaml` and `.clusterstack.yaml`, which Tilt uses.

To tear down the workload cluster, press the "Delete Workload Cluster" button. After a few minutes, the resources should be deleted.

To tear down the kind cluster, use:

```shell
make delete-bootstrap-cluster
```

If you have any trouble finding the right command, then you can use `make help` to get a list of all available make targets.
265 changes: 265 additions & 0 deletions Tiltfile
@@ -0,0 +1,265 @@
# -*- mode: Python -*-
load("ext://uibutton", "cmd_button", "location", "text_input")
load("ext://restart_process", "docker_build_with_restart")

kustomize_cmd = "./hack/tools/bin/kustomize"
envsubst_cmd = "./hack/tools/bin/envsubst"
sed_cmd = "sed 's/:=\"\"//g'"
tools_bin = "./hack/tools/bin"

# Add tools to path
os.putenv("PATH", os.getenv("PATH") + ":" + tools_bin)

update_settings(k8s_upsert_timeout_secs = 60) # on first tilt up, often can take longer than 30 seconds


settings = {
"allowed_contexts": [
"kind-cso",
],
"deploy_cert_manager": True,
"preload_images_for_kind": True,
"kind_cluster_name": "cso",
"capi_version": "v1.5.2",
"cert_manager_version": "v1.11.0",
"kustomize_substitutions": {
},
}

# global settings
settings.update(read_json(
"tilt-settings.json",
default = {},
))

if settings.get("trigger_mode") == "manual":
trigger_mode(TRIGGER_MODE_MANUAL)

if "allowed_contexts" in settings:
allow_k8s_contexts(settings.get("allowed_contexts"))

if "default_registry" in settings:
default_registry(settings.get("default_registry"))

# deploy CAPI
def deploy_capi():
version = settings.get("capi_version")
capi_uri = "https://github.com/kubernetes-sigs/cluster-api/releases/download/{}/cluster-api-components.yaml".format(version)
cmd = "curl -sSL {} | {} | kubectl apply -f -".format(capi_uri, envsubst_cmd)
local(cmd, quiet = True)
if settings.get("extra_args"):
extra_args = settings.get("extra_args")
if extra_args.get("core"):
core_extra_args = extra_args.get("core")
if core_extra_args:
for namespace in ["capi-system", "capi-webhook-system"]:
patch_args_with_extra_args(namespace, "capi-controller-manager", core_extra_args)
if extra_args.get("kubeadm-bootstrap"):
kb_extra_args = extra_args.get("kubeadm-bootstrap")
if kb_extra_args:
patch_args_with_extra_args("capi-kubeadm-bootstrap-system", "capi-kubeadm-bootstrap-controller-manager", kb_extra_args)

def deploy_capd():
version = settings.get("capi_version")
capd_uri = "https://github.com/kubernetes-sigs/cluster-api/releases/download/{}/infrastructure-components-development.yaml".format(version)
cmd = "curl -sSL {} | {} | kubectl apply -f -".format(capd_uri, envsubst_cmd)
local(cmd, quiet = True)


def prepare_environment():
local("kubectl create namespace cluster --dry-run=client -o yaml | kubectl apply -f -")

# if it's already present then don't copy
if not os.path.exists('.clusterstack.yaml'):
local("cp config/cso/clusterstack.yaml .clusterstack.yaml")

k8s_yaml('.clusterstack.yaml')

if not os.path.exists('.cluster.yaml'):
local("cp config/cso/cluster.yaml .cluster.yaml")

def patch_args_with_extra_args(namespace, name, extra_args):
args_str = str(local("kubectl get deployments {} -n {} -o jsonpath='{{.spec.template.spec.containers[0].args}}'".format(name, namespace)))
args_to_add = [arg for arg in extra_args if arg not in args_str]
if args_to_add:
args = args_str[1:-1].split()
args.extend(args_to_add)
patch = [{
"op": "replace",
"path": "/spec/template/spec/containers/0/args",
"value": args,
}]
local("kubectl patch deployment {} -n {} --type json -p='{}'".format(name, namespace, str(encode_json(patch)).replace("\n", "")))

# Users may define their own Tilt customizations in tilt.d. This directory is excluded from git and these files will
# not be checked in to version control.
def include_user_tilt_files():
user_tiltfiles = listdir("tilt.d")
for f in user_tiltfiles:
include(f)

def append_arg_for_container_in_deployment(yaml_stream, name, namespace, contains_image_name, args):
for item in yaml_stream:
if item["kind"] == "Deployment" and item.get("metadata").get("name") == name and item.get("metadata").get("namespace") == namespace:
containers = item.get("spec").get("template").get("spec").get("containers")
for container in containers:
if contains_image_name in container.get("name"):
container.get("args").extend(args)

def fixup_yaml_empty_arrays(yaml_str):
yaml_str = yaml_str.replace("conditions: null", "conditions: []")
return yaml_str.replace("storedVersions: null", "storedVersions: []")

## This should have the same versions as the Dockerfile
tilt_dockerfile_header_cso = """
FROM docker.io/alpine/helm:3.12.2 as helm
FROM docker.io/library/alpine:3.18.0 as tilt
WORKDIR /
COPY --from=helm --chown=root:root --chmod=755 /usr/bin/helm /usr/local/bin/helm
COPY manager .
"""

# Build CSO and add feature gates
def deploy_cso():
# yaml = str(kustomizesub("./hack/observability")) # build an observable kind deployment by default
yaml = str(kustomizesub("./config/default"))
local_resource(
name = "cso-components",
cmd = ["sh", "-ec", sed_cmd, yaml, "|", envsubst_cmd],
labels = ["CSO"],
)

# Forge the build command
ldflags = "-extldflags \"-static\" " + str(local("hack/version.sh")).rstrip("\n")
build_env = "CGO_ENABLED=0 GOOS=linux GOARCH=amd64"
build_cmd = "{build_env} go build -ldflags '{ldflags}' -o .tiltbuild/manager cmd/main.go".format(
build_env = build_env,
ldflags = ldflags,
)
# Set up a local_resource build of the provider's manager binary.
local_resource(
"cso-manager",
cmd = "mkdir -p .tiltbuild; " + build_cmd,
deps = ["api", "cmd", "config", "internal", "vendor", "pkg", "go.mod", "go.sum"],
labels = ["CSO"],
)

entrypoint = ["/manager"]
extra_args = settings.get("extra_args")
if extra_args:
entrypoint.extend(extra_args)

# Set up an image build for the provider. The live update configuration syncs the output from the local_resource
# build into the container.
docker_build_with_restart(
ref = "ghcr.io/sovereigncloudstack/cso-staging",
context = "./.tiltbuild/",
dockerfile_contents = tilt_dockerfile_header_cso,
target = "tilt",
entrypoint = entrypoint,
only = "manager",
live_update = [
sync(".tiltbuild/manager", "/manager"),
],
ignore = ["templates"],
)
k8s_yaml(blob(yaml))
k8s_resource(workload = "cso-controller-manager", labels = ["CSO"])
k8s_resource(
objects = [
"cso-system:namespace",
"clusterstackreleases.clusterstack.x-k8s.io:customresourcedefinition",
"clusterstacks.clusterstack.x-k8s.io:customresourcedefinition",
"cso-controller-manager:serviceaccount",
"cso-leader-election-role:role",
"cso-manager-role:clusterrole",
"cso-leader-election-rolebinding:rolebinding",
"cso-manager-rolebinding:clusterrolebinding",
"cso-serving-cert:certificate",
"cso-cluster-stack-variables:secret",
"cso-selfsigned-issuer:issuer",
#"cso-validating-webhook-configuration:validatingwebhookconfiguration",
],
new_name = "cso-misc",
labels = ["CSO"],
)

def clusterstack():
k8s_resource(objects = ["clusterstack:clusterstack"], new_name = "clusterstack", labels = ["CLUSTERSTACK"])

def base64_encode(to_encode):
encode_blob = local("echo '{}' | tr -d '\n' | base64 - | tr -d '\n'".format(to_encode), quiet = True)
return str(encode_blob)

def base64_encode_file(path_to_encode):
encode_blob = local("cat {} | tr -d '\n' | base64 - | tr -d '\n'".format(path_to_encode), quiet = True)
return str(encode_blob)

def read_file_from_path(path_to_read):
str_blob = local("cat {} | tr -d '\n'".format(path_to_read), quiet = True)
return str(str_blob)

def base64_decode(to_decode):
decode_blob = local("echo '{}' | base64 --decode -".format(to_decode), quiet = True)
return str(decode_blob)

def ensure_envsubst():
if not os.path.exists(envsubst_cmd):
local("make {}".format(os.path.abspath(envsubst_cmd)))

def ensure_kustomize():
if not os.path.exists(kustomize_cmd):
local("make {}".format(os.path.abspath(kustomize_cmd)))

def kustomizesub(folder):
yaml = local("hack/kustomize-sub.sh {}".format(folder), quiet = True)
return yaml

def waitforsystem():
local("kubectl wait --for=condition=ready --timeout=300s pod --all -n capi-kubeadm-bootstrap-system")
local("kubectl wait --for=condition=ready --timeout=300s pod --all -n capi-kubeadm-control-plane-system")
local("kubectl wait --for=condition=ready --timeout=300s pod --all -n capi-system")

def deploy_observability():
k8s_yaml(blob(str(local("{} build {}".format(kustomize_cmd, "./hack/observability/"), quiet = True))))

k8s_resource(workload = "promtail", extra_pod_selectors = [{"app": "promtail"}], labels = ["observability"])
k8s_resource(workload = "loki", extra_pod_selectors = [{"app": "loki"}], labels = ["observability"])
k8s_resource(workload = "grafana", port_forwards = "3000", extra_pod_selectors = [{"app": "grafana"}], labels = ["observability"])

##############################
# Actual work happens here
##############################
ensure_envsubst()
ensure_kustomize()

include_user_tilt_files()

load("ext://cert_manager", "deploy_cert_manager")

if settings.get("deploy_cert_manager"):
deploy_cert_manager()

if settings.get("deploy_observability"):
deploy_observability()

deploy_capi()

deploy_capd()

deploy_cso()

clusterstack()

waitforsystem()

prepare_environment()

## TODO
cmd_button(
    "create workload cluster",
    argv = ["make", "create-workload-cluster-docker"],
    location = location.NAV,
    icon_name = "add_circle",
)
26 changes: 26 additions & 0 deletions config/cso/cluster.yaml
@@ -0,0 +1,26 @@
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: "${CLUSTER_NAME}"
namespace: ${NAMESPACE}
spec:
clusterNetwork:
services:
cidrBlocks: ["10.128.0.0/12"]
pods:
cidrBlocks: ["192.168.0.0/16"]
serviceDomain: "cluster.local"
topology:
class: docker-ferrol-1-27-v1
controlPlane:
metadata: {}
replicas: 1
variables:
- name: imageRepository
value: ""
version: v1.27.3
workers:
machineDeployments:
- class: workeramd64
name: md-0
replicas: 1
