diff --git a/.changes/unreleased/operator-Added-20260323-connect-crd.yaml b/.changes/unreleased/operator-Added-20260323-connect-crd.yaml
new file mode 100644
index 000000000..36cefc7e8
--- /dev/null
+++ b/.changes/unreleased/operator-Added-20260323-connect-crd.yaml
@@ -0,0 +1,2 @@
+kind: Added
+body: Add Pipeline CRD and controller for managing Redpanda Connect pipelines via the operator, gated by enterprise license validation.
diff --git a/.changes/unreleased/operator-Changed-20260323-connect-default.yaml b/.changes/unreleased/operator-Changed-20260323-connect-default.yaml
new file mode 100644
index 000000000..b318cb288
--- /dev/null
+++ b/.changes/unreleased/operator-Changed-20260323-connect-default.yaml
@@ -0,0 +1,17 @@
+project: operator
+kind: Changed
+body: |
+  The Redpanda Connect controller can now be enabled via the operator Helm chart
+  by setting `connectController.enabled: true`. The controller is disabled by default.
+  The enterprise license is configured at the operator level via
+  `enterprise.licenseSecretRef` in the Helm chart values.
+
+  To enable the Connect controller:
+
+  ```
+  helm install redpanda-operator redpanda/operator \
+    --set connectController.enabled=true \
+    --set enterprise.licenseSecretRef.name=redpanda-license \
+    --set enterprise.licenseSecretRef.key=license
+  ```
+time: 2026-03-23T15:00:00.000000-04:00
diff --git a/CLAUDE.md b/CLAUDE.md
index d19ec424d..7d4e79f08 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -18,13 +18,25 @@ This is a Go monorepo using `go.work` with multiple modules:
 - **Task runner**: [go-task](https://taskfile.dev/) via `Taskfile.yml` with includes from `taskfiles/`
 - **CI**: Buildkite (`.buildkite/pipeline.yml` → `.buildkite/testsuite.yml`)
-- **Nix**: `flake.nix` provides the dev environment. CI runs all commands inside a nix container via `ci/scripts/run-in-nix-docker.sh`
+- **Nix**: `flake.nix` provides the dev environment. CI runs all commands inside a nix container via `ci/scripts/run-in-nix-docker.sh`. If `nix` is not on `$PATH`, it can be found at `/nix/var/nix/profiles/default/bin/nix`.
 - **Code generation**: Go source is transpiled to Helm templates via `gotohelm`, JSON schemas are produced by `gen schema`, and Go partials by `gen partial`. **Do not invoke these tools directly.** Instead, use `nix develop -c task generate` which runs all generators in the correct order and matches CI. For CRD/RBAC regeneration specifically, use `nix develop -c task k8s:generate`.
+
+### IMPORTANT: Never hand-edit generated files
+
+The following files are **machine-generated** and must only be updated by running `nix develop -c task generate`:
+- `operator/api/redpanda/v1alpha2/zz_generated.deepcopy.go` — DeepCopy functions
+- `operator/api/redpanda/v1alpha2/testdata/crd-docs.adoc` — CRD reference documentation
+- `operator/config/crd/bases/*.yaml` — CRD OpenAPI schemas
+- `operator/config/rbac/bases/` — RBAC role definitions from kubebuilder markers
+- `operator/chart/files/rbac/*.ClusterRole.yaml` — RBAC files copied from `config/rbac/itemized/`
+- `operator/chart/templates/*.tpl` — Helm templates transpiled from Go source
+
+**Never** attempt to reconstruct these files from CI diffs or hand-edit them to match expected output. Always run `task generate` (via nix) to regenerate them. Note: `operator/config/rbac/itemized/*.yaml` files are generated by `controller-gen` for controllers listed in the `for` loop in `taskfiles/k8s.yml`. If a controller is not in that loop, its itemized RBAC is manually maintained.
+
 ## CI Lint Flow
 
 The CI lint step (`taskfiles/ci.yml`) runs:
-1. `task :generate` — regenerates ALL generated files (CRDs, RBAC, templates, schemas, partials, licenses, changelog, buildkite pipelines, then `lint-fix`)
+1. `task :generate` — regenerates ALL generated files (CRDs, RBAC, templates, schemas, partials, licenses, changelog, buildkite pipelines, then `lint-fix` which runs `gci` import ordering)
 2. `task :lint` — runs `golangci-lint run`, `helm lint --strict`, and `actionlint`
 3. `git diff --exit-code` — fails if any generated file doesn't match what's committed
 
@@ -34,15 +46,58 @@ The CI lint step (`taskfiles/ci.yml`) runs:
 - Changing kubebuilder RBAC markers without running `controller-gen`
 - Import ordering violations caught by `gci` formatter
 
+### Local lint workflow
+
+Always use `task generate` (not `task k8s:generate`) as the final step before committing. The full `task generate` includes `lint-fix`, which runs `gci` to fix import ordering. Running only `task k8s:generate` regenerates CRDs and RBAC but does **not** run `gci`, so generated files like `zz_generated.deprecations_test.go` will have incorrect import ordering that fails CI.
+
+**Correct order:**
+```bash
+# 1. Regenerate everything (CRDs, RBAC, templates, AND lint-fix/gci)
+nix develop -c task generate
+
+# 2. Update golden files if needed (chart templates, controller tests)
+nix develop -c go test ./operator/chart/... -run TestTemplate -update-golden
+nix develop -c go test ./operator/internal/controller/pipeline/... -run TestRender_GoldenFiles -update-golden
+
+# 3. Run tests
+nix develop -c go test ./operator/internal/controller/pipeline/...
+
+# 4. Verify lint passes
+nix develop -c task lint
+```
+
+**Common mistake:** Running `task k8s:generate`, manually fixing `gci` import ordering, then running `task k8s:generate` again (which regenerates the file and reverts the manual fix). Use `task generate` instead — it runs `k8s:generate` followed by `lint-fix` in the correct order.
+
 ## Golden Test Files
 
 Multiple test suites use golden file comparison. To regenerate expected output instead of asserting, pass `-update-golden`:
 
 ```bash
-nix develop -c go test ./path/to/... -update-golden
+nix --extra-experimental-features 'nix-command flakes' develop -c go test ./path/to/... -update-golden
 ```
 
-Note: Chart template tests (`TestTemplate`) use `-update` instead of `-update-golden`.
+### Chart template golden tests (`TestTemplate`)
+
+Chart template tests in `operator/chart/` compare rendered Helm output against `operator/chart/testdata/template-cases.golden.txtar`. The input test cases are defined in `operator/chart/testdata/template-cases.txtar`.
+
+**Regenerating the golden archive:** Use `-update-golden`. This handles both refreshing existing entries and creating golden entries for new test cases added to `template-cases.txtar`.
+```bash
+nix --extra-experimental-features 'nix-command flakes' develop -c go test ./operator/chart/... -run TestTemplate -update-golden
+```
+
+**After rebasing:** If the chart version changed (e.g., `v25.3.1` → `v26.1.1`), the golden file will contain stale version strings throughout. `-update-golden` rewrites the archive from scratch, so a single run is sufficient; on very large diffs, a bulk `sed` beforehand can speed up the initial comparison pass. Note that `sed -i ''` is the BSD/macOS form; GNU `sed` takes `-i` with no argument:
+```bash
+sed -i '' 's/v25\.3\.1/v26.1.1/g; s/operator-25\.3\.1/operator-26.1.1/g' operator/chart/testdata/template-cases.golden.txtar
+nix --extra-experimental-features 'nix-command flakes' develop -c go test ./operator/chart/... -run TestTemplate -update-golden
+```
+
+### Gotohelm transpiler limitations
+
+The `gotohelm` transpiler converts Go chart source (`operator/chart/*.go`) into Helm templates (`.tpl` files). It supports a subset of Go — notably, **`strings.Join` is not supported** and will cause a panic. Use manual string concatenation instead. Check the gotohelm source or existing chart code for supported functions.
+
+### RBAC file generation
+
+The `task k8s:generate` step regenerates chart RBAC files by running `rm chart/files/rbac/*.yaml` and then re-splitting from `config/rbac/itemized/*.yaml`. If you add a new itemized RBAC file, you **must** also add it to the file list in `taskfiles/k8s.yml` (search for `chart/files/rbac/`); otherwise it will be deleted on every `task generate` run.
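Because `strings.Join` panics under the transpiler, chart code has to build delimited strings by hand. Below is a dependency-free sketch of that pattern — the `join` helper is illustrative, not taken from the chart source, and even plain Go may use constructs gotohelm rejects, so check existing chart code before copying:

```go
package main

import "fmt"

// join concatenates parts with sep using a range loop and string
// concatenation instead of strings.Join, which gotohelm cannot transpile.
func join(sep string, parts []string) string {
	out := ""
	for i, p := range parts {
		if i > 0 {
			out += sep
		}
		out += p
	}
	return out
}

func main() {
	fmt.Println(join(",", []string{"broker-0:9093", "broker-1:9093"}))
	// broker-0:9093,broker-1:9093
}
```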
 ### Lifecycle golden tests
 
@@ -186,12 +241,12 @@ Each releasable project has a changie key used in commands:
 6. **Update golden test files** to reflect version changes:
    ```bash
    # Operator chart golden files
-   nix develop -c go test github.com/redpanda-data/redpanda-operator/operator/chart -run TestTemplate -update
+   nix develop -c go test github.com/redpanda-data/redpanda-operator/operator/chart -run TestTemplate -update-golden
    # Redpanda chart golden files
-   nix develop -c go test github.com/redpanda-data/redpanda-operator/charts/redpanda/... -run TestTemplate -update
+   nix develop -c go test github.com/redpanda-data/redpanda-operator/charts/redpanda/... -run TestTemplate -update-golden
    ```
-   Note: The flag is `-update`, not `-update-golden`, for chart template tests.
+   Note: Chart template tests use `-update-golden` to regenerate the txtar archives.
 
 7. **Run unit tests and lint** to verify:
    ```bash
@@ -222,30 +277,130 @@ For a **charts/redpanda** release (e.g. `v25.1.4`):
 
 ## Common Commands
 
-All commands should be run inside the nix devshell to ensure correct tool versions and environment variables. Prefix commands with `nix develop -c` or enter the shell with `nix develop`.
+All commands should be run inside the nix devshell to ensure correct tool versions and environment variables. Because `nix develop` is an experimental command, it must be enabled with `--extra-experimental-features 'nix-command flakes'`. The examples below spell out the full invocation; if these features are already enabled in your `nix.conf`, plain `nix develop` works as well.
 
 ```bash
 # Enter nix devshell (recommended for interactive work)
-nix develop
+nix --extra-experimental-features 'nix-command flakes' develop
 
 # Or prefix individual commands
-nix develop -c go build ./operator/...
+nix --extra-experimental-features 'nix-command flakes' develop -c go build ./operator/...
 
 # Build all
-nix develop -c bash -c 'go build ./operator/... && go build ./charts/console/... && go build ./charts/redpanda/...'
+nix --extra-experimental-features 'nix-command flakes' develop -c bash -c 'go build ./operator/... && go build ./charts/console/... && go build ./charts/redpanda/...'
 
 # Run unit tests (envtest is configured by the devshell)
-nix develop -c task test:unit
+nix --extra-experimental-features 'nix-command flakes' develop -c task test:unit
 
 # Run chart template tests
-nix develop -c bash -c 'helm dep build charts/redpanda/chart && go test ./charts/redpanda/... -run TestTemplate'
+nix --extra-experimental-features 'nix-command flakes' develop -c bash -c 'helm dep build charts/redpanda/chart && go test ./charts/redpanda/... -run TestTemplate'
 
 # Regenerate ALL generated files (preferred — matches CI)
-nix develop -c task generate
+nix --extra-experimental-features 'nix-command flakes' develop -c task generate
 
 # Run golangci-lint (v2 format)
-nix develop -c task lint
+nix --extra-experimental-features 'nix-command flakes' develop -c task lint
 
 # Update golden files (prefer -update-golden)
-nix develop -c go test ./path/to/... -update-golden
+nix --extra-experimental-features 'nix-command flakes' develop -c go test ./path/to/... -update-golden
 ```
+
+## Creating a New CRD
+
+When adding a new Custom Resource Definition to the operator, follow this checklist to ensure it integrates properly with all repository conventions.
+
+### 1. Define Types (`operator/api/redpanda/v1alpha2/`)
+- Define the CRD types in a `_types.go` file.
+- Use **typed constants** for status phases (e.g., `type FooPhase string` with `const FooPhaseRunning FooPhase = "Running"`).
+- Define **named constants** for all condition types and reasons (e.g., `FooConditionReady`, `FooReasonFailed`). Never use bare string literals for conditions.
+- Register the type in `zz_generated.register.go` (or ensure code generation picks it up).
+- Run `nix develop -c task k8s:generate` to regenerate CRD YAML, deep copy, and RBAC.
+
+### 2. Controller (`operator/internal/controller/<name>/`)
+- Use **`kube.Ctl`** (from `common-go/kube`) as the primary client — not `client.Client` directly.
+- Use **server-side apply (SSA)** via `ctl.Apply()` and `ctl.ApplyStatus()` instead of `CreateOrPatch` / `Update`.
+- Use **`kube.Syncer`** for managing child resources. This handles ownership labels, GC, and SSA in one place.
+- **Externalize resource rendering** to a `render` struct implementing `kube.Renderer` (with `Types()` and `Render()` methods) in a separate file (e.g., `render.go`). Avoid inlining Deployment/ConfigMap specs in the reconciler.
+- **Never swallow status update errors.** Always return or propagate errors from `ApplyStatus`.
+- Use the **`utils.StatusConditionConfigs`** helper for SSA-compatible condition merging.
+
+### 3. RBAC
+- Add kubebuilder RBAC markers to the controller (e.g., `// +kubebuilder:rbac:groups="",resources=pods,verbs=get;list;watch`).
+- Add the controller to the `controller-gen` RBAC generation loop in `taskfiles/k8s.yml` (search for the `for:` block with `NAME`/`PATH` entries). This generates `operator/config/rbac/itemized/<name>.yaml` from the kubebuilder RBAC markers. Also add the itemized file to the chart copy list in the same file (search for `chart/files/rbac/`) so it is synced to `operator/chart/files/rbac/<name>.ClusterRole.yaml`; files missing from that list are deleted on every `task k8s:generate` run.
+- Run `nix develop -c task k8s:generate` — this copies itemized RBAC files to `operator/chart/files/rbac/<name>.ClusterRole.yaml` automatically.
+- Add the RBAC file to the appropriate bundle in `operator/chart/rbac.go` (gated by a feature flag if applicable).
+- **After any RBAC change**, regenerate chart golden files to pick up the new ClusterRole rules:
+  ```bash
+  nix develop -c go test ./operator/chart/... -run TestTemplate -update-golden
+  ```
+  Failure to do this causes `TestTemplate` failures in CI for any test case that renders RBAC (e.g., `connect-controller-enabled`, `common-annotations`).
+
+### 4. CRD Installation
+- Add the CRD to the `stableCRDs` (or `experimentalCRDs`) list in `operator/cmd/crd/crd.go`.
+- Ensure the CRD accessor function exists in `operator/config/crd/bases/crds.go`.
+
+### 5. Helm Chart Integration
+- Add any new values (e.g., feature flags) to `operator/chart/values.go`, `values.yaml`, and `values.schema.json`.
+- Wire the flag to the operator Deployment args in `operator/chart/deployment.go`.
+- Add at least one **template rendering test case** in `operator/chart/testdata/template-cases.txtar`.
+- Run `nix develop -c task generate` to regenerate templates and partials.
+
+### 6. Controller Registration
+- Register the controller in `operator/cmd/run/run.go`, gated behind a feature flag if applicable.
+- Create `kube.Ctl` with the same pattern as other controllers (cache reader, field manager).
+
+### 7. Tests
+- **Reconciler tests**: Use `kubetest.NewEnv()` with `controller.UnifiedScheme` to get a `*kube.Ctl` for tests. Test both the reconciler (apply CR, reconcile, check status/child resources) and the render logic.
+- **License/validation tests**: If the feature is gated, test all validation paths.
+- **Helm rendering tests**: Add test cases for the feature flag in `template-cases.txtar` and regenerate golden files.
+- **Acceptance tests**: Add at least one `.feature` file in `acceptance/features/` with step definitions in `acceptance/steps/`. Register steps in `acceptance/steps/register.go`. Enable the feature in `acceptance/main_test.go`.
+
+### 8. Changelog
+- Add a changie entry: `nix develop -c changie new -j operator`
+
+## ClusterRef — Connecting CRDs to Redpanda Clusters
+
+CRDs that need to communicate with a Redpanda cluster use the `ClusterSource` type (`operator/api/redpanda/v1alpha2/common.go`), which supports two modes:
+- **`clusterRef`** — references an operator-managed `Redpanda` CR by name. The operator resolves broker addresses, TLS certificates, and SASL credentials automatically.
+- **`staticConfiguration`** — explicit connection details (brokers, TLS, SASL) for clusters not managed by the operator.
+
+### How ClusterRef resolution works
+
+Controllers resolve a `clusterRef` by converting the Redpanda CR into a `RenderState` via `conversion.ConvertV2ToRenderState()`, then calling `state.AsStaticConfigSource()` to extract an `ir.StaticConfigurationSource` containing:
+- Kafka broker addresses (internal DNS: `{fullname}-{ordinal}.{service}.{namespace}.svc.cluster.local:{port}`)
+- TLS CA certificate references (from `{fullname}-{name}-root-certificate` or a custom `secretRef`)
+- SASL bootstrap user credentials (from the `{fullname}-bootstrap-user` Secret)
+
+This is the same pattern used by the Console controller (`operator/internal/controller/console/controller.go:414-450`) and the Pipeline controller (`operator/internal/controller/pipeline/cluster.go`).
+
+### Watching referenced clusters
+
+Controllers should watch Redpanda CRs so that changes to a cluster trigger re-reconciliation of referencing resources. Two patterns exist:
+- **Multicluster controllers** (Console, Topic, User, etc.): Use `controller.RegisterClusterSourceIndex()` from `operator/internal/controller/index.go` with `multicluster.Manager`.
+- **Single-cluster controllers** (Pipeline): Use standard controller-runtime field indexing with `mgr.GetFieldIndexer().IndexField()` and `builder.Watches()` with `handler.EnqueueRequestsFromMapFunc()`.
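The internal DNS pattern quoted above can be made concrete with a small helper. This is an illustrative, standalone sketch — the operator's actual address construction lives behind `AsStaticConfigSource()` and is not reproduced here:

```go
package main

import "fmt"

// internalBrokerAddr assembles the in-cluster Kafka address following the
// documented pattern:
//   {fullname}-{ordinal}.{service}.{namespace}.svc.cluster.local:{port}
func internalBrokerAddr(fullname, service, namespace string, ordinal, port int) string {
	return fmt.Sprintf("%s-%d.%s.%s.svc.cluster.local:%d",
		fullname, ordinal, service, namespace, port)
}

func main() {
	// Hypothetical values for a cluster named "basic" in namespace "redpanda".
	fmt.Println(internalBrokerAddr("basic", "basic", "redpanda", 0, 9093))
	// basic-0.basic.redpanda.svc.cluster.local:9093
}
```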
+
+## ValueSource — Secrets and External Secret Providers
+
+The `ValueSource` type (`operator/api/redpanda/v1alpha2/common.go:187`) is the standard way to reference sensitive values across all CRDs. It supports four sources:
+
+| Source | Field | Description |
+|--------|-------|-------------|
+| Kubernetes Secret | `secretKeyRef` | References a key in a K8s Secret |
+| ConfigMap | `configMapKeyRef` | References a key in a K8s ConfigMap |
+| Inline | `inline` | Raw string value (avoid for production secrets) |
+| External secret provider | `externalSecretRef` | AWS Secrets Manager, GCP Secret Manager, or Azure Key Vault via the operator's native `CloudExpander` (`pkg/secrets/secrets.go`) |
+
+### Cloud secret provider configuration
+
+The `CloudExpander` is configured at operator startup via CLI flags:
+- `--cloud-secret-aws-region` — enables AWS Secrets Manager
+- `--cloud-secret-gcp-project-id` — enables GCP Secret Manager
+- `--cloud-secret-azure-key-vault-uri` — enables Azure Key Vault
+
+Controllers that need to resolve `ValueSource` at reconcile time (e.g., Topic, User, Role) receive a `CloudExpander` via the `ClientFactory`. Controllers that inject `ValueSource` into pod env vars (e.g., Pipeline) convert `secretKeyRef`/`configMapKeyRef`/`inline` directly to `corev1.EnvVar` fields — `externalSecretRef` requires ESO or the cloud expander to sync the value into a K8s Secret first.
+
+### When to use ValueSource vs corev1.SecretKeySelector
+
+- Use `ValueSource` for any field that holds a secret or sensitive value (passwords, tokens, API keys). This ensures compatibility with external secret providers.
+- Use `corev1.SecretKeySelector` only for internal references where external secret support is not needed (e.g., referencing operator-created Secrets like the bootstrap user password).
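The direct-injection path described above can be sketched as follows. The struct fields mirror the documented `ValueSource` sources, but the types here are minimal stand-ins for the real `corev1`/API types, and `toEnvVar` is a hypothetical helper rather than the Pipeline controller's actual code:

```go
package main

import "fmt"

// Minimal stand-ins for the corev1 types involved (illustrative only).
type SecretKeySelector struct{ Name, Key string }
type ConfigMapKeySelector struct{ Name, Key string }

type EnvVarSource struct {
	SecretKeyRef    *SecretKeySelector
	ConfigMapKeyRef *ConfigMapKeySelector
}

type EnvVar struct {
	Name      string
	Value     string
	ValueFrom *EnvVarSource
}

// ValueSource mirrors the fields in the table above.
type ValueSource struct {
	SecretKeyRef    *SecretKeySelector
	ConfigMapKeyRef *ConfigMapKeySelector
	Inline          *string
}

// toEnvVar shows the direct-injection path: secretKeyRef/configMapKeyRef
// become valueFrom references, inline becomes a literal value.
// externalSecretRef is deliberately unsupported here — it must be synced
// into a Kubernetes Secret first.
func toEnvVar(name string, v ValueSource) (EnvVar, error) {
	switch {
	case v.SecretKeyRef != nil:
		return EnvVar{Name: name, ValueFrom: &EnvVarSource{SecretKeyRef: v.SecretKeyRef}}, nil
	case v.ConfigMapKeyRef != nil:
		return EnvVar{Name: name, ValueFrom: &EnvVarSource{ConfigMapKeyRef: v.ConfigMapKeyRef}}, nil
	case v.Inline != nil:
		return EnvVar{Name: name, Value: *v.Inline}, nil
	}
	return EnvVar{}, fmt.Errorf("no supported source set on ValueSource %q", name)
}

func main() {
	inline := "s3cret"
	ev, _ := toEnvVar("PASSWORD", ValueSource{Inline: &inline})
	fmt.Println(ev.Name, ev.Value)
	// PASSWORD s3cret
}
```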
diff --git a/acceptance/features/console-upgrades.feature b/acceptance/features/console-upgrades.feature index 7b06d3afb..a68fd4cab 100644 --- a/acceptance/features/console-upgrades.feature +++ b/acceptance/features/console-upgrades.feature @@ -2,7 +2,7 @@ Feature: Upgrading the operator with Console installed @skip:gke @skip:aks @skip:eks Scenario: Console v2 to v3 no warnings - Given I helm install "redpanda-operator" "redpanda/operator" --version v25.1.3 with values: + Given I helm install "redpanda-operator" "redpanda/operator" --version v25.3.1 with values: """ image: repository: redpandadata/redpanda-operator @@ -49,7 +49,7 @@ Feature: Upgrading the operator with Console installed @skip:gke @skip:aks @skip:eks Scenario: Console v2 to v3 with warnings - Given I helm install "redpanda-operator" "redpanda/operator" --version v25.1.3 with values: + Given I helm install "redpanda-operator" "redpanda/operator" --version v25.3.1 with values: """ image: repository: redpandadata/redpanda-operator diff --git a/acceptance/features/operator-upgrades.feature b/acceptance/features/operator-upgrades.feature index 4531f335b..2505bdcea 100644 --- a/acceptance/features/operator-upgrades.feature +++ b/acceptance/features/operator-upgrades.feature @@ -1,8 +1,8 @@ @vcluster Feature: Upgrading the operator @skip:gke @skip:aks @skip:eks - Scenario: Operator upgrade from 25.2.2 - Given I helm install "redpanda-operator" "redpanda/operator" --version v25.2.2 with values: + Scenario: Operator upgrade from 25.3.1 + Given I helm install "redpanda-operator" "redpanda/operator" --version v25.3.1 with values: """ image: repository: redpandadata/redpanda-operator diff --git a/acceptance/features/pipeline-crds.feature b/acceptance/features/pipeline-crds.feature new file mode 100644 index 000000000..d18f4c8e3 --- /dev/null +++ b/acceptance/features/pipeline-crds.feature @@ -0,0 +1,263 @@ +@cluster:basic +Feature: Pipeline CRDs + Background: Cluster available + Given cluster "basic" is 
available + + Scenario: Create and run a Pipeline + When I apply Kubernetes manifest: + """ + --- + apiVersion: cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: demo-pipeline + spec: + configYaml: | + input: + generate: + mapping: 'root.message = "hello world"' + interval: "5s" + output: + stdout: {} + replicas: 1 + """ + Then pipeline "demo-pipeline" is successfully running + + Scenario: Delete a Pipeline + When I apply Kubernetes manifest: + """ + --- + apiVersion: cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: delete-pipeline + spec: + configYaml: | + input: + generate: + mapping: 'root.message = "hello"' + interval: "5s" + output: + stdout: {} + replicas: 1 + """ + And pipeline "delete-pipeline" is successfully running + When I delete the CRD pipeline "delete-pipeline" + Then pipeline "delete-pipeline" does not exist + + Scenario: Update a Pipeline config + When I apply Kubernetes manifest: + """ + --- + apiVersion: cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: update-pipeline + spec: + configYaml: | + input: + generate: + mapping: 'root.message = "original"' + interval: "5s" + output: + stdout: {} + replicas: 1 + """ + And pipeline "update-pipeline" is successfully running + When I apply Kubernetes manifest: + """ + --- + apiVersion: cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: update-pipeline + spec: + configYaml: | + input: + generate: + mapping: 'root.message = "updated"' + interval: "5s" + output: + stdout: {} + replicas: 1 + """ + Then pipeline "update-pipeline" is successfully running + + Scenario: Stop a Pipeline + When I apply Kubernetes manifest: + """ + --- + apiVersion: cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: stop-pipeline + spec: + configYaml: | + input: + generate: + mapping: 'root.message = "hello"' + interval: "5s" + output: + stdout: {} + replicas: 1 + """ + And pipeline "stop-pipeline" is successfully running + When I apply Kubernetes 
manifest: + """ + --- + apiVersion: cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: stop-pipeline + spec: + configYaml: | + input: + generate: + mapping: 'root.message = "hello"' + interval: "5s" + output: + stdout: {} + replicas: 1 + paused: true + """ + Then pipeline "stop-pipeline" is stopped + + Scenario: Resume a stopped Pipeline + When I apply Kubernetes manifest: + """ + --- + apiVersion: cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: resume-pipeline + spec: + configYaml: | + input: + generate: + mapping: 'root.message = "hello"' + interval: "5s" + output: + stdout: {} + replicas: 1 + """ + And pipeline "resume-pipeline" is successfully running + When I apply Kubernetes manifest: + """ + --- + apiVersion: cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: resume-pipeline + spec: + configYaml: | + input: + generate: + mapping: 'root.message = "hello"' + interval: "5s" + output: + stdout: {} + replicas: 1 + paused: true + """ + And pipeline "resume-pipeline" is stopped + When I apply Kubernetes manifest: + """ + --- + apiVersion: cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: resume-pipeline + spec: + configYaml: | + input: + generate: + mapping: 'root.message = "hello"' + interval: "5s" + output: + stdout: {} + replicas: 1 + paused: false + """ + Then pipeline "resume-pipeline" is successfully running + + Scenario: Invalid Pipeline config detected by lint + When I apply Kubernetes manifest: + """ + --- + apiVersion: cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: invalid-pipeline + spec: + configYaml: | + input: + not_a_real_input: + mapping: 'root = "broken"' + output: + stdout: {} + replicas: 1 + """ + Then pipeline "invalid-pipeline" has invalid config + + Scenario: Pipeline produces to Redpanda via clusterRef + Given I create topic "pipeline-produce-test" in cluster "basic" + When I apply Kubernetes manifest: + """ + --- + apiVersion: 
cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: producer-pipeline + spec: + cluster: + clusterRef: + name: basic + configYaml: | + input: + generate: + count: 0 + interval: "1s" + mapping: 'root.message = "hello from pipeline"' + output: + redpanda: + seed_brokers: + - "${RPK_BROKERS}" + tls: + enabled: ${RPK_TLS_ENABLED} + root_cas_file: "${RPK_TLS_ROOT_CAS_FILE}" + topic: "pipeline-produce-test" + replicas: 1 + """ + Then pipeline "producer-pipeline" is successfully running + And topic "pipeline-produce-test" has messages in cluster "basic" + + Scenario: Pipeline reads from Redpanda via clusterRef + Given I create topic "pipeline-consume-test" in cluster "basic" + And I produce messages to "pipeline-consume-test" in cluster "basic" + When I apply Kubernetes manifest: + """ + --- + apiVersion: cluster.redpanda.com/v1alpha2 + kind: Pipeline + metadata: + name: consumer-pipeline + spec: + cluster: + clusterRef: + name: basic + configYaml: | + input: + redpanda: + seed_brokers: + - "${RPK_BROKERS}" + tls: + enabled: ${RPK_TLS_ENABLED} + root_cas_file: "${RPK_TLS_ROOT_CAS_FILE}" + topics: + - "pipeline-consume-test" + consumer_group: "pipeline-consumer-group" + output: + drop: {} + replicas: 1 + """ + Then pipeline "consumer-pipeline" is successfully running diff --git a/acceptance/go.sum b/acceptance/go.sum index 3821ea69f..c90c3ec55 100644 --- a/acceptance/go.sum +++ b/acceptance/go.sum @@ -606,6 +606,8 @@ github.com/redpanda-data/common-go/goldenfile v0.0.0-20260109170727-1dd9f5d22ee1 github.com/redpanda-data/common-go/goldenfile v0.0.0-20260109170727-1dd9f5d22ee1/go.mod h1:V3OBV2kcF/BDDytUZuKvIygbaXoGPT5VO3KmMAz+mBM= github.com/redpanda-data/common-go/kube v0.0.0-20260408144400-efba9928bb27 h1:735zfoMDegKzXO+mipLeEJhjvoboMiOwlZiEfWTs9IY= github.com/redpanda-data/common-go/kube v0.0.0-20260408144400-efba9928bb27/go.mod h1:87/jKBvBse9m7PBwCxzISdxOpHblNKTqxZZNa1U1utM= +github.com/redpanda-data/common-go/license v0.0.0-20260120073450-935d3dd3d6c1 
h1:6aPxMthcrAljux5bgqU78yHxM8BK1ITqh9G9H+s707U= +github.com/redpanda-data/common-go/license v0.0.0-20260120073450-935d3dd3d6c1/go.mod h1:F1fp8xVNS2UwWFosOjJ9+5jaEZnXSjB9AdHk2R9XlpI= github.com/redpanda-data/common-go/net v0.1.1-0.20240429123545-4da3d2b371f7 h1:MXLdjFdFjOtyuUR4TdVVsqFP8xnru2YDwzH9bJTUr1M= github.com/redpanda-data/common-go/net v0.1.1-0.20240429123545-4da3d2b371f7/go.mod h1:UJIi/yUxGOBYXUrfUsOkxfYxcb/ll7mZrwae/i+U2kc= github.com/redpanda-data/common-go/otelutil v0.0.0-20260413160920-df1679f86269 h1:tEFqrnhUNN08Ye6n1FxtFHqkrFRhA0PGs2917v4JAOk= diff --git a/acceptance/main_test.go b/acceptance/main_test.go index fb686e4c6..e96f989fa 100644 --- a/acceptance/main_test.go +++ b/acceptance/main_test.go @@ -25,6 +25,7 @@ import ( "github.com/redpanda-data/common-go/kube" "github.com/stretchr/testify/require" corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" "k8s.io/client-go/rest" "k8s.io/utils/ptr" @@ -96,6 +97,8 @@ var setupSuite = sync.OnceValues(func() (*framework.Suite, error) { "quay.io/jetstack/cert-manager-webhook:" + testutil.CertManagerVersion, "ghcr.io/loft-sh/kubernetes:v1.33.4", "ghcr.io/loft-sh/vcluster-pro:" + testutil.GetVClusterImageTag(), + // Connect image used by pipeline-crds feature. + redpandav1alpha2.PipelineDefaultImage, }...). 
WithSchemeFunctions(vectorizedv1alpha1.Install, redpandav1alpha1.Install, redpandav1alpha2.Install) @@ -225,25 +228,62 @@ func installSharedOperator(ctx context.Context, restConfig *rest.Config) error { return err } + values := operatorchart.PartialValues{ + LogLevel: ptr.To("trace"), + Image: &operatorchart.PartialImage{ + Tag: ptr.To(imageTag), + Repository: ptr.To(imageRepo), + }, + CRDs: &operatorchart.PartialCRDs{ + Enabled: ptr.To(true), + Experimental: ptr.To(true), + }, + VectorizedControllers: &operatorchart.PartialVectorizedControllers{ + Enabled: ptr.To(true), + }, + ConnectController: &operatorchart.PartialConnectController{ + Enabled: ptr.To(true), + }, + AdditionalCmdFlags: operatorCmdFlags(), + } + + // If an enterprise license is available, create a secret and configure + // the operator to use it. This is required for the Pipeline (Connect) + // controller which validates the license on every reconcile. + if license := os.Getenv(steps.LicenseEnvVar); license != "" { + c, err := client.New(restConfig, client.Options{}) + if err != nil { + return err + } + ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: sharedOperatorNamespace}} + if err := c.Create(ctx, ns); err != nil && !strings.Contains(err.Error(), "already exists") { + return err + } + secret := &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "redpanda-license", + Namespace: sharedOperatorNamespace, + }, + Data: map[string][]byte{ + "redpanda.license": []byte(license), + }, + } + if err := c.Create(ctx, secret); err != nil && !strings.Contains(err.Error(), "already exists") { + return err + } + values.Enterprise = &operatorchart.PartialEnterprise{ + LicenseSecretRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{Name: "redpanda-license"}, + Key: "redpanda.license", + }, + } + } + _, err = helmClient.Install(ctx, "../operator/chart", helm.InstallOptions{ Name: "redpanda-operator", Namespace: sharedOperatorNamespace, CreateNamespace: true, - 
Values: operatorchart.PartialValues{ - LogLevel: ptr.To("trace"), - Image: &operatorchart.PartialImage{ - Tag: ptr.To(imageTag), - Repository: ptr.To(imageRepo), - }, - CRDs: &operatorchart.PartialCRDs{ - Enabled: ptr.To(true), - Experimental: ptr.To(true), - }, - VectorizedControllers: &operatorchart.PartialVectorizedControllers{ - Enabled: ptr.To(true), - }, - AdditionalCmdFlags: operatorCmdFlags(), - }, + Values: values, }) // Tolerate "already installed" errors from rerun-fails retries where // the operator was installed in the first run. diff --git a/acceptance/steps/helpers.go b/acceptance/steps/helpers.go index ae9a0bbc7..f714dda71 100644 --- a/acceptance/steps/helpers.go +++ b/acceptance/steps/helpers.go @@ -252,7 +252,6 @@ func (c *clusterClients) ExpectTopic(ctx context.Context, topic string) { t.Logf("Checking that topic %q exists in cluster %q", topic, c.cluster) c.checkTopic(ctx, topic, true, fmt.Sprintf("Topic %q does not exist in cluster %q", topic, c.cluster)) - t.Logf("Found topic %q in cluster %q", topic, c.cluster) } func (c *clusterClients) ExpectNoTopic(ctx context.Context, topic string) { @@ -260,7 +259,6 @@ func (c *clusterClients) ExpectNoTopic(ctx context.Context, topic string) { t.Logf("Checking that topic %q does not exist in cluster %q", topic, c.cluster) c.checkTopic(ctx, topic, false, fmt.Sprintf("Topic %q still exists in cluster %q", topic, c.cluster)) - t.Logf("Found no topic %q in cluster %q", topic, c.cluster) } // Enable experimental feature support. 
@@ -314,7 +312,7 @@ func (c *clusterClients) checkTopic(ctx context.Context, topic string, exists bo require.NoError(t, topics.Error()) return exists == topics.Has(topic) - }, 10*time.Second, 1*time.Second, message) { + }, 30*time.Second, 2*time.Second, message) { t.Errorf("Final list of topics: %v", topics.Names()) } } diff --git a/acceptance/steps/manifest.go b/acceptance/steps/manifest.go index 7464dc064..1f98c4537 100644 --- a/acceptance/steps/manifest.go +++ b/acceptance/steps/manifest.go @@ -103,6 +103,13 @@ func PatchManifest(t framework.TestingT, content string) string { return t.Namespace() } + // Pass through Redpanda Connect runtime env var interpolations + // (e.g., ${RPK_BROKERS}) that are resolved inside the container, + // not by the test framework. + if strings.HasPrefix(key, "RPK_") { + return match + } + t.Fatalf("unhandled expansion: %s", key) return "UNREACHABLE" }) diff --git a/acceptance/steps/pipelines.go b/acceptance/steps/pipelines.go new file mode 100644 index 000000000..8b853939b --- /dev/null +++ b/acceptance/steps/pipelines.go @@ -0,0 +1,131 @@ +// Copyright 2026 Redpanda Data, Inc. 
+// +// Use of this software is governed by the Business Source License +// included in the file licenses/BSL.md +// +// As of the Change Date specified in that file, in accordance with +// the Business Source License, use of this software will be governed +// by the Apache License, Version 2.0 + +package steps + +import ( + "context" + "time" + + "github.com/stretchr/testify/require" + "github.com/twmb/franz-go/pkg/kgo" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + framework "github.com/redpanda-data/redpanda-operator/harpoon" + redpandav1alpha2 "github.com/redpanda-data/redpanda-operator/operator/api/redpanda/v1alpha2" +) + +func pipelineIsSuccessfullyRunning(ctx context.Context, t framework.TestingT, name string) { + var pipeline redpandav1alpha2.Pipeline + require.NoError(t, t.Get(ctx, t.ResourceKey(name), &pipeline)) + + waitForCondition(ctx, t, &pipeline, metav1.Condition{ + Type: redpandav1alpha2.PipelineConditionReady, + Status: metav1.ConditionTrue, + Reason: redpandav1alpha2.PipelineReasonRunning, + }, func() []metav1.Condition { + return pipeline.Status.Conditions + }) + + require.Equal(t, redpandav1alpha2.PipelinePhaseRunning, pipeline.Status.Phase) +} + +func pipelineIsStopped(ctx context.Context, t framework.TestingT, name string) { + var pipeline redpandav1alpha2.Pipeline + require.NoError(t, t.Get(ctx, t.ResourceKey(name), &pipeline)) + + waitForCondition(ctx, t, &pipeline, metav1.Condition{ + Type: redpandav1alpha2.PipelineConditionReady, + Status: metav1.ConditionTrue, + Reason: redpandav1alpha2.PipelineReasonPaused, + }, func() []metav1.Condition { + return pipeline.Status.Conditions + }) + + require.Equal(t, redpandav1alpha2.PipelinePhaseStopped, pipeline.Status.Phase) +} + +func iDeleteTheCRDPipeline(ctx context.Context, t framework.TestingT, name string) { + var pipeline redpandav1alpha2.Pipeline + + t.Logf("Deleting pipeline %q", name) + err := t.Get(ctx, t.ResourceKey(name), &pipeline) + 
if err != nil { + if apierrors.IsNotFound(err) { + t.Logf("Pipeline %q already deleted", name) + return + } + t.Fatalf("Error getting pipeline %q for deletion: %v", name, err) + } + + t.Logf("Found pipeline %q, deleting it", name) + require.NoError(t, t.Delete(ctx, &pipeline)) + t.Logf("Successfully deleted pipeline %q CRD", name) +} + +func pipelineDoesNotExist(ctx context.Context, t framework.TestingT, name string) { + var pipeline redpandav1alpha2.Pipeline + require.Eventually(t, func() bool { + err := t.Get(ctx, t.ResourceKey(name), &pipeline) + return apierrors.IsNotFound(err) + }, 2*time.Minute, 2*time.Second, "Pipeline %q should not exist", name) +} + +func pipelineHasInvalidConfig(ctx context.Context, t framework.TestingT, name string) { + var pipeline redpandav1alpha2.Pipeline + require.NoError(t, t.Get(ctx, t.ResourceKey(name), &pipeline)) + + waitForCondition(ctx, t, &pipeline, metav1.Condition{ + Type: redpandav1alpha2.PipelineConditionConfigValid, + Status: metav1.ConditionFalse, + Reason: redpandav1alpha2.PipelineReasonConfigInvalid, + }, func() []metav1.Condition { + return pipeline.Status.Conditions + }) +} + +func topicHasMessagesInCluster(ctx context.Context, t framework.TestingT, topic, cluster string) { + clients := clientsForCluster(ctx, cluster) + clients.ExpectTopic(ctx, topic) + + kafkaClient := clients.Kafka(ctx) + defer kafkaClient.Close() + + consumerClient, err := kgo.NewClient(append(kafkaClient.Opts(), + kgo.ConsumeTopics(topic), + kgo.ConsumeResetOffset(kgo.NewOffset().AtStart()), + )...) 
+ require.NoError(t, err) + defer consumerClient.Close() + + t.Logf("Polling records from topic %q in cluster %q", topic, cluster) + require.Eventually(t, func() bool { + fetches := consumerClient.PollRecords(ctx, 1) + return len(fetches.Records()) > 0 + }, 2*time.Minute, 2*time.Second, "Topic %q in cluster %q should have messages", topic, cluster) + t.Logf("Found messages in topic %q", topic) +} + +func iProduceMessagesToTopicInCluster(ctx context.Context, t framework.TestingT, topic, cluster string) { + clients := clientsForCluster(ctx, cluster) + clients.ExpectTopic(ctx, topic) + + kafkaClient := clients.Kafka(ctx) + defer kafkaClient.Close() + + t.Logf("Producing test messages to topic %q in cluster %q", topic, cluster) + for i := range 5 { + require.NoError(t, kafkaClient.ProduceSync(ctx, &kgo.Record{ + Topic: topic, + Value: []byte("test-message-" + string(rune('0'+i))), + }).FirstErr()) + } + t.Logf("Produced 5 messages to topic %q", topic) +} diff --git a/acceptance/steps/register.go b/acceptance/steps/register.go index 493748a1d..9f3dd56f2 100644 --- a/acceptance/steps/register.go +++ b/acceptance/steps/register.go @@ -167,6 +167,15 @@ func init() { framework.RegisterStep(`^service "([^"]*)" should not have field managers:$`, checkResourceNoFieldManagers) framework.RegisterStep(`^cluster "([^"]*)" should have sync error:$`, checkClusterHasSyncError) + // Pipeline scenario steps + framework.RegisterStep(`^pipeline "([^"]*)" is successfully running$`, pipelineIsSuccessfullyRunning) + framework.RegisterStep(`^pipeline "([^"]*)" is stopped$`, pipelineIsStopped) + framework.RegisterStep(`^I delete the CRD pipeline "([^"]*)"$`, iDeleteTheCRDPipeline) + framework.RegisterStep(`^pipeline "([^"]*)" does not exist$`, pipelineDoesNotExist) + framework.RegisterStep(`^pipeline "([^"]*)" has invalid config$`, pipelineHasInvalidConfig) + framework.RegisterStep(`^topic "([^"]*)" has messages in cluster "([^"]*)"$`, topicHasMessagesInCluster) + framework.RegisterStep(`^I 
produce messages to "([^"]*)" in cluster "([^"]*)"$`, iProduceMessagesToTopicInCluster) + // Debug steps framework.RegisterStep(`^I become debuggable$`, sleepALongTime) } diff --git a/gen/go.sum b/gen/go.sum index d5733d6fe..d9bd10658 100644 --- a/gen/go.sum +++ b/gen/go.sum @@ -584,6 +584,8 @@ github.com/redpanda-data/common-go/goldenfile v0.0.0-20260109170727-1dd9f5d22ee1 github.com/redpanda-data/common-go/goldenfile v0.0.0-20260109170727-1dd9f5d22ee1/go.mod h1:V3OBV2kcF/BDDytUZuKvIygbaXoGPT5VO3KmMAz+mBM= github.com/redpanda-data/common-go/kube v0.0.0-20260408144400-efba9928bb27 h1:735zfoMDegKzXO+mipLeEJhjvoboMiOwlZiEfWTs9IY= github.com/redpanda-data/common-go/kube v0.0.0-20260408144400-efba9928bb27/go.mod h1:87/jKBvBse9m7PBwCxzISdxOpHblNKTqxZZNa1U1utM= +github.com/redpanda-data/common-go/license v0.0.0-20260120073450-935d3dd3d6c1 h1:6aPxMthcrAljux5bgqU78yHxM8BK1ITqh9G9H+s707U= +github.com/redpanda-data/common-go/license v0.0.0-20260120073450-935d3dd3d6c1/go.mod h1:F1fp8xVNS2UwWFosOjJ9+5jaEZnXSjB9AdHk2R9XlpI= github.com/redpanda-data/common-go/net v0.1.1-0.20240429123545-4da3d2b371f7 h1:MXLdjFdFjOtyuUR4TdVVsqFP8xnru2YDwzH9bJTUr1M= github.com/redpanda-data/common-go/net v0.1.1-0.20240429123545-4da3d2b371f7/go.mod h1:UJIi/yUxGOBYXUrfUsOkxfYxcb/ll7mZrwae/i+U2kc= github.com/redpanda-data/common-go/otelutil v0.0.0-20260413160920-df1679f86269 h1:tEFqrnhUNN08Ye6n1FxtFHqkrFRhA0PGs2917v4JAOk= diff --git a/operator/api/redpanda/v1alpha2/pipeline_types.go b/operator/api/redpanda/v1alpha2/pipeline_types.go new file mode 100644 index 000000000..a6656b328 --- /dev/null +++ b/operator/api/redpanda/v1alpha2/pipeline_types.go @@ -0,0 +1,404 @@ +// Copyright 2026 Redpanda Data, Inc. 
+// +// Use of this software is governed by the Business Source License +// included in the file licenses/BSL.md +// +// As of the Change Date specified in that file, in accordance with +// the Business Source License, use of this software will be governed +// by the Apache License, Version 2.0 + +package v1alpha2 + +import ( + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/utils/ptr" + + "github.com/redpanda-data/redpanda-operator/operator/pkg/functional" +) + +const ( + // PipelineDefaultImage is the default Redpanda Connect container image. + PipelineDefaultImage = "docker.redpanda.com/redpandadata/connect:4.87.0" +) + +// PipelinePhase describes the lifecycle phase of a Pipeline. +// +kubebuilder:validation:Enum=Pending;Provisioning;Running;Stopped;Unknown +type PipelinePhase string + +const ( + // PipelinePhasePending indicates the pipeline has been accepted but + // its Deployment has not yet been created. + PipelinePhasePending PipelinePhase = "Pending" + // PipelinePhaseProvisioning indicates the Deployment exists but not all + // replicas are ready. + PipelinePhaseProvisioning PipelinePhase = "Provisioning" + // PipelinePhaseRunning indicates all desired replicas are ready and + // processing data. + PipelinePhaseRunning PipelinePhase = "Running" + // PipelinePhaseStopped indicates the pipeline is paused (replicas scaled + // to zero). + PipelinePhaseStopped PipelinePhase = "Stopped" + // PipelinePhaseUnknown is used when the controller cannot determine the + // pipeline state. + PipelinePhaseUnknown PipelinePhase = "Unknown" +) + +// Pipeline condition types. +const ( + // PipelineConditionReady indicates whether the pipeline is fully + // reconciled and running. + PipelineConditionReady = "Ready" + // PipelineConditionConfigValid indicates whether the pipeline + // configuration passed lint validation. 
+ PipelineConditionConfigValid = "ConfigValid" + // PipelineConditionClusterRef indicates whether the referenced + // Redpanda cluster was resolved successfully. + PipelineConditionClusterRef = "ClusterRef" +) + +// Pipeline condition reasons. +const ( + // PipelineReasonRunning means the pipeline is running with all replicas + // available. + PipelineReasonRunning = "Running" + // PipelineReasonProvisioning means the Deployment is being rolled out. + PipelineReasonProvisioning = "Provisioning" + // PipelineReasonPaused means the pipeline is intentionally stopped. + PipelineReasonPaused = "Paused" + // PipelineReasonLicenseInvalid means the enterprise license check failed. + PipelineReasonLicenseInvalid = "LicenseInvalid" + // PipelineReasonFailed means a reconciliation step failed. + PipelineReasonFailed = "Failed" + // PipelineReasonConfigValid means the config passed lint validation. + PipelineReasonConfigValid = "ConfigValid" + // PipelineReasonConfigInvalid means the config failed lint validation. + PipelineReasonConfigInvalid = "ConfigInvalid" + // PipelineReasonClusterRefResolved means the clusterRef was resolved successfully. + PipelineReasonClusterRefResolved = "ClusterRefResolved" + // PipelineReasonClusterRefInvalid means the clusterRef could not be found or resolved. + PipelineReasonClusterRefInvalid = "ClusterRefInvalid" + // PipelineReasonUserResolved means the userRef was resolved successfully and + // its password Secret was located. + PipelineReasonUserResolved = "UserResolved" + // PipelineReasonUserInvalid means the userRef could not be found or its + // password Secret was missing. + PipelineReasonUserInvalid = "UserInvalid" + // PipelineReasonValueSourcesResolved means every entry in spec.valueSources + // was bound successfully. + PipelineReasonValueSourcesResolved = "ValueSourcesResolved" + // PipelineReasonValueSourceInvalid means at least one entry in + // spec.valueSources could not be resolved. 
+ PipelineReasonValueSourceInvalid = "ValueSourceInvalid" +) + +// Pipeline status condition types added with the v2 spec. +const ( + // PipelineConditionUserRef indicates whether the referenced User CR was + // resolved and had a usable password Secret. + PipelineConditionUserRef = "UserRef" + // PipelineConditionValueSourcesResolved indicates whether every + // spec.valueSources entry resolved to a backing value. + PipelineConditionValueSourcesResolved = "ValueSourcesResolved" +) + +// PipelineSpec defines the desired state of a Redpanda Connect pipeline. +// +// +kubebuilder:validation:XValidation:message="userRef must be empty when cluster.staticConfiguration is set",rule="!has(self.cluster) || !has(self.cluster.staticConfiguration) || !has(self.userRef)" +// +kubebuilder:validation:XValidation:message="userRef cannot be set without cluster.clusterRef",rule="!has(self.userRef) || (has(self.cluster) && has(self.cluster.clusterRef))" +type PipelineSpec struct { + // ConfigYAML is the user-supplied Redpanda Connect pipeline YAML. + // Reference cluster-bound or sensitive values from .valueSources via + // ${NAME} interpolation; the operator resolves them at render time. + // + // When .cluster is set, the operator inline-merges connection fields + // (seed_brokers, tls, sasl) into any `input.redpanda` and + // `output.redpanda` blocks in this YAML, derived from the resolved + // cluster connection and .userRef. Users only need to write the + // per-plugin fields (topic, key, consumer_group, etc.); brokers, TLS, + // and SASL are filled in by the operator. + // + // User-side keys win on conflict — set a key explicitly (for example, + // seed_brokers pointing at a different cluster) and the operator's + // generated value is skipped for that key. + // + // The merge targets the `redpanda` input/output plugins specifically. 
+ // Any `redpanda_common` blocks the user authors are passed through + // unchanged — the operator does not inject connection fields into + // them. + // +kubebuilder:validation:Required + ConfigYAML string `json:"configYaml"` + + // DisplayName is a human-readable name for the pipeline. + // Maps to the pipeline display name when migrating to Redpanda Cloud. + // +optional + DisplayName string `json:"displayName,omitempty"` + + // Description is an optional description of what this pipeline does. + // Maps to the pipeline description when migrating to Redpanda Cloud. + // +optional + Description string `json:"description,omitempty"` + + // Tags are key-value pairs for organizing and filtering pipelines. + // Maps to pipeline tags when migrating to Redpanda Cloud. + // +optional + Tags map[string]string `json:"tags,omitempty"` + + // ConfigFiles defines additional configuration files to mount alongside + // the main pipeline configuration. Each entry maps a filename to its content. + // Files are mounted in the /config directory alongside connect.yaml. + // The key "connect.yaml" is reserved and cannot be used. + // Maps to pipeline config files when migrating to Redpanda Cloud. + // +optional + ConfigFiles map[string]string `json:"configFiles,omitempty"` + + // Replicas is the number of pipeline replicas to run. + // +kubebuilder:default=1 + // +kubebuilder:validation:Minimum=0 + // +optional + Replicas *int32 `json:"replicas,omitempty"` + + // Image is the container image for the Redpanda Connect deployment. + // +optional + Image *string `json:"image,omitempty"` + + // ServiceAccountName is the ServiceAccount to bind to the pipeline pod. + // When unset, the namespace's default ServiceAccount is used. + // + // Setting this is the recommended way to scope per-pipeline cloud-IAM + // trust (e.g. IRSA on EKS, Workload Identity on GKE, Pod Identity on + // AKS). 
Annotating the namespace's default SA works but grants every + // pipeline in the namespace the same role — naming a Pipeline-specific + // SA here keeps the trust boundary per-pipeline. + // + // The operator does NOT create the ServiceAccount; provision it + // (along with the appropriate cloud-IAM annotations) out-of-band. + // +optional + ServiceAccountName string `json:"serviceAccountName,omitempty"` + + // Paused stops the pipeline by scaling replicas to zero when set to true. + // +optional + Paused bool `json:"paused,omitempty"` + + // Resources defines the compute resource requirements for the pipeline pods. + // +optional + Resources *corev1.ResourceRequirements `json:"resources,omitempty"` + + // ValueSources is a list of named values the pipeline YAML can reference + // via ${NAME} interpolation. Each value is fetched at render time from + // inline / ConfigMap / Secret / ExternalSecret and projected into the + // pipeline pod as an environment variable. One named pull per entry — + // avoids the bag-of-Secrets env-splat pattern. + // + // Example: + // spec: + // valueSources: + // - name: S3_SECRET_KEY + // source: + // secretKeyRef: + // name: s3-creds + // key: secret_access_key + // configYaml: | + // output: + // aws_s3: + // bucket: my-bucket + // credentials: + // secret: ${S3_SECRET_KEY} + // + // See: https://docs.redpanda.com/redpanda-connect/configuration/secrets/ + // +optional + // +listType=map + // +listMapKey=name + ValueSources []NamedValueSource `json:"valueSources,omitempty"` + + // Annotations specifies additional annotations to apply to the pipeline pod + // template. These are merged with any operator-level commonAnnotations, with + // per-pipeline annotations taking precedence. Useful for integrations like + // Datadog autodiscovery that rely on pod annotations. + // +optional + Annotations map[string]string `json:"annotations,omitempty"` + + // Tolerations for the pipeline pods, allowing them to be scheduled on tainted nodes. 
+ // +optional + Tolerations []corev1.Toleration `json:"tolerations,omitempty"` + + // NodeSelector constrains pipeline pods to nodes with matching labels. + // +optional + NodeSelector map[string]string `json:"nodeSelector,omitempty"` + + // TopologySpreadConstraints controls how pipeline pods are spread across + // topology domains such as availability zones. When Zones is specified, + // a default topology spread constraint is generated automatically. + // Any constraints specified here are used in addition to (or instead of) + // the auto-generated zone constraint. + // +optional + TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"` + + // Zones specifies the availability zones across which pipeline pods should + // be spread. When set, the controller configures: + // - A node affinity to schedule pods only on nodes in these zones + // - A topology spread constraint to distribute pods evenly across zones + // The zone label used is "topology.kubernetes.io/zone". + // +optional + Zones []string `json:"zones,omitempty"` + + // Budget configures a PodDisruptionBudget for the pipeline Deployment, + // protecting pipeline pods from voluntary disruptions such as node drains + // and cluster autoscaler evictions. When not set, no PDB is created. + // +optional + Budget *PipelineBudget `json:"budget,omitempty"` + + // ClusterSource declaratively binds the pipeline's redpanda input/output + // to a Redpanda cluster. Mirrors the ClusterSource pattern used by the + // User/Topic CRDs: + // + // - clusterRef: point at an existing Redpanda CR by name. The operator + // resolves the internal broker addresses + TLS material automatically; + // the SASL identity is taken from .userRef. + // - staticConfiguration: hard-code brokers, TLS, and SASL. The password + // is a ValueSource so it can come from inline / Secret / ConfigMap / + // ExternalSecret. 
+ // + // When unset, the pipeline runs against whatever brokers the user wires + // inline in configYaml (e.g. an external Kafka, Confluent Cloud, etc.). + // +optional + ClusterSource *ClusterSource `json:"cluster,omitempty"` + + // UserRef binds the pipeline to a User CR. When set alongside + // .cluster.clusterRef, the operator reads the referenced User's + // password Secret + SASL mechanism and uses the User's metadata.name + // as the SASL username, emitting REDPANDA_SASL_USERNAME / _PASSWORD / + // _MECHANISM env vars in the pipeline pod and a `sasl:` block in the + // auto-generated `redpanda` config. + // + // Set this when the cluster the pipeline talks to has SASL enabled. + // On unauthenticated clusters (and in clusterRef-only modes that + // only need broker discovery), leave it empty. + // + // CEL restrictions: + // - userRef must NOT be set alongside .cluster.staticConfiguration — + // the static path carries its own inline SASL config. + // - userRef must NOT be set without .cluster.clusterRef — there's no + // cluster context to authenticate against otherwise. + // + // The referenced User CR is expected to live in the same namespace as + // the Pipeline and to declare ACLs scoped to the topics, schema + // subjects, and consumer groups this pipeline reads/writes. The + // operator does NOT auto-create or modify the User CR — ACL scoping + // stays an explicit, auditable user-controlled action. + // +optional + UserRef *PipelineUserRef `json:"userRef,omitempty"` +} + +// PipelineUserRef points at a User CR whose password Secret + SCRAM +// mechanism the pipeline will use to authenticate to Redpanda. +type PipelineUserRef struct { + // Name of the User CR (in the same namespace as the Pipeline). + // +kubebuilder:validation:Required + Name string `json:"name"` +} + +// NamedValueSource binds a name to a value provider so the pipeline YAML +// can reference it via ${NAME} interpolation. 
+type NamedValueSource struct {
+	// Name is the environment-variable name the pipeline YAML references.
+	// Must match standard env-var characters: [A-Z_][A-Z0-9_]*.
+	// +kubebuilder:validation:Pattern=`^[A-Z_][A-Z0-9_]*$`
+	// +kubebuilder:validation:MinLength=1
+	Name string `json:"name"`
+
+	// Source is the value provider. Exactly one of inline / configMapKeyRef
+	// / secretKeyRef / externalSecretRef must be set; the ValueSource
+	// XValidation rules enforce this.
+	Source ValueSource `json:"source"`
+}
+
+// PipelineBudget configures a PodDisruptionBudget for the pipeline.
+type PipelineBudget struct {
+	// MaxUnavailable defines the maximum number of pipeline pods that can be
+	// unavailable during a voluntary disruption. Defaults to 1 if not set.
+	// +kubebuilder:default=1
+	// +kubebuilder:validation:Minimum=0
+	MaxUnavailable int `json:"maxUnavailable"`
+}
+
+// PipelineStatus defines the observed state of a Pipeline resource.
+type PipelineStatus struct {
+	// ObservedGeneration is the last observed generation of the Pipeline resource.
+	// +optional
+	ObservedGeneration int64 `json:"observedGeneration,omitempty"`
+
+	// Conditions holds the conditions for the Pipeline resource.
+	// +optional
+	Conditions []metav1.Condition `json:"conditions,omitempty"`
+
+	// Phase describes the current phase of the pipeline lifecycle.
+	// +optional
+	Phase PipelinePhase `json:"phase,omitempty"`
+
+	// Replicas is the number of desired replicas.
+	// +optional
+	Replicas int32 `json:"replicas,omitempty"`
+
+	// ReadyReplicas is the number of ready pipeline pods.
+	// +optional
+	ReadyReplicas int32 `json:"readyReplicas,omitempty"`
+}
+
+// Pipeline defines a Redpanda Connect pipeline managed by the operator.
+// +kubebuilder:object:root=true
+// +kubebuilder:subresource:status
+// +kubebuilder:resource:path=pipelines,shortName=rpcn
+// +kubebuilder:printcolumn:name="Ready",type="string",JSONPath=".status.conditions[?(@.type==\"Ready\")].status"
+// +kubebuilder:printcolumn:name="Phase",type="string",JSONPath=".status.phase"
+// +kubebuilder:printcolumn:name="Replicas",type="integer",JSONPath=".spec.replicas"
+// +kubebuilder:printcolumn:name="Available",type="integer",JSONPath=".status.readyReplicas"
+// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
+// +kubebuilder:storageversion
+type Pipeline struct {
+	metav1.TypeMeta   `json:",inline"`
+	metav1.ObjectMeta `json:"metadata,omitempty"`
+
+	// Spec defines the desired state of the Connect pipeline.
+	Spec PipelineSpec `json:"spec,omitempty"`
+
+	// Status represents the current observed state of the Connect pipeline.
+	Status PipelineStatus `json:"status,omitempty"`
+}
+
+// +kubebuilder:object:root=true
+
+// PipelineList contains a list of Pipeline resources.
+type PipelineList struct {
+	metav1.TypeMeta `json:",inline"`
+	metav1.ListMeta `json:"metadata,omitempty"`
+	Items           []Pipeline `json:"items"`
+}
+
+func (c *PipelineList) GetItems() []*Pipeline {
+	return functional.MapFn(ptr.To, c.Items)
+}
+
+// GetClusterSource returns the cluster source reference if set.
+func (c *Pipeline) GetClusterSource() *ClusterSource {
+	return c.Spec.ClusterSource
+}
+
+// GetImage returns the configured image or the default.
+func (c *Pipeline) GetImage() string {
+	if c.Spec.Image != nil && *c.Spec.Image != "" {
+		return *c.Spec.Image
+	}
+	return PipelineDefaultImage
+}
+
+// GetReplicas returns the effective replica count, respecting the paused state.
+func (c *Pipeline) GetReplicas() int32 { + if c.Spec.Paused { + return 0 + } + if c.Spec.Replicas != nil { + return *c.Spec.Replicas + } + return 1 +} diff --git a/operator/api/redpanda/v1alpha2/testdata/crd-docs.adoc b/operator/api/redpanda/v1alpha2/testdata/crd-docs.adoc index ba6a379ae..cb6e74643 100644 --- a/operator/api/redpanda/v1alpha2/testdata/crd-docs.adoc +++ b/operator/api/redpanda/v1alpha2/testdata/crd-docs.adoc @@ -15,6 +15,7 @@ .Resource Types - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-console[$$Console$$] - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-group[$$Group$$] +- xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipeline[$$Pipeline$$] - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-redpanda[$$Redpanda$$] - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-redpandarole[$$RedpandaRole$$] - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-schema[$$Schema$$] @@ -619,6 +620,7 @@ ClusterSource defines how to connect to a particular Redpanda cluster. 
**** - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-consolespec[$$ConsoleSpec$$] - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-groupspec[$$GroupSpec$$] +- xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinespec[$$PipelineSpec$$] - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-rolespec[$$RoleSpec$$] - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-schemaspec[$$SchemaSpec$$] - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-shadowlinkspec[$$ShadowLinkSpec$$] @@ -2143,6 +2145,34 @@ and `patternType` must be `literal` + | * | |=== +[id="{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-namedvaluesource"] +==== NamedValueSource + + + +NamedValueSource binds a name to a value provider so the pipeline YAML +can reference it via ${NAME} interpolation. + + + +.Appears In: +**** +- xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinespec[$$PipelineSpec$$] +**** + +[cols="20a,50a,15a,15a", options="header"] +|=== +| Field | Description | Default | Validation +| *`name`* __string__ | Name is the environment-variable name the pipeline YAML references. + +Must match standard env-var characters: [A-Z_][A-Z0-9_]*. + | | MinLength: 1 + +Pattern: `^[A-Z_][A-Z0-9_]*$` + + +| *`source`* __xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-valuesource[$$ValueSource$$]__ | Source is the value provider. Exactly one of inline / configMapKeyRef + +/ secretKeyRef / externalSecretRef must be set; the ValueSource + +XValidation rules enforce this. 
+ | | +|=== + + [id="{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-oidcloginsecrets"] @@ -2289,6 +2319,295 @@ PersistentVolume configures configurations for a PersistentVolumeClaim to use to |=== +[id="{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipeline"] +==== Pipeline + + + +Connect defines a Redpanda Connect pipeline managed by the operator. + + + + + +[cols="20a,50a,15a,15a", options="header"] +|=== +| Field | Description | Default | Validation +| *`apiVersion`* __string__ | `cluster.redpanda.com/v1alpha2` | | +| *`kind`* __string__ | `Pipeline` | | +| *`kind`* __string__ | Kind is a string value representing the REST resource this object represents. + +Servers may infer this from the endpoint the client submits requests to. + +Cannot be updated. + +In CamelCase. + +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + | | Optional: \{} + + +| *`apiVersion`* __string__ | APIVersion defines the versioned schema of this representation of an object. + +Servers should convert recognized schemas to the latest internal value, and + +may reject unrecognized values. + +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + | | Optional: \{} + + +| *`metadata`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#objectmeta-v1-meta[$$ObjectMeta$$]__ | Refer to Kubernetes API documentation for fields of `metadata`. + | | +| *`spec`* __xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinespec[$$PipelineSpec$$]__ | Spec defines the desired state of the Connect pipeline. + | | +| *`status`* __xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinestatus[$$PipelineStatus$$]__ | Status represents the current observed state of the Connect pipeline. 
+ | | +|=== + + +[id="{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinebudget"] +==== PipelineBudget + + + +PipelineBudget configures a PodDisruptionBudget for the pipeline. + + + +.Appears In: +**** +- xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinespec[$$PipelineSpec$$] +**** + +[cols="20a,50a,15a,15a", options="header"] +|=== +| Field | Description | Default | Validation +| *`maxUnavailable`* __integer__ | MaxUnavailable defines the maximum number of pipeline pods that can be + +unavailable during a voluntary disruption. Defaults to 1 if not set. + | 1 | Minimum: 0 + + +|=== + + +[id="{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinephase"] +==== PipelinePhase + +_Underlying type:_ _string_ + +PipelinePhase describes the lifecycle phase of a Pipeline. + +.Validation: +- Enum: [Pending Provisioning Running Stopped Unknown] + +.Appears In: +**** +- xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinestatus[$$PipelineStatus$$] +**** + + + +[id="{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinespec"] +==== PipelineSpec + + + +PipelineSpec defines the desired state of a Redpanda Connect pipeline. + + + +.Appears In: +**** +- xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipeline[$$Pipeline$$] +**** + +[cols="20a,50a,15a,15a", options="header"] +|=== +| Field | Description | Default | Validation +| *`configYaml`* __string__ | ConfigYAML is the user-supplied Redpanda Connect pipeline YAML. + +Reference cluster-bound or sensitive values from .valueSources via + +${NAME} interpolation; the operator resolves them at render time. 
+ + +When .cluster is set, the operator inline-merges connection fields + +(seed_brokers, tls, sasl) into any `input.redpanda` and + +`output.redpanda` blocks in this YAML, derived from the resolved + +cluster connection and .userRef. Users only need to write the + +per-plugin fields (topic, key, consumer_group, etc.); brokers, TLS, + +and SASL are filled in by the operator. + + +User-side keys win on conflict — set a key explicitly (for example, + +seed_brokers pointing at a different cluster) and the operator's + +generated value is skipped for that key. + + +The merge targets the `redpanda` input/output plugins specifically. + +Any `redpanda_common` blocks the user authors are passed through + +unchanged — the operator does not inject connection fields into + +them. + | | Required: \{} + + +| *`displayName`* __string__ | DisplayName is a human-readable name for the pipeline. + +Maps to the pipeline display name when migrating to Redpanda Cloud. + | | Optional: \{} + + +| *`description`* __string__ | Description is an optional description of what this pipeline does. + +Maps to the pipeline description when migrating to Redpanda Cloud. + | | Optional: \{} + + +| *`tags`* __object (keys:string, values:string)__ | Tags are key-value pairs for organizing and filtering pipelines. + +Maps to pipeline tags when migrating to Redpanda Cloud. + | | Optional: \{} + + +| *`configFiles`* __object (keys:string, values:string)__ | ConfigFiles defines additional configuration files to mount alongside + +the main pipeline configuration. Each entry maps a filename to its content. + +Files are mounted in the /config directory alongside connect.yaml. + +The key "connect.yaml" is reserved and cannot be used. + +Maps to pipeline config files when migrating to Redpanda Cloud. + | | Optional: \{} + + +| *`replicas`* __integer__ | Replicas is the number of pipeline replicas to run. 
+ | 1 | Minimum: 0 + +Optional: \{} + + +| *`image`* __string__ | Image is the container image for the Redpanda Connect deployment. + | | Optional: \{} + + +| *`serviceAccountName`* __string__ | ServiceAccountName is the ServiceAccount to bind to the pipeline pod. + +When unset, the namespace's default ServiceAccount is used. + + +Setting this is the recommended way to scope per-pipeline cloud-IAM + +trust (e.g. IRSA on EKS, Workload Identity on GKE, Pod Identity on + +AKS). Annotating the namespace's default SA works but grants every + +pipeline in the namespace the same role — naming a Pipeline-specific + +SA here keeps the trust boundary per-pipeline. + + +The operator does NOT create the ServiceAccount; provision it + +(along with the appropriate cloud-IAM annotations) out-of-band. + | | Optional: \{} + + +| *`paused`* __boolean__ | Paused stops the pipeline by scaling replicas to zero when set to true. + | | Optional: \{} + + +| *`resources`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#resourcerequirements-v1-core[$$ResourceRequirements$$]__ | Resources defines the compute resource requirements for the pipeline pods. + | | Optional: \{} + + +| *`valueSources`* __xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-namedvaluesource[$$NamedValueSource$$] array__ | ValueSources is a list of named values the pipeline YAML can reference + +via ${NAME} interpolation. Each value is fetched at render time from + +inline / ConfigMap / Secret / ExternalSecret and projected into the + +pipeline pod as an environment variable. One named pull per entry — + +avoids the bag-of-Secrets env-splat pattern. 
+ + +Example: + +spec: + +valueSources: + +- name: S3_SECRET_KEY + +source: + +secretKeyRef: + +name: s3-creds + +key: secret_access_key + +configYaml: \| + +output: + +aws_s3: + +bucket: my-bucket + +credentials: + +secret: ${S3_SECRET_KEY} + + +See: https://docs.redpanda.com/redpanda-connect/configuration/secrets/ + | | Optional: \{} + + +| *`annotations`* __object (keys:string, values:string)__ | Annotations specifies additional annotations to apply to the pipeline pod + +template. These are merged with any operator-level commonAnnotations, with + +per-pipeline annotations taking precedence. Useful for integrations like + +Datadog autodiscovery that rely on pod annotations. + | | Optional: \{} + + +| *`tolerations`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#toleration-v1-core[$$Toleration$$] array__ | Tolerations for the pipeline pods, allowing them to be scheduled on tainted nodes. + | | Optional: \{} + + +| *`nodeSelector`* __object (keys:string, values:string)__ | NodeSelector constrains pipeline pods to nodes with matching labels. + | | Optional: \{} + + +| *`topologySpreadConstraints`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#topologyspreadconstraint-v1-core[$$TopologySpreadConstraint$$] array__ | TopologySpreadConstraints controls how pipeline pods are spread across + +topology domains such as availability zones. When Zones is specified, + +a default topology spread constraint is generated automatically. + +Any constraints specified here are used in addition to (or instead of) + +the auto-generated zone constraint. + | | Optional: \{} + + +| *`zones`* __string array__ | Zones specifies the availability zones across which pipeline pods should + +be spread. When set, the controller configures: + +- A node affinity to schedule pods only on nodes in these zones + +- A topology spread constraint to distribute pods evenly across zones + +The zone label used is "topology.kubernetes.io/zone". 
+ | | Optional: \{} + + +| *`budget`* __xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinebudget[$$PipelineBudget$$]__ | Budget configures a PodDisruptionBudget for the pipeline Deployment, + +protecting pipeline pods from voluntary disruptions such as node drains + +and cluster autoscaler evictions. When not set, no PDB is created. + | | Optional: \{} + + +| *`cluster`* __xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-clustersource[$$ClusterSource$$]__ | ClusterSource declaratively binds the pipeline's redpanda input/output + +to a Redpanda cluster. Mirrors the ClusterSource pattern used by the + +User/Topic CRDs: + + +- clusterRef: point at an existing Redpanda CR by name. The operator + +resolves the internal broker addresses + TLS material automatically; + +the SASL identity is taken from .userRef. + +- staticConfiguration: hard-code brokers, TLS, and SASL. The password + +is a ValueSource so it can come from inline / Secret / ConfigMap / + +ExternalSecret. + + +When unset, the pipeline runs against whatever brokers the user wires + +inline in configYaml (e.g. an external Kafka, Confluent Cloud, etc.). + | | Optional: \{} + + +| *`userRef`* __xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelineuserref[$$PipelineUserRef$$]__ | UserRef binds the pipeline to a User CR. When set alongside + +.cluster.clusterRef, the operator reads the referenced User's + +password Secret + SASL mechanism and uses the User's metadata.name + +as the SASL username, emitting REDPANDA_SASL_USERNAME / _PASSWORD / + +_MECHANISM env vars in the pipeline pod and a `sasl:` block in the + +auto-generated `redpanda` config. + + +Set this when the cluster the pipeline talks to has SASL enabled. + +On unauthenticated clusters (and in clusterRef-only modes that + +only need broker discovery), leave it empty. 
+ + +CEL restrictions: + +- userRef must NOT be set alongside .cluster.staticConfiguration — + +the static path carries its own inline SASL config. + +- userRef must NOT be set without .cluster.clusterRef — there's no + +cluster context to authenticate against otherwise. + + +The referenced User CR is expected to live in the same namespace as + +the Pipeline and to declare ACLs scoped to the topics, schema + +subjects, and consumer groups this pipeline reads/writes. The + +operator does NOT auto-create or modify the User CR — ACL scoping + +stays an explicit, auditable user-controlled action. + | | Optional: \{} + + +|=== + + +[id="{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinestatus"] +==== PipelineStatus + + + +PipelineStatus defines the observed state of a Pipeline resource. + + + +.Appears In: +**** +- xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipeline[$$Pipeline$$] +**** + +[cols="20a,50a,15a,15a", options="header"] +|=== +| Field | Description | Default | Validation +| *`observedGeneration`* __integer__ | ObservedGeneration is the last observed generation of the Pipeline resource. + | | Optional: \{} + + +| *`conditions`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#condition-v1-meta[$$Condition$$] array__ | Conditions holds the conditions for the Pipeline resource. + | | Optional: \{} + + +| *`phase`* __xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinephase[$$PipelinePhase$$]__ | Phase describes the current phase of the pipeline lifecycle. + | | Enum: [Pending Provisioning Running Stopped Unknown] + +Optional: \{} + + +| *`replicas`* __integer__ | Replicas is the number of desired replicas. + | | Optional: \{} + + +| *`readyReplicas`* __integer__ | ReadyReplicas is the number of ready pipeline pods. 
+ | | Optional: \{} + + +|=== + + +[id="{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelineuserref"] +==== PipelineUserRef + + + +PipelineUserRef points at a User CR whose password Secret + SCRAM +mechanism the pipeline will use to authenticate to Redpanda. + + + +.Appears In: +**** +- xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-pipelinespec[$$PipelineSpec$$] +**** + +[cols="20a,50a,15a,15a", options="header"] +|=== +| Field | Description | Default | Validation +| *`name`* __string__ | Name of the User CR (in the same namespace as the Pipeline). + | | Required: \{} + + +|=== + + [id="{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-podantiaffinity"] ==== PodAntiAffinity @@ -4892,6 +5211,7 @@ ValueSource represents where a value can be pulled from - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-kafkasaslawsmskiam[$$KafkaSASLAWSMskIam$$] - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-kafkasaslgssapi[$$KafkaSASLGSSAPI$$] - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-kafkasasloauthbearer[$$KafkaSASLOAuthBearer$$] +- xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-namedvaluesource[$$NamedValueSource$$] - xref:{anchor_prefix}-github-com-redpanda-data-redpanda-operator-operator-api-redpanda-v1alpha2-schemaregistrysasl[$$SchemaRegistrySASL$$] **** diff --git a/operator/api/redpanda/v1alpha2/zz_generated.deepcopy.go b/operator/api/redpanda/v1alpha2/zz_generated.deepcopy.go index b025e449d..b3a105921 100644 --- a/operator/api/redpanda/v1alpha2/zz_generated.deepcopy.go +++ b/operator/api/redpanda/v1alpha2/zz_generated.deepcopy.go @@ -2635,6 +2635,22 @@ func (in *NameFilter) DeepCopy() *NameFilter { return out } +// DeepCopyInto is an 
autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *NamedValueSource) DeepCopyInto(out *NamedValueSource) { + *out = *in + in.Source.DeepCopyInto(&out.Source) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NamedValueSource. +func (in *NamedValueSource) DeepCopy() *NamedValueSource { + if in == nil { + return nil + } + out := new(NamedValueSource) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *Networking) DeepCopyInto(out *Networking) { *out = *in @@ -2881,6 +2897,216 @@ func (in *PersistentVolume) DeepCopy() *PersistentVolume { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Pipeline) DeepCopyInto(out *Pipeline) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + in.Status.DeepCopyInto(&out.Status) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Pipeline. +func (in *Pipeline) DeepCopy() *Pipeline { + if in == nil { + return nil + } + out := new(Pipeline) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *Pipeline) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PipelineBudget) DeepCopyInto(out *PipelineBudget) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PipelineBudget. 
+func (in *PipelineBudget) DeepCopy() *PipelineBudget { + if in == nil { + return nil + } + out := new(PipelineBudget) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PipelineList) DeepCopyInto(out *PipelineList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]Pipeline, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PipelineList. +func (in *PipelineList) DeepCopy() *PipelineList { + if in == nil { + return nil + } + out := new(PipelineList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *PipelineList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PipelineSpec) DeepCopyInto(out *PipelineSpec) { + *out = *in + if in.Tags != nil { + in, out := &in.Tags, &out.Tags + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } + if in.ConfigFiles != nil { + in, out := &in.ConfigFiles, &out.ConfigFiles + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } + if in.Replicas != nil { + in, out := &in.Replicas, &out.Replicas + *out = new(int32) + **out = **in + } + if in.Image != nil { + in, out := &in.Image, &out.Image + *out = new(string) + **out = **in + } + if in.Resources != nil { + in, out := &in.Resources, &out.Resources + *out = new(v1.ResourceRequirements) + (*in).DeepCopyInto(*out) + } + if in.ValueSources != nil { + in, out := &in.ValueSources, &out.ValueSources + *out = make([]NamedValueSource, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Annotations != nil { + in, out := &in.Annotations, &out.Annotations + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } + if in.Tolerations != nil { + in, out := &in.Tolerations, &out.Tolerations + *out = make([]v1.Toleration, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.NodeSelector != nil { + in, out := &in.NodeSelector, &out.NodeSelector + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } + if in.TopologySpreadConstraints != nil { + in, out := &in.TopologySpreadConstraints, &out.TopologySpreadConstraints + *out = make([]v1.TopologySpreadConstraint, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Zones != nil { + in, out := &in.Zones, &out.Zones + *out = make([]string, len(*in)) + copy(*out, *in) + } + if in.Budget != nil { + in, out := &in.Budget, &out.Budget + *out = new(PipelineBudget) + **out = **in + } + if in.ClusterSource != nil { + in, out := 
&in.ClusterSource, &out.ClusterSource + *out = new(ClusterSource) + (*in).DeepCopyInto(*out) + } + if in.UserRef != nil { + in, out := &in.UserRef, &out.UserRef + *out = new(PipelineUserRef) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PipelineSpec. +func (in *PipelineSpec) DeepCopy() *PipelineSpec { + if in == nil { + return nil + } + out := new(PipelineSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PipelineStatus) DeepCopyInto(out *PipelineStatus) { + *out = *in + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]metav1.Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PipelineStatus. +func (in *PipelineStatus) DeepCopy() *PipelineStatus { + if in == nil { + return nil + } + out := new(PipelineStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PipelineUserRef) DeepCopyInto(out *PipelineUserRef) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PipelineUserRef. +func (in *PipelineUserRef) DeepCopy() *PipelineUserRef { + if in == nil { + return nil + } + out := new(PipelineUserRef) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *PodAntiAffinity) DeepCopyInto(out *PodAntiAffinity) { *out = *in diff --git a/operator/api/redpanda/v1alpha2/zz_generated.deprecations_test.go b/operator/api/redpanda/v1alpha2/zz_generated.deprecations_test.go index 63f738191..f542b48df 100644 --- a/operator/api/redpanda/v1alpha2/zz_generated.deprecations_test.go +++ b/operator/api/redpanda/v1alpha2/zz_generated.deprecations_test.go @@ -189,6 +189,79 @@ func TestDeprecatedFieldWarnings(t *testing.T) { "field 'spec.cluster.staticConfiguration.schemaRegistry.sasl.token' is deprecated and set", }, }, + { + name: "Pipeline", + obj: &Pipeline{ + Spec: PipelineSpec{ + ClusterSource: ptr.To(ClusterSource{ + StaticConfiguration: ptr.To(StaticConfigurationSource{ + Admin: ptr.To(AdminAPISpec{ + SASL: ptr.To(AdminSASL{ + DeprecatedAuthToken: ptr.To(SecretKeyRef{}), + DeprecatedPassword: ptr.To(SecretKeyRef{}), + }), + TLS: ptr.To(CommonTLS{ + DeprecatedCaCert: ptr.To(SecretKeyRef{}), + DeprecatedCert: ptr.To(SecretKeyRef{}), + DeprecatedKey: ptr.To(SecretKeyRef{}), + }), + }), + Kafka: ptr.To(KafkaAPISpec{ + SASL: ptr.To(KafkaSASL{ + AWSMskIam: ptr.To(KafkaSASLAWSMskIam{ + DeprecatedSecretKey: ptr.To(SecretKeyRef{}), + DeprecatedSessionToken: ptr.To(SecretKeyRef{}), + }), + DeprecatedPassword: ptr.To(SecretKeyRef{}), + GSSAPIConfig: ptr.To(KafkaSASLGSSAPI{ + DeprecatedPassword: ptr.To(SecretKeyRef{}), + }), + OAUth: ptr.To(KafkaSASLOAuthBearer{ + DeprecatedToken: ptr.To(SecretKeyRef{}), + }), + }), + TLS: ptr.To(CommonTLS{ + DeprecatedCaCert: ptr.To(SecretKeyRef{}), + DeprecatedCert: ptr.To(SecretKeyRef{}), + DeprecatedKey: ptr.To(SecretKeyRef{}), + }), + }), + SchemaRegistry: ptr.To(SchemaRegistrySpec{ + SASL: ptr.To(SchemaRegistrySASL{ + DeprecatedAuthToken: ptr.To(SecretKeyRef{}), + DeprecatedPassword: ptr.To(SecretKeyRef{}), + }), + TLS: ptr.To(CommonTLS{ + DeprecatedCaCert: ptr.To(SecretKeyRef{}), + DeprecatedCert: ptr.To(SecretKeyRef{}), + DeprecatedKey: ptr.To(SecretKeyRef{}), + }), + }), + }), + }), + }, + 
}, + wantWarnings: []string{ + "field 'spec.cluster.staticConfiguration.kafka.tls.caCertSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.kafka.tls.certSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.kafka.tls.keySecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.kafka.sasl.oauth.tokenSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.kafka.sasl.gssapi.passwordSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.kafka.sasl.awsMskIam.secretKeySecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.kafka.sasl.awsMskIam.sessionTokenSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.kafka.sasl.passwordSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.admin.tls.caCertSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.admin.tls.certSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.admin.tls.keySecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.admin.sasl.passwordSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.admin.sasl.token' is deprecated and set", + "field 'spec.cluster.staticConfiguration.schemaRegistry.tls.caCertSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.schemaRegistry.tls.certSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.schemaRegistry.tls.keySecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.schemaRegistry.sasl.passwordSecretRef' is deprecated and set", + "field 'spec.cluster.staticConfiguration.schemaRegistry.sasl.token' is deprecated and set", + }, + }, { name: "Redpanda", obj: &Redpanda{ diff --git a/operator/api/redpanda/v1alpha2/zz_generated.register.go b/operator/api/redpanda/v1alpha2/zz_generated.register.go index d2a7b519a..90690249b 100644 --- 
a/operator/api/redpanda/v1alpha2/zz_generated.register.go +++ b/operator/api/redpanda/v1alpha2/zz_generated.register.go @@ -60,6 +60,8 @@ func addKnownTypes(scheme *runtime.Scheme) error { &GroupList{}, &NodePool{}, &NodePoolList{}, + &Pipeline{}, + &PipelineList{}, &Redpanda{}, &RedpandaList{}, &RedpandaRole{}, diff --git a/operator/chart/README.md b/operator/chart/README.md index de6b644c5..68d9dea51 100644 --- a/operator/chart/README.md +++ b/operator/chart/README.md @@ -40,6 +40,12 @@ Sets the Kubernetes cluster domain. **Default:** `"cluster.local"` +### [commonAnnotations](https://artifacthub.io/packages/helm/redpanda-data/operator?modal=values&path=commonAnnotations) + +Additional annotations to add to all resources managed by the operator. Useful for satisfying OPA Gatekeeper RequiredAnnotations constraints. For example, `owner: "platform-team@example.com"`. + +**Default:** `{}` + ### [commonLabels](https://artifacthub.io/packages/helm/redpanda-data/operator?modal=values&path=commonLabels) Additional labels to add to all Kubernetes objects. For example, `my.k8s.service: redpanda-operator`. @@ -122,6 +128,28 @@ Sets the port for the webhook server to listen on. **Default:** `9443` +### [connectController](https://artifacthub.io/packages/helm/redpanda-data/operator?modal=values&path=connectController) + +Enables the Redpanda Connect controller for managing Connect pipeline CRs. Each Pipeline CR still requires an enterprise license that includes the CONNECT product. + +**Default:** + +``` +{"enabled":false,"monitoring":{"enabled":false}} +``` + +### [connectController.monitoring](https://artifacthub.io/packages/helm/redpanda-data/operator?modal=values&path=connectController.monitoring) + +Monitoring configuration for Connect pipeline pods. 
+ +**Default:** `{"enabled":false}` + +### [connectController.monitoring.enabled](https://artifacthub.io/packages/helm/redpanda-data/operator?modal=values&path=connectController.monitoring.enabled) + +Enables PodMonitor creation for all Connect pipelines. Requires the Prometheus Operator CRDs (monitoring.coreos.com) to be installed. + +**Default:** `false` + ### [crds](https://artifacthub.io/packages/helm/redpanda-data/operator?modal=values&path=crds) Flags to control CRD installation. diff --git a/operator/chart/deployment.go b/operator/chart/deployment.go index 7e1df747c..d5f9cff08 100644 --- a/operator/chart/deployment.go +++ b/operator/chart/deployment.go @@ -345,10 +345,49 @@ func operatorArguments(dot *helmette.Dot) []string { "--configurator-tag": containerTag(dot), "--configurator-base-image": values.Image.Repository, "--enable-vectorized-controllers": fmt.Sprintf("%t", values.VectorizedControllers.Enabled), + "--enable-connect": fmt.Sprintf("%t", values.ConnectController.Enabled), + "--connect-monitoring-enabled": fmt.Sprintf("%t", values.ConnectController.Monitoring.Enabled), + } + + if values.ConnectController.Monitoring.ScrapeInterval != "" { + defaults["--connect-monitoring-scrape-interval"] = values.ConnectController.Monitoring.ScrapeInterval + } + + if values.ConnectController.Image != nil && + values.ConnectController.Image.Repository != "" && + values.ConnectController.Image.Tag != "" { + defaults["--connect-default-image"] = fmt.Sprintf( + "%s:%s", + values.ConnectController.Image.Repository, + values.ConnectController.Image.Tag, + ) + } + + if len(values.ConnectController.Monitoring.Labels) > 0 { + labelArg := "" + for key, value := range helmette.SortedMap(values.ConnectController.Monitoring.Labels) { + if labelArg != "" { + labelArg = labelArg + "," + } + labelArg = labelArg + fmt.Sprintf("%s=%s", key, value) + } + defaults["--connect-monitoring-labels"] = labelArg } addLicenseFilePathArg(defaults, values) + if len(values.CommonAnnotations) > 0 
{ + // Build comma-separated key=value pairs for --common-annotations flag. + annotationArg := "" + for key, value := range helmette.SortedMap(values.CommonAnnotations) { + if annotationArg != "" { + annotationArg = annotationArg + "," + } + annotationArg = annotationArg + fmt.Sprintf("%s=%s", key, value) + } + defaults["--common-annotations"] = annotationArg + } + if values.Webhook.Enabled { defaults["--webhook-cert-path"] = webhookCertificatePath } diff --git a/operator/chart/files/rbac/pipeline.ClusterRole.yaml b/operator/chart/files/rbac/pipeline.ClusterRole.yaml new file mode 100644 index 000000000..d06bd68ca --- /dev/null +++ b/operator/chart/files/rbac/pipeline.ClusterRole.yaml @@ -0,0 +1,95 @@ +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: pipeline +rules: + - apiGroups: + - "" + resources: + - configmaps + - secrets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch + - apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - watch + - apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch + - apiGroups: + - cluster.redpanda.com + resources: + - pipelines + verbs: + - get + - list + - patch + - update + - watch + - apiGroups: + - cluster.redpanda.com + resources: + - pipelines/finalizers + verbs: + - update + - apiGroups: + - cluster.redpanda.com + resources: + - pipelines/status + verbs: + - get + - patch + - update + - apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch + - apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch + - apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch diff --git a/operator/chart/rbac.go b/operator/chart/rbac.go index 8a247a211..1a78f07ef 100644 --- 
a/operator/chart/rbac.go +++ b/operator/chart/rbac.go @@ -45,6 +45,7 @@ func rbacBundles(dot *helmette.Dot) []RBACBundle { "files/rbac/v1-manager.ClusterRole.yaml": values.VectorizedControllers.Enabled, "files/rbac/v1-manager.Role.yaml": values.VectorizedControllers.Enabled, "files/rbac/v2-manager.ClusterRole.yaml": true, + "files/rbac/pipeline.ClusterRole.yaml": values.ConnectController.Enabled, "files/rbac/multicluster-manager.ClusterRole.yaml": values.Multicluster.Enabled, }, }, diff --git a/operator/chart/templates/_deployment.go.tpl b/operator/chart/templates/_deployment.go.tpl index ce017ce15..f3a80fc5e 100644 --- a/operator/chart/templates/_deployment.go.tpl +++ b/operator/chart/templates/_deployment.go.tpl @@ -173,8 +173,40 @@ {{- range $_ := (list 1) -}} {{- $_is_returning := false -}} {{- $values := $dot.Values.AsMap -}} -{{- $defaults := (dict "--health-probe-bind-address" ":8081" "--metrics-bind-address" ":8443" "--leader-elect" "" "--enable-console" "true" "--log-level" $values.logLevel "--webhook-enabled" (printf "%t" $values.webhook.enabled) "--configurator-tag" (get (fromJson (include "operator.containerTag" (dict "a" (list $dot)))) "r") "--configurator-base-image" $values.image.repository "--enable-vectorized-controllers" (printf "%t" $values.vectorizedControllers.enabled)) -}} +{{- $defaults := (dict "--health-probe-bind-address" ":8081" "--metrics-bind-address" ":8443" "--leader-elect" "" "--enable-console" "true" "--log-level" $values.logLevel "--webhook-enabled" (printf "%t" $values.webhook.enabled) "--configurator-tag" (get (fromJson (include "operator.containerTag" (dict "a" (list $dot)))) "r") "--configurator-base-image" $values.image.repository "--enable-vectorized-controllers" (printf "%t" $values.vectorizedControllers.enabled) "--enable-connect" (printf "%t" $values.connectController.enabled) "--connect-monitoring-enabled" (printf "%t" $values.connectController.monitoring.enabled)) -}} +{{- if (ne 
$values.connectController.monitoring.scrapeInterval "") -}} +{{- $_ := (set $defaults "--connect-monitoring-scrape-interval" $values.connectController.monitoring.scrapeInterval) -}} +{{- end -}} +{{- if (and (and (ne (toJson $values.connectController.image) "null") (ne $values.connectController.image.repository "")) (ne $values.connectController.image.tag "")) -}} +{{- $_ := (set $defaults "--connect-default-image" (printf "%s:%s" $values.connectController.image.repository $values.connectController.image.tag)) -}} +{{- end -}} +{{- if (gt ((get (fromJson (include "_shims.len" (dict "a" (list $values.connectController.monitoring.labels)))) "r") | int) (0 | int)) -}} +{{- $labelArg := "" -}} +{{- range $key, $value := $values.connectController.monitoring.labels -}} +{{- if (ne $labelArg "") -}} +{{- $labelArg = (printf "%s%s" $labelArg ",") -}} +{{- end -}} +{{- $labelArg = (printf "%s%s" $labelArg (printf "%s=%s" $key $value)) -}} +{{- end -}} +{{- if $_is_returning -}} +{{- break -}} +{{- end -}} +{{- $_ := (set $defaults "--connect-monitoring-labels" $labelArg) -}} +{{- end -}} {{- $_ := (get (fromJson (include "operator.addLicenseFilePathArg" (dict "a" (list $defaults $values)))) "r") -}} +{{- if (gt ((get (fromJson (include "_shims.len" (dict "a" (list $values.commonAnnotations)))) "r") | int) (0 | int)) -}} +{{- $annotationArg := "" -}} +{{- range $key, $value := $values.commonAnnotations -}} +{{- if (ne $annotationArg "") -}} +{{- $annotationArg = (printf "%s%s" $annotationArg ",") -}} +{{- end -}} +{{- $annotationArg = (printf "%s%s" $annotationArg (printf "%s=%s" $key $value)) -}} +{{- end -}} +{{- if $_is_returning -}} +{{- break -}} +{{- end -}} +{{- $_ := (set $defaults "--common-annotations" $annotationArg) -}} +{{- end -}} {{- if $values.webhook.enabled -}} {{- $_ := (set $defaults "--webhook-cert-path" "/tmp/k8s-webhook-server/serving-certs") -}} {{- end -}} diff --git a/operator/chart/templates/_rbac.go.tpl b/operator/chart/templates/_rbac.go.tpl 
index f95eac695..45c551564 100644 --- a/operator/chart/templates/_rbac.go.tpl +++ b/operator/chart/templates/_rbac.go.tpl @@ -6,7 +6,7 @@ {{- range $_ := (list 1) -}} {{- $_is_returning := false -}} {{- $values := $dot.Values.AsMap -}} -{{- $bundles := (list (mustMergeOverwrite (dict "Enabled" false "Name" "" "Subject" "" "RuleFiles" (coalesce nil) "Annotations" (coalesce nil)) (dict "Name" (get (fromJson (include "operator.Fullname" (dict "a" (list $dot)))) "r") "Enabled" true "Subject" (get (fromJson (include "operator.ServiceAccountName" (dict "a" (list $dot)))) "r") "RuleFiles" (dict "files/rbac/console.ClusterRole.yaml" true "files/rbac/leader-election.ClusterRole.yaml" true "files/rbac/leader-election.Role.yaml" true "files/rbac/pvcunbinder.ClusterRole.yaml" true "files/rbac/pvcunbinder.Role.yaml" true "files/rbac/rack-awareness.ClusterRole.yaml" true "files/rbac/rpk-debug-bundle.Role.yaml" true "files/rbac/sidecar.Role.yaml" true "files/rbac/v1-manager.ClusterRole.yaml" $values.vectorizedControllers.enabled "files/rbac/v1-manager.Role.yaml" $values.vectorizedControllers.enabled "files/rbac/v2-manager.ClusterRole.yaml" true "files/rbac/multicluster-manager.ClusterRole.yaml" $values.multicluster.enabled))) (mustMergeOverwrite (dict "Enabled" false "Name" "" "Subject" "" "RuleFiles" (coalesce nil) "Annotations" (coalesce nil)) (dict "Name" (get (fromJson (include "operator.cleanForK8sWithSuffix" (dict "a" (list (get (fromJson (include "operator.Fullname" (dict "a" (list $dot)))) "r") "additional-controllers")))) "r") "Enabled" $values.rbac.createAdditionalControllerCRs "Subject" (get (fromJson (include "operator.ServiceAccountName" (dict "a" (list $dot)))) "r") "RuleFiles" (dict "files/rbac/decommission.ClusterRole.yaml" true "files/rbac/decommission.Role.yaml" true "files/rbac/node-watcher.ClusterRole.yaml" true "files/rbac/node-watcher.Role.yaml" true "files/rbac/old-decommission.ClusterRole.yaml" true "files/rbac/old-decommission.Role.yaml" true 
"files/rbac/pvcunbinder.ClusterRole.yaml" true "files/rbac/pvcunbinder.Role.yaml" true))) (mustMergeOverwrite (dict "Enabled" false "Name" "" "Subject" "" "RuleFiles" (coalesce nil) "Annotations" (coalesce nil)) (dict "Name" (get (fromJson (include "operator.CRDJobServiceAccountName" (dict "a" (list $dot)))) "r") "Enabled" (or $values.crds.enabled $values.crds.experimental) "Subject" (get (fromJson (include "operator.CRDJobServiceAccountName" (dict "a" (list $dot)))) "r") "Annotations" (dict "helm.sh/hook" "pre-install,pre-upgrade" "helm.sh/hook-delete-policy" "before-hook-creation,hook-succeeded,hook-failed" "helm.sh/hook-weight" "-10") "RuleFiles" (dict "files/rbac/crd-installation.ClusterRole.yaml" true)))) -}} +{{- $bundles := (list (mustMergeOverwrite (dict "Enabled" false "Name" "" "Subject" "" "RuleFiles" (coalesce nil) "Annotations" (coalesce nil)) (dict "Name" (get (fromJson (include "operator.Fullname" (dict "a" (list $dot)))) "r") "Enabled" true "Subject" (get (fromJson (include "operator.ServiceAccountName" (dict "a" (list $dot)))) "r") "RuleFiles" (dict "files/rbac/console.ClusterRole.yaml" true "files/rbac/leader-election.ClusterRole.yaml" true "files/rbac/leader-election.Role.yaml" true "files/rbac/pvcunbinder.ClusterRole.yaml" true "files/rbac/pvcunbinder.Role.yaml" true "files/rbac/rack-awareness.ClusterRole.yaml" true "files/rbac/rpk-debug-bundle.Role.yaml" true "files/rbac/sidecar.Role.yaml" true "files/rbac/v1-manager.ClusterRole.yaml" $values.vectorizedControllers.enabled "files/rbac/v1-manager.Role.yaml" $values.vectorizedControllers.enabled "files/rbac/v2-manager.ClusterRole.yaml" true "files/rbac/pipeline.ClusterRole.yaml" $values.connectController.enabled "files/rbac/multicluster-manager.ClusterRole.yaml" $values.multicluster.enabled))) (mustMergeOverwrite (dict "Enabled" false "Name" "" "Subject" "" "RuleFiles" (coalesce nil) "Annotations" (coalesce nil)) (dict "Name" (get (fromJson (include "operator.cleanForK8sWithSuffix" (dict "a" (list 
(get (fromJson (include "operator.Fullname" (dict "a" (list $dot)))) "r") "additional-controllers")))) "r") "Enabled" $values.rbac.createAdditionalControllerCRs "Subject" (get (fromJson (include "operator.ServiceAccountName" (dict "a" (list $dot)))) "r") "RuleFiles" (dict "files/rbac/decommission.ClusterRole.yaml" true "files/rbac/decommission.Role.yaml" true "files/rbac/node-watcher.ClusterRole.yaml" true "files/rbac/node-watcher.Role.yaml" true "files/rbac/old-decommission.ClusterRole.yaml" true "files/rbac/old-decommission.Role.yaml" true "files/rbac/pvcunbinder.ClusterRole.yaml" true "files/rbac/pvcunbinder.Role.yaml" true))) (mustMergeOverwrite (dict "Enabled" false "Name" "" "Subject" "" "RuleFiles" (coalesce nil) "Annotations" (coalesce nil)) (dict "Name" (get (fromJson (include "operator.CRDJobServiceAccountName" (dict "a" (list $dot)))) "r") "Enabled" (or $values.crds.enabled $values.crds.experimental) "Subject" (get (fromJson (include "operator.CRDJobServiceAccountName" (dict "a" (list $dot)))) "r") "Annotations" (dict "helm.sh/hook" "pre-install,pre-upgrade" "helm.sh/hook-delete-policy" "before-hook-creation,hook-succeeded,hook-failed" "helm.sh/hook-weight" "-10") "RuleFiles" (dict "files/rbac/crd-installation.ClusterRole.yaml" true)))) -}} {{- $bundles = (concat (default (list) $bundles) (list (mustMergeOverwrite (dict "Enabled" false "Name" "" "Subject" "" "RuleFiles" (coalesce nil) "Annotations" (coalesce nil)) (dict "Name" (get (fromJson (include "operator.MigrationJobServiceAccountName" (dict "a" (list $dot)))) "r") "Enabled" true "Subject" (get (fromJson (include "operator.MigrationJobServiceAccountName" (dict "a" (list $dot)))) "r") "Annotations" (dict "helm.sh/hook" "post-upgrade" "helm.sh/hook-delete-policy" "before-hook-creation,hook-succeeded,hook-failed" "helm.sh/hook-weight" "-10") "RuleFiles" (index $bundles (0 | int)).RuleFiles)))) -}} {{- $_is_returning = true -}} {{- (dict "r" $bundles) | toJson -}} diff --git 
a/operator/chart/testdata/template-cases.golden.txtar b/operator/chart/testdata/template-cases.golden.txtar index c9f0d6025..86f58d4da 100644 --- a/operator/chart/testdata/template-cases.golden.txtar +++ b/operator/chart/testdata/template-cases.golden.txtar @@ -776,6 +776,8 @@ spec: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-image-pull-policy=IfNotPresent - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -2173,6 +2175,8 @@ spec: - args: - --configurator-base-image=82 - --configurator-tag=UkFHNyv + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -3616,6 +3620,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -5190,6 +5196,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=true - --health-probe-bind-address=:8081 @@ -6911,6 +6919,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -8323,6 +8333,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - 
--connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -9865,6 +9877,8 @@ spec: - args: - --configurator-base-image=Fd0 - --configurator-tag=cOgp6ac + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -11499,6 +11513,8 @@ spec: - args: - --configurator-base-image=1F5 - --configurator-tag=82lZfTf2 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -12922,6 +12938,8 @@ spec: - args: - --configurator-base-image=Hol - --configurator-tag=8ePENyCLz + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -14346,6 +14364,8 @@ spec: - args: - --configurator-base-image=j - --configurator-tag=efh5i + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -15636,6 +15656,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -17084,6 +17106,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -18734,6 +18758,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - 
--configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=true - --health-probe-bind-address=:8081 @@ -20218,6 +20244,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -21630,6 +21658,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -23066,6 +23096,8 @@ spec: - args: - --configurator-base-image=YVfoVe - --configurator-tag=wqmCFiuJSq + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -23870,6 +23902,8 @@ spec: - args: - --configurator-base-image=5ZJUE - --configurator-tag=25V2E90p + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -24900,6 +24934,8 @@ spec: - args: - --configurator-base-image=hchsZ8 - --configurator-tag=9lUTX + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -26556,6 +26592,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -28136,6 +28174,8 @@ spec: - args: 
- --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=true - --health-probe-bind-address=:8081 @@ -29088,6 +29128,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -30223,6 +30265,8 @@ spec: - args: - --configurator-base-image=T1Xvq - --configurator-tag=JedSySWkwU + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -31900,6 +31944,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=true - --health-probe-bind-address=:8081 @@ -33653,6 +33699,8 @@ spec: - args: - --configurator-base-image=cWng - --configurator-tag=afUH16iMvi + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -35204,6 +35252,8 @@ spec: - args: - --configurator-base-image=8ta - --configurator-tag=N5m1k + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=true - --health-probe-bind-address=:8081 @@ -36397,6 +36447,8 @@ spec: - args: - --configurator-base-image=OLVAhRBe - --configurator-tag=iR9tH + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=true - --health-probe-bind-address=:8081 @@ -37296,6 
+37348,8 @@ spec: - args: - --configurator-base-image=Jy - --configurator-tag=QK8rajxXu7 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -38436,6 +38490,8 @@ spec: - args: - --configurator-base-image - --configurator-tag=yLRNE5 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -39572,6 +39628,8 @@ spec: - args: - --configurator-base-image=O - --configurator-tag=F0pU + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -41207,6 +41265,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -42803,6 +42863,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=true - --health-probe-bind-address=:8081 @@ -44057,6 +44119,8 @@ spec: - args: - --configurator-base-image=h6rx - --configurator-tag=yXNMX + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -45479,6 +45543,8 @@ spec: - args: - --configurator-base-image - --configurator-tag=vO + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -47058,6 +47124,8 @@ spec: - args: - 
--configurator-base-image=HJ - --configurator-tag=0ORIQruw + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -49045,6 +49113,8 @@ spec: - args: - --configurator-base-image=JoEOn0Quud0uJ - --configurator-tag=Jzj6fJDqK + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -50257,6 +50327,8 @@ spec: - args: - --configurator-base-image=bR - --configurator-tag=TxCthY7Ie + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -50887,6 +50959,8 @@ spec: - args: - --configurator-base-image - --configurator-tag=9dFbDt + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -52552,6 +52626,8 @@ spec: - args: - --configurator-base-image=HOES5h7c - --configurator-tag=blks9 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=true - --health-probe-bind-address=:8081 @@ -54153,6 +54229,8 @@ spec: - args: - --configurator-base-image=cbG - --configurator-tag=0q7KunZ1RCP + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -55420,6 +55498,8 @@ spec: - args: - --configurator-base-image=RF7Jmqe27 - --configurator-tag=bC + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -56652,6 +56732,8 @@ spec: - args: - --configurator-base-image=UqP - --configurator-tag=XrgP + - --connect-monitoring-enabled=false + - --enable-connect=false - 
--enable-console=true - --enable-vectorized-controllers=true - --health-probe-bind-address=:8081 @@ -57473,6 +57555,8 @@ spec: - args: - --configurator-base-image=0HXtC - --configurator-tag=xSn6 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -59066,6 +59150,8 @@ spec: - args: - --configurator-base-image=95JHcsXy - --configurator-tag=YVZ + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -61239,6 +61325,8 @@ spec: - args: - --configurator-base-image=gkRs29P - --configurator-tag=kgK + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -62542,6 +62630,8 @@ spec: - args: - --configurator-base-image=UG - --configurator-tag=zsECOIlUnlWTDC + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -63739,6 +63829,8 @@ spec: - args: - --configurator-base-image=929tqf - --configurator-tag=VZB + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -64900,6 +64992,8 @@ spec: - args: - --configurator-base-image=FlBYj - --configurator-tag=fS + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -66325,6 +66419,8 @@ spec: - args: - --configurator-base-image=Iedc3BYXQ - --configurator-tag=UoBs + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=true - --health-probe-bind-address=:8081 @@ -68370,6 +68466,8 @@ spec: - args: - 
--configurator-base-image=zg2YXP - --configurator-tag=Cp + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -70645,6 +70743,8 @@ spec: - args: - --configurator-base-image=mLw - --configurator-tag=5QaeLH + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=true - --health-probe-bind-address=:8081 @@ -72786,6 +72886,8 @@ spec: - args: - --configurator-base-image=ZuVTlT - --configurator-tag=J + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -73824,7 +73926,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/crd-installation-experimental.yaml.golden -- +-- testdata/common-annotations.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -74599,8 +74701,11 @@ spec: automountServiceAccountToken: false containers: - args: + - --common-annotations=environment=production,owner=platform-team@example.com - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -74673,24 +74778,6 @@ spec: apiVersion: v1 automountServiceAccountToken: false kind: ServiceAccount -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed - helm.sh/hook-weight: "-10" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crd-job - namespace: default ---- -# Source: 
operator/templates/entry-point.yaml -apiVersion: v1 -automountServiceAccountToken: false -kind: ServiceAccount metadata: annotations: helm.sh/hook: post-upgrade @@ -74708,32 +74795,6 @@ metadata: # Source: operator/templates/entry-point.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed - helm.sh/hook-weight: "-10" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crd-job-default -rules: -- apiGroups: - - apiextensions.k8s.io - resources: - - customresourcedefinitions - verbs: - - create - - get - - patch - - update ---- -# Source: operator/templates/entry-point.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole metadata: annotations: helm.sh/hook: post-upgrade @@ -75178,30 +75239,6 @@ rules: # Source: operator/templates/entry-point.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed - helm.sh/hook-weight: "-10" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crd-job-default -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: operator-crd-job-default -subjects: -- kind: ServiceAccount - name: operator-crd-job - namespace: default ---- -# Source: operator/templates/entry-point.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding metadata: annotations: helm.sh/hook: post-upgrade @@ -75226,73 +75263,6 @@ subjects: # Source: 
operator/templates/entry-point.yaml apiVersion: batch/v1 kind: Job -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed - helm.sh/hook-weight: "-5" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crds - namespace: default -spec: - template: - metadata: - annotations: {} - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/name: operator - spec: - automountServiceAccountToken: false - containers: - - args: - - crd - - --experimental - command: - - /redpanda-operator - image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 - imagePullPolicy: IfNotPresent - name: crd-installation - resources: {} - securityContext: - allowPrivilegeEscalation: false - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount - name: kube-api-access - readOnly: true - imagePullSecrets: [] - nodeSelector: {} - restartPolicy: OnFailure - serviceAccountName: operator-crd-job - terminationGracePeriodSeconds: 10 - tolerations: [] - volumes: - - name: kube-api-access - projected: - defaultMode: 420 - sources: - - serviceAccountToken: - expirationSeconds: 3607 - path: token - - configMap: - items: - - key: ca.crt - path: ca.crt - name: kube-root-ca.crt - - downwardAPI: - items: - - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - path: namespace ---- -# Source: operator/templates/entry-point.yaml -apiVersion: batch/v1 -kind: Job metadata: annotations: helm.sh/hook: post-upgrade @@ -75355,7 +75325,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/crd-installation.yaml.golden -- +-- testdata/connect-controller-enabled.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -75564,6 +75534,95 @@ rules: - patch - 
update - watch +- apiGroups: + - "" + resources: + - configmaps + - secrets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/status + verbs: + - get + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch - apiGroups: - "" resources: @@ -76132,6 +76191,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=true - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -76204,24 +76265,6 @@ spec: apiVersion: v1 automountServiceAccountToken: false kind: ServiceAccount -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed - helm.sh/hook-weight: "-10" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crd-job - namespace: default ---- -# Source: 
operator/templates/entry-point.yaml -apiVersion: v1 -automountServiceAccountToken: false -kind: ServiceAccount metadata: annotations: helm.sh/hook: post-upgrade @@ -76239,32 +76282,6 @@ metadata: # Source: operator/templates/entry-point.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed - helm.sh/hook-weight: "-10" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crd-job-default -rules: -- apiGroups: - - apiextensions.k8s.io - resources: - - customresourcedefinitions - verbs: - - create - - get - - patch - - update ---- -# Source: operator/templates/entry-point.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole metadata: annotations: helm.sh/hook: post-upgrade @@ -76412,6 +76429,95 @@ rules: - patch - update - watch +- apiGroups: + - "" + resources: + - configmaps + - secrets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/status + verbs: + - get + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - 
watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch - apiGroups: - "" resources: @@ -76709,30 +76815,6 @@ rules: # Source: operator/templates/entry-point.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed - helm.sh/hook-weight: "-10" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crd-job-default -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: operator-crd-job-default -subjects: -- kind: ServiceAccount - name: operator-crd-job - namespace: default ---- -# Source: operator/templates/entry-point.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding metadata: annotations: helm.sh/hook: post-upgrade @@ -76757,72 +76839,6 @@ subjects: # Source: operator/templates/entry-point.yaml apiVersion: batch/v1 kind: Job -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed - helm.sh/hook-weight: "-5" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crds - namespace: default -spec: - template: - metadata: - annotations: {} - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/name: operator - spec: - automountServiceAccountToken: false - containers: - - args: - - crd - command: - - /redpanda-operator - image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 - imagePullPolicy: IfNotPresent - name: crd-installation - 
resources: {} - securityContext: - allowPrivilegeEscalation: false - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount - name: kube-api-access - readOnly: true - imagePullSecrets: [] - nodeSelector: {} - restartPolicy: OnFailure - serviceAccountName: operator-crd-job - terminationGracePeriodSeconds: 10 - tolerations: [] - volumes: - - name: kube-api-access - projected: - defaultMode: 420 - sources: - - serviceAccountToken: - expirationSeconds: 3607 - path: token - - configMap: - items: - - key: ca.crt - path: ca.crt - name: kube-root-ca.crt - - downwardAPI: - items: - - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - path: namespace ---- -# Source: operator/templates/entry-point.yaml -apiVersion: batch/v1 -kind: Job metadata: annotations: helm.sh/hook: post-upgrade @@ -76885,7 +76901,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/default-values.yaml.golden -- +-- testdata/connect-controller-with-license.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -77094,6 +77110,95 @@ rules: - patch - update - watch +- apiGroups: + - "" + resources: + - configmaps + - secrets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/status + verbs: + - get + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + verbs: + - 
create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch - apiGroups: - "" resources: @@ -77662,10 +77767,13 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=true - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 - --leader-elect + - --license-file-path=/redpanda/license/license - --log-level=info - --metrics-bind-address=:8443 - --webhook-enabled=false @@ -77701,6 +77809,9 @@ spec: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access readOnly: true + - mountPath: /redpanda/license + name: license + readOnly: true ephemeralContainers: null imagePullSecrets: [] initContainers: [] @@ -77729,6 +77840,10 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace + - name: license + secret: + defaultMode: 420 + secretName: redpanda-license --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -77898,6 +78013,95 @@ rules: - patch - update - watch +- apiGroups: + - "" + resources: + - configmaps + - secrets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/status + verbs: + - get + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + 
verbs: + - get + - list + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch - apiGroups: - "" resources: @@ -78281,7 +78485,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/disabled-service-account-automount-token-with-volume-overwrite.yaml.golden -- +-- testdata/connect-monitoring-enabled.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -78490,6 +78694,95 @@ rules: - patch - update - watch +- apiGroups: + - "" + resources: + - configmaps + - secrets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/status + verbs: + - get + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch - apiGroups: - "" resources: @@ -79058,6 +79351,10 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - 
--connect-monitoring-enabled=true + - --connect-monitoring-labels=team=platform + - --connect-monitoring-scrape-interval=30s + - --enable-connect=true - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -79095,7 +79392,8 @@ spec: allowPrivilegeEscalation: false volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount - name: kube-api-access-overwrite + name: kube-api-access + readOnly: true ephemeralContainers: null imagePullSecrets: [] initContainers: [] @@ -79124,24 +79422,6 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace - - name: kube-api-access-overwrite - projected: - defaultMode: 420 - sources: - - serviceAccountToken: - expirationSeconds: 666 - path: token - - configMap: - items: - - key: ca.crt - path: ca.crt - name: some-kube-root-ca-config-map.crt - - downwardAPI: - items: - - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - path: namespace --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -79311,6 +79591,95 @@ rules: - patch - update - watch +- apiGroups: + - "" + resources: + - configmaps + - secrets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/status + verbs: + - get + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + verbs: + - create + - delete + - get + - list + - patch + 
- update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch - apiGroups: - "" resources: @@ -79694,11 +80063,11 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/enabled-service-account-automount-token-in-only-service-account-resource.yaml.golden -- +-- testdata/crd-installation-experimental.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 -automountServiceAccountToken: true +automountServiceAccountToken: false kind: ServiceAccount metadata: annotations: null @@ -80471,6 +80840,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -80543,6 +80914,24 @@ spec: apiVersion: v1 automountServiceAccountToken: false kind: ServiceAccount +metadata: + annotations: + helm.sh/hook: pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-crd-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: false +kind: ServiceAccount metadata: annotations: helm.sh/hook: post-upgrade @@ -80560,6 +80949,32 @@ metadata: # Source: operator/templates/entry-point.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole +metadata: + annotations: + helm.sh/hook: pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + 
app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-crd-job-default +rules: +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - create + - get + - patch + - update +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole metadata: annotations: helm.sh/hook: post-upgrade @@ -81004,6 +81419,30 @@ rules: # Source: operator/templates/entry-point.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding +metadata: + annotations: + helm.sh/hook: pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-crd-job-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-crd-job-default +subjects: +- kind: ServiceAccount + name: operator-crd-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding metadata: annotations: helm.sh/hook: post-upgrade @@ -81028,6 +81467,73 @@ subjects: # Source: operator/templates/entry-point.yaml apiVersion: batch/v1 kind: Job +metadata: + annotations: + helm.sh/hook: pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-5" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-crds + namespace: default +spec: + template: + metadata: + 
annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + containers: + - args: + - crd + - --experimental + command: + - /redpanda-operator + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + name: crd-installation + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + imagePullSecrets: [] + nodeSelector: {} + restartPolicy: OnFailure + serviceAccountName: operator-crd-job + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +--- +# Source: operator/templates/entry-point.yaml +apiVersion: batch/v1 +kind: Job metadata: annotations: helm.sh/hook: post-upgrade @@ -81090,11 +81596,11 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/enabled-service-account-automount-token-in-service-account-resource.yaml.golden -- +-- testdata/crd-installation.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 -automountServiceAccountToken: true +automountServiceAccountToken: false kind: ServiceAccount metadata: annotations: null @@ -81862,11 +82368,5756 @@ spec: app.kubernetes.io/instance: operator app.kubernetes.io/name: operator spec: - automountServiceAccountToken: true + automountServiceAccountToken: false containers: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false + - 
--enable-console=true + - --enable-vectorized-controllers=false + - --health-probe-bind-address=:8081 + - --leader-elect + - --log-level=info + - --metrics-bind-address=:8443 + - --webhook-enabled=false + command: + - /manager + env: [] + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + livenessProbe: + httpGet: + path: /healthz/ + port: 8081 + initialDelaySeconds: 15 + periodSeconds: 20 + name: manager + ports: + - containerPort: 9443 + name: webhook-server + protocol: TCP + - containerPort: 8443 + name: https + protocol: TCP + readinessProbe: + httpGet: + path: /readyz + port: 8081 + initialDelaySeconds: 5 + periodSeconds: 10 + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + ephemeralContainers: null + imagePullSecrets: [] + initContainers: [] + nodeSelector: {} + securityContext: + runAsUser: 65532 + serviceAccountName: operator + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: false +kind: ServiceAccount +metadata: + annotations: + helm.sh/hook: pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-crd-job + namespace: default +--- +# Source: 
operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: false +kind: ServiceAccount +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: + helm.sh/hook: pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-crd-job-default +rules: +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - create + - get + - patch + - update +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job-default +rules: +- apiGroups: + - "" + resources: + - configmaps + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - 
apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles/status + verbs: + - get + - patch + - update +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +- apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews + verbs: + - create +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - nodes + verbs: + - get +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - events + - limitranges + - persistentvolumeclaims + - pods + - pods/log + - replicationcontrollers + - resourcequotas + - serviceaccounts + - services + verbs: + - get + - list +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + 
- watch +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - pods + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - controllerrevisions + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - batch + resources: + - jobs + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cert-manager.io + resources: + - certificates + - issuers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + - nodepools + - redpandas + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups + - redpandaroles + - schemas + - shadowlinks + - stretchclusters + - topics + - users + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups/finalizers + - nodepools/finalizers + - redpandaroles/finalizers + - redpandas/finalizers + - schemas/finalizers + - shadowlinks/finalizers + - stretchclusters/finalizers + - topics/finalizers + - users/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - groups/status + - nodepools/status + - redpandaroles/status + - redpandas/status + - schemas/status + - shadowlinks/status + - stretchclusters/status + - topics/status + - users/status + verbs: + - get + - patch + - update +- 
apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - discovery.k8s.io + resources: + - endpointslices + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - multicluster.x-k8s.io + resources: + - serviceexports + - serviceimports + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + - clusterroles + - rolebindings + - roles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: + helm.sh/hook: pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-crd-job-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-crd-job-default +subjects: +- kind: ServiceAccount + name: operator-crd-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: + helm.sh/hook: post-upgrade + 
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-migration-job-default +subjects: +- kind: ServiceAccount + name: operator-migration-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + helm.sh/hook: pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-5" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-crds + namespace: default +spec: + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + containers: + - args: + - crd + command: + - /redpanda-operator + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + name: crd-installation + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + imagePullSecrets: [] + nodeSelector: {} + restartPolicy: OnFailure + serviceAccountName: operator-crd-job + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt 
+ - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +--- +# Source: operator/templates/entry-point.yaml +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-4" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration + namespace: default +spec: + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + containers: + - args: + - migration + command: + - /redpanda-operator + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + name: migration + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + imagePullSecrets: [] + nodeSelector: {} + restartPolicy: OnFailure + serviceAccountName: operator-migration-job + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +-- testdata/default-values.yaml.golden -- +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: false +kind: ServiceAccount +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + 
app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +data: + controller_manager_config.yaml: |- + apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 + health: + healthProbeBindAddress: :8081 + kind: ControllerManagerConfig + leaderElection: + leaderElect: true + resourceName: aa9fc693.vectorized.io + metrics: + bindAddress: 127.0.0.1:8080 + webhook: + port: 9443 +kind: ConfigMap +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-config + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default-metrics-reader +rules: +- nonResourceURLs: + - /metrics + verbs: + - get +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default +rules: +- apiGroups: + - "" + resources: + - configmaps + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - 
update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles/status + verbs: + - get + - patch + - update +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +- apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews + verbs: + - create +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - nodes + verbs: + - get +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - events + - limitranges + - persistentvolumeclaims + - pods + - pods/log + - replicationcontrollers + - resourcequotas + - serviceaccounts + - services + verbs: + - get + - list +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - pods + - secrets + - 
serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - controllerrevisions + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - batch + resources: + - jobs + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cert-manager.io + resources: + - certificates + - issuers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + - nodepools + - redpandas + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups + - redpandaroles + - schemas + - shadowlinks + - stretchclusters + - topics + - users + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups/finalizers + - nodepools/finalizers + - redpandaroles/finalizers + - redpandas/finalizers + - schemas/finalizers + - shadowlinks/finalizers + - stretchclusters/finalizers + - topics/finalizers + - users/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - groups/status + - nodepools/status + - redpandaroles/status + - redpandas/status + - schemas/status + - shadowlinks/status + - stretchclusters/status + - topics/status + - users/status + verbs: + - get + - patch + - update +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get 
+ - list + - patch + - update + - watch +- apiGroups: + - discovery.k8s.io + resources: + - endpointslices + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - multicluster.x-k8s.io + resources: + - serviceexports + - serviceimports + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + - clusterroles + - rolebindings + - roles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-additional-controllers-default +rules: +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - patch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - pods + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - statefulsets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - configmaps + - nodes + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - 
persistentvolumes + verbs: + - delete + - get + - list + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - patch + - update +- apiGroups: + - "" + resources: + - configmaps + - pods + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - apps + resources: + - statefulsets/status + verbs: + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default-metrics-reader +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-default-metrics-reader +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: 
operator-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-default +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-additional-controllers-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-additional-controllers-default +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +kind: Service +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-metrics-service + namespace: default +spec: + ports: + - name: https + port: 8443 + targetPort: https + selector: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator +--- +# Source: operator/templates/entry-point.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + strategy: + type: RollingUpdate + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + 
containers: + - args: + - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator + - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false + - --enable-console=true + - --enable-vectorized-controllers=false + - --health-probe-bind-address=:8081 + - --leader-elect + - --log-level=info + - --metrics-bind-address=:8443 + - --webhook-enabled=false + command: + - /manager + env: [] + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + livenessProbe: + httpGet: + path: /healthz/ + port: 8081 + initialDelaySeconds: 15 + periodSeconds: 20 + name: manager + ports: + - containerPort: 9443 + name: webhook-server + protocol: TCP + - containerPort: 8443 + name: https + protocol: TCP + readinessProbe: + httpGet: + path: /readyz + port: 8081 + initialDelaySeconds: 5 + periodSeconds: 10 + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + ephemeralContainers: null + imagePullSecrets: [] + initContainers: [] + nodeSelector: {} + securityContext: + runAsUser: 65532 + serviceAccountName: operator + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: false +kind: ServiceAccount +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + 
app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job-default +rules: +- apiGroups: + - "" + resources: + - configmaps + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles/status + verbs: + - get + - patch + - update +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +- apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews + verbs: + 
- create +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - nodes + verbs: + - get +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - events + - limitranges + - persistentvolumeclaims + - pods + - pods/log + - replicationcontrollers + - resourcequotas + - serviceaccounts + - services + verbs: + - get + - list +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - pods + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - controllerrevisions + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - batch + resources: + - jobs + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cert-manager.io + resources: + - certificates + - issuers + verbs: + - create + - 
delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + - nodepools + - redpandas + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups + - redpandaroles + - schemas + - shadowlinks + - stretchclusters + - topics + - users + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups/finalizers + - nodepools/finalizers + - redpandaroles/finalizers + - redpandas/finalizers + - schemas/finalizers + - shadowlinks/finalizers + - stretchclusters/finalizers + - topics/finalizers + - users/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - groups/status + - nodepools/status + - redpandaroles/status + - redpandas/status + - schemas/status + - shadowlinks/status + - stretchclusters/status + - topics/status + - users/status + verbs: + - get + - patch + - update +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - discovery.k8s.io + resources: + - endpointslices + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - multicluster.x-k8s.io + resources: + - serviceexports + - serviceimports + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings 
+ - clusterroles + - rolebindings + - roles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-migration-job-default +subjects: +- kind: ServiceAccount + name: operator-migration-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-4" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration + namespace: default +spec: + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + containers: + - args: + - migration + command: + - /redpanda-operator + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + name: migration + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + imagePullSecrets: [] + nodeSelector: {} + restartPolicy: OnFailure + serviceAccountName: 
operator-migration-job + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +-- testdata/disabled-service-account-automount-token-with-volume-overwrite.yaml.golden -- +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: false +kind: ServiceAccount +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +data: + controller_manager_config.yaml: |- + apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 + health: + healthProbeBindAddress: :8081 + kind: ControllerManagerConfig + leaderElection: + leaderElect: true + resourceName: aa9fc693.vectorized.io + metrics: + bindAddress: 127.0.0.1:8080 + webhook: + port: 9443 +kind: ConfigMap +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-config + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default-metrics-reader +rules: +- nonResourceURLs: + 
- /metrics + verbs: + - get +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default +rules: +- apiGroups: + - "" + resources: + - configmaps + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles/status + verbs: + - get + - patch + - update +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +- apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews + verbs: + - create +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - 
patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - nodes + verbs: + - get +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - events + - limitranges + - persistentvolumeclaims + - pods + - pods/log + - replicationcontrollers + - resourcequotas + - serviceaccounts + - services + verbs: + - get + - list +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - pods + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - controllerrevisions + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - batch + resources: + - jobs + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cert-manager.io + resources: + - certificates + - issuers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + - nodepools + - redpandas + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups + - redpandaroles + - schemas + - 
shadowlinks + - stretchclusters + - topics + - users + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups/finalizers + - nodepools/finalizers + - redpandaroles/finalizers + - redpandas/finalizers + - schemas/finalizers + - shadowlinks/finalizers + - stretchclusters/finalizers + - topics/finalizers + - users/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - groups/status + - nodepools/status + - redpandaroles/status + - redpandas/status + - schemas/status + - shadowlinks/status + - stretchclusters/status + - topics/status + - users/status + verbs: + - get + - patch + - update +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - discovery.k8s.io + resources: + - endpointslices + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - multicluster.x-k8s.io + resources: + - serviceexports + - serviceimports + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + - clusterroles + - rolebindings + - roles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + 
app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-additional-controllers-default +rules: +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - patch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - pods + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - statefulsets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - configmaps + - nodes + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - delete + - get + - list + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - patch + - update +- apiGroups: + - "" + resources: + - configmaps + - pods + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - apps + resources: + - statefulsets/status + verbs: + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: null 
+ labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default-metrics-reader +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-default-metrics-reader +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-default +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-additional-controllers-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-additional-controllers-default +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +kind: Service +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-metrics-service + namespace: default +spec: + ports: + - name: https + port: 8443 + targetPort: https 
+ selector: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator +--- +# Source: operator/templates/entry-point.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + strategy: + type: RollingUpdate + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + containers: + - args: + - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator + - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false + - --enable-console=true + - --enable-vectorized-controllers=false + - --health-probe-bind-address=:8081 + - --leader-elect + - --log-level=info + - --metrics-bind-address=:8443 + - --webhook-enabled=false + command: + - /manager + env: [] + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + livenessProbe: + httpGet: + path: /healthz/ + port: 8081 + initialDelaySeconds: 15 + periodSeconds: 20 + name: manager + ports: + - containerPort: 9443 + name: webhook-server + protocol: TCP + - containerPort: 8443 + name: https + protocol: TCP + readinessProbe: + httpGet: + path: /readyz + port: 8081 + initialDelaySeconds: 5 + periodSeconds: 10 + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-overwrite + ephemeralContainers: null + imagePullSecrets: [] + initContainers: [] + nodeSelector: {} + securityContext: + runAsUser: 65532 + 
serviceAccountName: operator + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace + - name: kube-api-access-overwrite + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 666 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: some-kube-root-ca-config-map.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: false +kind: ServiceAccount +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job-default +rules: +- apiGroups: + - "" + resources: + - configmaps + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + 
- watch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles/status + verbs: + - get + - patch + - update +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +- apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews + verbs: + - create +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - nodes + verbs: + - get +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - events + - limitranges + - persistentvolumeclaims + - pods + - pods/log + - replicationcontrollers + - resourcequotas + - serviceaccounts + - services + verbs: + - get + - list +- apiGroups: + - 
coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - pods + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - controllerrevisions + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - batch + resources: + - jobs + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cert-manager.io + resources: + - certificates + - issuers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + - nodepools + - redpandas + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups + - redpandaroles + - schemas + - shadowlinks + - stretchclusters + - topics + - users + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups/finalizers + - nodepools/finalizers + - redpandaroles/finalizers + - redpandas/finalizers + - schemas/finalizers + - shadowlinks/finalizers + - stretchclusters/finalizers + - topics/finalizers + - users/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - groups/status + - nodepools/status + - redpandaroles/status + - redpandas/status + - schemas/status + - 
shadowlinks/status + - stretchclusters/status + - topics/status + - users/status + verbs: + - get + - patch + - update +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - discovery.k8s.io + resources: + - endpointslices + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - multicluster.x-k8s.io + resources: + - serviceexports + - serviceimports + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + - clusterroles + - rolebindings + - roles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-migration-job-default +subjects: +- kind: ServiceAccount + name: operator-migration-job + namespace: default +--- +# Source: 
operator/templates/entry-point.yaml +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-4" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration + namespace: default +spec: + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + containers: + - args: + - migration + command: + - /redpanda-operator + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + name: migration + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + imagePullSecrets: [] + nodeSelector: {} + restartPolicy: OnFailure + serviceAccountName: operator-migration-job + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +-- testdata/enabled-service-account-automount-token-in-only-service-account-resource.yaml.golden -- +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: true +kind: ServiceAccount +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: 
operator-26.2.1-beta.1 + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +data: + controller_manager_config.yaml: |- + apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 + health: + healthProbeBindAddress: :8081 + kind: ControllerManagerConfig + leaderElection: + leaderElect: true + resourceName: aa9fc693.vectorized.io + metrics: + bindAddress: 127.0.0.1:8080 + webhook: + port: 9443 +kind: ConfigMap +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-config + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default-metrics-reader +rules: +- nonResourceURLs: + - /metrics + verbs: + - get +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default +rules: +- apiGroups: + - "" + resources: + - configmaps + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: 
+ - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles/status + verbs: + - get + - patch + - update +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +- apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews + verbs: + - create +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - nodes + verbs: + - get +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - events + - limitranges + - persistentvolumeclaims + - pods + - pods/log + - replicationcontrollers + - resourcequotas + - serviceaccounts + - services + verbs: + - get + - list +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - pods + - secrets + - serviceaccounts + - services + verbs: + - create + - delete 
+ - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - controllerrevisions + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - batch + resources: + - jobs + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cert-manager.io + resources: + - certificates + - issuers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + - nodepools + - redpandas + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups + - redpandaroles + - schemas + - shadowlinks + - stretchclusters + - topics + - users + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups/finalizers + - nodepools/finalizers + - redpandaroles/finalizers + - redpandas/finalizers + - schemas/finalizers + - shadowlinks/finalizers + - stretchclusters/finalizers + - topics/finalizers + - users/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - groups/status + - nodepools/status + - redpandaroles/status + - redpandas/status + - schemas/status + - shadowlinks/status + - stretchclusters/status + - topics/status + - users/status + verbs: + - get + - patch + - update +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - 
discovery.k8s.io + resources: + - endpointslices + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - multicluster.x-k8s.io + resources: + - serviceexports + - serviceimports + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + - clusterroles + - rolebindings + - roles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-additional-controllers-default +rules: +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - patch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - pods + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - statefulsets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - configmaps + - nodes + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - delete + - get + - list + - 
patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - patch + - update +- apiGroups: + - "" + resources: + - configmaps + - pods + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - apps + resources: + - statefulsets/status + verbs: + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default-metrics-reader +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-default-metrics-reader +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default +roleRef: + apiGroup: 
rbac.authorization.k8s.io + kind: ClusterRole + name: operator-default +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-additional-controllers-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-additional-controllers-default +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +kind: Service +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-metrics-service + namespace: default +spec: + ports: + - name: https + port: 8443 + targetPort: https + selector: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator +--- +# Source: operator/templates/entry-point.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + strategy: + type: RollingUpdate + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + containers: + - args: + - 
--configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator + - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false + - --enable-console=true + - --enable-vectorized-controllers=false + - --health-probe-bind-address=:8081 + - --leader-elect + - --log-level=info + - --metrics-bind-address=:8443 + - --webhook-enabled=false + command: + - /manager + env: [] + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + livenessProbe: + httpGet: + path: /healthz/ + port: 8081 + initialDelaySeconds: 15 + periodSeconds: 20 + name: manager + ports: + - containerPort: 9443 + name: webhook-server + protocol: TCP + - containerPort: 8443 + name: https + protocol: TCP + readinessProbe: + httpGet: + path: /readyz + port: 8081 + initialDelaySeconds: 5 + periodSeconds: 10 + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + ephemeralContainers: null + imagePullSecrets: [] + initContainers: [] + nodeSelector: {} + securityContext: + runAsUser: 65532 + serviceAccountName: operator + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: false +kind: ServiceAccount +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + 
app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job-default +rules: +- apiGroups: + - "" + resources: + - configmaps + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles/status + verbs: + - get + - patch + - update +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +- apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews + verbs: + - create +- apiGroups: + - "" + 
resources: + - configmaps + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - nodes + verbs: + - get +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - events + - limitranges + - persistentvolumeclaims + - pods + - pods/log + - replicationcontrollers + - resourcequotas + - serviceaccounts + - services + verbs: + - get + - list +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - pods + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - controllerrevisions + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - batch + resources: + - jobs + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cert-manager.io + resources: + - certificates + - issuers + verbs: + - create + - delete + - get + - list + - patch + - 
update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + - nodepools + - redpandas + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups + - redpandaroles + - schemas + - shadowlinks + - stretchclusters + - topics + - users + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups/finalizers + - nodepools/finalizers + - redpandaroles/finalizers + - redpandas/finalizers + - schemas/finalizers + - shadowlinks/finalizers + - stretchclusters/finalizers + - topics/finalizers + - users/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - groups/status + - nodepools/status + - redpandaroles/status + - redpandas/status + - schemas/status + - shadowlinks/status + - stretchclusters/status + - topics/status + - users/status + verbs: + - get + - patch + - update +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - discovery.k8s.io + resources: + - endpointslices + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - multicluster.x-k8s.io + resources: + - serviceexports + - serviceimports + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + - clusterroles + - rolebindings + - 
roles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-migration-job-default +subjects: +- kind: ServiceAccount + name: operator-migration-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-4" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration + namespace: default +spec: + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + containers: + - args: + - migration + command: + - /redpanda-operator + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + name: migration + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + imagePullSecrets: [] + nodeSelector: {} + restartPolicy: OnFailure + serviceAccountName: operator-migration-job + 
terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +-- testdata/enabled-service-account-automount-token-in-service-account-resource.yaml.golden -- +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: true +kind: ServiceAccount +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +data: + controller_manager_config.yaml: |- + apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 + health: + healthProbeBindAddress: :8081 + kind: ControllerManagerConfig + leaderElection: + leaderElect: true + resourceName: aa9fc693.vectorized.io + metrics: + bindAddress: 127.0.0.1:8080 + webhook: + port: 9443 +kind: ConfigMap +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-config + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default-metrics-reader +rules: +- nonResourceURLs: + - /metrics + verbs: 
+ - get +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default +rules: +- apiGroups: + - "" + resources: + - configmaps + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles/status + verbs: + - get + - patch + - update +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +- apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews + verbs: + - create +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + 
- watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - nodes + verbs: + - get +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - events + - limitranges + - persistentvolumeclaims + - pods + - pods/log + - replicationcontrollers + - resourcequotas + - serviceaccounts + - services + verbs: + - get + - list +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - pods + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - controllerrevisions + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - batch + resources: + - jobs + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cert-manager.io + resources: + - certificates + - issuers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + - nodepools + - redpandas + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups + - redpandaroles + - schemas + - shadowlinks + - 
stretchclusters + - topics + - users + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups/finalizers + - nodepools/finalizers + - redpandaroles/finalizers + - redpandas/finalizers + - schemas/finalizers + - shadowlinks/finalizers + - stretchclusters/finalizers + - topics/finalizers + - users/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - groups/status + - nodepools/status + - redpandaroles/status + - redpandas/status + - schemas/status + - shadowlinks/status + - stretchclusters/status + - topics/status + - users/status + verbs: + - get + - patch + - update +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - discovery.k8s.io + resources: + - endpointslices + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - multicluster.x-k8s.io + resources: + - serviceexports + - serviceimports + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + - clusterroles + - rolebindings + - roles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + 
app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-additional-controllers-default +rules: +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - patch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - pods + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - statefulsets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - configmaps + - nodes + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - delete + - get + - list + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - patch + - update +- apiGroups: + - "" + resources: + - configmaps + - pods + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - apps + resources: + - statefulsets/status + verbs: + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: null 
+ labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default-metrics-reader +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-default-metrics-reader +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-default +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-additional-controllers-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-additional-controllers-default +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +kind: Service +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-metrics-service + namespace: default +spec: + ports: + - name: https + port: 8443 + targetPort: https 
+ selector: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator +--- +# Source: operator/templates/entry-point.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + strategy: + type: RollingUpdate + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: true + containers: + - args: + - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator + - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -83263,11 +89514,1419 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false + - --enable-console=true + - --enable-vectorized-controllers=false + - --health-probe-bind-address=:8081 + - --leader-elect + - --license-file-path=/redpanda/license/my-redpanda-license + - --log-level=info + - --metrics-bind-address=:8443 + - --webhook-enabled=false + command: + - /manager + env: [] + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + livenessProbe: + httpGet: + path: /healthz/ + port: 8081 + initialDelaySeconds: 15 + periodSeconds: 20 + name: manager + ports: + - containerPort: 9443 + name: webhook-server + protocol: TCP + - containerPort: 8443 + name: https + protocol: TCP + readinessProbe: + httpGet: + 
path: /readyz + port: 8081 + initialDelaySeconds: 5 + periodSeconds: 10 + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + - mountPath: /redpanda/license + name: license + readOnly: true + ephemeralContainers: null + imagePullSecrets: [] + initContainers: [] + nodeSelector: {} + securityContext: + runAsUser: 65532 + serviceAccountName: operator + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace + - name: license + secret: + defaultMode: 420 + secretName: my-secret +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: false +kind: ServiceAccount +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: 
operator-migration-job-default +rules: +- apiGroups: + - "" + resources: + - configmaps + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles/status + verbs: + - get + - patch + - update +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +- apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews + verbs: + - create +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - nodes + verbs: + - get +- apiGroups: + - "" + resources: + - configmaps + - 
endpoints + - events + - limitranges + - persistentvolumeclaims + - pods + - pods/log + - replicationcontrollers + - resourcequotas + - serviceaccounts + - services + verbs: + - get + - list +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - pods + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - controllerrevisions + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - batch + resources: + - jobs + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cert-manager.io + resources: + - certificates + - issuers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + - nodepools + - redpandas + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups + - redpandaroles + - schemas + - shadowlinks + - stretchclusters + - topics + - users + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups/finalizers + - nodepools/finalizers + - redpandaroles/finalizers + - redpandas/finalizers + - schemas/finalizers + - shadowlinks/finalizers + - stretchclusters/finalizers + - topics/finalizers + 
- users/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - groups/status + - nodepools/status + - redpandaroles/status + - redpandas/status + - schemas/status + - shadowlinks/status + - stretchclusters/status + - topics/status + - users/status + verbs: + - get + - patch + - update +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - discovery.k8s.io + resources: + - endpointslices + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - multicluster.x-k8s.io + resources: + - serviceexports + - serviceimports + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + - clusterroles + - rolebindings + - roles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration-job-default +roleRef: + apiGroup: 
rbac.authorization.k8s.io + kind: ClusterRole + name: operator-migration-job-default +subjects: +- kind: ServiceAccount + name: operator-migration-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + helm.sh/hook: post-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-4" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-migration + namespace: default +spec: + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + containers: + - args: + - migration + command: + - /redpanda-operator + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + name: migration + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + imagePullSecrets: [] + nodeSelector: {} + restartPolicy: OnFailure + serviceAccountName: operator-migration-job + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +-- testdata/license.yaml.golden -- +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +automountServiceAccountToken: false +kind: ServiceAccount +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + 
app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +data: + controller_manager_config.yaml: |- + apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 + health: + healthProbeBindAddress: :8081 + kind: ControllerManagerConfig + leaderElection: + leaderElect: true + resourceName: aa9fc693.vectorized.io + metrics: + bindAddress: 127.0.0.1:8080 + webhook: + port: 9443 +kind: ConfigMap +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-config + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default-metrics-reader +rules: +- nonResourceURLs: + - /metrics + verbs: + - get +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default +rules: +- apiGroups: + - "" + resources: + - configmaps + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - 
deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles/status + verbs: + - get + - patch + - update +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +- apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews + verbs: + - create +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - nodes + verbs: + - get +- apiGroups: + - "" + resources: + - configmaps + - endpoints + - events + - limitranges + - persistentvolumeclaims + - pods + - pods/log + - replicationcontrollers + - resourcequotas + - serviceaccounts + - services + verbs: + - get + - list +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: 
+ - "" + resources: + - configmaps + - endpoints + - pods + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - controllerrevisions + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - batch + resources: + - jobs + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cert-manager.io + resources: + - certificates + - issuers + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - consoles + - nodepools + - redpandas + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups + - redpandaroles + - schemas + - shadowlinks + - stretchclusters + - topics + - users + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - groups/finalizers + - nodepools/finalizers + - redpandaroles/finalizers + - redpandas/finalizers + - schemas/finalizers + - shadowlinks/finalizers + - stretchclusters/finalizers + - topics/finalizers + - users/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - groups/status + - nodepools/status + - redpandaroles/status + - redpandas/status + - schemas/status + - shadowlinks/status + - stretchclusters/status + - topics/status + - users/status + verbs: + - get + - patch + - update +- apiGroups: + - 
coordination.k8s.io + resources: + - leases + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - discovery.k8s.io + resources: + - endpointslices + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + - servicemonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - multicluster.x-k8s.io + resources: + - serviceexports + - serviceimports + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - networking.k8s.io + resources: + - ingresses + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + - clusterroles + - rolebindings + - roles + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-additional-controllers-default +rules: +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - patch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - watch +- apiGroups: + - "" + resources: + - pods + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - statefulsets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - configmaps + - nodes + - 
secrets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - delete + - get + - list + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - patch + - update +- apiGroups: + - "" + resources: + - configmaps + - pods + - secrets + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - apps + resources: + - statefulsets/status + verbs: + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + - pods + verbs: + - delete + - get + - list + - watch +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default-metrics-reader +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-default-metrics-reader +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + 
app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-default +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-additional-controllers-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-additional-controllers-default +subjects: +- kind: ServiceAccount + name: operator + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +kind: Service +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-metrics-service + namespace: default +spec: + ports: + - name: https + port: 8443 + targetPort: https + selector: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator +--- +# Source: operator/templates/entry-point.yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + strategy: + type: RollingUpdate + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator 
+ app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + containers: + - args: + - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator + - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 - --leader-elect - - --license-file-path=/redpanda/license/my-redpanda-license + - --license-file-path=/redpanda/license/my-secret - --log-level=info - --metrics-bind-address=:8443 - --webhook-enabled=false @@ -83890,7 +91549,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/license.yaml.golden -- +-- testdata/monitoring-enabled.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -84667,11 +92326,12 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 - --leader-elect - - --license-file-path=/redpanda/license/my-secret - --log-level=info - --metrics-bind-address=:8443 - --webhook-enabled=false @@ -84707,9 +92367,6 @@ spec: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access readOnly: true - - mountPath: /redpanda/license - name: license - readOnly: true ephemeralContainers: null imagePullSecrets: [] initContainers: [] @@ -84738,10 +92395,40 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace - - name: license - secret: - defaultMode: 420 - secretName: my-secret +--- +# Source: operator/templates/entry-point.yaml +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + annotations: null + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + 
app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-metrics-monitor + namespace: default +spec: + endpoints: + - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token + path: /metrics + port: https + scheme: HTTPS + tlsConfig: + ca: {} + cert: {} + insecureSkipVerify: true + namespaceSelector: + matchNames: + - default + selector: + matchLabels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -85294,7 +92981,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/monitoring-enabled.yaml.golden -- +-- testdata/monitoring-with-labels-and-interval.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -86071,6 +93758,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -86150,11 +93839,13 @@ metadata: app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 helm.sh/chart: operator-26.2.1-beta.1 + prometheus.io/scrape: "true" name: operator-metrics-monitor namespace: default spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token + interval: 15s path: /metrics port: https scheme: HTTPS @@ -86724,7 +94415,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/monitoring-with-labels-and-interval.yaml.golden -- +-- testdata/monitoring-with-labels.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -87501,6 +95192,8 @@ spec: - args: - 
--configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -87579,14 +95272,14 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 + env: production helm.sh/chart: operator-26.2.1-beta.1 - prometheus.io/scrape: "true" + team: platform name: operator-metrics-monitor namespace: default spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token - interval: 15s path: /metrics port: https scheme: HTTPS @@ -88156,7 +95849,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/monitoring-with-labels.yaml.golden -- +-- testdata/monitoring-with-scrape-interval.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -88933,6 +96626,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -89011,14 +96706,13 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - env: production helm.sh/chart: operator-26.2.1-beta.1 - team: platform name: operator-metrics-monitor namespace: default spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token + interval: 60s path: /metrics port: https scheme: HTTPS @@ -89588,7 +97282,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/monitoring-with-scrape-interval.yaml.golden -- +-- testdata/multicluster service loadbalancer.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: 
v1 @@ -89797,6 +97491,16 @@ rules: - patch - update - watch +- apiGroups: + - "" + resources: + - secrets + - serviceaccounts + verbs: + - create + - get + - patch + - update - apiGroups: - "" resources: @@ -90333,6 +98037,32 @@ spec: app.kubernetes.io/name: operator --- # Source: operator/templates/entry-point.yaml +apiVersion: v1 +kind: Service +metadata: + annotations: + service.beta.kubernetes.io/aws-load-balancer-type: nlb + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator + namespace: default +spec: + ports: + - name: raft + port: 9443 + protocol: TCP + targetPort: 9443 + publishNotReadyAddresses: true + selector: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + type: LoadBalancer +--- +# Source: operator/templates/entry-point.yaml apiVersion: apps/v1 kind: Deployment metadata: @@ -90363,17 +98093,24 @@ spec: automountServiceAccountToken: false containers: - args: - - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - - --configurator-tag=v26.2.1-beta.1 - - --enable-console=true - - --enable-vectorized-controllers=false + - multicluster + - --base-image=docker.redpanda.com/redpandadata/redpanda-operator + - --base-tag=v26.2.1-beta.1 + - --ca-file=/tls/ca.crt + - --certificate-file=/tls/tls.crt - --health-probe-bind-address=:8081 - - --leader-elect + - --kubeconfig-name=operator + - --kubeconfig-namespace=default + - --kubernetes-api-address=https://dns.address.for.my.kubernetes.api.server:8080 - --log-level=info - --metrics-bind-address=:8443 - - --webhook-enabled=false + - --name=blue + - --private-key-file=/tls/tls.key + - --raft-address=0.0.0.0:9443 + - --peer=blue://blue.example.com:9443 + - --peer=west://west.example.com:9443 command: - - /manager + - /redpanda-operator env: [] image: 
docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 imagePullPolicy: IfNotPresent @@ -90404,6 +98141,9 @@ spec: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access readOnly: true + - mountPath: /tls + name: operator-multicluster-certificates + readOnly: true ephemeralContainers: null imagePullSecrets: [] initContainers: [] @@ -90432,41 +98172,34 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace + - name: operator-multicluster-certificates + secret: + items: + - key: tls.crt + path: tls.crt + - key: tls.key + path: tls.key + - key: ca.crt + path: ca.crt + secretName: operator-multicluster-certificates --- # Source: operator/templates/entry-point.yaml -apiVersion: monitoring.coreos.com/v1 -kind: ServiceMonitor +apiVersion: v1 +automountServiceAccountToken: false +kind: ServiceAccount metadata: - annotations: null + annotations: + helm.sh/hook: pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" labels: app.kubernetes.io/instance: operator app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 helm.sh/chart: operator-26.2.1-beta.1 - name: operator-metrics-monitor + name: operator-crd-job namespace: default -spec: - endpoints: - - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token - interval: 60s - path: /metrics - port: https - scheme: HTTPS - tlsConfig: - ca: {} - cert: {} - insecureSkipVerify: true - namespaceSelector: - matchNames: - - default - selector: - matchLabels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -90489,6 +98222,32 @@ metadata: # Source: operator/templates/entry-point.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: 
ClusterRole +metadata: + annotations: + helm.sh/hook: pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-crd-job-default +rules: +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - create + - get + - patch + - update +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole metadata: annotations: helm.sh/hook: post-upgrade @@ -90636,6 +98395,16 @@ rules: - patch - update - watch +- apiGroups: + - "" + resources: + - secrets + - serviceaccounts + verbs: + - create + - get + - patch + - update - apiGroups: - "" resources: @@ -90933,6 +98702,30 @@ rules: # Source: operator/templates/entry-point.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding +metadata: + annotations: + helm.sh/hook: pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-10" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-crd-job-default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: operator-crd-job-default +subjects: +- kind: ServiceAccount + name: operator-crd-job + namespace: default +--- +# Source: operator/templates/entry-point.yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding metadata: annotations: helm.sh/hook: post-upgrade @@ -90957,6 +98750,73 @@ subjects: # Source: operator/templates/entry-point.yaml apiVersion: batch/v1 kind: Job +metadata: + annotations: + helm.sh/hook: 
pre-install,pre-upgrade + helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed + helm.sh/hook-weight: "-5" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator-crds + namespace: default +spec: + template: + metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/name: operator + spec: + automountServiceAccountToken: false + containers: + - args: + - crd + - --multicluster + command: + - /redpanda-operator + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + imagePullPolicy: IfNotPresent + name: crd-installation + resources: {} + securityContext: + allowPrivilegeEscalation: false + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access + readOnly: true + imagePullSecrets: [] + nodeSelector: {} + restartPolicy: OnFailure + serviceAccountName: operator-crd-job + terminationGracePeriodSeconds: 10 + tolerations: [] + volumes: + - name: kube-api-access + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + expirationSeconds: 3607 + path: token + - configMap: + items: + - key: ca.crt + path: ca.crt + name: kube-root-ca.crt + - downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + path: namespace +--- +# Source: operator/templates/entry-point.yaml +apiVersion: batch/v1 +kind: Job metadata: annotations: helm.sh/hook: post-upgrade @@ -91019,7 +98879,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/multicluster service loadbalancer.yaml.golden -- +-- testdata/multicluster service mcs.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -91777,8 +99637,7 @@ spec: apiVersion: v1 kind: Service metadata: - annotations: - service.beta.kubernetes.io/aws-load-balancer-type: 
nlb + annotations: {} labels: app.kubernetes.io/instance: operator app.kubernetes.io/managed-by: Helm @@ -91797,7 +99656,7 @@ spec: selector: app.kubernetes.io/instance: operator app.kubernetes.io/name: operator - type: LoadBalancer + type: ClusterIP --- # Source: operator/templates/entry-point.yaml apiVersion: apps/v1 @@ -91846,6 +99705,7 @@ spec: - --raft-address=0.0.0.0:9443 - --peer=blue://blue.example.com:9443 - --peer=west://west.example.com:9443 + - --peer=east://east.example.com:9443 command: - /redpanda-operator env: [] @@ -91921,6 +99781,61 @@ spec: secretName: operator-multicluster-certificates --- # Source: operator/templates/entry-point.yaml +apiVersion: multicluster.x-k8s.io/v1alpha1 +kind: ServiceExport +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: operator + namespace: default +spec: {} +--- +# Source: operator/templates/entry-point.yaml +apiVersion: multicluster.x-k8s.io/v1alpha1 +kind: ServiceImport +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: west + namespace: default +spec: + ports: + - name: raft + port: 9443 + protocol: TCP + type: ClusterSetIP +--- +# Source: operator/templates/entry-point.yaml +apiVersion: multicluster.x-k8s.io/v1alpha1 +kind: ServiceImport +metadata: + annotations: {} + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: east + namespace: default +spec: + ports: + - name: raft + port: 9443 + protocol: TCP + type: ClusterSetIP +--- +# Source: operator/templates/entry-point.yaml 
apiVersion: v1 automountServiceAccountToken: false kind: ServiceAccount @@ -92616,7 +100531,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/multicluster service mcs.yaml.golden -- +-- testdata/multicluster service mesh.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -93374,7 +101289,9 @@ spec: apiVersion: v1 kind: Service metadata: - annotations: {} + annotations: + service.cilium.io/global: "true" + service.cilium.io/shared: "false" labels: app.kubernetes.io/instance: operator app.kubernetes.io/managed-by: Helm @@ -93396,6 +101313,53 @@ spec: type: ClusterIP --- # Source: operator/templates/entry-point.yaml +apiVersion: v1 +kind: Service +metadata: + annotations: + service.cilium.io/affinity: west + service.cilium.io/global: "true" + service.cilium.io/shared: "false" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: west + namespace: default +spec: + ports: + - name: raft + port: 9443 + protocol: TCP + targetPort: 9443 + type: ClusterIP +--- +# Source: operator/templates/entry-point.yaml +apiVersion: v1 +kind: Service +metadata: + annotations: + service.cilium.io/global: "true" + service.cilium.io/shared: "false" + labels: + app.kubernetes.io/instance: operator + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: operator + app.kubernetes.io/version: v26.2.1-beta.1 + helm.sh/chart: operator-26.2.1-beta.1 + name: east + namespace: default +spec: + ports: + - name: raft + port: 9443 + protocol: TCP + targetPort: 9443 + type: ClusterIP +--- +# Source: operator/templates/entry-point.yaml apiVersion: apps/v1 kind: Deployment metadata: @@ -93518,61 +101482,6 @@ spec: secretName: operator-multicluster-certificates --- # Source: operator/templates/entry-point.yaml -apiVersion: multicluster.x-k8s.io/v1alpha1 -kind: ServiceExport 
-metadata: - annotations: {} - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator - namespace: default -spec: {} ---- -# Source: operator/templates/entry-point.yaml -apiVersion: multicluster.x-k8s.io/v1alpha1 -kind: ServiceImport -metadata: - annotations: {} - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: west - namespace: default -spec: - ports: - - name: raft - port: 9443 - protocol: TCP - type: ClusterSetIP ---- -# Source: operator/templates/entry-point.yaml -apiVersion: multicluster.x-k8s.io/v1alpha1 -kind: ServiceImport -metadata: - annotations: {} - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: east - namespace: default -spec: - ports: - - name: raft - port: 9443 - protocol: TCP - type: ClusterSetIP ---- -# Source: operator/templates/entry-point.yaml apiVersion: v1 automountServiceAccountToken: false kind: ServiceAccount @@ -94268,7 +102177,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/multicluster service mesh.yaml.golden -- +-- testdata/multicluster with license.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -95023,80 +102932,6 @@ spec: app.kubernetes.io/name: operator --- # Source: operator/templates/entry-point.yaml -apiVersion: v1 -kind: Service -metadata: - annotations: - service.cilium.io/global: "true" - service.cilium.io/shared: "false" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: 
v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator - namespace: default -spec: - ports: - - name: raft - port: 9443 - protocol: TCP - targetPort: 9443 - publishNotReadyAddresses: true - selector: - app.kubernetes.io/instance: operator - app.kubernetes.io/name: operator - type: ClusterIP ---- -# Source: operator/templates/entry-point.yaml -apiVersion: v1 -kind: Service -metadata: - annotations: - service.cilium.io/affinity: west - service.cilium.io/global: "true" - service.cilium.io/shared: "false" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: west - namespace: default -spec: - ports: - - name: raft - port: 9443 - protocol: TCP - targetPort: 9443 - type: ClusterIP ---- -# Source: operator/templates/entry-point.yaml -apiVersion: v1 -kind: Service -metadata: - annotations: - service.cilium.io/global: "true" - service.cilium.io/shared: "false" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: east - namespace: default -spec: - ports: - - name: raft - port: 9443 - protocol: TCP - targetPort: 9443 - type: ClusterIP ---- -# Source: operator/templates/entry-point.yaml apiVersion: apps/v1 kind: Deployment metadata: @@ -95136,14 +102971,14 @@ spec: - --kubeconfig-name=operator - --kubeconfig-namespace=default - --kubernetes-api-address=https://dns.address.for.my.kubernetes.api.server:8080 + - --license-file-path=/redpanda/license/my-secret - --log-level=info - --metrics-bind-address=:8443 - --name=blue - --private-key-file=/tls/tls.key - --raft-address=0.0.0.0:9443 - - --peer=blue://blue.example.com:9443 - - --peer=west://west.example.com:9443 - - --peer=east://east.example.com:9443 + - --peer=west://some.dns.label:9443 + - 
--peer=east://some.other.dns.label:9443 command: - /redpanda-operator env: [] @@ -95179,6 +103014,9 @@ spec: - mountPath: /tls name: operator-multicluster-certificates readOnly: true + - mountPath: /redpanda/license + name: license + readOnly: true ephemeralContainers: null imagePullSecrets: [] initContainers: [] @@ -95217,6 +103055,10 @@ spec: - key: ca.crt path: ca.crt secretName: operator-multicluster-certificates + - name: license + secret: + defaultMode: 420 + secretName: my-secret --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -95914,7 +103756,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/multicluster with license.yaml.golden -- +-- testdata/multicluster.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -96708,7 +104550,6 @@ spec: - --kubeconfig-name=operator - --kubeconfig-namespace=default - --kubernetes-api-address=https://dns.address.for.my.kubernetes.api.server:8080 - - --license-file-path=/redpanda/license/my-secret - --log-level=info - --metrics-bind-address=:8443 - --name=blue @@ -96751,9 +104592,6 @@ spec: - mountPath: /tls name: operator-multicluster-certificates readOnly: true - - mountPath: /redpanda/license - name: license - readOnly: true ephemeralContainers: null imagePullSecrets: [] initContainers: [] @@ -96792,10 +104630,6 @@ spec: - key: ca.crt path: ca.crt secretName: operator-multicluster-certificates - - name: license - secret: - defaultMode: 420 - secretName: my-secret --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -97493,7 +105327,7 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace --- testdata/multicluster.yaml.golden -- +-- testdata/multicluster_with_license.yaml.golden -- --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -97506,7 +105341,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version:
v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator namespace: default --- @@ -97533,7 +105368,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator-config namespace: default --- @@ -97547,7 +105382,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator-default-metrics-reader rules: - nonResourceURLs: @@ -97565,7 +105400,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator-default rules: - apiGroups: @@ -97702,16 +105537,6 @@ rules: - patch - update - watch -- apiGroups: - - "" - resources: - - secrets - - serviceaccounts - verbs: - - create - - get - - patch - - update - apiGroups: - "" resources: @@ -97770,7 +105595,6 @@ rules: - "" resources: - configmaps - - endpoints - pods - secrets - serviceaccounts @@ -97928,18 +105752,6 @@ rules: - patch - update - watch -- apiGroups: - - discovery.k8s.io - resources: - - endpointslices - verbs: - - create - - delete - - get - - list - - patch - - update - - watch - apiGroups: - monitoring.coreos.com resources: @@ -97953,19 +105765,6 @@ rules: - patch - update - watch -- apiGroups: - - multicluster.x-k8s.io - resources: - - serviceexports - - serviceimports - verbs: - - create - - delete - - get - - list - - patch - - update - - watch - apiGroups: - networking.k8s.io resources: @@ -98016,7 +105815,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: 
operator-additional-controllers-default rules: - apiGroups: @@ -98165,27 +105964,6 @@ rules: # Source: operator/templates/entry-point.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding -metadata: - annotations: null - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-default-metrics-reader -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: operator-default-metrics-reader -subjects: -- kind: ServiceAccount - name: operator - namespace: default ---- -# Source: operator/templates/entry-point.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding metadata: annotations: {} labels: @@ -98193,7 +105971,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator-default roleRef: apiGroup: rbac.authorization.k8s.io @@ -98214,7 +105992,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator-additional-controllers-default roleRef: apiGroup: rbac.authorization.k8s.io @@ -98235,7 +106013,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator-metrics-service namespace: default spec: @@ -98257,7 +106035,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator namespace: default spec: @@ -98278,26 +106056,20 @@ spec: 
automountServiceAccountToken: false containers: - args: - - multicluster - - --base-image=docker.redpanda.com/redpandadata/redpanda-operator - - --base-tag=v26.2.1-beta.1 - - --ca-file=/tls/ca.crt - - --certificate-file=/tls/tls.crt + - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator + - --configurator-tag=v26.1.1 + - --enable-connect=false + - --enable-console=true + - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 - - --kubeconfig-name=operator - - --kubeconfig-namespace=default - - --kubernetes-api-address=https://dns.address.for.my.kubernetes.api.server:8080 + - --leader-elect - --log-level=info - --metrics-bind-address=:8443 - - --name=blue - - --private-key-file=/tls/tls.key - - --raft-address=0.0.0.0:9443 - - --peer=west://some.dns.label:9443 - - --peer=east://some.other.dns.label:9443 + - --webhook-enabled=false command: - - /redpanda-operator + - /manager env: [] - image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + image: docker.redpanda.com/redpandadata/redpanda-operator:v26.1.1 imagePullPolicy: IfNotPresent livenessProbe: httpGet: @@ -98326,9 +106098,6 @@ spec: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access readOnly: true - - mountPath: /tls - name: operator-multicluster-certificates - readOnly: true ephemeralContainers: null imagePullSecrets: [] initContainers: [] @@ -98357,34 +106126,6 @@ spec: apiVersion: v1 fieldPath: metadata.namespace path: namespace - - name: operator-multicluster-certificates - secret: - items: - - key: tls.crt - path: tls.crt - - key: tls.key - path: tls.key - - key: ca.crt - path: ca.crt - secretName: operator-multicluster-certificates ---- -# Source: operator/templates/entry-point.yaml -apiVersion: v1 -automountServiceAccountToken: false -kind: ServiceAccount -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed - 
helm.sh/hook-weight: "-10" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crd-job - namespace: default --- # Source: operator/templates/entry-point.yaml apiVersion: v1 @@ -98400,39 +106141,13 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator-migration-job namespace: default --- # Source: operator/templates/entry-point.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed - helm.sh/hook-weight: "-10" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crd-job-default -rules: -- apiGroups: - - apiextensions.k8s.io - resources: - - customresourcedefinitions - verbs: - - create - - get - - patch - - update ---- -# Source: operator/templates/entry-point.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole metadata: annotations: helm.sh/hook: post-upgrade @@ -98443,7 +106158,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator-migration-job-default rules: - apiGroups: @@ -98580,16 +106295,6 @@ rules: - patch - update - watch -- apiGroups: - - "" - resources: - - secrets - - serviceaccounts - verbs: - - create - - get - - patch - - update - apiGroups: - "" resources: @@ -98648,7 +106353,6 @@ rules: - "" resources: - configmaps - - endpoints - pods - 
secrets - serviceaccounts @@ -98806,18 +106510,6 @@ rules: - patch - update - watch -- apiGroups: - - discovery.k8s.io - resources: - - endpointslices - verbs: - - create - - delete - - get - - list - - patch - - update - - watch - apiGroups: - monitoring.coreos.com resources: @@ -98831,19 +106523,6 @@ rules: - patch - update - watch -- apiGroups: - - multicluster.x-k8s.io - resources: - - serviceexports - - serviceimports - verbs: - - create - - delete - - get - - list - - patch - - update - - watch - apiGroups: - networking.k8s.io resources: @@ -98887,30 +106566,6 @@ rules: # Source: operator/templates/entry-point.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded,hook-failed - helm.sh/hook-weight: "-10" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crd-job-default -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: operator-crd-job-default -subjects: -- kind: ServiceAccount - name: operator-crd-job - namespace: default ---- -# Source: operator/templates/entry-point.yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding metadata: annotations: helm.sh/hook: post-upgrade @@ -98921,7 +106576,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator-migration-job-default roleRef: apiGroup: rbac.authorization.k8s.io @@ -98935,73 +106590,6 @@ subjects: # Source: operator/templates/entry-point.yaml apiVersion: batch/v1 kind: Job -metadata: - annotations: - helm.sh/hook: pre-install,pre-upgrade - helm.sh/hook-delete-policy: 
before-hook-creation,hook-succeeded,hook-failed - helm.sh/hook-weight: "-5" - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/managed-by: Helm - app.kubernetes.io/name: operator - app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 - name: operator-crds - namespace: default -spec: - template: - metadata: - annotations: {} - labels: - app.kubernetes.io/instance: operator - app.kubernetes.io/name: operator - spec: - automountServiceAccountToken: false - containers: - - args: - - crd - - --multicluster - command: - - /redpanda-operator - image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 - imagePullPolicy: IfNotPresent - name: crd-installation - resources: {} - securityContext: - allowPrivilegeEscalation: false - volumeMounts: - - mountPath: /var/run/secrets/kubernetes.io/serviceaccount - name: kube-api-access - readOnly: true - imagePullSecrets: [] - nodeSelector: {} - restartPolicy: OnFailure - serviceAccountName: operator-crd-job - terminationGracePeriodSeconds: 10 - tolerations: [] - volumes: - - name: kube-api-access - projected: - defaultMode: 420 - sources: - - serviceAccountToken: - expirationSeconds: 3607 - path: token - - configMap: - items: - - key: ca.crt - path: ca.crt - name: kube-root-ca.crt - - downwardAPI: - items: - - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - path: namespace ---- -# Source: operator/templates/entry-point.yaml -apiVersion: batch/v1 -kind: Job metadata: annotations: helm.sh/hook: post-upgrade @@ -99012,7 +106600,7 @@ metadata: app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: operator app.kubernetes.io/version: v26.2.1-beta.1 - helm.sh/chart: operator-26.2.1-beta.1 + helm.sh/chart: operator-26.1.1 name: operator-migration namespace: default spec: @@ -99029,7 +106617,7 @@ spec: - migration command: - /redpanda-operator - image: docker.redpanda.com/redpandadata/redpanda-operator:v26.2.1-beta.1 + image: 
docker.redpanda.com/redpandadata/redpanda-operator:v26.1.1 imagePullPolicy: IfNotPresent name: migration resources: {} @@ -99862,6 +107450,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 @@ -101368,6 +108958,8 @@ spec: - args: - --configurator-base-image=docker.redpanda.com/redpandadata/redpanda-operator - --configurator-tag=v26.2.1-beta.1 + - --connect-monitoring-enabled=false + - --enable-connect=false - --enable-console=true - --enable-vectorized-controllers=false - --health-probe-bind-address=:8081 diff --git a/operator/chart/testdata/template-cases.txtar b/operator/chart/testdata/template-cases.txtar index 08e527a9d..48348d6f1 100644 --- a/operator/chart/testdata/template-cases.txtar +++ b/operator/chart/testdata/template-cases.txtar @@ -137,6 +137,32 @@ enterprise: licenseSecretRef: name: my-secret +-- connect-controller-enabled -- +connectController: + enabled: true + +-- connect-controller-with-license -- +connectController: + enabled: true +enterprise: + licenseSecretRef: + name: redpanda-license + key: license + +-- common-annotations -- +commonAnnotations: + owner: "platform-team@example.com" + environment: "production" + +-- connect-monitoring-enabled -- +connectController: + enabled: true + monitoring: + enabled: true + scrapeInterval: "30s" + labels: + team: platform + -- multicluster service mesh -- crds: enabled: true diff --git a/operator/chart/values.go b/operator/chart/values.go index 7a258e305..8dfaebe46 100644 --- a/operator/chart/values.go +++ b/operator/chart/values.go @@ -123,6 +123,7 @@ type Values struct { PodLabels map[string]string `json:"podLabels"` AdditionalCmdFlags []string `json:"additionalCmdFlags"` CommonLabels map[string]string `json:"commonLabels"` + CommonAnnotations 
map[string]string `json:"commonAnnotations"` Monitoring MonitoringConfig `json:"monitoring"` WebhookSecretName string `json:"webhookSecretName"` PodTemplate *PodTemplateSpec `json:"podTemplate,omitempty"` @@ -130,6 +131,7 @@ type Values struct { ReadinessProbe *corev1.Probe `json:"readinessProbe,omitempty"` CRDs CRDs `json:"crds"` VectorizedControllers VectorizedControllers `json:"vectorizedControllers"` + ConnectController ConnectController `json:"connectController"` Multicluster Multicluster `json:"multicluster"` } @@ -137,6 +139,32 @@ type VectorizedControllers struct { Enabled bool `json:"enabled"` } +type ConnectMonitoringConfig struct { + Enabled bool `json:"enabled"` + ScrapeInterval string `json:"scrapeInterval,omitempty"` + Labels map[string]string `json:"labels,omitempty"` +} + +type ConnectController struct { + Enabled bool `json:"enabled"` + Monitoring ConnectMonitoringConfig `json:"monitoring"` + // Image overrides the Redpanda Connect image used for every Pipeline + // CR that doesn't pin its own .spec.image. Per-Pipeline .spec.image + // still wins; if neither is set, the operator falls back to the + // PipelineDefaultImage constant baked into the binary. + // + // Plumbed through to the operator via the --connect-default-image + // command-line flag. + Image *ConnectControllerImage `json:"image,omitempty"` +} + +// ConnectControllerImage is a chart-level default Redpanda Connect image +// applied to every Pipeline CR that doesn't pin its own .spec.image. 
+type ConnectControllerImage struct { + Repository string `json:"repository"` + Tag string `json:"tag"` +} + type CRDs struct { Enabled bool `json:"enabled"` Experimental bool `json:"experimental"` diff --git a/operator/chart/values.schema.json b/operator/chart/values.schema.json index 5fe021a81..1ae093281 100644 --- a/operator/chart/values.schema.json +++ b/operator/chart/values.schema.json @@ -611,6 +611,12 @@ "clusterDomain": { "type": "string" }, + "commonAnnotations": { + "additionalProperties": { + "type": "string" + }, + "type": "object" + }, "commonLabels": { "additionalProperties": { "type": "string" @@ -668,6 +674,45 @@ }, "type": "object" }, + "connectController": { + "additionalProperties": false, + "properties": { + "enabled": { + "type": "boolean" + }, + "image": { + "additionalProperties": false, + "properties": { + "repository": { + "type": "string" + }, + "tag": { + "type": "string" + } + }, + "type": "object" + }, + "monitoring": { + "additionalProperties": false, + "properties": { + "enabled": { + "type": "boolean" + }, + "labels": { + "additionalProperties": { + "type": "string" + }, + "type": "object" + }, + "scrapeInterval": { + "type": "string" + } + }, + "type": "object" + } + }, + "type": "object" + }, "crds": { "additionalProperties": false, "properties": { diff --git a/operator/chart/values.yaml b/operator/chart/values.yaml index 2cf301824..385298625 100644 --- a/operator/chart/values.yaml +++ b/operator/chart/values.yaml @@ -144,11 +144,37 @@ additionalCmdFlags: [] # For example, `my.k8s.service: redpanda-operator`. commonLabels: {} +# -- Additional annotations to add to all resources managed by the operator. +# Useful for satisfying OPA Gatekeeper RequiredAnnotations constraints. +# For example, `owner: "platform-team@example.com"`. +commonAnnotations: {} + # @ignored # Enables controllers for the Resources in the Vectorized group. 
 vectorizedControllers:
   enabled: false
 
+# -- Enables the Redpanda Connect controller for managing Redpanda Connect Pipeline CRs.
+# Each Pipeline CR still requires an enterprise license that includes the CONNECT product.
+connectController:
+  enabled: false
+  # -- Default Redpanda Connect image applied to every Pipeline CR that
+  # does not pin its own `.spec.image`. Per-Pipeline `.spec.image` still
+  # wins; if neither is set, the operator falls back to the
+  # PipelineDefaultImage constant baked into the binary.
+  # image:
+  #   repository: docker.redpanda.com/redpandadata/connect
+  #   tag: "4.92.0"
+  # -- Monitoring configuration for Connect pipeline pods.
+  monitoring:
+    # -- Enables PodMonitor creation for all Connect pipelines.
+    # Requires the Prometheus Operator CRDs (monitoring.coreos.com) to be installed.
+    enabled: false
+    # -- Prometheus scrape interval for pipeline PodMonitors.
+    # scrapeInterval: "30s"
+    # -- Additional labels to add to pipeline PodMonitors.
+    # labels: {}
+
 # -- Configuration for monitoring.
 monitoring:
   # -- Creates a ServiceMonitor that can be used by Prometheus-Operator or VictoriaMetrics-Operator to scrape the metrics.
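The image-precedence rule that the `connectController.image` comment describes (per-Pipeline `.spec.image` first, then the chart-level default plumbed through `--connect-default-image`, then the `PipelineDefaultImage` constant baked into the binary) can be sketched as below. This is an illustrative sketch only: the function name `resolveImage` and the placeholder default value are invented here, not taken from the operator source.

```go
package main

import "fmt"

// pipelineDefaultImage stands in for the PipelineDefaultImage constant
// compiled into the operator binary; the value here is a placeholder.
const pipelineDefaultImage = "docker.redpanda.com/redpandadata/connect:latest"

// resolveImage sketches the documented precedence: a Pipeline CR's own
// .spec.image wins, then the chart-level --connect-default-image flag,
// and finally the compiled-in default.
func resolveImage(specImage, flagImage string) string {
	if specImage != "" {
		return specImage
	}
	if flagImage != "" {
		return flagImage
	}
	return pipelineDefaultImage
}

func main() {
	// Per-Pipeline image pins win over the chart default.
	fmt.Println(resolveImage("custom/connect:1.0", "chart/connect:2.0"))
	// No per-Pipeline pin: the chart-level default applies.
	fmt.Println(resolveImage("", "chart/connect:2.0"))
	// Neither set: fall back to the compiled-in constant.
	fmt.Println(resolveImage("", ""))
}
```

Splitting the default out of the CR this way keeps Pipeline manifests portable while still letting cluster operators roll the Connect image fleet-wide from one Helm value.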
diff --git a/operator/chart/values_partial.gen.go b/operator/chart/values_partial.gen.go index f309dd1c9..fe7f0b0f6 100644 --- a/operator/chart/values_partial.gen.go +++ b/operator/chart/values_partial.gen.go @@ -41,6 +41,7 @@ type PartialValues struct { PodLabels map[string]string "json:\"podLabels,omitempty\"" AdditionalCmdFlags []string "json:\"additionalCmdFlags,omitempty\"" CommonLabels map[string]string "json:\"commonLabels,omitempty\"" + CommonAnnotations map[string]string "json:\"commonAnnotations,omitempty\"" Monitoring *PartialMonitoringConfig "json:\"monitoring,omitempty\"" WebhookSecretName *string "json:\"webhookSecretName,omitempty\"" PodTemplate *PartialPodTemplateSpec "json:\"podTemplate,omitempty\"" @@ -48,6 +49,7 @@ type PartialValues struct { ReadinessProbe *corev1.Probe "json:\"readinessProbe,omitempty\"" CRDs *PartialCRDs "json:\"crds,omitempty\"" VectorizedControllers *PartialVectorizedControllers "json:\"vectorizedControllers,omitempty\"" + ConnectController *PartialConnectController "json:\"connectController,omitempty\"" Multicluster *PartialMulticluster "json:\"multicluster,omitempty\"" } @@ -97,6 +99,12 @@ type PartialVectorizedControllers struct { Enabled *bool "json:\"enabled,omitempty\"" } +type PartialConnectController struct { + Enabled *bool "json:\"enabled,omitempty\"" + Monitoring *PartialConnectMonitoringConfig "json:\"monitoring,omitempty\"" + Image *PartialConnectControllerImage "json:\"image,omitempty\"" +} + type PartialMulticluster struct { Enabled *bool "json:\"enabled,omitempty\"" Service *PartialMulticlusterService "json:\"service,omitempty\"" @@ -131,6 +139,12 @@ type PartialLeaderElectionConfig struct { ResourceName *string "json:\"resourceName,omitempty\"" } +type PartialConnectMonitoringConfig struct { + Enabled *bool "json:\"enabled,omitempty\"" + ScrapeInterval *string "json:\"scrapeInterval,omitempty\"" + Labels map[string]string "json:\"labels,omitempty\"" +} + type PartialMulticlusterService struct { Enabled *bool 
"json:\"enabled,omitempty\"" Type *corev1.ServiceType "json:\"type,omitempty\" jsonschema:\"pattern=^(ClusterIP|LoadBalancer)$\"" @@ -144,6 +158,11 @@ type PartialMetadata struct { Annotations map[string]string "json:\"annotations,omitempty\"" } +type PartialConnectControllerImage struct { + Repository *string "json:\"repository,omitempty\"" + Tag *string "json:\"tag,omitempty\"" +} + type PartialPeer struct { Name *string "json:\"name,omitempty\" jsonschema:\"required\"" Address *string "json:\"address,omitempty\" jsonschema:\"required\"" diff --git a/operator/cmd/crd/crd.go b/operator/cmd/crd/crd.go index e1911dc34..90cbcb95f 100644 --- a/operator/cmd/crd/crd.go +++ b/operator/cmd/crd/crd.go @@ -32,6 +32,7 @@ import ( var ( stableCRDs = []*apiextensionsv1.CustomResourceDefinition{ crds.Console(), + crds.Pipeline(), crds.Redpanda(), crds.Group(), crds.Role(), diff --git a/operator/cmd/run/run.go b/operator/cmd/run/run.go index d2ba05444..7bd00d080 100644 --- a/operator/cmd/run/run.go +++ b/operator/cmd/run/run.go @@ -41,6 +41,7 @@ import ( "github.com/redpanda-data/redpanda-operator/operator/internal/controller/decommissioning" "github.com/redpanda-data/redpanda-operator/operator/internal/controller/nodewatcher" "github.com/redpanda-data/redpanda-operator/operator/internal/controller/olddecommission" + pipelinecontroller "github.com/redpanda-data/redpanda-operator/operator/internal/controller/pipeline" "github.com/redpanda-data/redpanda-operator/operator/internal/controller/pvcunbinder" redpandacontrollers "github.com/redpanda-data/redpanda-operator/operator/internal/controller/redpanda" vectorizedcontrollers "github.com/redpanda-data/redpanda-operator/operator/internal/controller/vectorized" @@ -85,6 +86,7 @@ type RunOptions struct { enableRedpandaControllers bool enableV2NodepoolController bool + enableConnectController bool enableConsoleController bool managerOptions ctrl.Options clusterDomain string @@ -116,6 +118,12 @@ type RunOptions struct { 
cloudSecretsEnabled bool cloudSecretsPrefix string cloudSecretsConfig pkgsecrets.ExpanderCloudConfiguration + licenseFilePath string + commonAnnotations map[string]string + connectMonitoringEnabled bool + connectMonitoringScrapeInterval string + connectMonitoringLabels map[string]string + connectDefaultImage string } func (o *RunOptions) BindFlags(cmd *cobra.Command) { @@ -138,7 +146,15 @@ func (o *RunOptions) BindFlags(cmd *cobra.Command) { cmd.Flags().StringVar(&o.metricsCertKey, "metrics-cert-key", "tls.key", "The name of the metrics server key file.") cmd.Flags().BoolVar(&o.webhookEnabled, "webhook-enabled", false, "Enable webhook Manager") + cmd.Flags().StringVar(&o.licenseFilePath, "license-file-path", "", "The path to the Redpanda enterprise license file") + cmd.Flags().StringToStringVar(&o.commonAnnotations, "common-annotations", nil, "Annotations to propagate to all operator-managed resources (key=value pairs)") + cmd.Flags().BoolVar(&o.connectMonitoringEnabled, "connect-monitoring-enabled", false, "Enable PodMonitor creation for Connect pipelines") + cmd.Flags().StringVar(&o.connectMonitoringScrapeInterval, "connect-monitoring-scrape-interval", "", "Prometheus scrape interval for Connect pipeline PodMonitors (e.g. 30s)") + cmd.Flags().StringToStringVar(&o.connectMonitoringLabels, "connect-monitoring-labels", nil, "Additional labels for Connect pipeline PodMonitors (key=value pairs)") + cmd.Flags().StringVar(&o.connectDefaultImage, "connect-default-image", "", "Default Redpanda Connect image (repo:tag) used when a Pipeline CR does not pin its own .spec.image. Per-Pipeline .spec.image wins; if neither is set, falls back to the operator binary's hardcoded PipelineDefaultImage.") + // Controller flags. 
+ cmd.Flags().BoolVar(&o.enableConnectController, "enable-connect", false, "Specifies whether or not to enable the Redpanda Connect controller (requires enterprise license)") cmd.Flags().BoolVar(&o.enableConsoleController, "enable-console", true, "Specifies whether or not to enabled the redpanda Console controller") cmd.Flags().BoolVar(&o.enableV2NodepoolController, "enable-v2-nodepools", false, "Specifies whether or not to enabled the v2 nodepool controller") cmd.Flags().BoolVar(&o.enableVectorizedControllers, "enable-vectorized-controllers", false, "Specifies whether or not to enabled the legacy controllers for resources in the Vectorized Group (Also known as V1 operator mode)") @@ -452,6 +468,37 @@ func Run( return err } } + + // Connect Reconciler (enterprise feature, gated by license on each CR or operator-level license). + if opts.enableConnectController { + pipelineCtl, err := kube.FromRESTConfig(mgr.GetConfig(), kube.Options{ + Options: client.Options{ + Scheme: mgr.GetScheme(), + Cache: &client.CacheOptions{ + Reader: mgr.GetCache(), + }, + }, + FieldManager: string(lifecycle.DefaultFieldOwner), + }) + if err != nil { + return err + } + + if err := (&pipelinecontroller.Controller{ + Ctl: pipelineCtl, + LicenseFilePath: opts.licenseFilePath, + CommonAnnotations: opts.commonAnnotations, + DefaultImage: opts.connectDefaultImage, + Monitoring: pipelinecontroller.MonitoringConfig{ + Enabled: opts.connectMonitoringEnabled, + ScrapeInterval: opts.connectMonitoringScrapeInterval, + Labels: opts.connectMonitoringLabels, + }, + }).SetupWithManager(ctx, mgr, opts.namespace); err != nil { + setupLog.Error(err, "unable to create controller", "controller", "Pipeline") + return err + } + } } if err := redpandacontrollers.SetupShadowLinkController(ctx, mcmanager, cloudExpander, v1Controllers, v2Controllers, opts.namespace); err != nil { diff --git a/operator/config/crd/bases/cluster.redpanda.com_pipelines.yaml 
b/operator/config/crd/bases/cluster.redpanda.com_pipelines.yaml new file mode 100644 index 000000000..e9df7265f --- /dev/null +++ b/operator/config/crd/bases/cluster.redpanda.com_pipelines.yaml @@ -0,0 +1,3018 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.20.1 + name: pipelines.cluster.redpanda.com +spec: + group: cluster.redpanda.com + names: + kind: Pipeline + listKind: PipelineList + plural: pipelines + shortNames: + - rpcn + singular: pipeline + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.conditions[?(@.type=="Ready")].status + name: Ready + type: string + - jsonPath: .status.phase + name: Phase + type: string + - jsonPath: .spec.replicas + name: Replicas + type: integer + - jsonPath: .status.readyReplicas + name: Available + type: integer + - jsonPath: .metadata.creationTimestamp + name: Age + type: date + name: v1alpha2 + schema: + openAPIV3Schema: + description: Connect defines a Redpanda Connect pipeline managed by the operator. + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: Spec defines the desired state of the Connect pipeline. 
+ properties: + annotations: + additionalProperties: + type: string + description: |- + Annotations specifies additional annotations to apply to the pipeline pod + template. These are merged with any operator-level commonAnnotations, with + per-pipeline annotations taking precedence. Useful for integrations like + Datadog autodiscovery that rely on pod annotations. + type: object + budget: + description: |- + Budget configures a PodDisruptionBudget for the pipeline Deployment, + protecting pipeline pods from voluntary disruptions such as node drains + and cluster autoscaler evictions. When not set, no PDB is created. + properties: + maxUnavailable: + default: 1 + description: |- + MaxUnavailable defines the maximum number of pipeline pods that can be + unavailable during a voluntary disruption. Defaults to 1 if not set. + minimum: 0 + type: integer + required: + - maxUnavailable + type: object + cluster: + description: |- + ClusterSource declaratively binds the pipeline's redpanda input/output + to a Redpanda cluster. Mirrors the ClusterSource pattern used by the + User/Topic CRDs: + + - clusterRef: point at an existing Redpanda CR by name. The operator + resolves the internal broker addresses + TLS material automatically; + the SASL identity is taken from .userRef. + - staticConfiguration: hard-code brokers, TLS, and SASL. The password + is a ValueSource so it can come from inline / Secret / ConfigMap / + ExternalSecret. + + When unset, the pipeline runs against whatever brokers the user wires + inline in configYaml (e.g. an external Kafka, Confluent Cloud, etc.). + properties: + clusterRef: + description: |- + ClusterRef is a reference to the cluster where the object should be created. + It is used in constructing the client created to configure a cluster. + This takes precedence over StaticConfigurationSource. + properties: + group: + description: |- + Group is used to override the object group that this reference points to. 
+ If unspecified, defaults to "cluster.redpanda.com". + type: string + kind: + description: |- + Kind is used to override the object kind that this reference points to. + If unspecified, defaults to "Redpanda". + type: string + name: + description: Name specifies the name of the cluster being + referenced. + type: string + required: + - name + type: object + staticConfiguration: + description: StaticConfiguration holds connection parameters to + Kafka and Admin APIs. + properties: + admin: + description: |- + AdminAPISpec is the configuration information for communicating with the Admin + API of a Redpanda cluster where the object should be created. + properties: + sasl: + description: Defines authentication configuration settings + for Redpanda clusters that have authentication enabled. + properties: + authToken: + description: Specifies token for token-based authentication + (only used if no username/password are provided). + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. 
+ Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + 
can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + mechanism: + description: Specifies the SASL/SCRAM authentication + mechanism. + type: string + password: + description: Specifies the password. + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. 
Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + passwordSecretRef: + description: 'Deprecated: use `password` instead' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + token: + description: 'Deprecated: use `authToken` instead' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + username: + description: Specifies the username. + type: string + required: + - mechanism + type: object + tls: + description: Defines TLS configuration settings for Redpanda + clusters that have TLS enabled. + properties: + caCert: + description: CaCert is the reference for certificate + authority used to establish TLS connection to Redpanda + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. 
+ Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + 
can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + caCertSecretRef: + description: 'Deprecated: replaced by "caCert".' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + cert: + description: Cert is the reference for client public + certificate to establish mTLS connection to Redpanda + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. 
+ properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + certSecretRef: + description: 'Deprecated: replaced by "cert".' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + enabled: + description: |- + Enabled tells any connections derived from this configuration to leverage TLS even if no + certificate configuration is specified. It *only* is relevant if no other field is specified + in the TLS configuration block, as, for backwards compatibility reasons, any CA/Cert/Key-specification + results in attempting to create a connection using TLS - specifying "false" in such a case does + *not* disable TLS from being used. Leveraging this option is to support the use-case where a + connection is served by publically issued TLS certificates that don't require any additional certificate + specification. + type: boolean + insecureSkipTlsVerify: + description: InsecureSkipTLSVerify can skip verifying + Redpanda self-signed certificate when establish + TLS connection to Redpanda + type: boolean + key: + description: Key is the reference for client private + certificate to establish mTLS connection to Redpanda + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + keySecretRef: + description: 'Deprecated: replaced by "key".' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + type: object + urls: + description: Specifies a list of broker addresses in the + format : + items: + type: string + type: array + required: + - urls + type: object + kafka: + description: |- + Kafka is the configuration information for communicating with the Kafka + API of a Redpanda cluster where the object should be created. + properties: + brokers: + description: Specifies a list of broker addresses in the + format : + items: + type: string + minItems: 1 + type: array + sasl: + description: Defines authentication configuration settings + for Redpanda clusters that have authentication enabled. + properties: + awsMskIam: + description: |- + KafkaSASLAWSMskIam is the config for AWS IAM SASL mechanism, + see: https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html + properties: + accessKey: + type: string + secretKey: + description: ValueSource represents where a value + can be pulled from + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to + select from. Must be a valid secret + key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can + be set + rule: '!has(self.inline) || (has(self.inline) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other + field can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field + can be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other + field can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + secretKeySecretRef: + description: 'Deprecated: use `secretKey` instead' + properties: + key: + description: Key in Secret data to get value + from + type: string + name: + description: |- + Name of the referent. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + sessionToken: + description: |- + SessionToken, if non-empty, is a session / security token to use for authentication. 
+ See: https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to + select from. Must be a valid secret + key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can + be set + rule: '!has(self.inline) || (has(self.inline) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other + field can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field + can be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other + field can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + sessionTokenSecretRef: + description: 'Deprecated: use `sessionToken` instead' + properties: + key: + description: Key in Secret data to get value + from + type: string + name: + description: |- + Name of the referent. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + userAgent: + description: |- + UserAgent is the user agent to for the client to use when connecting + to Kafka, overriding the default "franz-go//". 
+ + Setting a UserAgent allows authorizing based on the aws:UserAgent + condition key; see the following link for more details: + https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-useragent + type: string + required: + - accessKey + - userAgent + type: object + gssapi: + description: KafkaSASLGSSAPI represents the Kafka + Kerberos config. + properties: + authType: + type: string + enableFast: + description: |- + EnableFAST enables FAST, which is a pre-authentication framework for Kerberos. + It includes a mechanism for tunneling pre-authentication exchanges using armored KDC messages. + FAST provides increased resistance to passive password guessing attacks. + type: boolean + kerberosConfigPath: + type: string + keyTabPath: + type: string + password: + description: ValueSource represents where a value + can be pulled from + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. 
+ Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to + select from. Must be a valid secret + key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can + be set + rule: '!has(self.inline) || (has(self.inline) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other + field can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field + can be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other + field 
can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + passwordSecretRef: + description: 'Deprecated: use `password` instead' + properties: + key: + description: Key in Secret data to get value + from + type: string + name: + description: |- + Name of the referent. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + realm: + type: string + serviceName: + type: string + username: + type: string + required: + - authType + - enableFast + - kerberosConfigPath + - keyTabPath + - realm + - serviceName + - username + type: object + mechanism: + description: Specifies the SASL/SCRAM authentication + mechanism. + type: string + oauth: + description: KafkaSASLOAuthBearer is the config struct + for the SASL OAuthBearer mechanism + properties: + token: + description: ValueSource represents where a value + can be pulled from + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. 
+ Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to + select from. Must be a valid secret + key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can + be set + rule: '!has(self.inline) || (has(self.inline) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other + field can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field + can be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other + field 
can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + tokenSecretRef: + description: 'Deprecated: use `token` instead' + properties: + key: + description: Key in Secret data to get value + from + type: string + name: + description: |- + Name of the referent. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + type: object + password: + description: Specifies the password. + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. 
Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + passwordSecretRef: + description: 'Deprecated: use `password` instead' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + username: + description: Specifies the username. + type: string + required: + - mechanism + type: object + x-kubernetes-validations: + - message: username and password must be set when mechanism + is plain + rule: self.mechanism.lowerAscii() != 'plain' || (self.username + != "" && (has(self.passwordSecretRef) || has(self.password))) + - message: username and password must be set when mechanism + is sha-256 + rule: self.mechanism.lowerAscii() != 'scram-sha-256' + || (self.username != "" && (has(self.passwordSecretRef) + || has(self.password))) + - message: username and password must be set when mechanism + is sha-512 + rule: self.mechanism.lowerAscii() != 'scram-sha-512' + || (self.username != "" && (has(self.passwordSecretRef) + || has(self.password))) + - message: oauth must be set when mechanism is oauth + rule: self.mechanism.lowerAscii() != 'oauthbearer' || + has(self.oauth) + - message: gssapi must be set when mechanism is gssapi + rule: self.mechanism.lowerAscii() != 'gssapi' || has(self.gssapi) + - message: awsMskIam must be set when mechanism is aws_msk_iam + rule: self.mechanism.lowerAscii() != 'aws_msk_iam' || + has(self.awsMskIam) + tls: + description: Defines TLS configuration settings for Redpanda + clusters that have TLS enabled. + properties: + caCert: + description: CaCert is the reference for certificate + authority used to establish TLS connection to Redpanda + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. 
+ This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + caCertSecretRef: + description: 'Deprecated: replaced by "caCert".' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + cert: + description: Cert is the reference for client public + certificate to establish mTLS connection to Redpanda + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + certSecretRef: + description: 'Deprecated: replaced by "cert".' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + enabled: + description: |- + Enabled tells any connections derived from this configuration to leverage TLS even if no + certificate configuration is specified. 
It *only* is relevant if no other field is specified + in the TLS configuration block, as, for backwards compatibility reasons, any CA/Cert/Key-specification + results in attempting to create a connection using TLS - specifying "false" in such a case does + *not* disable TLS from being used. Leveraging this option is to support the use-case where a + connection is served by publically issued TLS certificates that don't require any additional certificate + specification. + type: boolean + insecureSkipTlsVerify: + description: InsecureSkipTLSVerify can skip verifying + Redpanda self-signed certificate when establish + TLS connection to Redpanda + type: boolean + key: + description: Key is the reference for client private + certificate to establish mTLS connection to Redpanda + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. 
+ type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + keySecretRef: + description: 'Deprecated: replaced by "key".' 
+ properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + type: object + required: + - brokers + type: object + schemaRegistry: + description: |- + SchemaRegistry is the configuration information for communicating with the Schema Registry + API of a Redpanda cluster where the object should be created. + properties: + sasl: + description: Defines authentication configuration settings + for Redpanda clusters that have authentication enabled. + properties: + authToken: + description: ValueSource represents where a value + can be pulled from + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. 
+ type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + mechanism: + description: Specifies the SASL/SCRAM authentication + mechanism. 
+ type: string + password: + description: Specifies the password. + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + passwordSecretRef: + description: 'Deprecated: use `password` instead' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + token: + description: 'Deprecated: use `authToken` instead' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + username: + description: Specifies the username. + type: string + required: + - mechanism + type: object + tls: + description: Defines TLS configuration settings for Redpanda + clusters that have TLS enabled. + properties: + caCert: + description: CaCert is the reference for certificate + authority used to establish TLS connection to Redpanda + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. 
+ type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + caCertSecretRef: + description: 'Deprecated: replaced by "caCert".' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + cert: + description: Cert is the reference for client public + certificate to establish mTLS connection to Redpanda + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + certSecretRef: + description: 'Deprecated: replaced by "cert".' + properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + enabled: + description: |- + Enabled tells any connections derived from this configuration to leverage TLS even if no + certificate configuration is specified. 
It is *only* relevant if no other field is specified + in the TLS configuration block, as, for backwards compatibility reasons, any CA/Cert/Key-specification + results in attempting to create a connection using TLS - specifying "false" in such a case does + *not* disable TLS from being used. This option exists to support the use case where a + connection is served by publicly issued TLS certificates that don't require any additional certificate + specification. + type: boolean + insecureSkipTlsVerify: + description: InsecureSkipTLSVerify can skip verifying + the Redpanda self-signed certificate when establishing + a TLS connection to Redpanda + type: boolean + key: + description: Key is the reference for client private + certificate to establish mTLS connection to Redpanda + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. + Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified + inline.
+ type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or + its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, + or externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef) + - message: if inline is set no other field can be + set + rule: '!has(self.inline) || (has(self.inline) && + !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field + can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) + || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can + be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) + || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field + can be set + rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + keySecretRef: + description: 'Deprecated: replaced by "key".' 
+ properties: + key: + description: Key in Secret data to get value from + type: string + name: + description: |- + Name of the referent. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + required: + - name + type: object + type: object + urls: + description: Specifies a list of broker addresses in the + format <host>:<port> + items: + type: string + type: array + required: + - urls + type: object + type: object + type: object + x-kubernetes-validations: + - message: either clusterRef or staticConfiguration must be set + rule: has(self.clusterRef) || has(self.staticConfiguration) + configFiles: + additionalProperties: + type: string + description: |- + ConfigFiles defines additional configuration files to mount alongside + the main pipeline configuration. Each entry maps a filename to its content. + Files are mounted in the /config directory alongside connect.yaml. + The key "connect.yaml" is reserved and cannot be used. + Maps to pipeline config files when migrating to Redpanda Cloud. + type: object + configYaml: + description: |- + ConfigYAML is the user-supplied Redpanda Connect pipeline YAML. + Reference cluster-bound or sensitive values from .valueSources via + ${NAME} interpolation; the operator resolves them at render time. + + When .cluster is set, the operator inline-merges connection fields + (seed_brokers, tls, sasl) into any `input.redpanda` and + `output.redpanda` blocks in this YAML, derived from the resolved + cluster connection and .userRef. Users only need to write the + per-plugin fields (topic, key, consumer_group, etc.); brokers, TLS, + and SASL are filled in by the operator. + + User-side keys win on conflict — set a key explicitly (for example, + seed_brokers pointing at a different cluster) and the operator's + generated value is skipped for that key. + + The merge targets the `redpanda` input/output plugins specifically.
+ Any `redpanda_common` blocks the user authors are passed through + unchanged — the operator does not inject connection fields into + them. + type: string + description: + description: |- + Description is an optional description of what this pipeline does. + Maps to the pipeline description when migrating to Redpanda Cloud. + type: string + displayName: + description: |- + DisplayName is a human-readable name for the pipeline. + Maps to the pipeline display name when migrating to Redpanda Cloud. + type: string + image: + description: Image is the container image for the Redpanda Connect + deployment. + type: string + nodeSelector: + additionalProperties: + type: string + description: NodeSelector constrains pipeline pods to nodes with matching + labels. + type: object + paused: + description: Paused stops the pipeline by scaling replicas to zero + when set to true. + type: boolean + replicas: + default: 1 + description: Replicas is the number of pipeline replicas to run. + format: int32 + minimum: 0 + type: integer + resources: + description: Resources defines the compute resource requirements for + the pipeline pods. + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This field depends on the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. 
+ type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + serviceAccountName: + description: |- + ServiceAccountName is the ServiceAccount to bind to the pipeline pod. + When unset, the namespace's default ServiceAccount is used. + + Setting this is the recommended way to scope per-pipeline cloud-IAM + trust (e.g. IRSA on EKS, Workload Identity on GKE, Pod Identity on + AKS). Annotating the namespace's default SA works but grants every + pipeline in the namespace the same role — naming a Pipeline-specific + SA here keeps the trust boundary per-pipeline. + + The operator does NOT create the ServiceAccount; provision it + (along with the appropriate cloud-IAM annotations) out-of-band. + type: string + tags: + additionalProperties: + type: string + description: |- + Tags are key-value pairs for organizing and filtering pipelines. 
+ Maps to pipeline tags when migrating to Redpanda Cloud. + type: object + tolerations: + description: Tolerations for the pipeline pods, allowing them to be + scheduled on tainted nodes. + items: + description: |- + The pod this Toleration is attached to tolerates any taint that matches + the triple <key,value,effect> using the matching operator <operator>. + properties: + effect: + description: |- + Effect indicates the taint effect to match. Empty means match all taint effects. + When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: |- + Key is the taint key that the toleration applies to. Empty means match all taint keys. + If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: |- + Operator represents a key's relationship to the value. + Valid operators are Exists, Equal, Lt, and Gt. Defaults to Equal. + Exists is equivalent to wildcard for value, so that a pod can + tolerate all taints of a particular category. + Lt and Gt perform numeric comparisons (requires feature gate TaintTolerationComparisonOperators). + type: string + tolerationSeconds: + description: |- + TolerationSeconds represents the period of time the toleration (which must be + of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, + it is not set, which means tolerate the taint forever (do not evict). Zero and + negative values will be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: |- + Value is the taint value the toleration matches to. + If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + type: object + type: array + topologySpreadConstraints: + description: |- + TopologySpreadConstraints controls how pipeline pods are spread across + topology domains such as availability zones.
When Zones is specified, + a default topology spread constraint is generated automatically. + Any constraints specified here are used in addition to (or instead of) + the auto-generated zone constraint. + items: + description: TopologySpreadConstraint specifies how to spread matching + pods among the given topology. + properties: + labelSelector: + description: |- + LabelSelector is used to find matching pods. + Pods that match this label selector are counted to determine the number of pods + in their corresponding topology domain. + properties: + matchExpressions: + description: matchExpressions is a list of label selector + requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the selector + applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. 
+ type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select the pods over which + spreading will be calculated. The keys are used to lookup values from the + incoming pod labels, those key-value labels are ANDed with labelSelector + to select the group of existing pods over which spreading will be calculated + for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. + MatchLabelKeys cannot be set when LabelSelector isn't set. + Keys that don't exist in the incoming pod labels will + be ignored. A null or empty list means only match against labelSelector. + + This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + description: |- + MaxSkew describes the degree to which pods may be unevenly distributed. + When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference + between the number of matching pods in the target topology and the global minimum. + The global minimum is the minimum number of matching pods in an eligible domain + or zero if the number of eligible domains is less than MinDomains. + For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same + labelSelector spread as 2/2/1: + In this case, the global minimum is 1. + | zone1 | zone2 | zone3 | + | P P | P P | P | + - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; + scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) + violate MaxSkew(1). + - if MaxSkew is 2, incoming pod can be scheduled onto any zone. + When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence + to topologies that satisfy it. + It's a required field. Default value is 1 and 0 is not allowed. 
+ format: int32 + type: integer + minDomains: + description: |- + MinDomains indicates a minimum number of eligible domains. + When the number of eligible domains with matching topology keys is less than minDomains, + Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. + And when the number of eligible domains with matching topology keys equals or greater than minDomains, + this value has no effect on scheduling. + As a result, when the number of eligible domains is less than minDomains, + scheduler won't schedule more than maxSkew Pods to those domains. + If value is nil, the constraint behaves as if MinDomains is equal to 1. + Valid values are integers greater than 0. + When value is not nil, WhenUnsatisfiable must be DoNotSchedule. + + For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same + labelSelector spread as 2/2/2: + | zone1 | zone2 | zone3 | + | P P | P P | P P | + The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. + In this situation, new pod with the same labelSelector cannot be scheduled, + because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, + it will violate MaxSkew. + format: int32 + type: integer + nodeAffinityPolicy: + description: |- + NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector + when calculating pod topology spread skew. Options are: + - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. + - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. + + If this value is nil, the behavior is equivalent to the Honor policy. + type: string + nodeTaintsPolicy: + description: |- + NodeTaintsPolicy indicates how we will treat node taints when calculating + pod topology spread skew. 
Options are: + - Honor: nodes without taints, along with tainted nodes for which the incoming pod + has a toleration, are included. + - Ignore: node taints are ignored. All nodes are included. + + If this value is nil, the behavior is equivalent to the Ignore policy. + type: string + topologyKey: + description: |- + TopologyKey is the key of node labels. Nodes that have a label with this key + and identical values are considered to be in the same topology. + We consider each as a "bucket", and try to put balanced number + of pods into each bucket. + We define a domain as a particular instance of a topology. + Also, we define an eligible domain as a domain whose nodes meet the requirements of + nodeAffinityPolicy and nodeTaintsPolicy. + e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. + And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. + It's a required field. + type: string + whenUnsatisfiable: + description: |- + WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy + the spread constraint. + - DoNotSchedule (default) tells the scheduler not to schedule it. + - ScheduleAnyway tells the scheduler to schedule the pod in any location, + but giving higher precedence to topologies that would help reduce the + skew. + A constraint is considered "Unsatisfiable" for an incoming pod + if and only if every possible node assignment for that pod would violate + "MaxSkew" on some topology. + For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same + labelSelector spread as 3/1/1: + | zone1 | zone2 | zone3 | + | P P P | P | P | + If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled + to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies + MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler + won't make it *more* imbalanced. + It's a required field. 
+ type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + userRef: + description: |- + UserRef binds the pipeline to a User CR. When set alongside + .cluster.clusterRef, the operator reads the referenced User's + password Secret + SASL mechanism and uses the User's metadata.name + as the SASL username, emitting REDPANDA_SASL_USERNAME / _PASSWORD / + _MECHANISM env vars in the pipeline pod and a `sasl:` block in the + auto-generated `redpanda` config. + + Set this when the cluster the pipeline talks to has SASL enabled. + On unauthenticated clusters (and in clusterRef-only modes that + only need broker discovery), leave it empty. + + CEL restrictions: + - userRef must NOT be set alongside .cluster.staticConfiguration — + the static path carries its own inline SASL config. + - userRef must NOT be set without .cluster.clusterRef — there's no + cluster context to authenticate against otherwise. + + The referenced User CR is expected to live in the same namespace as + the Pipeline and to declare ACLs scoped to the topics, schema + subjects, and consumer groups this pipeline reads/writes. The + operator does NOT auto-create or modify the User CR — ACL scoping + stays an explicit, auditable user-controlled action. + properties: + name: + description: Name of the User CR (in the same namespace as the + Pipeline). + type: string + required: + - name + type: object + valueSources: + description: |- + ValueSources is a list of named values the pipeline YAML can reference + via ${NAME} interpolation. Each value is fetched at render time from + inline / ConfigMap / Secret / ExternalSecret and projected into the + pipeline pod as an environment variable. One named pull per entry — + avoids the bag-of-Secrets env-splat pattern. 
+ + Example: + spec: + valueSources: + - name: S3_SECRET_KEY + source: + secretKeyRef: + name: s3-creds + key: secret_access_key + configYaml: | + output: + aws_s3: + bucket: my-bucket + credentials: + secret: ${S3_SECRET_KEY} + + See: https://docs.redpanda.com/redpanda-connect/configuration/secrets/ + items: + description: |- + NamedValueSource binds a name to a value provider so the pipeline YAML + can reference it via ${NAME} interpolation. + properties: + name: + description: |- + Name is the environment-variable name the pipeline YAML references. + Must match standard env-var characters: [A-Z_][A-Z0-9_]*. + minLength: 1 + pattern: ^[A-Z_][A-Z0-9_]*$ + type: string + source: + description: |- + Source is the value provider. Exactly one of inline / configMapKeyRef + / secretKeyRef / externalSecretRef must be set; the ValueSource + XValidation rules enforce this. + properties: + configMapKeyRef: + description: |- + If the value is supplied by a kubernetes object reference, coordinates are embedded here. + For target values, the string value fetched from the source will be treated as + a raw string. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + externalSecretRef: + description: |- + If the value is supplied by an external source, coordinates are embedded here. 
+ Note: we interpret all fetched external secrets as raw string values + properties: + name: + type: string + required: + - name + type: object + x-kubernetes-map-type: atomic + inline: + description: Inline is the raw value specified inline. + type: string + secretKeyRef: + description: |- + Should the value be contained in a k8s secret rather than configmap, we can refer + to it here. + properties: + key: + description: The key of the secret to select from. Must + be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or its key must + be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + x-kubernetes-map-type: atomic + x-kubernetes-validations: + - message: one of inline, configMapKeyRef, secretKeyRef, or + externalSecretRef must be set + rule: has(self.inline) || has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.externalSecretRef) + - message: if inline is set no other field can be set + rule: '!has(self.inline) || (has(self.inline) && !(has(self.configMapKeyRef) + || has(self.secretKeyRef) || has(self.externalSecretRef)))' + - message: if configMapKeyRef is set no other field can be set + rule: '!has(self.configMapKeyRef) || (has(self.configMapKeyRef) + && !(has(self.inline) || has(self.secretKeyRef) || has(self.externalSecretRef)))' + - message: if secretKeyRef is set no other field can be set + rule: '!has(self.secretKeyRef) || (has(self.secretKeyRef) + && !(has(self.configMapKeyRef) || has(self.inline) || has(self.externalSecretRef)))' + - message: if externalSecretRef is set no other field can be + set + 
rule: '!has(self.externalSecretRef) || (has(self.externalSecretRef) + && !(has(self.configMapKeyRef) || has(self.secretKeyRef) + || has(self.inline)))' + required: + - name + - source + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + zones: + description: |- + Zones specifies the availability zones across which pipeline pods should + be spread. When set, the controller configures: + - A node affinity to schedule pods only on nodes in these zones + - A topology spread constraint to distribute pods evenly across zones + The zone label used is "topology.kubernetes.io/zone". + items: + type: string + type: array + required: + - configYaml + type: object + x-kubernetes-validations: + - message: userRef must be empty when cluster.staticConfiguration is set + rule: '!has(self.cluster) || !has(self.cluster.staticConfiguration) + || !has(self.userRef)' + - message: userRef cannot be set without cluster.clusterRef + rule: '!has(self.userRef) || (has(self.cluster) && has(self.cluster.clusterRef))' + status: + description: Status represents the current observed state of the Connect + pipeline. + properties: + conditions: + description: Conditions holds the conditions for the Connect resource. + items: + description: Condition contains details for one aspect of the current + state of this API Resource. + properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. 
+ For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + observedGeneration: + description: ObservedGeneration is the last observed generation of + the Connect resource. + format: int64 + type: integer + phase: + description: Phase describes the current phase of the pipeline lifecycle. + enum: + - Pending + - Provisioning + - Running + - Stopped + - Unknown + type: string + readyReplicas: + description: ReadyReplicas is the number of ready pipeline pods. + format: int32 + type: integer + replicas: + description: Replicas is the number of desired replicas. 
+ format: int32 + type: integer + type: object + type: object + served: true + storage: true + subresources: + status: {} diff --git a/operator/config/crd/bases/crds.go b/operator/config/crd/bases/crds.go index 3d49d30bd..a5b081b47 100644 --- a/operator/config/crd/bases/crds.go +++ b/operator/config/crd/bases/crds.go @@ -83,6 +83,11 @@ func All() []*apiextensionsv1.CustomResourceDefinition { return ret } +// Pipeline returns the Pipeline CustomResourceDefinition. +func Pipeline() *apiextensionsv1.CustomResourceDefinition { + return mustT(ByName("pipelines.cluster.redpanda.com")) +} + // Redpanda returns the Redpanda CustomResourceDefinition. func Redpanda() *apiextensionsv1.CustomResourceDefinition { return mustT(ByName("redpandas.cluster.redpanda.com")) diff --git a/operator/config/crd/bases/crds_test.go b/operator/config/crd/bases/crds_test.go index ebaf5d9ff..895f6b745 100644 --- a/operator/config/crd/bases/crds_test.go +++ b/operator/config/crd/bases/crds_test.go @@ -20,6 +20,7 @@ import ( func TestCRDS(t *testing.T) { names := map[string]struct{}{ "clusters.redpanda.vectorized.io": {}, + "pipelines.cluster.redpanda.com": {}, "consoles.cluster.redpanda.com": {}, "consoles.redpanda.vectorized.io": {}, "groups.cluster.redpanda.com": {}, @@ -40,6 +41,7 @@ func TestCRDS(t *testing.T) { require.Equal(t, names, foundNames) + require.Equal(t, "pipelines.cluster.redpanda.com", crds.Pipeline().Name) require.Equal(t, "consoles.cluster.redpanda.com", crds.Console().Name) require.Equal(t, "groups.cluster.redpanda.com", crds.Group().Name) require.Equal(t, "nodepools.cluster.redpanda.com", crds.NodePool().Name) diff --git a/operator/config/rbac/bases/operator/role.yaml b/operator/config/rbac/bases/operator/role.yaml index da5c8445b..b63f441ab 100644 --- a/operator/config/rbac/bases/operator/role.yaml +++ b/operator/config/rbac/bases/operator/role.yaml @@ -164,6 +164,7 @@ rules: - consoles/status - groups/status - nodepools/status + - pipelines/status - redpandaroles/status - 
redpandas/status - schemas/status @@ -179,6 +180,7 @@ rules: - cluster.redpanda.com resources: - groups + - pipelines - redpandaroles - schemas - shadowlinks @@ -196,6 +198,7 @@ rules: resources: - groups/finalizers - nodepools/finalizers + - pipelines/finalizers - redpandaroles/finalizers - redpandas/finalizers - schemas/finalizers diff --git a/operator/config/rbac/itemized/pipeline.yaml b/operator/config/rbac/itemized/pipeline.yaml new file mode 100644 index 000000000..3824d5a81 --- /dev/null +++ b/operator/config/rbac/itemized/pipeline.yaml @@ -0,0 +1,95 @@ +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: pipeline +rules: +- apiGroups: + - "" + resources: + - configmaps + - secrets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/finalizers + verbs: + - update +- apiGroups: + - cluster.redpanda.com + resources: + - pipelines/status + verbs: + - get + - patch + - update +- apiGroups: + - cluster.redpanda.com + resources: + - redpandas + verbs: + - get + - list + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - podmonitors + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - update + - watch diff --git a/operator/internal/controller/pipeline/cluster.go b/operator/internal/controller/pipeline/cluster.go new file mode 100644 index 000000000..444f86c33 --- /dev/null +++ b/operator/internal/controller/pipeline/cluster.go @@ -0,0 +1,172 @@ +// 
Copyright 2026 Redpanda Data, Inc. +// +// Use of this software is governed by the Business Source License +// included in the file licenses/BSL.md +// +// As of the Change Date specified in that file, in accordance with +// the Business Source License, use of this software will be governed +// by the Apache License, Version 2.0 + +package pipeline + +import ( + "context" + "strings" + + "github.com/cockroachdb/errors" + "github.com/redpanda-data/common-go/kube" + corev1 "k8s.io/api/core/v1" + + redpandav1alpha2 "github.com/redpanda-data/redpanda-operator/operator/api/redpanda/v1alpha2" + "github.com/redpanda-data/redpanda-operator/operator/api/redpanda/v1alpha2/conversion" +) + +// clusterConnection holds the resolved connection details for a Redpanda cluster. +type clusterConnection struct { + // Brokers is the list of internal Kafka broker addresses (host:port). + Brokers []string + // TLS holds TLS configuration if the cluster has TLS enabled. + TLS *clusterTLS + // SASL holds SASL credentials if the cluster has authentication enabled. + SASL *clusterSASL +} + +// clusterTLS holds TLS configuration resolved from a Redpanda cluster. +type clusterTLS struct { + // CACertSecretRef points to the Secret and key containing the CA certificate. + CACertSecretRef *corev1.SecretKeySelector +} + +// clusterSASL holds the cluster's bootstrap user SASL credentials. +type clusterSASL struct { + Mechanism string + Username string + PasswordRef *corev1.SecretKeySelector +} + +// userCredentials holds the SASL identity a Pipeline authenticates as when +// it is bound to a Redpanda cluster via .userRef. Distinct from +// clusterSASL (the cluster's bootstrap user); this is a per-pipeline named +// SCRAM user managed by the User CRD with ACLs scoped to what the pipeline +// actually reads/writes. +type userCredentials struct { + Mechanism string + Username string + PasswordRef *corev1.SecretKeySelector +} + +// envVars returns the corev1 env-var projections for these credentials. 
+// Pipelines reference these as ${REDPANDA_SASL_USERNAME} etc. in their +// configYaml, and the operator-generated `redpanda` block in connect.yaml +// uses the same names so both paths converge on the same Secret backing. +func (uc *userCredentials) envVars() []corev1.EnvVar { + if uc == nil { + return nil + } + out := []corev1.EnvVar{ + {Name: "REDPANDA_SASL_USERNAME", Value: uc.Username}, + {Name: "REDPANDA_SASL_MECHANISM", Value: uc.Mechanism}, + } + if uc.PasswordRef != nil { + out = append(out, corev1.EnvVar{ + Name: "REDPANDA_SASL_PASSWORD", + ValueFrom: &corev1.EnvVarSource{ + SecretKeyRef: uc.PasswordRef, + }, + }) + } + return out +} + +// BrokersString returns the broker list as a comma-separated string. +func (c *clusterConnection) BrokersString() string { + return strings.Join(c.Brokers, ",") +} + +// resolveUserRef resolves the Pipeline's userRef to a SCRAM identity backed +// by the User CR's password Secret. Returns nil if no userRef is set. +// +// The referenced User CR must: +// - exist in the same namespace as the Pipeline +// - have spec.authentication populated +// - have spec.authentication.password.valueFrom.secretKeyRef set (inline +// plaintext passwords are rejected; production deployments must use a +// Secret-backed value so password rotation is auditable) +func resolveUserRef(ctx context.Context, ctl *kube.Ctl, pipeline *redpandav1alpha2.Pipeline) (*userCredentials, error) { + if pipeline.Spec.UserRef == nil { + return nil, nil + } + + ref := pipeline.Spec.UserRef + var user redpandav1alpha2.User + if err := ctl.Get(ctx, kube.ObjectKey{Name: ref.Name, Namespace: pipeline.Namespace}, &user); err != nil { + return nil, errors.Wrapf(err, "failed to resolve userRef %q", ref.Name) + } + + if user.Spec.Authentication == nil { + return nil, errors.Newf("userRef %q has no spec.authentication; the Pipeline cannot authenticate to Redpanda", ref.Name) + } + if user.Spec.Authentication.Password.ValueFrom == nil || 
user.Spec.Authentication.Password.ValueFrom.SecretKeyRef == nil { + return nil, errors.Newf("userRef %q has no spec.authentication.password.valueFrom.secretKeyRef; pipelines require a Secret-backed password for auditable rotation", ref.Name) + } + + mechanism := "SCRAM-SHA-512" + if t := user.Spec.Authentication.Type; t != nil && *t != "" { + mechanism = strings.ToUpper(string(*t)) + } + + return &userCredentials{ + Mechanism: mechanism, + Username: user.Name, + PasswordRef: user.Spec.Authentication.Password.ValueFrom.SecretKeyRef, + }, nil +} + +// resolveClusterSource resolves the Pipeline's clusterRef to connection details. +// Returns nil if no clusterRef is set. +func resolveClusterSource(ctx context.Context, ctl *kube.Ctl, pipeline *redpandav1alpha2.Pipeline) (*clusterConnection, error) { + if pipeline.Spec.ClusterSource == nil || pipeline.Spec.ClusterSource.ClusterRef == nil { + return nil, nil + } + + ref := pipeline.Spec.ClusterSource.ClusterRef + + var rp redpandav1alpha2.Redpanda + if err := ctl.Get(ctx, kube.ObjectKey{Name: ref.Name, Namespace: pipeline.Namespace}, &rp); err != nil { + return nil, errors.Wrapf(err, "failed to resolve clusterRef %q", ref.Name) + } + + // Convert the Redpanda CR to a RenderState, then extract connection details. + // This is the same pattern used by the Console controller. 
+ state, err := conversion.ConvertV2ToRenderState(nil, &conversion.V2Defaulters{ + RedpandaImage: func(ri *redpandav1alpha2.RedpandaImage) *redpandav1alpha2.RedpandaImage { return ri }, + SidecarImage: func(ri *redpandav1alpha2.RedpandaImage) *redpandav1alpha2.RedpandaImage { return ri }, + }, &rp, nil) + if err != nil { + return nil, errors.Wrap(err, "failed to convert Redpanda CR to render state") + } + + cfg := state.AsStaticConfigSource() + + conn := &clusterConnection{} + if cfg.Kafka != nil { + conn.Brokers = cfg.Kafka.Brokers + + if cfg.Kafka.TLS != nil && cfg.Kafka.TLS.CaCert != nil && cfg.Kafka.TLS.CaCert.SecretKeyRef != nil { + conn.TLS = &clusterTLS{ + CACertSecretRef: cfg.Kafka.TLS.CaCert.SecretKeyRef, + } + } + + if cfg.Kafka.SASL != nil { + conn.SASL = &clusterSASL{ + Mechanism: string(cfg.Kafka.SASL.Mechanism), + Username: cfg.Kafka.SASL.Username, + } + if cfg.Kafka.SASL.Password != nil && cfg.Kafka.SASL.Password.SecretKeyRef != nil { + conn.SASL.PasswordRef = cfg.Kafka.SASL.Password.SecretKeyRef + } + } + } + + return conn, nil +} diff --git a/operator/internal/controller/pipeline/controller.go b/operator/internal/controller/pipeline/controller.go new file mode 100644 index 000000000..25e949643 --- /dev/null +++ b/operator/internal/controller/pipeline/controller.go @@ -0,0 +1,540 @@ +// Copyright 2026 Redpanda Data, Inc. +// +// Use of this software is governed by the Business Source License +// included in the file licenses/BSL.md +// +// As of the Change Date specified in that file, in accordance with +// the Business Source License, use of this software will be governed +// by the Apache License, Version 2.0 + +// Package pipeline implements the controller for the Pipeline CRD. 
+package pipeline + +import ( + "context" + "fmt" + "os" + "time" + + "github.com/cockroachdb/errors" + monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" + "github.com/redpanda-data/common-go/kube" + "github.com/redpanda-data/common-go/license" + "github.com/redpanda-data/common-go/otelutil/log" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + redpandav1alpha2 "github.com/redpanda-data/redpanda-operator/operator/api/redpanda/v1alpha2" + "github.com/redpanda-data/redpanda-operator/operator/pkg/utils" +) + +const ( + finalizerKey = "operator.redpanda.com/finalizer" + + // clusterRefIndexField is the field index key for looking up Pipelines + // that reference a given Redpanda cluster. + clusterRefIndexField = "spec.cluster.clusterRef.name" +) + +// MonitoringConfig holds the operator-level monitoring settings for Connect pipelines. +type MonitoringConfig struct { + Enabled bool + ScrapeInterval string + Labels map[string]string +} + +// Controller reconciles Pipeline resources. +type Controller struct { + Ctl *kube.Ctl + namespace string + // LicenseFilePath is the path to the operator-level enterprise license file, + // configured via enterprise.licenseSecretRef in the operator Helm chart values. + LicenseFilePath string + // CommonAnnotations are annotations from the operator Helm chart values + // that are propagated to all resources managed by the operator. + CommonAnnotations map[string]string + // DefaultImage is the operator-level Redpanda Connect image override. 
When + // set, it's used as the fallback for Pipeline CRs that don't pin their own + // .spec.image. Per-Pipeline .spec.image still wins. When empty, falls + // through to the binary's hardcoded PipelineDefaultImage constant. + // + // Plumbed from the operator chart's connectController.image.{repository,tag} + // values via the --connect-default-image flag. + DefaultImage string + // Monitoring holds the operator-level monitoring configuration for Connect pipelines. + Monitoring MonitoringConfig +} + +// +kubebuilder:rbac:groups=cluster.redpanda.com,resources=pipelines,verbs=get;list;watch;update;patch +// +kubebuilder:rbac:groups=cluster.redpanda.com,resources=pipelines/status,verbs=get;update;patch +// +kubebuilder:rbac:groups=cluster.redpanda.com,resources=pipelines/finalizers,verbs=update +// +kubebuilder:rbac:groups=cluster.redpanda.com,resources=redpandas,verbs=get;list;watch +// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete +// +kubebuilder:rbac:groups="",resources=configmaps,verbs=get;list;watch;create;update;patch;delete +// +kubebuilder:rbac:groups="",resources=pods,verbs=get;list;watch +// +kubebuilder:rbac:groups="",resources=secrets,verbs=get;list;watch;create;update;patch;delete +// +kubebuilder:rbac:groups=policy,resources=poddisruptionbudgets,verbs=get;list;watch;create;update;patch;delete +// +kubebuilder:rbac:groups=monitoring.coreos.com,resources=podmonitors,verbs=get;list;watch;create;update;patch;delete + +func (c *Controller) SetupWithManager(ctx context.Context, mgr ctrl.Manager, namespace string) error { + c.namespace = namespace + + // Index Pipelines by their clusterRef name so we can efficiently look up + // which Pipelines reference a given Redpanda cluster. 
+ if err := mgr.GetFieldIndexer().IndexField(ctx, &redpandav1alpha2.Pipeline{}, clusterRefIndexField, func(o client.Object) []string { + pipeline := o.(*redpandav1alpha2.Pipeline) + if pipeline.Spec.ClusterSource != nil && pipeline.Spec.ClusterSource.ClusterRef != nil { + return []string{pipeline.Spec.ClusterSource.ClusterRef.Name} + } + return nil + }); err != nil { + return err + } + + builder := ctrl.NewControllerManagedBy(mgr). + For(&redpandav1alpha2.Pipeline{}) + + for _, t := range Types() { + // Skip PodMonitor watch if the CRD is not installed. If it gets + // installed during operator runtime, the operator must be restarted. + if _, ok := t.(*monitoringv1.PodMonitor); ok { + if c.skipPodMonitorWatchIfNotInstalled(ctx) { + continue + } + } + builder = builder.Owns(t) + } + + // Watch Redpanda clusters and re-reconcile any Pipelines that reference + // them. This ensures Pipelines pick up broker/TLS changes promptly. + builder = builder.Watches(&redpandav1alpha2.Redpanda{}, handler.EnqueueRequestsFromMapFunc( + func(ctx context.Context, o client.Object) []reconcile.Request { + var pipelineList redpandav1alpha2.PipelineList + if err := mgr.GetClient().List(ctx, &pipelineList, + client.InNamespace(o.GetNamespace()), + client.MatchingFields{clusterRefIndexField: o.GetName()}, + ); err != nil { + return nil + } + var requests []reconcile.Request + for i := range pipelineList.Items { + requests = append(requests, reconcile.Request{ + NamespacedName: client.ObjectKeyFromObject(&pipelineList.Items[i]), + }) + } + return requests + }, + )) + + return builder.Complete(c) +} + +func (c *Controller) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { + if c.namespace != "" && req.Namespace != c.namespace { + return ctrl.Result{}, nil + } + + pipeline, err := kube.Get[redpandav1alpha2.Pipeline](ctx, c.Ctl, req.NamespacedName) + if err != nil { + if apierrors.IsNotFound(err) { + return ctrl.Result{}, nil + } + return ctrl.Result{}, err + } + + // 
Handle deletion: clean up owned resources and remove finalizer. + if !pipeline.DeletionTimestamp.IsZero() { + if controllerutil.RemoveFinalizer(pipeline, finalizerKey) { + syncer, err := c.syncerFor(pipeline, nil, nil, nil) + if err != nil { + return ctrl.Result{}, err + } + if _, err := syncer.DeleteAll(ctx); err != nil { + return ctrl.Result{}, err + } + // NB: Apply can't be used to remove finalizers. + if err := c.Ctl.Update(ctx, pipeline); err != nil { + // The object may have been fully deleted between Get and + // Update. This is benign — the finalizer is already gone. + if apierrors.IsNotFound(err) || apierrors.IsConflict(err) { + return ctrl.Result{}, nil + } + return ctrl.Result{}, err + } + } + return ctrl.Result{}, nil + } + + // Add finalizer if missing. Use Update (not Apply/SSA) to avoid taking + // ownership of spec fields, which would conflict with user-side SSA. + if controllerutil.AddFinalizer(pipeline, finalizerKey) { + if err := c.Ctl.Update(ctx, pipeline); err != nil { + return ctrl.Result{}, err + } + } + + // Resolve cluster connection details if clusterRef is set. + clusterConn, err := resolveClusterSource(ctx, c.Ctl, pipeline) + if err != nil { + log.Error(ctx, err, "failed to resolve clusterRef") + if statusErr := c.applyStatus(ctx, pipeline, redpandav1alpha2.PipelinePhasePending, []metav1.Condition{ + { + Type: redpandav1alpha2.PipelineConditionReady, + Status: metav1.ConditionFalse, + Reason: redpandav1alpha2.PipelineReasonClusterRefInvalid, + Message: err.Error(), + }, + { + Type: redpandav1alpha2.PipelineConditionClusterRef, + Status: metav1.ConditionFalse, + Reason: redpandav1alpha2.PipelineReasonClusterRefInvalid, + Message: err.Error(), + }, + }); statusErr != nil { + return ctrl.Result{}, statusErr + } + if err := c.deleteManagedResources(ctx, pipeline); err != nil { + return ctrl.Result{}, err + } + return ctrl.Result{RequeueAfter: 30 * time.Second}, nil + } + + // Resolve the named SCRAM user when .userRef is set. 
The pipeline + // authenticates as this identity instead of using the cluster's + // bootstrap superuser. + userCreds, err := resolveUserRef(ctx, c.Ctl, pipeline) + if err != nil { + log.Error(ctx, err, "failed to resolve userRef") + if statusErr := c.applyStatus(ctx, pipeline, redpandav1alpha2.PipelinePhasePending, []metav1.Condition{ + { + Type: redpandav1alpha2.PipelineConditionReady, + Status: metav1.ConditionFalse, + Reason: redpandav1alpha2.PipelineReasonUserInvalid, + Message: err.Error(), + }, + { + Type: redpandav1alpha2.PipelineConditionUserRef, + Status: metav1.ConditionFalse, + Reason: redpandav1alpha2.PipelineReasonUserInvalid, + Message: err.Error(), + }, + }); statusErr != nil { + return ctrl.Result{}, statusErr + } + if err := c.deleteManagedResources(ctx, pipeline); err != nil { + return ctrl.Result{}, err + } + return ctrl.Result{RequeueAfter: 30 * time.Second}, nil + } + + // Validate license before proceeding. + if err := c.validateLicense(); err != nil { + log.Error(ctx, err, "license validation failed") + if statusErr := c.applyStatus(ctx, pipeline, redpandav1alpha2.PipelinePhasePending, []metav1.Condition{{ + Type: redpandav1alpha2.PipelineConditionReady, + Status: metav1.ConditionFalse, + Reason: redpandav1alpha2.PipelineReasonLicenseInvalid, + Message: err.Error(), + }}); statusErr != nil { + return ctrl.Result{}, statusErr + } + if err := c.deleteManagedResources(ctx, pipeline); err != nil { + return ctrl.Result{}, err + } + // Requeue to retry license check periodically. + return ctrl.Result{RequeueAfter: 1 * time.Minute}, nil + } + + // Read the raw license bytes so the renderer can mirror them into a + // Pipeline-owned Secret. The Connect runtime has its own enterprise-license + // gate (independent of the operator's gate above), and reading the + // REDPANDA_LICENSE env var from this Secret unblocks enterprise inputs + // like mysql_cdc without forcing the user to wire the license up twice. 
+ licenseContent, err := os.ReadFile(c.LicenseFilePath) + if err != nil { + return ctrl.Result{}, errors.Wrap(err, "reading license file for pipeline pod") + } + + // Sync all child resources (ConfigMap, Deployment, license Secret) via SSA. + syncer, err := c.syncerFor(pipeline, clusterConn, userCreds, licenseContent) + if err != nil { + return ctrl.Result{}, err + } + + objs, err := syncer.Sync(ctx) + if err != nil { + log.Error(ctx, err, "failed to sync resources") + if statusErr := c.applyStatus(ctx, pipeline, redpandav1alpha2.PipelinePhaseUnknown, []metav1.Condition{{ + Type: redpandav1alpha2.PipelineConditionReady, + Status: metav1.ConditionFalse, + Reason: redpandav1alpha2.PipelineReasonFailed, + Message: FormatConditionMessage("Resource", err.Error()), + }}); statusErr != nil { + return ctrl.Result{}, statusErr + } + return ctrl.Result{}, err + } + + // Derive status from the synced Deployment. + phase, conditions := c.deriveStatus(ctx, pipeline, objs) + + // Add ClusterRef condition when a clusterRef is configured. + if clusterConn != nil { + conditions = append(conditions, metav1.Condition{ + Type: redpandav1alpha2.PipelineConditionClusterRef, + Status: metav1.ConditionTrue, + Reason: redpandav1alpha2.PipelineReasonClusterRefResolved, + Message: "Cluster connection resolved successfully", + }) + } + + // Add UserRef condition when a userRef is configured. + if userCreds != nil { + conditions = append(conditions, metav1.Condition{ + Type: redpandav1alpha2.PipelineConditionUserRef, + Status: metav1.ConditionTrue, + Reason: redpandav1alpha2.PipelineReasonUserResolved, + Message: fmt.Sprintf("Resolved SCRAM user %q with mechanism %s", userCreds.Username, userCreds.Mechanism), + }) + } + + if err := c.applyStatus(ctx, pipeline, phase, conditions); err != nil { + return ctrl.Result{}, err + } + + // Use a shorter requeue when the pipeline is not yet running so we can + // detect init-container lint failures quickly. 
Once running, use a
+	// longer interval for periodic drift detection.
+	requeueAfter := 5 * time.Minute
+	if phase != redpandav1alpha2.PipelinePhaseRunning && phase != redpandav1alpha2.PipelinePhaseStopped {
+		requeueAfter = 15 * time.Second
+	}
+	return ctrl.Result{RequeueAfter: requeueAfter}, nil
+}
+
+// syncerFor builds the kube.Syncer that renders and applies all child
+// resources (ConfigMap, Deployment, optional license Secret) for the given
+// Pipeline, owning them via a controller reference and ownership labels.
+func (c *Controller) syncerFor(pipeline *redpandav1alpha2.Pipeline, clusterConn *clusterConnection, userCreds *userCredentials, licenseContent []byte) (*kube.Syncer, error) {
+	gvk, err := kube.GVKFor(c.Ctl.Scheme(), pipeline)
+	if err != nil {
+		return nil, err
+	}
+
+	labels := Labels(pipeline)
+
+	return &kube.Syncer{
+		Ctl:       c.Ctl,
+		Namespace: pipeline.Namespace,
+		Renderer: &render{
+			pipeline:          pipeline,
+			labels:            labels,
+			commonAnnotations: c.CommonAnnotations,
+			defaultImage:      c.DefaultImage,
+			monitoring:        c.Monitoring,
+			clusterConn:       clusterConn,
+			userCredentials:   userCreds,
+			licenseContent:    licenseContent,
+		},
+		Owner:           *metav1.NewControllerRef(pipeline, gvk),
+		OwnershipLabels: labels,
+	}, nil
+}
+
+// deleteManagedResources deletes every child resource previously synced for
+// the Pipeline. It is used to tear children down when the license or a
+// referenced cluster/user becomes invalid.
+func (c *Controller) deleteManagedResources(ctx context.Context, pipeline *redpandav1alpha2.Pipeline) error {
+	syncer, err := c.syncerFor(pipeline, nil, nil, nil)
+	if err != nil {
+		return err
+	}
+
+	_, err = syncer.DeleteAll(ctx)
+	return err
+}
+
+// validateLicense verifies that an enterprise license is configured,
+// readable, unexpired, and covers Redpanda Connect.
+func (c *Controller) validateLicense() error {
+	if c.LicenseFilePath == "" {
+		return errors.New("no license configured: set enterprise.licenseSecretRef in the operator Helm chart values")
+	}
+
+	l, err := license.ReadLicense(c.LicenseFilePath)
+	if err != nil {
+		return errors.Wrap(err, "failed to read license")
+	}
+
+	if err := license.CheckExpiration(l.Expires()); err != nil {
+		return errors.Wrap(err, "license expired")
+	}
+
+	if !l.AllowsEnterpriseFeatures() {
+		return errors.New("license does not allow enterprise features")
+	}
+
+	if !l.IncludesProduct(license.ProductConnect) {
+		return errors.New("license does not include Redpanda Connect")
+	}
+
+	return nil
+}
+
+// deriveStatus examines the
synced objects to determine the Pipeline phase and +// conditions. It returns both a Ready condition and, when applicable, a +// ConfigValid condition based on the lint init container status. +func (c *Controller) deriveStatus(ctx context.Context, pipeline *redpandav1alpha2.Pipeline, objs []kube.Object) (redpandav1alpha2.PipelinePhase, []metav1.Condition) { + for _, obj := range objs { + dp, ok := obj.(*appsv1.Deployment) + if !ok { + continue + } + + pipeline.Status.Replicas = dp.Status.Replicas + pipeline.Status.ReadyReplicas = dp.Status.ReadyReplicas + + switch { + case pipeline.Spec.Paused: + return redpandav1alpha2.PipelinePhaseStopped, []metav1.Condition{{ + Type: redpandav1alpha2.PipelineConditionReady, + Status: metav1.ConditionTrue, + Reason: redpandav1alpha2.PipelineReasonPaused, + Message: "Pipeline is paused", + }} + case dp.Status.ReadyReplicas == dp.Status.Replicas && dp.Status.Replicas > 0: + return redpandav1alpha2.PipelinePhaseRunning, []metav1.Condition{ + { + Type: redpandav1alpha2.PipelineConditionReady, + Status: metav1.ConditionTrue, + Reason: redpandav1alpha2.PipelineReasonRunning, + Message: "Pipeline is running", + }, + { + Type: redpandav1alpha2.PipelineConditionConfigValid, + Status: metav1.ConditionTrue, + Reason: redpandav1alpha2.PipelineReasonConfigValid, + Message: "Configuration passed lint validation", + }, + } + default: + conditions := []metav1.Condition{{ + Type: redpandav1alpha2.PipelineConditionReady, + Status: metav1.ConditionFalse, + Reason: redpandav1alpha2.PipelineReasonProvisioning, + Message: "Pipeline is starting up", + }} + + healthy, msg := c.checkInitContainerStatus(ctx, dp) + if !healthy { + conditions = append(conditions, metav1.Condition{ + Type: redpandav1alpha2.PipelineConditionConfigValid, + Status: metav1.ConditionFalse, + Reason: redpandav1alpha2.PipelineReasonConfigInvalid, + Message: msg, + }) + } else { + conditions = append(conditions, metav1.Condition{ + Type: 
redpandav1alpha2.PipelineConditionConfigValid,
+					Status:  metav1.ConditionTrue,
+					Reason:  redpandav1alpha2.PipelineReasonConfigValid,
+					Message: "Configuration passed lint validation",
+				})
+			}
+			return redpandav1alpha2.PipelinePhaseProvisioning, conditions
+		}
+	}
+
+	// Deployment not yet observed in sync results.
+	pipeline.Status.Replicas = 0
+	pipeline.Status.ReadyReplicas = 0
+	return redpandav1alpha2.PipelinePhasePending, []metav1.Condition{{
+		Type:    redpandav1alpha2.PipelineConditionReady,
+		Status:  metav1.ConditionFalse,
+		Reason:  redpandav1alpha2.PipelineReasonProvisioning,
+		Message: "Deployment not yet created",
+	}}
+}
+
+// checkInitContainerStatus lists pods for the given Deployment and checks
+// whether the lint init container has failed. Returns (healthy, message).
+func (c *Controller) checkInitContainerStatus(ctx context.Context, dp *appsv1.Deployment) (bool, string) {
+	var podList corev1.PodList
+	if err := c.Ctl.List(ctx, dp.Namespace, &podList, client.MatchingLabels(dp.Spec.Selector.MatchLabels)); err != nil {
+		// If we can't list pods (e.g. a transient API error), don't block the
+		// reconcile or report a lint failure on missing data; assume healthy
+		// and let the next reconcile re-check.
+ return true, "" + } + + for i := range podList.Items { + for _, initStatus := range podList.Items[i].Status.InitContainerStatuses { + if initStatus.Name != "lint" { + continue + } + if initStatus.State.Terminated != nil && initStatus.State.Terminated.ExitCode != 0 { + msg := initStatus.State.Terminated.Message + if msg == "" { + msg = fmt.Sprintf("lint exited with code %d", initStatus.State.Terminated.ExitCode) + } + return false, msg + } + if initStatus.State.Waiting != nil && initStatus.State.Waiting.Reason == "CrashLoopBackOff" { + msg := initStatus.State.Waiting.Message + if msg == "" { + msg = "lint init container is in CrashLoopBackOff — check pipeline configuration" + } + return false, msg + } + // Also check LastTerminationState: between restarts the current + // State may be Running or Waiting (not yet CrashLoopBackOff), + // but LastTerminationState records the previous failure. + if initStatus.LastTerminationState.Terminated != nil && initStatus.LastTerminationState.Terminated.ExitCode != 0 { + msg := initStatus.LastTerminationState.Terminated.Message + if msg == "" { + msg = fmt.Sprintf("lint exited with code %d", initStatus.LastTerminationState.Terminated.ExitCode) + } + return false, msg + } + } + } + return true, "" +} + +// applyStatus uses server-side apply to update the Pipeline status sub-resource. +func (c *Controller) applyStatus(ctx context.Context, pipeline *redpandav1alpha2.Pipeline, phase redpandav1alpha2.PipelinePhase, conditions []metav1.Condition) error { + pipeline.Status.ObservedGeneration = pipeline.Generation + pipeline.Status.Phase = phase + pipeline.Status.Conditions = conditionsForApply(pipeline, conditions) + + return c.Ctl.ApplyStatus(ctx, pipeline, client.ForceOwnership) +} + +// conditionsForApply merges the desired conditions into the existing conditions +// using the SSA-compatible helper from utils. 
+func conditionsForApply(pipeline *redpandav1alpha2.Pipeline, conditions []metav1.Condition) []metav1.Condition {
+	configs := utils.StatusConditionConfigs(pipeline.Status.Conditions, pipeline.Generation, conditions)
+
+	out := make([]metav1.Condition, 0, len(configs))
+	for _, cfg := range configs {
+		out = append(out, metav1.Condition{
+			Type:               *cfg.Type,
+			Status:             *cfg.Status,
+			Reason:             *cfg.Reason,
+			Message:            *cfg.Message,
+			ObservedGeneration: *cfg.ObservedGeneration,
+			LastTransitionTime: *cfg.LastTransitionTime,
+		})
+	}
+	return out
+}
+
+// skipPodMonitorWatchIfNotInstalled reports whether the PodMonitor watch
+// should be skipped because the prometheus-operator CRDs are not installed.
+// It probes by listing PodMonitors in a single namespace: a NoMatch error
+// means the CRD is absent. Other list errors are logged and conservatively
+// also skip the watch rather than fail controller startup.
+func (c *Controller) skipPodMonitorWatchIfNotInstalled(ctx context.Context) (skip bool) {
+	var podMonitorList monitoringv1.PodMonitorList
+	err := c.Ctl.List(ctx, "default", &podMonitorList)
+	if meta.IsNoMatchError(err) {
+		return true
+	} else if err != nil {
+		log.Error(ctx, err, "could not list PodMonitors")
+		return true
+	}
+	return false
+}
diff --git a/operator/internal/controller/pipeline/controller_test.go b/operator/internal/controller/pipeline/controller_test.go
new file mode 100644
index 000000000..6b1d16e3c
--- /dev/null
+++ b/operator/internal/controller/pipeline/controller_test.go
@@ -0,0 +1,1455 @@
+// Copyright 2026 Redpanda Data, Inc.
+// +// Use of this software is governed by the Business Source License +// included in the file licenses/BSL.md +// +// As of the Change Date specified in that file, in accordance with +// the Business Source License, use of this software will be governed +// by the Apache License, Version 2.0 + +package pipeline + +import ( + "context" + "fmt" + "os" + "path/filepath" + "slices" + "strings" + "testing" + "time" + + monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1" + "github.com/redpanda-data/common-go/kube" + "github.com/redpanda-data/common-go/kube/kubetest" + "github.com/redpanda-data/common-go/license" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + policyv1 "k8s.io/api/policy/v1" + apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/utils/ptr" + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/yaml" + + redpandav1alpha2 "github.com/redpanda-data/redpanda-operator/operator/api/redpanda/v1alpha2" + crds "github.com/redpanda-data/redpanda-operator/operator/config/crd/bases" + "github.com/redpanda-data/redpanda-operator/operator/internal/controller" + "github.com/redpanda-data/redpanda-operator/pkg/testutil" +) + +func setupTestEnv(t *testing.T) *kube.Ctl { + t.Helper() + + ctl := kubetest.NewEnv(t, kube.Options{ + Options: client.Options{ + Scheme: controller.UnifiedScheme, + }, + }) + + require.NoError(t, kube.ApplyAllAndWait(t.Context(), ctl, func(crd *apiextensionsv1.CustomResourceDefinition, err error) (bool, error) { + if err != nil { + return false, err + } + for _, cond := range crd.Status.Conditions { + if cond.Type == apiextensionsv1.Established { + return cond.Status == apiextensionsv1.ConditionTrue, nil + } + 
} + return false, nil + }, crds.All()...)) + + return ctl +} + +func TestReconcile_NoLicense(t *testing.T) { + ctl := setupTestEnv(t) + + ns, err := kube.Create(t.Context(), ctl, corev1.Namespace{ + ObjectMeta: metav1.ObjectMeta{Name: "test-no-license"}, + }) + require.NoError(t, err) + + pipeline := &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-pipeline", + Namespace: ns.Name, + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + }, + } + require.NoError(t, ctl.Apply(t.Context(), pipeline)) + + c := &Controller{ + Ctl: ctl, + LicenseFilePath: "", // No license + } + + result, err := c.Reconcile(t.Context(), ctrl.Request{ + NamespacedName: kube.AsKey(pipeline), + }) + require.NoError(t, err) + assert.Equal(t, time.Minute, result.RequeueAfter, "should requeue for license retry") + + // Verify status shows license invalid. + require.NoError(t, ctl.Get(t.Context(), kube.AsKey(pipeline), pipeline)) + assert.Equal(t, redpandav1alpha2.PipelinePhasePending, pipeline.Status.Phase) + require.Len(t, pipeline.Status.Conditions, 1) + assert.Equal(t, redpandav1alpha2.PipelineConditionReady, pipeline.Status.Conditions[0].Type) + assert.Equal(t, metav1.ConditionFalse, pipeline.Status.Conditions[0].Status) + assert.Equal(t, redpandav1alpha2.PipelineReasonLicenseInvalid, pipeline.Status.Conditions[0].Reason) + + // Verify no Deployment was created. + var deployments appsv1.DeploymentList + require.NoError(t, ctl.List(t.Context(), ns.Name, &deployments)) + assert.Empty(t, deployments.Items) +} + +func TestReconcile_InvalidLicenseFile(t *testing.T) { + ctl := setupTestEnv(t) + + ns, err := kube.Create(t.Context(), ctl, corev1.Namespace{ + ObjectMeta: metav1.ObjectMeta{Name: "test-bad-license"}, + }) + require.NoError(t, err) + + // Write a bad license file. 
+ dir := t.TempDir() + path := filepath.Join(dir, "license") + require.NoError(t, os.WriteFile(path, []byte("not-a-valid-license"), 0o644)) + + pipeline := &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-pipeline", + Namespace: ns.Name, + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + }, + } + require.NoError(t, ctl.Apply(t.Context(), pipeline)) + + c := &Controller{ + Ctl: ctl, + LicenseFilePath: path, + } + + result, err := c.Reconcile(t.Context(), ctrl.Request{ + NamespacedName: kube.AsKey(pipeline), + }) + require.NoError(t, err) + assert.Equal(t, time.Minute, result.RequeueAfter) + + require.NoError(t, ctl.Get(t.Context(), kube.AsKey(pipeline), pipeline)) + assert.Equal(t, redpandav1alpha2.PipelineReasonLicenseInvalid, pipeline.Status.Conditions[0].Reason) + assert.Contains(t, pipeline.Status.Conditions[0].Message, "failed to read license") +} + +func TestReconcile_InvalidLicenseCleansUpManagedResources(t *testing.T) { + ctl := setupTestEnv(t) + + ns, err := kube.Create(t.Context(), ctl, corev1.Namespace{ + ObjectMeta: metav1.ObjectMeta{Name: "test-license-cleanup"}, + }) + require.NoError(t, err) + + pipeline := &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "cleanup-pipeline", + Namespace: ns.Name, + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + }, + } + require.NoError(t, ctl.Apply(t.Context(), pipeline)) + + syncer := &kube.Syncer{ + Ctl: ctl, + Namespace: ns.Name, + Renderer: &render{ + pipeline: pipeline, + labels: Labels(pipeline), + }, + Owner: *metav1.NewControllerRef(pipeline, redpandav1alpha2.SchemeGroupVersion.WithKind("Pipeline")), + OwnershipLabels: Labels(pipeline), + } + _, err = syncer.Sync(t.Context()) + require.NoError(t, err) + require.NotEmpty(t, scrapeControllerObjects(t, ctl, pipeline)) + + c := &Controller{ + Ctl: 
ctl, + LicenseFilePath: "", + } + + result, err := c.Reconcile(t.Context(), ctrl.Request{ + NamespacedName: kube.AsKey(pipeline), + }) + require.NoError(t, err) + assert.Equal(t, time.Minute, result.RequeueAfter) + require.Empty(t, scrapeControllerObjects(t, ctl, pipeline)) +} + +func TestReconcile_InvalidClusterRefCleansUpManagedResources(t *testing.T) { + ctl := setupTestEnv(t) + + ns, err := kube.Create(t.Context(), ctl, corev1.Namespace{ + ObjectMeta: metav1.ObjectMeta{Name: "test-clusterref-cleanup"}, + }) + require.NoError(t, err) + + pipeline := &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "clusterref-cleanup-pipeline", + Namespace: ns.Name, + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + ClusterSource: &redpandav1alpha2.ClusterSource{ + ClusterRef: &redpandav1alpha2.ClusterRef{Name: "missing-cluster"}, + }, + }, + } + require.NoError(t, ctl.Apply(t.Context(), pipeline)) + + syncer := &kube.Syncer{ + Ctl: ctl, + Namespace: ns.Name, + Renderer: &render{ + pipeline: pipeline, + labels: Labels(pipeline), + }, + Owner: *metav1.NewControllerRef(pipeline, redpandav1alpha2.SchemeGroupVersion.WithKind("Pipeline")), + OwnershipLabels: Labels(pipeline), + } + _, err = syncer.Sync(t.Context()) + require.NoError(t, err) + require.NotEmpty(t, scrapeControllerObjects(t, ctl, pipeline)) + + c := &Controller{ + Ctl: ctl, + } + + result, err := c.Reconcile(t.Context(), ctrl.Request{ + NamespacedName: kube.AsKey(pipeline), + }) + require.NoError(t, err) + assert.Equal(t, 30*time.Second, result.RequeueAfter) + require.Empty(t, scrapeControllerObjects(t, ctl, pipeline)) + require.NoError(t, ctl.Get(t.Context(), kube.AsKey(pipeline), pipeline)) + assert.Equal(t, redpandav1alpha2.PipelineReasonClusterRefInvalid, pipeline.Status.Conditions[0].Reason) +} + +func TestReconcile_Deletion(t *testing.T) { + ctl := setupTestEnv(t) + + ns, err := kube.Create(t.Context(), ctl, 
corev1.Namespace{ + ObjectMeta: metav1.ObjectMeta{Name: "test-deletion"}, + }) + require.NoError(t, err) + + pipeline := &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: ns.Name, + Namespace: ns.Name, + Finalizers: []string{finalizerKey}, + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + }, + } + require.NoError(t, ctl.Apply(t.Context(), pipeline)) + + // Trigger deletion. + require.NoError(t, ctl.Delete(t.Context(), pipeline)) + + c := &Controller{ + Ctl: ctl, + LicenseFilePath: "", // License doesn't matter for deletion + } + + // Reconcile the deletion. + _, err = c.Reconcile(t.Context(), ctrl.Request{ + NamespacedName: kube.AsKey(pipeline), + }) + require.NoError(t, err) + + // Verify the object was GC'd (finalizer removal allows API server to delete it). + err = ctl.Get(t.Context(), kube.AsKey(pipeline), pipeline) + assert.True(t, apierrors.IsNotFound(err), "expected object to be garbage collected after finalizer removal") +} + +func TestRender_GoldenFiles(t *testing.T) { + golden := testutil.NewTxTar(t, "testdata/controller-tests.golden.txtar") + + testCases := []struct { + name string + pipeline *redpandav1alpha2.Pipeline + }{ + { + name: "basic-pipeline", + pipeline: &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "basic-pipeline", + Namespace: "default", + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root.message = \"hello\"'\n interval: \"5s\"\noutput:\n stdout: {}\n", + }, + }, + }, + { + name: "pipeline-with-annotations", + pipeline: &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "annotated-pipeline", + Namespace: "default", + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + Annotations: map[string]string{ + "ad.datadoghq.com/connect.checks": "openmetrics", + }, + }, + }, + }, + } 
+ + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + labels := Labels(tc.pipeline) + r := &render{ + pipeline: tc.pipeline, + labels: labels, + } + + objs, err := r.Render(t.Context()) + require.NoError(t, err) + + manifest, err := yaml.Marshal(objs) + require.NoError(t, err) + + golden.AssertGolden(t, testutil.YAML, tc.name, manifest) + }) + } +} + +func TestReconcile_DeletionGC(t *testing.T) { + ctl := setupTestEnv(t) + + ns, err := kube.Create(t.Context(), ctl, corev1.Namespace{ + ObjectMeta: metav1.ObjectMeta{Name: "test-deletion-gc"}, + }) + require.NoError(t, err) + + pipeline := &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "gc-pipeline", + Namespace: ns.Name, + Finalizers: []string{finalizerKey}, + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + }, + } + require.NoError(t, ctl.Apply(t.Context(), pipeline)) + + // Create child resources that the syncer would manage. + syncer := &kube.Syncer{ + Ctl: ctl, + Namespace: ns.Name, + Renderer: &render{ + pipeline: pipeline, + labels: Labels(pipeline), + }, + Owner: *metav1.NewControllerRef(pipeline, redpandav1alpha2.SchemeGroupVersion.WithKind("Pipeline")), + OwnershipLabels: Labels(pipeline), + } + _, err = syncer.Sync(t.Context()) + require.NoError(t, err) + + // Verify child objects exist. + objects := scrapeControllerObjects(t, ctl, pipeline) + require.NotEmpty(t, objects, "expected child resources to exist before deletion") + + // Trigger deletion. + require.NoError(t, ctl.Delete(t.Context(), pipeline)) + + c := &Controller{Ctl: ctl} + + // Reconcile the deletion a few times. 
+ doneCh := make(chan error, 1) + go func() { + ctx, cancel := context.WithTimeout(t.Context(), 30*time.Second) + defer cancel() + doneCh <- ctl.DeleteAndWait(ctx, pipeline) + close(doneCh) + }() + + for range 3 { + _, err = c.Reconcile(t.Context(), ctrl.Request{ + NamespacedName: kube.AsKey(pipeline), + }) + require.NoError(t, err) + } + + require.NoError(t, <-doneCh) + + // Assert that all child resources have been GC'd. + require.Empty(t, scrapeControllerObjects(t, ctl, pipeline)) +} + +// scrapeControllerObjects finds all objects created by the pipeline controller using ownership labels. +func scrapeControllerObjects(t *testing.T, ctl *kube.Ctl, pipeline *redpandav1alpha2.Pipeline) []kube.Object { + ownershipLabels := Labels(pipeline) + + var objects []kube.Object + for _, objType := range Types() { + // Skip PodMonitor as it's optional (only created when monitoring.enabled is true). + if _, ok := objType.(*monitoringv1.PodMonitor); ok { + continue + } + list, err := kube.ListFor(ctl.Scheme(), objType) + require.NoError(t, err) + + err = ctl.List( + t.Context(), + pipeline.Namespace, + list, + client.MatchingLabels(ownershipLabels), + ) + require.NoError(t, err) + + objs, err := kube.Items[kube.Object](list) + require.NoError(t, err) + + for _, obj := range objs { + cleanObjectForGolden(ctl.Scheme(), obj) + objects = append(objects, obj) + } + } + + slices.SortFunc(objects, func(i, j client.Object) int { + iKey := fmt.Sprintf("%T%s%s", i, i.GetNamespace(), i.GetName()) + jKey := fmt.Sprintf("%T%s%s", j, j.GetNamespace(), j.GetName()) + return strings.Compare(iKey, jKey) + }) + + return objects +} + +// cleanObjectForGolden removes dynamic fields that change between test runs. 
+func cleanObjectForGolden(scheme *runtime.Scheme, obj client.Object) { + gvks, _, err := scheme.ObjectKinds(obj) + if err != nil { + panic(err) + } + obj.GetObjectKind().SetGroupVersionKind(gvks[0]) + + obj.SetCreationTimestamp(metav1.Time{}) + obj.SetFinalizers(nil) + obj.SetGeneration(0) + obj.SetManagedFields(nil) + obj.SetOwnerReferences(nil) + obj.SetResourceVersion("") + obj.SetUID("") +} + +func TestRender_CommonAnnotations(t *testing.T) { + pipeline := &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "annotated-pipeline", + Namespace: "default", + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + }, + } + + labels := Labels(pipeline) + r := &render{ + pipeline: pipeline, + labels: labels, + commonAnnotations: map[string]string{ + "compliance/owner": "platform-team", + "compliance/env": "production", + }, + } + + // Verify annotations propagate to all rendered objects. + objs, err := r.Render(t.Context()) + require.NoError(t, err) + require.Len(t, objs, 2, "expected ConfigMap and Deployment") + + for _, obj := range objs { + annotations := obj.(metav1.ObjectMetaAccessor).GetObjectMeta().GetAnnotations() + assert.Equal(t, "platform-team", annotations["compliance/owner"], + "commonAnnotations should propagate to %T", obj) + assert.Equal(t, "production", annotations["compliance/env"], + "commonAnnotations should propagate to %T", obj) + } + + // Verify pod template also has annotations. 
+ dp := objs[1].(*appsv1.Deployment) + podAnnotations := dp.Spec.Template.ObjectMeta.Annotations + assert.Equal(t, "platform-team", podAnnotations["compliance/owner"]) +} + +func TestRender_PodAnnotations(t *testing.T) { + pipeline := &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "dd-pipeline", + Namespace: "default", + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + Annotations: map[string]string{ + "ad.datadoghq.com/connect.checks": `{"openmetrics":{"instances":[{"openmetrics_endpoint":"http://%%host%%:4195/metrics","namespace":"redpanda_connect","metrics":[".*"]}]}}`, + }, + }, + } + + labels := Labels(pipeline) + r := &render{ + pipeline: pipeline, + labels: labels, + commonAnnotations: map[string]string{ + "compliance/owner": "platform-team", + }, + } + + objs, err := r.Render(t.Context()) + require.NoError(t, err) + + // ConfigMap should only have commonAnnotations, not pod annotations. + cm := objs[0].(*corev1.ConfigMap) + assert.Equal(t, "platform-team", cm.Annotations["compliance/owner"]) + assert.Empty(t, cm.Annotations["ad.datadoghq.com/connect.checks"], + "spec.annotations should not propagate to ConfigMap") + + // Pod template should have both commonAnnotations and spec.annotations. + dp := objs[1].(*appsv1.Deployment) + podAnn := dp.Spec.Template.ObjectMeta.Annotations + assert.Equal(t, "platform-team", podAnn["compliance/owner"], + "commonAnnotations should be on pod template") + assert.Contains(t, podAnn["ad.datadoghq.com/connect.checks"], "openmetrics", + "spec.annotations should be on pod template") + + // Deployment metadata should only have commonAnnotations. 
+ assert.Empty(t, dp.Annotations["ad.datadoghq.com/connect.checks"], + "spec.annotations should not propagate to Deployment metadata") +} + +func TestRender_PodAnnotations_Override(t *testing.T) { + pipeline := &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "override-pipeline", + Namespace: "default", + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + Annotations: map[string]string{ + "shared-key": "from-pipeline", + }, + }, + } + + labels := Labels(pipeline) + r := &render{ + pipeline: pipeline, + labels: labels, + commonAnnotations: map[string]string{ + "shared-key": "from-common", + }, + } + + objs, err := r.Render(t.Context()) + require.NoError(t, err) + + dp := objs[1].(*appsv1.Deployment) + podAnn := dp.Spec.Template.ObjectMeta.Annotations + assert.Equal(t, "from-pipeline", podAnn["shared-key"], + "per-pipeline annotations should override commonAnnotations on pod template") +} + +func TestRender_LicenseSecretAndEnvVar(t *testing.T) { + pipeline := &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "license-test", + Namespace: "redpanda", + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + }, + } + + licenseBytes := []byte("eyJvcmciOiJ0ZXN0In0=.signature") + r := &render{pipeline: pipeline, labels: Labels(pipeline), licenseContent: licenseBytes} + + objs, err := r.Render(t.Context()) + require.NoError(t, err) + + var sec *corev1.Secret + var dp *appsv1.Deployment + for _, o := range objs { + switch v := o.(type) { + case *corev1.Secret: + sec = v + case *appsv1.Deployment: + dp = v + } + } + + require.NotNil(t, sec, "expected a license Secret to be rendered") + assert.Equal(t, "license-test-license", sec.Name) + assert.Equal(t, "redpanda", sec.Namespace) + assert.Equal(t, corev1.SecretTypeOpaque, sec.Type) + assert.Equal(t, licenseBytes, 
sec.Data["license"]) + + require.NotNil(t, dp, "expected a Deployment to be rendered") + main := dp.Spec.Template.Spec.Containers[0] + var found *corev1.EnvVar + for i := range main.Env { + if main.Env[i].Name == "REDPANDA_LICENSE" { + found = &main.Env[i] + break + } + } + require.NotNil(t, found, "expected REDPANDA_LICENSE env var on connect container") + require.NotNil(t, found.ValueFrom) + require.NotNil(t, found.ValueFrom.SecretKeyRef) + assert.Equal(t, "license-test-license", found.ValueFrom.SecretKeyRef.Name) + assert.Equal(t, "license", found.ValueFrom.SecretKeyRef.Key) + + // The lint init container should also see the env var since it shares the slice. + require.Len(t, dp.Spec.Template.Spec.InitContainers, 1) + lint := dp.Spec.Template.Spec.InitContainers[0] + hasLicense := false + for _, e := range lint.Env { + if e.Name == "REDPANDA_LICENSE" { + hasLicense = true + break + } + } + assert.True(t, hasLicense, "lint init container should also receive REDPANDA_LICENSE so the license loads during lint") +} + +func TestRender_NoLicenseContent_OmitsSecretAndEnvVar(t *testing.T) { + pipeline := &redpandav1alpha2.Pipeline{ + ObjectMeta: metav1.ObjectMeta{ + Name: "no-license-test", + Namespace: "default", + }, + Spec: redpandav1alpha2.PipelineSpec{ + ConfigYAML: "input:\n generate:\n mapping: 'root = \"hello\"'\noutput:\n stdout: {}\n", + }, + } + r := &render{pipeline: pipeline, labels: Labels(pipeline)} + + objs, err := r.Render(t.Context()) + require.NoError(t, err) + + for _, o := range objs { + _, isSecret := o.(*corev1.Secret) + assert.False(t, isSecret, "no Secret should be rendered when licenseContent is empty") + } + + for _, o := range objs { + dp, ok := o.(*appsv1.Deployment) + if !ok { + continue + } + for _, e := range dp.Spec.Template.Spec.Containers[0].Env { + assert.NotEqual(t, "REDPANDA_LICENSE", e.Name, "no REDPANDA_LICENSE env var should be set when no license") + } + } +} + +func TestRender_Deployment_HasLintInitContainer(t *testing.T) { + 
	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "lint-test",
+			Namespace: "default",
+		},
+		Spec: redpandav1alpha2.PipelineSpec{
+			ConfigYAML: "input:\n  generate:\n    mapping: 'root = \"hello\"'\noutput:\n  stdout: {}\n",
+		},
+	}
+
+	labels := Labels(pipeline)
+	r := &render{pipeline: pipeline, labels: labels}
+
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+
+	dp := objs[1].(*appsv1.Deployment)
+
+	require.Len(t, dp.Spec.Template.Spec.InitContainers, 1, "expected one init container")
+	init := dp.Spec.Template.Spec.InitContainers[0]
+	assert.Equal(t, "lint", init.Name)
+	assert.Equal(t, []string{"/redpanda-connect", "lint", "/config/connect.yaml"}, init.Command)
+	assert.Equal(t, redpandav1alpha2.PipelineDefaultImage, init.Image, "init container should use same image as main container")
+	assert.Equal(t, corev1.TerminationMessageFallbackToLogsOnError, init.TerminationMessagePolicy)
+
+	require.Len(t, init.VolumeMounts, 1)
+	assert.Equal(t, "config", init.VolumeMounts[0].Name)
+	assert.Equal(t, "/config", init.VolumeMounts[0].MountPath)
+	assert.True(t, init.VolumeMounts[0].ReadOnly)
+}
+
+func TestRender_ConfigMap(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "render-test",
+			Namespace: "default",
+		},
+		Spec: redpandav1alpha2.PipelineSpec{
+			ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+			ConfigFiles: map[string]string{
+				"extra.yaml": "some: config",
+			},
+		},
+	}
+
+	labels := Labels(pipeline)
+	r := &render{pipeline: pipeline, labels: labels}
+
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+
+	cm := objs[0].(*corev1.ConfigMap)
+	assert.Equal(t, "render-test", cm.Name)
+	assert.Equal(t, pipeline.Spec.ConfigYAML, cm.Data["connect.yaml"])
+	assert.Equal(t, "some: config", cm.Data["extra.yaml"])
+}
+
+func TestRender_ConfigMap_ReservedKey(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "reserved-key-test",
+			Namespace: "default",
+		},
+		Spec: redpandav1alpha2.PipelineSpec{
+			ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+			ConfigFiles: map[string]string{
+				"connect.yaml": "should fail",
+			},
+		},
+	}
+
+	labels := Labels(pipeline)
+	r := &render{pipeline: pipeline, labels: labels}
+
+	_, err := r.Render(t.Context())
+	require.Error(t, err)
+	assert.Contains(t, err.Error(), "connect.yaml")
+}
+
+func TestRender_Deployment_Defaults(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "deploy-test",
+			Namespace: "default",
+		},
+		Spec: redpandav1alpha2.PipelineSpec{
+			ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+		},
+	}
+
+	labels := Labels(pipeline)
+	r := &render{pipeline: pipeline, labels: labels}
+
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+
+	dp := objs[1].(*appsv1.Deployment)
+	assert.Equal(t, int32(1), *dp.Spec.Replicas)
+	assert.Equal(t, appsv1.RecreateDeploymentStrategyType, dp.Spec.Strategy.Type)
+	assert.Equal(t, redpandav1alpha2.PipelineDefaultImage, dp.Spec.Template.Spec.Containers[0].Image)
+	assert.NotNil(t, dp.Spec.Template.Spec.Containers[0].ReadinessProbe)
+}
+
+func TestRender_Deployment_ImagePrecedence(t *testing.T) {
+	// Exercises the three-tier image precedence:
+	//  1. Pipeline.spec.image (per-pipeline override) wins.
+	//  2. render.defaultImage (chart-level default via the operator's
+	//     --connect-default-image flag) wins when .spec.image is empty.
+	//  3. PipelineDefaultImage (binary-baked constant) wins when both
+	//     are empty.
+	t.Run("spec_image_wins_over_chart_default", func(t *testing.T) {
+		pl := &redpandav1alpha2.Pipeline{
+			ObjectMeta: metav1.ObjectMeta{Name: "pl", Namespace: "default"},
+			Spec: redpandav1alpha2.PipelineSpec{
+				ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+				Image:      ptr.To("docker.example.com/connect:5.0.0"),
+			},
+		}
+		r := &render{pipeline: pl, labels: Labels(pl), defaultImage: "docker.example.com/connect:4.92.0"}
+		objs, err := r.Render(t.Context())
+		require.NoError(t, err)
+		assert.Equal(t, "docker.example.com/connect:5.0.0", objs[1].(*appsv1.Deployment).Spec.Template.Spec.Containers[0].Image)
+	})
+
+	t.Run("chart_default_wins_when_spec_image_empty", func(t *testing.T) {
+		pl := &redpandav1alpha2.Pipeline{
+			ObjectMeta: metav1.ObjectMeta{Name: "pl", Namespace: "default"},
+			Spec: redpandav1alpha2.PipelineSpec{
+				ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+			},
+		}
+		r := &render{pipeline: pl, labels: Labels(pl), defaultImage: "docker.example.com/connect:4.92.0"}
+		objs, err := r.Render(t.Context())
+		require.NoError(t, err)
+		assert.Equal(t, "docker.example.com/connect:4.92.0", objs[1].(*appsv1.Deployment).Spec.Template.Spec.Containers[0].Image)
+	})
+
+	t.Run("binary_default_when_both_empty", func(t *testing.T) {
+		pl := &redpandav1alpha2.Pipeline{
+			ObjectMeta: metav1.ObjectMeta{Name: "pl", Namespace: "default"},
+			Spec: redpandav1alpha2.PipelineSpec{
+				ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+			},
+		}
+		r := &render{pipeline: pl, labels: Labels(pl)}
+		objs, err := r.Render(t.Context())
+		require.NoError(t, err)
+		assert.Equal(t, redpandav1alpha2.PipelineDefaultImage, objs[1].(*appsv1.Deployment).Spec.Template.Spec.Containers[0].Image)
+	})
+}
+
+func TestRender_Deployment_Paused(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "paused-test",
+			Namespace: "default",
+		},
+		Spec: redpandav1alpha2.PipelineSpec{
+			ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+			Paused:     true,
+		},
+	}
+
+	labels := Labels(pipeline)
+	r := &render{pipeline: pipeline, labels: labels}
+
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+
+	dp := objs[1].(*appsv1.Deployment)
+	assert.Equal(t, int32(0), *dp.Spec.Replicas)
+}
+
+func TestRender_Deployment_ValueSources(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "values-test",
+			Namespace: "default",
+		},
+		Spec: redpandav1alpha2.PipelineSpec{
+			ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+			ValueSources: []redpandav1alpha2.NamedValueSource{
+				{
+					Name: "S3_SECRET_KEY",
+					Source: redpandav1alpha2.ValueSource{
+						SecretKeyRef: &corev1.SecretKeySelector{
+							LocalObjectReference: corev1.LocalObjectReference{Name: "s3-creds"},
+							Key:                  "secret_access_key",
+						},
+					},
+				},
+				{
+					Name: "DB_HOST",
+					Source: redpandav1alpha2.ValueSource{
+						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
+							LocalObjectReference: corev1.LocalObjectReference{Name: "warehouse-env"},
+							Key:                  "host",
+						},
+					},
+				},
+				{
+					Name: "BUCKET",
+					Source: redpandav1alpha2.ValueSource{
+						Inline: ptr.To("orders-warehouse"),
+					},
+				},
+			},
+		},
+	}
+
+	labels := Labels(pipeline)
+	r := &render{pipeline: pipeline, labels: labels}
+
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+
+	dp := objs[1].(*appsv1.Deployment)
+	// EnvFrom should be empty — the bag-of-Secrets pattern is gone.
+	assert.Empty(t, dp.Spec.Template.Spec.Containers[0].EnvFrom)
+	assert.Empty(t, dp.Spec.Template.Spec.InitContainers[0].EnvFrom)
+
+	// Each ValueSource entry should appear as its own typed EnvVar.
+	envByName := map[string]corev1.EnvVar{}
+	for _, e := range dp.Spec.Template.Spec.Containers[0].Env {
+		envByName[e.Name] = e
+	}
+
+	require.Contains(t, envByName, "S3_SECRET_KEY")
+	require.NotNil(t, envByName["S3_SECRET_KEY"].ValueFrom)
+	require.NotNil(t, envByName["S3_SECRET_KEY"].ValueFrom.SecretKeyRef)
+	assert.Equal(t, "s3-creds", envByName["S3_SECRET_KEY"].ValueFrom.SecretKeyRef.Name)
+	assert.Equal(t, "secret_access_key", envByName["S3_SECRET_KEY"].ValueFrom.SecretKeyRef.Key)
+
+	require.Contains(t, envByName, "DB_HOST")
+	require.NotNil(t, envByName["DB_HOST"].ValueFrom)
+	require.NotNil(t, envByName["DB_HOST"].ValueFrom.ConfigMapKeyRef)
+	assert.Equal(t, "warehouse-env", envByName["DB_HOST"].ValueFrom.ConfigMapKeyRef.Name)
+
+	require.Contains(t, envByName, "BUCKET")
+	assert.Equal(t, "orders-warehouse", envByName["BUCKET"].Value)
+}
+
+func TestRender_Deployment_Zones(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "zone-test",
+			Namespace: "default",
+		},
+		Spec: redpandav1alpha2.PipelineSpec{
+			ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+			Zones:      []string{"us-east-1a", "us-east-1b"},
+		},
+	}
+
+	labels := Labels(pipeline)
+	r := &render{pipeline: pipeline, labels: labels}
+
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+
+	dp := objs[1].(*appsv1.Deployment)
+	// Verify node affinity.
+	require.NotNil(t, dp.Spec.Template.Spec.Affinity)
+	require.NotNil(t, dp.Spec.Template.Spec.Affinity.NodeAffinity)
+	terms := dp.Spec.Template.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms
+	require.Len(t, terms, 1)
+	assert.Equal(t, zoneTopologyKey, terms[0].MatchExpressions[0].Key)
+	assert.Equal(t, []string{"us-east-1a", "us-east-1b"}, terms[0].MatchExpressions[0].Values)
+
+	// Verify topology spread.
+	require.Len(t, dp.Spec.Template.Spec.TopologySpreadConstraints, 1)
+	assert.Equal(t, zoneTopologyKey, dp.Spec.Template.Spec.TopologySpreadConstraints[0].TopologyKey)
+}
+
+// PodDisruptionBudget tests.
+
+func TestRender_PDB_NotConfigured(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{Name: "no-pdb", Namespace: "default"},
+		Spec:       redpandav1alpha2.PipelineSpec{ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n"},
+	}
+
+	r := &render{pipeline: pipeline, labels: Labels(pipeline)}
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+
+	// Should only have ConfigMap + Deployment, no PDB.
+	for _, obj := range objs {
+		assert.NotEqual(t, "PodDisruptionBudget", obj.GetObjectKind().GroupVersionKind().Kind)
+	}
+}
+
+func TestRender_PDB_MaxUnavailable(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{Name: "pdb-max", Namespace: "default"},
+		Spec: redpandav1alpha2.PipelineSpec{
+			ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+			Budget: &redpandav1alpha2.PipelineBudget{
+				MaxUnavailable: 1,
+			},
+		},
+	}
+
+	labels := Labels(pipeline)
+	r := &render{pipeline: pipeline, labels: labels}
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+
+	// Find the PDB.
+	var pdb *policyv1.PodDisruptionBudget
+	for _, obj := range objs {
+		if p, ok := obj.(*policyv1.PodDisruptionBudget); ok {
+			pdb = p
+		}
+	}
+	require.NotNil(t, pdb, "expected a PodDisruptionBudget in rendered objects")
+	assert.Equal(t, "pdb-max", pdb.Name)
+	assert.Equal(t, "default", pdb.Namespace)
+	assert.Equal(t, labels, pdb.Labels)
+	assert.Equal(t, labels, pdb.Spec.Selector.MatchLabels)
+	require.NotNil(t, pdb.Spec.MaxUnavailable)
+	assert.Equal(t, int32(1), pdb.Spec.MaxUnavailable.IntVal)
+	assert.Nil(t, pdb.Spec.MinAvailable)
+}
+
+func TestRender_PDB_ZeroMaxUnavailable(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{Name: "pdb-zero", Namespace: "default"},
+		Spec: redpandav1alpha2.PipelineSpec{
+			ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+			Budget: &redpandav1alpha2.PipelineBudget{
+				MaxUnavailable: 0,
+			},
+		},
+	}
+
+	labels := Labels(pipeline)
+	r := &render{pipeline: pipeline, labels: labels}
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+
+	var pdb *policyv1.PodDisruptionBudget
+	for _, obj := range objs {
+		if p, ok := obj.(*policyv1.PodDisruptionBudget); ok {
+			pdb = p
+		}
+	}
+	require.NotNil(t, pdb, "expected a PodDisruptionBudget in rendered objects")
+	require.NotNil(t, pdb.Spec.MaxUnavailable)
+	assert.Equal(t, int32(0), pdb.Spec.MaxUnavailable.IntVal)
+}
+
+// License validation unit tests.
+
+func TestValidateLicenseNoPath(t *testing.T) {
+	c := &Controller{LicenseFilePath: ""}
+	err := c.validateLicense()
+	require.Error(t, err)
+	assert.Contains(t, err.Error(), "no license configured")
+}
+
+func TestValidateLicenseBadPath(t *testing.T) {
+	c := &Controller{LicenseFilePath: "/nonexistent/path/to/license"}
+	err := c.validateLicense()
+	require.Error(t, err)
+	assert.Contains(t, err.Error(), "failed to read license")
+}
+
+func TestValidateLicenseInvalidFile(t *testing.T) {
+	dir := t.TempDir()
+	path := filepath.Join(dir, "license")
+	require.NoError(t, os.WriteFile(path, []byte("not-a-valid-license"), 0o644))
+
+	c := &Controller{LicenseFilePath: path}
+	err := c.validateLicense()
+	require.Error(t, err)
+	assert.Contains(t, err.Error(), "failed to read license")
+}
+
+func TestValidateLicenseOpenSource(t *testing.T) {
+	l := license.OpenSourceLicense
+	assert.False(t, l.AllowsEnterpriseFeatures())
+}
+
+func TestValidateLicenseExpired(t *testing.T) {
+	err := license.CheckExpiration(time.Now().Add(-24 * time.Hour))
+	require.Error(t, err)
+}
+
+func TestValidateLicenseNotExpired(t *testing.T) {
+	err := license.CheckExpiration(time.Now().Add(24 * time.Hour))
+	require.NoError(t, err)
+}
+
+func TestV0LicenseIncludesAllProducts(t *testing.T) {
+	l := &license.V0RedpandaLicense{
+		Type:   license.V0LicenseTypeEnterprise,
+		Expiry: time.Now().Add(24 * time.Hour).Unix(),
+	}
+	assert.True(t, l.AllowsEnterpriseFeatures())
+	assert.True(t, l.IncludesProduct(license.ProductConnect))
+}
+
+func TestV1LicenseWithConnectProduct(t *testing.T) {
+	l := &license.V1RedpandaLicense{
+		Type:     license.LicenseTypeEnterprise,
+		Expiry:   time.Now().Add(24 * time.Hour).Unix(),
+		Products: []license.Product{license.ProductConnect},
+	}
+	assert.True(t, l.AllowsEnterpriseFeatures())
+	assert.True(t, l.IncludesProduct(license.ProductConnect))
+}
+
+func TestV1LicenseWithoutConnectProduct(t *testing.T) {
+	l := &license.V1RedpandaLicense{
+		Type:     license.LicenseTypeEnterprise,
+		Expiry:   time.Now().Add(24 * time.Hour).Unix(),
+		Products: []license.Product{},
+	}
+	assert.True(t, l.AllowsEnterpriseFeatures())
+	assert.False(t, l.IncludesProduct(license.ProductConnect))
+}
+
+func TestV1TrialLicenseWithConnect(t *testing.T) {
+	l := &license.V1RedpandaLicense{
+		Type:     license.LicenseTypeFreeTrial,
+		Expiry:   time.Now().Add(24 * time.Hour).Unix(),
+		Products: []license.Product{license.ProductConnect},
+	}
+	assert.True(t, l.AllowsEnterpriseFeatures())
+	assert.True(t, l.IncludesProduct(license.ProductConnect))
+}
+
+func TestV1ExpiredEnterpriseLicense(t *testing.T) {
+	l := &license.V1RedpandaLicense{
+		Type:     license.LicenseTypeEnterprise,
+		Expiry:   time.Now().Add(-24 * time.Hour).Unix(),
+		Products: []license.Product{license.ProductConnect},
+	}
+	assert.False(t, l.AllowsEnterpriseFeatures())
+}
+
+func TestV1OpenSourceLicenseType(t *testing.T) {
+	l := &license.V1RedpandaLicense{
+		Type:     license.LicenseTypeOpenSource,
+		Expiry:   time.Now().Add(24 * time.Hour).Unix(),
+		Products: []license.Product{license.ProductConnect},
+	}
+	assert.False(t, l.AllowsEnterpriseFeatures())
+}
+
+// PodMonitor tests.
+
+func TestRender_PodMonitor_Disabled(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{Name: "pm-disabled", Namespace: "default"},
+		Spec:       redpandav1alpha2.PipelineSpec{ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n"},
+	}
+
+	r := &render{
+		pipeline:   pipeline,
+		labels:     Labels(pipeline),
+		monitoring: MonitoringConfig{Enabled: false},
+	}
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+	assert.Len(t, objs, 2, "only ConfigMap + Deployment when monitoring disabled")
+}
+
+func TestRender_PodMonitor_Enabled(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{Name: "pm-enabled", Namespace: "default"},
+		Spec:       redpandav1alpha2.PipelineSpec{ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n"},
+	}
+
+	r := &render{
+		pipeline: pipeline,
+		labels:   Labels(pipeline),
+		monitoring: MonitoringConfig{
+			Enabled:        true,
+			ScrapeInterval: "30s",
+			Labels:         map[string]string{"team": "platform"},
+		},
+	}
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+	require.Len(t, objs, 3, "ConfigMap + Deployment + PodMonitor")
+
+	pm := objs[2].(*monitoringv1.PodMonitor)
+	assert.Equal(t, "pm-enabled", pm.Name)
+	assert.Equal(t, "default", pm.Namespace)
+	assert.Equal(t, "platform", pm.Labels["team"])
+	assert.Equal(t, "redpanda-connect", pm.Labels["app.kubernetes.io/name"])
+	require.Len(t, pm.Spec.PodMetricsEndpoints, 1)
+	assert.Equal(t, "/metrics", pm.Spec.PodMetricsEndpoints[0].Path)
+	assert.Equal(t, "http", *pm.Spec.PodMetricsEndpoints[0].Port)
+	assert.Equal(t, monitoringv1.Duration("30s"), pm.Spec.PodMetricsEndpoints[0].Interval)
+	assert.Equal(t, Labels(pipeline), pm.Spec.Selector.MatchLabels)
+}
+
+func TestRender_PodMonitor_CommonAnnotations(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{Name: "pm-annotated", Namespace: "default"},
+		Spec:       redpandav1alpha2.PipelineSpec{ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n"},
+	}
+
+	r := &render{
+		pipeline: pipeline,
+		labels:   Labels(pipeline),
+		commonAnnotations: map[string]string{
+			"compliance/owner": "platform-team",
+		},
+		monitoring: MonitoringConfig{Enabled: true},
+	}
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+	require.Len(t, objs, 3)
+
+	pm := objs[2].(*monitoringv1.PodMonitor)
+	assert.Equal(t, "platform-team", pm.Annotations["compliance/owner"])
+}
+
+func TestRender_PodMonitor_NoScrapeInterval(t *testing.T) {
+	pipeline := &redpandav1alpha2.Pipeline{
+		ObjectMeta: metav1.ObjectMeta{Name: "pm-no-interval", Namespace: "default"},
+		Spec:       redpandav1alpha2.PipelineSpec{ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n"},
+	}
+
+	r := &render{
+		pipeline:   pipeline,
+		labels:     Labels(pipeline),
+		monitoring: MonitoringConfig{Enabled: true},
+	}
+	objs, err := r.Render(t.Context())
+	require.NoError(t, err)
+
+	pm := objs[2].(*monitoringv1.PodMonitor)
+	assert.Empty(t, pm.Spec.PodMetricsEndpoints[0].Interval, "empty interval uses Prometheus default")
+}
+
+func TestRender_Deployment_ServiceAccountName(t *testing.T) {
+	t.Run("propagates_to_pod_spec", func(t *testing.T) {
+		pipeline := &redpandav1alpha2.Pipeline{
+			ObjectMeta: metav1.ObjectMeta{Name: "sa-test", Namespace: "default"},
+			Spec: redpandav1alpha2.PipelineSpec{
+				ConfigYAML:         "input:\n  stdin: {}\noutput:\n  stdout: {}\n",
+				ServiceAccountName: "mysql-cdc-pipeline-sa",
+			},
+		}
+		r := &render{pipeline: pipeline, labels: Labels(pipeline)}
+		objs, err := r.Render(t.Context())
+		require.NoError(t, err)
+		dp := objs[1].(*appsv1.Deployment)
+		assert.Equal(t, "mysql-cdc-pipeline-sa", dp.Spec.Template.Spec.ServiceAccountName)
+	})
+
+	t.Run("empty_when_unset", func(t *testing.T) {
+		pipeline := &redpandav1alpha2.Pipeline{
+			ObjectMeta: metav1.ObjectMeta{Name: "sa-default", Namespace: "default"},
+			Spec:       redpandav1alpha2.PipelineSpec{ConfigYAML: "input:\n  stdin: {}\noutput:\n  stdout: {}\n"},
+		}
+		r := &render{pipeline: pipeline, labels: Labels(pipeline)}
+		objs, err := r.Render(t.Context())
+		require.NoError(t, err)
+		dp := objs[1].(*appsv1.Deployment)
+		assert.Empty(t, dp.Spec.Template.Spec.ServiceAccountName,
+			"unset means the namespace's default SA is used at admission time")
+	})
+}
+
+// TestRender_InlineMergesRedpandaPlugins covers the v2 cluster-binding
+// render path: when a Pipeline is bound to a Redpanda cluster (via clusterRef
+// or staticConfiguration), the operator merges seed_brokers, tls, and sasl
+// into any output.redpanda and input.redpanda blocks in the user's configYaml.
+// The deprecated `redpanda_common` plugin is intentionally NOT
+// auto-configured — pushing users onto a deprecated path through the
+// operator is a foot-gun, so the operator only fills in the supported
+// `redpanda` plugin.
+func TestRender_InlineMergesRedpandaPlugins(t *testing.T) {
+	clusterConn := &clusterConnection{
+		Brokers: []string{"broker-0.rp.svc:9093", "broker-1.rp.svc:9093"},
+	}
+	creds := &userCredentials{
+		Mechanism: "SCRAM-SHA-512",
+		Username:  "mysql-cdc-orders-svc",
+	}
+
+	t.Run("merges_into_output_redpanda", func(t *testing.T) {
+		pipeline := &redpandav1alpha2.Pipeline{
+			ObjectMeta: metav1.ObjectMeta{Name: "p", Namespace: "default"},
+			Spec: redpandav1alpha2.PipelineSpec{
+				ConfigYAML: "input:\n  stdin: {}\noutput:\n  redpanda:\n    topic: orders\n",
+				ClusterSource: &redpandav1alpha2.ClusterSource{
+					ClusterRef: &redpandav1alpha2.ClusterRef{Name: "redpanda"},
+				},
+			},
+		}
+		r := &render{
+			pipeline:        pipeline,
+			labels:          Labels(pipeline),
+			clusterConn:     clusterConn,
+			userCredentials: creds,
+		}
+		objs, err := r.Render(t.Context())
+		require.NoError(t, err)
+		cm := objs[0].(*corev1.ConfigMap)
+
+		var rendered map[string]any
+		require.NoError(t, yaml.Unmarshal([]byte(cm.Data["connect.yaml"]), &rendered))
+		out, ok := rendered["output"].(map[string]any)
+		require.True(t, ok)
+		rp, ok := out["redpanda"].(map[string]any)
+		require.True(t, ok, "output.redpanda must remain a map after merge")
+
+		// User-side field preserved.
+		assert.Equal(t, "orders", rp["topic"])
+		// Operator-injected fields present.
+		assert.Equal(t,
+			[]any{"broker-0.rp.svc:9093", "broker-1.rp.svc:9093"},
+			rp["seed_brokers"])
+		sasl, ok := rp["sasl"].([]any)
+		require.True(t, ok)
+		require.Len(t, sasl, 1)
+		assert.Equal(t, "SCRAM-SHA-512", sasl[0].(map[string]any)["mechanism"])
+
+		// No top-level `redpanda` block — that was the v1 shape; the
+		// v2 design pushes connection fields into the plugin blocks
+		// themselves.
+		_, hasTopLevel := rendered["redpanda"]
+		assert.False(t, hasTopLevel, "no top-level redpanda block in v2 render")
+	})
+
+	t.Run("merges_into_input_redpanda", func(t *testing.T) {
+		pipeline := &redpandav1alpha2.Pipeline{
+			ObjectMeta: metav1.ObjectMeta{Name: "p", Namespace: "default"},
+			Spec: redpandav1alpha2.PipelineSpec{
+				ConfigYAML: "input:\n  redpanda:\n    topics: [orders]\n    consumer_group: cg\noutput:\n  stdout: {}\n",
+				ClusterSource: &redpandav1alpha2.ClusterSource{
+					ClusterRef: &redpandav1alpha2.ClusterRef{Name: "redpanda"},
+				},
+			},
+		}
+		r := &render{
+			pipeline:        pipeline,
+			labels:          Labels(pipeline),
+			clusterConn:     clusterConn,
+			userCredentials: creds,
+		}
+		objs, err := r.Render(t.Context())
+		require.NoError(t, err)
+		cm := objs[0].(*corev1.ConfigMap)
+
+		var rendered map[string]any
+		require.NoError(t, yaml.Unmarshal([]byte(cm.Data["connect.yaml"]), &rendered))
+		in := rendered["input"].(map[string]any)
+		rp := in["redpanda"].(map[string]any)
+		assert.Equal(t, "cg", rp["consumer_group"])
+		assert.NotNil(t, rp["seed_brokers"])
+		assert.NotNil(t, rp["sasl"])
+	})
+
+	t.Run("user_keys_win_on_conflict", func(t *testing.T) {
+		// User points the redpanda output at a different cluster — the
+		// operator must not clobber that override.
+		pipeline := &redpandav1alpha2.Pipeline{
+			ObjectMeta: metav1.ObjectMeta{Name: "p", Namespace: "default"},
+			Spec: redpandav1alpha2.PipelineSpec{
+				ConfigYAML: "" +
+					"input:\n  stdin: {}\n" +
+					"output:\n" +
+					"  redpanda:\n" +
+					"    topic: orders\n" +
+					"    seed_brokers: [external.example.com:9093]\n",
+				ClusterSource: &redpandav1alpha2.ClusterSource{
+					ClusterRef: &redpandav1alpha2.ClusterRef{Name: "redpanda"},
+				},
+			},
+		}
+		r := &render{
+			pipeline:        pipeline,
+			labels:          Labels(pipeline),
+			clusterConn:     clusterConn,
+			userCredentials: creds,
+		}
+		objs, err := r.Render(t.Context())
+		require.NoError(t, err)
+		cm := objs[0].(*corev1.ConfigMap)
+
+		var rendered map[string]any
+		require.NoError(t, yaml.Unmarshal([]byte(cm.Data["connect.yaml"]), &rendered))
+		rp := rendered["output"].(map[string]any)["redpanda"].(map[string]any)
+		assert.Equal(t,
+			[]any{"external.example.com:9093"},
+			rp["seed_brokers"],
+			"user-supplied seed_brokers wins")
+		// sasl wasn't user-supplied, so it should be filled in.
+		assert.NotNil(t, rp["sasl"])
+	})
+
+	t.Run("no_redpanda_plugin_no_merge", func(t *testing.T) {
+		// Pipeline writes to S3 only — no output.redpanda block to merge
+		// into. The configYaml should pass through unchanged.
+		original := "input:\n  stdin: {}\noutput:\n  aws_s3:\n    bucket: my-bucket\n"
+		pipeline := &redpandav1alpha2.Pipeline{
+			ObjectMeta: metav1.ObjectMeta{Name: "p", Namespace: "default"},
+			Spec: redpandav1alpha2.PipelineSpec{
+				ConfigYAML: original,
+				ClusterSource: &redpandav1alpha2.ClusterSource{
+					ClusterRef: &redpandav1alpha2.ClusterRef{Name: "redpanda"},
+				},
+			},
+		}
+		r := &render{
+			pipeline:        pipeline,
+			labels:          Labels(pipeline),
+			clusterConn:     clusterConn,
+			userCredentials: creds,
+		}
+		objs, err := r.Render(t.Context())
+		require.NoError(t, err)
+		cm := objs[0].(*corev1.ConfigMap)
+
+		// The rendered config still parses to the same structure;
+		// crucially, no top-level `redpanda` block is added.
+		var rendered map[string]any
+		require.NoError(t, yaml.Unmarshal([]byte(cm.Data["connect.yaml"]), &rendered))
+		_, hasTopLevel := rendered["redpanda"]
+		assert.False(t, hasTopLevel)
+		// And output.aws_s3 is untouched.
+		out := rendered["output"].(map[string]any)
+		_, hasRedpanda := out["redpanda"]
+		assert.False(t, hasRedpanda, "operator must not synthesize an output.redpanda block")
+	})
+
+	t.Run("redpanda_common_is_not_auto_configured", func(t *testing.T) {
+		// The deprecated redpanda_common plugin used to consume a
+		// top-level `redpanda:` block. The v2 design intentionally
+		// drops that injection — users staying on redpanda_common need
+		// to hand-write its config. This test guards against an
+		// accidental regression that re-introduces the top-level block.
+		pipeline := &redpandav1alpha2.Pipeline{
+			ObjectMeta: metav1.ObjectMeta{Name: "p", Namespace: "default"},
+			Spec: redpandav1alpha2.PipelineSpec{
+				ConfigYAML: "input:\n  stdin: {}\noutput:\n  redpanda_common:\n    topic: orders\n",
+				ClusterSource: &redpandav1alpha2.ClusterSource{
+					ClusterRef: &redpandav1alpha2.ClusterRef{Name: "redpanda"},
+				},
+			},
+		}
+		r := &render{
+			pipeline:        pipeline,
+			labels:          Labels(pipeline),
+			clusterConn:     clusterConn,
+			userCredentials: creds,
+		}
+		objs, err := r.Render(t.Context())
+		require.NoError(t, err)
+		cm := objs[0].(*corev1.ConfigMap)
+
+		var rendered map[string]any
+		require.NoError(t, yaml.Unmarshal([]byte(cm.Data["connect.yaml"]), &rendered))
+		_, hasTopLevel := rendered["redpanda"]
+		assert.False(t, hasTopLevel, "no top-level redpanda block; redpanda_common is not auto-configured")
+	})
+
+	t.Run("inline_only_pipeline_passes_through", func(t *testing.T) {
+		// No cluster binding at all — fully inline configYaml. Render
+		// must not modify it.
+		original := "input:\n  stdin: {}\noutput:\n  stdout: {}\n"
+		pipeline := &redpandav1alpha2.Pipeline{
+			ObjectMeta: metav1.ObjectMeta{Name: "p", Namespace: "default"},
+			Spec:       redpandav1alpha2.PipelineSpec{ConfigYAML: original},
+		}
+		r := &render{pipeline: pipeline, labels: Labels(pipeline)}
+		objs, err := r.Render(t.Context())
+		require.NoError(t, err)
+		cm := objs[0].(*corev1.ConfigMap)
+		assert.Equal(t, original, cm.Data["connect.yaml"])
+	})
+}
diff --git a/operator/internal/controller/pipeline/render.go b/operator/internal/controller/pipeline/render.go
new file mode 100644
index 000000000..fdd3d3bed
--- /dev/null
+++ b/operator/internal/controller/pipeline/render.go
@@ -0,0 +1,722 @@
+// Copyright 2026 Redpanda Data, Inc.
+//
+// Use of this software is governed by the Business Source License
+// included in the file licenses/BSL.md
+//
+// As of the Change Date specified in that file, in accordance with
+// the Business Source License, use of this software will be governed
+// by the Apache License, Version 2.0
+
+package pipeline
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/cockroachdb/errors"
+	monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
+	"github.com/redpanda-data/common-go/kube"
+	appsv1 "k8s.io/api/apps/v1"
+	corev1 "k8s.io/api/core/v1"
+	policyv1 "k8s.io/api/policy/v1"
+	"k8s.io/apimachinery/pkg/api/resource"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
+	"k8s.io/utils/ptr"
+	"sigs.k8s.io/yaml"
+
+	redpandav1alpha2 "github.com/redpanda-data/redpanda-operator/operator/api/redpanda/v1alpha2"
+)
+
+const (
+	// Default resource requests for Connect pods.
+	defaultMemoryRequest = "256Mi"
+	defaultCPURequest    = "100m"
+
+	zoneTopologyKey = "topology.kubernetes.io/zone"
+)
+
+// render implements [kube.Renderer] for Pipeline resources.
+type render struct {
+	pipeline          *redpandav1alpha2.Pipeline
+	labels            map[string]string
+	commonAnnotations map[string]string
+	// defaultImage is the chart-level default Redpanda Connect image, used
+	// when the Pipeline CR omits .spec.image. Empty when the operator was
+	// installed without setting connectController.image.{repository,tag}.
+	// Falls through to the PipelineDefaultImage constant when empty.
+	defaultImage string
+	monitoring   MonitoringConfig
+	// clusterConn holds resolved Redpanda cluster connection details when
+	// the Pipeline references a cluster via clusterRef. Nil when no
+	// clusterRef is configured.
+	clusterConn *clusterConnection
+	// userCredentials holds the SASL identity the pipeline authenticates
+	// as when bound to a cluster via .userRef. Nil when no userRef is set
+	// (which is also the case for staticConfiguration and inline-only
+	// pipelines).
+	userCredentials *userCredentials
+	// licenseContent is the raw bytes of the operator-level enterprise
+	// license. When non-empty, the renderer mirrors it into a Pipeline-owned
+	// Secret and injects REDPANDA_LICENSE into the Connect container so
+	// enterprise inputs (mysql_cdc, etc.) pass their own runtime license
+	// gate. Empty when no license is configured.
+	licenseContent []byte
+}
+
+// licenseSecretSuffix is appended to the Pipeline name to derive the
+// per-Pipeline Secret that mirrors the operator's license.
+const licenseSecretSuffix = "-license"
+
+// licenseSecretKey is the key inside the per-Pipeline license Secret that
+// holds the license bytes.
+const licenseSecretKey = "license"
+
+// Types returns the set of Kubernetes resource types managed by the Pipeline
+// controller.
+func Types() []kube.Object {
+	return []kube.Object{
+		&appsv1.Deployment{},
+		&corev1.ConfigMap{},
+		&corev1.Secret{},
+		&policyv1.PodDisruptionBudget{},
+		&monitoringv1.PodMonitor{},
+	}
+}
+
+func (r *render) Types() []kube.Object {
+	return Types()
+}
+
+// Render produces all Kubernetes objects for the given Pipeline.
+func (r *render) Render(_ context.Context) ([]kube.Object, error) {
+	cm, err := r.configMap()
+	if err != nil {
+		return nil, err
+	}
+
+	dp := r.deployment()
+
+	objs := []kube.Object{cm, dp}
+
+	if sec := r.licenseSecret(); sec != nil {
+		objs = append(objs, sec)
+	}
+
+	if pdb := r.podDisruptionBudget(); pdb != nil {
+		objs = append(objs, pdb)
+	}
+
+	if pm := r.podMonitor(); pm != nil {
+		objs = append(objs, pm)
+	}
+
+	return objs, nil
+}
+
+// licenseSecret returns a Pipeline-owned Secret holding the operator's
+// license bytes, or nil when no license content is available.
+func (r *render) licenseSecret() *corev1.Secret {
+	if len(r.licenseContent) == 0 {
+		return nil
+	}
+	return &corev1.Secret{
+		TypeMeta: metav1.TypeMeta{
+			APIVersion: "v1",
+			Kind:       "Secret",
+		},
+		ObjectMeta: metav1.ObjectMeta{
+			Name:        r.pipeline.Name + licenseSecretSuffix,
+			Namespace:   r.pipeline.Namespace,
+			Labels:      r.labels,
+			Annotations: r.annotations(),
+		},
+		Type: corev1.SecretTypeOpaque,
+		Data: map[string][]byte{
+			licenseSecretKey: r.licenseContent,
+		},
+	}
+}
+
+// Labels returns the standard set of labels applied to Pipeline-owned
+// resources.
+func Labels(pipeline *redpandav1alpha2.Pipeline) map[string]string {
+	return map[string]string{
+		"app.kubernetes.io/name":       "redpanda-connect",
+		"app.kubernetes.io/instance":   pipeline.Name,
+		"app.kubernetes.io/managed-by": "redpanda-operator",
+		"app.kubernetes.io/component":  "connect-pipeline",
+	}
+}
+
+func (r *render) annotations() map[string]string {
+	if len(r.commonAnnotations) == 0 {
+		return nil
+	}
+	out := make(map[string]string, len(r.commonAnnotations))
+	for k, v := range r.commonAnnotations {
+		out[k] = v
+	}
+	return out
+}
+
+// podAnnotations returns annotations for the pod template, merging
+// commonAnnotations with per-pipeline spec.annotations. Per-pipeline
+// annotations take precedence.
+func (r *render) podAnnotations() map[string]string {
+	specAnn := r.pipeline.Spec.Annotations
+	if len(r.commonAnnotations) == 0 && len(specAnn) == 0 {
+		return nil
+	}
+	out := make(map[string]string, len(r.commonAnnotations)+len(specAnn))
+	for k, v := range r.commonAnnotations {
+		out[k] = v
+	}
+	for k, v := range specAnn {
+		out[k] = v
+	}
+	return out
+}
+
+func (r *render) configMap() (*corev1.ConfigMap, error) {
+	rendered, err := r.renderConnectYAML()
+	if err != nil {
+		return nil, err
+	}
+
+	data := map[string]string{
+		"connect.yaml": rendered,
+	}
+	for filename, content := range r.pipeline.Spec.ConfigFiles {
+		if filename == "connect.yaml" {
+			return nil, errors.New("configFiles cannot contain a key named \"connect.yaml\"; use configYaml instead")
+		}
+		data[filename] = content
+	}
+
+	return &corev1.ConfigMap{
+		TypeMeta: metav1.TypeMeta{
+			APIVersion: "v1",
+			Kind:       "ConfigMap",
+		},
+		ObjectMeta: metav1.ObjectMeta{
+			Name:        r.pipeline.Name,
+			Namespace:   r.pipeline.Namespace,
+			Labels:      r.labels,
+			Annotations: r.annotations(),
+		},
+		Data: data,
+	}, nil
+}
+
+// renderConnectYAML returns the connect.yaml the pipeline pod will see. When
+// the Pipeline is bound to a Redpanda cluster via .cluster.clusterRef or
+// .cluster.staticConfiguration, the operator inline-merges the resolved
+// connection fields (seed_brokers, tls, sasl) into any `input.redpanda` and
+// `output.redpanda` blocks in the user's configYaml. The non-deprecated
+// `redpanda` plugins are targeted; the legacy `redpanda_common` family is
+// not auto-configured (it's been deprecated in Connect, and pushing users
+// onto the deprecated path via the operator is a foot-gun).
+//
+// User-side keys win on conflict: if the user already set seed_brokers, tls,
+// or sasl on their redpanda block, their values are left as-is. That's the
+// escape hatch for pipelines that need a different cluster, an override TLS
+// setting, etc.
+func (r *render) renderConnectYAML() (string, error) {
+	configYAML := r.pipeline.Spec.ConfigYAML
+
+	overlay := r.redpandaPluginOverlay()
+	if len(overlay) == 0 {
+		return configYAML, nil
+	}
+
+	var userConfig map[string]any
+	if err := yaml.Unmarshal([]byte(configYAML), &userConfig); err != nil {
+		return "", errors.Wrap(err, "configYaml is not valid YAML")
+	}
+	if userConfig == nil {
+		userConfig = map[string]any{}
+	}
+
+	// Inline-merge into input.redpanda and output.redpanda blocks. Missing
+	// blocks are not synthesized — if the user's config has no
+	// output.redpanda (e.g. they're writing to S3 only) we don't inject
+	// one.
+	mergeRedpandaPlugin(userConfig, "input", overlay)
+	mergeRedpandaPlugin(userConfig, "output", overlay)
+
+	out, err := yaml.Marshal(userConfig)
+	if err != nil {
+		return "", errors.Wrap(err, "marshalling rendered connect.yaml")
+	}
+	return string(out), nil
+}
+
+// redpandaPluginOverlay returns the connection fields the operator will
+// merge into `input.redpanda` / `output.redpanda` blocks. Empty when the
+// pipeline has no cluster binding (the fully-inline configYaml case).
+func (r *render) redpandaPluginOverlay() map[string]any {
+	if r.clusterConn == nil && (r.pipeline.Spec.ClusterSource == nil || r.pipeline.Spec.ClusterSource.StaticConfiguration == nil) {
+		return nil
+	}
+
+	overlay := map[string]any{}
+
+	if r.clusterConn != nil && len(r.clusterConn.Brokers) > 0 {
+		seeds := make([]any, 0, len(r.clusterConn.Brokers))
+		for _, b := range r.clusterConn.Brokers {
+			seeds = append(seeds, b)
+		}
+		overlay["seed_brokers"] = seeds
+	}
+
+	if r.clusterConn != nil && r.clusterConn.TLS != nil {
+		overlay["tls"] = map[string]any{
+			"enabled":       true,
+			"root_cas_file": clusterTLSMountPath + "/ca.crt",
+		}
+	}
+
+	if r.userCredentials != nil {
+		overlay["sasl"] = []any{map[string]any{
+			"mechanism": r.userCredentials.Mechanism,
+			"username":  "${REDPANDA_SASL_USERNAME}",
+			"password":  "${REDPANDA_SASL_PASSWORD}",
+		}}
+	}
+
+	return overlay
+}
+
+// mergeRedpandaPlugin overlays connection fields onto the `redpanda` block
+// nested under root[section] (section is "input" or "output"). User-side
+// keys are preserved; only keys the user did NOT set get filled in from
+// overlay. No-op if root[section].redpanda is missing or not a map.
+func mergeRedpandaPlugin(root map[string]any, section string, overlay map[string]any) {
+	if len(overlay) == 0 {
+		return
+	}
+	sectionMap, ok := root[section].(map[string]any)
+	if !ok {
+		return
+	}
+	plugin, ok := sectionMap["redpanda"].(map[string]any)
+	if !ok {
+		return
+	}
+	for k, v := range overlay {
+		if _, exists := plugin[k]; exists {
+			continue
+		}
+		plugin[k] = v
+	}
+	sectionMap["redpanda"] = plugin
+	root[section] = sectionMap
+}
+
+const (
+	// clusterTLSVolumeName is the volume name for the CA certificate from the referenced cluster.
+	clusterTLSVolumeName = "cluster-tls-ca"
+	// clusterTLSMountPath is the mount path for the cluster CA certificate.
+	clusterTLSMountPath = "/etc/tls/certs/ca"
+)
+
+// resolveImage picks the Redpanda Connect image for the Pipeline pod using
+// the three-tier precedence:
+//
+//  1. .spec.image on the Pipeline CR (per-pipeline override; always wins).
+//  2. .defaultImage on the renderer (chart-level default, set via the
+//     operator's --connect-default-image flag — itself derived from
+//     connectController.image.{repository,tag} in the operator chart).
+//  3. redpandav1alpha2.PipelineDefaultImage (binary-baked fallback).
+func (r *render) resolveImage() string {
+	if r.pipeline.Spec.Image != nil && *r.pipeline.Spec.Image != "" {
+		return *r.pipeline.Spec.Image
+	}
+	if r.defaultImage != "" {
+		return r.defaultImage
+	}
+	return redpandav1alpha2.PipelineDefaultImage
+}
+
+func (r *render) deployment() *appsv1.Deployment {
+	replicas := r.pipeline.GetReplicas()
+	image := r.resolveImage()
+
+	resources := corev1.ResourceRequirements{
+		Requests: corev1.ResourceList{
+			corev1.ResourceMemory: resource.MustParse(defaultMemoryRequest),
+			corev1.ResourceCPU:    resource.MustParse(defaultCPURequest),
+		},
+	}
+	if r.pipeline.Spec.Resources != nil {
+		resources = *r.pipeline.Spec.Resources
+	}
+
+	env := buildValueSourceEnv(r.pipeline)
+	if len(r.licenseContent) > 0 {
+		env = append(env, corev1.EnvVar{
+			Name: "REDPANDA_LICENSE",
+			ValueFrom: &corev1.EnvVarSource{
+				SecretKeyRef: &corev1.SecretKeySelector{
+					LocalObjectReference: corev1.LocalObjectReference{
+						Name: r.pipeline.Name + licenseSecretSuffix,
+					},
+					Key: licenseSecretKey,
+				},
+			},
+		})
+	}
+	// SASL credentials for the redpanda input/output are projected when the
+	// pipeline is bound to a cluster via .userRef. The Pipeline YAML can
+	// reference these as ${REDPANDA_SASL_USERNAME} / ${REDPANDA_SASL_PASSWORD}
+	// / ${REDPANDA_SASL_MECHANISM}, or rely on the operator-generated
+	// `redpanda` block in connect.yaml (which references them internally).
+	if r.userCredentials != nil {
+		env = append(env, r.userCredentials.envVars()...)
+	}
+	volumes := []corev1.Volume{
+		{
+			Name: "config",
+			VolumeSource: corev1.VolumeSource{
+				ConfigMap: &corev1.ConfigMapVolumeSource{
+					LocalObjectReference: corev1.LocalObjectReference{
+						Name: r.pipeline.Name,
+					},
+				},
+			},
+		},
+	}
+	volumeMounts := []corev1.VolumeMount{
+		{
+			Name:      "config",
+			MountPath: "/config",
+			ReadOnly:  true,
+		},
+	}
+
+	// Inject cluster connection env vars and TLS volumes when clusterRef is set.
+	if cc := r.clusterConn; cc != nil {
+		clusterEnv, clusterVolumes, clusterMounts := buildClusterConnectionResources(cc)
+		env = append(env, clusterEnv...)
+		volumes = append(volumes, clusterVolumes...)
+		volumeMounts = append(volumeMounts, clusterMounts...)
+	}
+
+	return &appsv1.Deployment{
+		TypeMeta: metav1.TypeMeta{
+			APIVersion: "apps/v1",
+			Kind:       "Deployment",
+		},
+		ObjectMeta: metav1.ObjectMeta{
+			Name:        r.pipeline.Name,
+			Namespace:   r.pipeline.Namespace,
+			Labels:      r.labels,
+			Annotations: r.annotations(),
+		},
+		Spec: appsv1.DeploymentSpec{
+			Replicas: ptr.To(replicas),
+			Selector: &metav1.LabelSelector{
+				MatchLabels: r.labels,
+			},
+			Strategy: appsv1.DeploymentStrategy{
+				Type: appsv1.RecreateDeploymentStrategyType,
+			},
+			Template: corev1.PodTemplateSpec{
+				ObjectMeta: metav1.ObjectMeta{
+					Labels:      r.labels,
+					Annotations: r.podAnnotations(),
+				},
+				Spec: corev1.PodSpec{
+					ServiceAccountName: r.pipeline.Spec.ServiceAccountName,
+					InitContainers: []corev1.Container{
+						{
+							Name:                     "lint",
+							Image:                    image,
+							Command:                  []string{"/redpanda-connect", "lint", "/config/connect.yaml"},
+							Env:                      env,
+							TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
+							VolumeMounts:             volumeMounts,
+						},
+					},
+					Containers: []corev1.Container{
+						{
+							Name:    "connect",
+							Image:   image,
+							Command: []string{"/redpanda-connect", "run", "/config/connect.yaml"},
+							Ports: []corev1.ContainerPort{
+								{Name: "http", ContainerPort: 4195, Protocol: corev1.ProtocolTCP},
+							},
+ Env: env, + Resources: resources, + ReadinessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Path: "/ready", + Port: intstr.FromInt32(4195), + }, + }, + InitialDelaySeconds: 5, + PeriodSeconds: 10, + }, + VolumeMounts: volumeMounts, + }, + }, + Volumes: volumes, + NodeSelector: r.pipeline.Spec.NodeSelector, + Tolerations: r.pipeline.Spec.Tolerations, + Affinity: buildAffinity(r.pipeline), + TopologySpreadConstraints: buildTopologySpreadConstraints(r.pipeline, r.labels), + }, + }, + }, + } +} + +func (r *render) podDisruptionBudget() *policyv1.PodDisruptionBudget { + budget := r.pipeline.Spec.Budget + if budget == nil { + return nil + } + + maxUnavailable := intstr.FromInt32(int32(budget.MaxUnavailable)) + + return &policyv1.PodDisruptionBudget{ + TypeMeta: metav1.TypeMeta{ + APIVersion: "policy/v1", + Kind: "PodDisruptionBudget", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: r.pipeline.Name, + Namespace: r.pipeline.Namespace, + Labels: r.labels, + Annotations: r.annotations(), + }, + Spec: policyv1.PodDisruptionBudgetSpec{ + MaxUnavailable: &maxUnavailable, + Selector: &metav1.LabelSelector{ + MatchLabels: r.labels, + }, + }, + } +} + +// buildClusterConnectionResources returns the env vars, volumes, and volume +// mounts needed to connect to a Redpanda cluster via clusterRef. 
+func buildClusterConnectionResources(cc *clusterConnection) ([]corev1.EnvVar, []corev1.Volume, []corev1.VolumeMount) { + var env []corev1.EnvVar + var volumes []corev1.Volume + var mounts []corev1.VolumeMount + + env = append(env, corev1.EnvVar{ + Name: "RPK_BROKERS", + Value: cc.BrokersString(), + }) + + if cc.TLS != nil { + caCertPath := fmt.Sprintf("%s/%s", clusterTLSMountPath, cc.TLS.CACertSecretRef.Key) + + env = append(env, corev1.EnvVar{ + Name: "RPK_TLS_ENABLED", + Value: "true", + }, corev1.EnvVar{ + Name: "RPK_TLS_ROOT_CAS_FILE", + Value: caCertPath, + }) + + volumes = append(volumes, corev1.Volume{ + Name: clusterTLSVolumeName, + VolumeSource: corev1.VolumeSource{ + Projected: &corev1.ProjectedVolumeSource{ + Sources: []corev1.VolumeProjection{ + { + Secret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: cc.TLS.CACertSecretRef.Name, + }, + Items: []corev1.KeyToPath{ + { + Key: cc.TLS.CACertSecretRef.Key, + Path: cc.TLS.CACertSecretRef.Key, + }, + }, + }, + }, + }, + }, + }, + }) + + mounts = append(mounts, corev1.VolumeMount{ + Name: clusterTLSVolumeName, + MountPath: clusterTLSMountPath, + ReadOnly: true, + }) + } else { + env = append(env, corev1.EnvVar{ + Name: "RPK_TLS_ENABLED", + Value: "false", + }) + } + + if cc.SASL != nil { + env = append(env, corev1.EnvVar{ + Name: "RPK_SASL_MECHANISM", + Value: cc.SASL.Mechanism, + }, corev1.EnvVar{ + Name: "RPK_SASL_USER", + Value: cc.SASL.Username, + }) + + if cc.SASL.PasswordRef != nil { + env = append(env, corev1.EnvVar{ + Name: "RPK_SASL_PASSWORD", + ValueFrom: &corev1.EnvVarSource{ + SecretKeyRef: cc.SASL.PasswordRef, + }, + }) + } + } + + return env, volumes, mounts +} + +func (r *render) podMonitor() *monitoringv1.PodMonitor { + if !r.monitoring.Enabled { + return nil + } + + labels := make(map[string]string, len(r.labels)+len(r.monitoring.Labels)) + for k, v := range r.labels { + labels[k] = v + } + for k, v := range r.monitoring.Labels { + labels[k] = v + } + 
+ endpoint := monitoringv1.PodMetricsEndpoint{ + Path: "/metrics", + Port: ptr.To("http"), + } + if r.monitoring.ScrapeInterval != "" { + endpoint.Interval = monitoringv1.Duration(r.monitoring.ScrapeInterval) + } + + return &monitoringv1.PodMonitor{ + TypeMeta: metav1.TypeMeta{ + APIVersion: "monitoring.coreos.com/v1", + Kind: "PodMonitor", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: r.pipeline.Name, + Namespace: r.pipeline.Namespace, + Labels: labels, + Annotations: r.annotations(), + }, + Spec: monitoringv1.PodMonitorSpec{ + PodMetricsEndpoints: []monitoringv1.PodMetricsEndpoint{endpoint}, + Selector: metav1.LabelSelector{ + MatchLabels: r.labels, + }, + }, + } +} + +// buildValueSourceEnv projects each spec.valueSources entry into a single +// container env var. Unlike the earlier `envFrom: secretRef` bag-of-Secrets +// pattern, every value is a named pull: the user explicitly chooses which +// key flows into which env var, and Secret keys not listed in valueSources +// stay out of the pod's environment. +func buildValueSourceEnv(pipeline *redpandav1alpha2.Pipeline) []corev1.EnvVar { + if len(pipeline.Spec.ValueSources) == 0 { + return []corev1.EnvVar{} + } + + env := make([]corev1.EnvVar, 0, len(pipeline.Spec.ValueSources)) + for _, vs := range pipeline.Spec.ValueSources { + entry := corev1.EnvVar{Name: vs.Name} + switch { + case vs.Source.Inline != nil: + entry.Value = *vs.Source.Inline + case vs.Source.SecretKeyRef != nil: + entry.ValueFrom = &corev1.EnvVarSource{SecretKeyRef: vs.Source.SecretKeyRef} + case vs.Source.ConfigMapKeyRef != nil: + entry.ValueFrom = &corev1.EnvVarSource{ConfigMapKeyRef: vs.Source.ConfigMapKeyRef} + case vs.Source.ExternalSecretRefSelector != nil: + // External secrets are projected through a Kubernetes Secret + // the ESO operator manages; reference it by name with the + // same key the source declared (defaulting to the entry name). 
+ entry.ValueFrom = &corev1.EnvVarSource{ + SecretKeyRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: vs.Source.ExternalSecretRefSelector.Name, + }, + Key: vs.Name, + }, + } + default: + // CEL on ValueSource enforces exactly-one. Skip if somehow + // none set; the spec-level webhook would have rejected this. + continue + } + env = append(env, entry) + } + return env +} + +// buildAffinity constructs a node affinity that restricts pods to the specified +// zones. Returns nil if no zones are configured. +func buildAffinity(connect *redpandav1alpha2.Pipeline) *corev1.Affinity { + if len(connect.Spec.Zones) == 0 { + return nil + } + + return &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{ + { + MatchExpressions: []corev1.NodeSelectorRequirement{ + { + Key: zoneTopologyKey, + Operator: corev1.NodeSelectorOpIn, + Values: connect.Spec.Zones, + }, + }, + }, + }, + }, + }, + } +} + +// buildTopologySpreadConstraints returns the topology spread constraints for +// the pipeline. If zones are configured and no explicit constraints are +// provided, a default constraint is generated to spread pods evenly across +// the specified zones. +func buildTopologySpreadConstraints(connect *redpandav1alpha2.Pipeline, selectorLabels map[string]string) []corev1.TopologySpreadConstraint { + if len(connect.Spec.TopologySpreadConstraints) > 0 { + return connect.Spec.TopologySpreadConstraints + } + + if len(connect.Spec.Zones) > 0 { + return []corev1.TopologySpreadConstraint{ + { + MaxSkew: 1, + TopologyKey: zoneTopologyKey, + WhenUnsatisfiable: corev1.ScheduleAnyway, + LabelSelector: &metav1.LabelSelector{ + MatchLabels: selectorLabels, + }, + }, + } + } + + return nil +} + +// FormatConditionMessage returns a formatted message for a condition update +// during resource reconciliation. 
+func FormatConditionMessage(resource, detail string) string { + return fmt.Sprintf("%s reconciliation failed: %v", resource, detail) +} diff --git a/operator/internal/controller/pipeline/testdata/controller-tests.golden.txtar b/operator/internal/controller/pipeline/testdata/controller-tests.golden.txtar new file mode 100644 index 000000000..e05ccb2aa --- /dev/null +++ b/operator/internal/controller/pipeline/testdata/controller-tests.golden.txtar @@ -0,0 +1,181 @@ +-- basic-pipeline -- +- apiVersion: v1 + data: + connect.yaml: | + input: + generate: + mapping: 'root.message = "hello"' + interval: "5s" + output: + stdout: {} + kind: ConfigMap + metadata: + labels: + app.kubernetes.io/component: connect-pipeline + app.kubernetes.io/instance: basic-pipeline + app.kubernetes.io/managed-by: redpanda-operator + app.kubernetes.io/name: redpanda-connect + name: basic-pipeline + namespace: default +- apiVersion: apps/v1 + kind: Deployment + metadata: + labels: + app.kubernetes.io/component: connect-pipeline + app.kubernetes.io/instance: basic-pipeline + app.kubernetes.io/managed-by: redpanda-operator + app.kubernetes.io/name: redpanda-connect + name: basic-pipeline + namespace: default + spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/component: connect-pipeline + app.kubernetes.io/instance: basic-pipeline + app.kubernetes.io/managed-by: redpanda-operator + app.kubernetes.io/name: redpanda-connect + strategy: + type: Recreate + template: + metadata: + labels: + app.kubernetes.io/component: connect-pipeline + app.kubernetes.io/instance: basic-pipeline + app.kubernetes.io/managed-by: redpanda-operator + app.kubernetes.io/name: redpanda-connect + spec: + containers: + - command: + - /redpanda-connect + - run + - /config/connect.yaml + image: docker.redpanda.com/redpandadata/connect:4.87.0 + name: connect + ports: + - containerPort: 4195 + name: http + protocol: TCP + readinessProbe: + httpGet: + path: /ready + port: 4195 + initialDelaySeconds: 5 + 
periodSeconds: 10 + resources: + requests: + cpu: 100m + memory: 256Mi + volumeMounts: + - mountPath: /config + name: config + readOnly: true + initContainers: + - command: + - /redpanda-connect + - lint + - /config/connect.yaml + image: docker.redpanda.com/redpandadata/connect:4.87.0 + name: lint + resources: {} + terminationMessagePolicy: FallbackToLogsOnError + volumeMounts: + - mountPath: /config + name: config + readOnly: true + volumes: + - configMap: + name: basic-pipeline + name: config + status: {} +-- pipeline-with-annotations -- +- apiVersion: v1 + data: + connect.yaml: | + input: + generate: + mapping: 'root = "hello"' + output: + stdout: {} + kind: ConfigMap + metadata: + labels: + app.kubernetes.io/component: connect-pipeline + app.kubernetes.io/instance: annotated-pipeline + app.kubernetes.io/managed-by: redpanda-operator + app.kubernetes.io/name: redpanda-connect + name: annotated-pipeline + namespace: default +- apiVersion: apps/v1 + kind: Deployment + metadata: + labels: + app.kubernetes.io/component: connect-pipeline + app.kubernetes.io/instance: annotated-pipeline + app.kubernetes.io/managed-by: redpanda-operator + app.kubernetes.io/name: redpanda-connect + name: annotated-pipeline + namespace: default + spec: + replicas: 1 + selector: + matchLabels: + app.kubernetes.io/component: connect-pipeline + app.kubernetes.io/instance: annotated-pipeline + app.kubernetes.io/managed-by: redpanda-operator + app.kubernetes.io/name: redpanda-connect + strategy: + type: Recreate + template: + metadata: + annotations: + ad.datadoghq.com/connect.checks: openmetrics + labels: + app.kubernetes.io/component: connect-pipeline + app.kubernetes.io/instance: annotated-pipeline + app.kubernetes.io/managed-by: redpanda-operator + app.kubernetes.io/name: redpanda-connect + spec: + containers: + - command: + - /redpanda-connect + - run + - /config/connect.yaml + image: docker.redpanda.com/redpandadata/connect:4.87.0 + name: connect + ports: + - containerPort: 4195 + name: 
http + protocol: TCP + readinessProbe: + httpGet: + path: /ready + port: 4195 + initialDelaySeconds: 5 + periodSeconds: 10 + resources: + requests: + cpu: 100m + memory: 256Mi + volumeMounts: + - mountPath: /config + name: config + readOnly: true + initContainers: + - command: + - /redpanda-connect + - lint + - /config/connect.yaml + image: docker.redpanda.com/redpandadata/connect:4.87.0 + name: lint + resources: {} + terminationMessagePolicy: FallbackToLogsOnError + volumeMounts: + - mountPath: /config + name: config + readOnly: true + volumes: + - configMap: + name: annotated-pipeline + name: config + status: {} diff --git a/taskfiles/k8s.yml b/taskfiles/k8s.yml index d4665028a..5915406cb 100644 --- a/taskfiles/k8s.yml +++ b/taskfiles/k8s.yml @@ -46,6 +46,8 @@ tasks: PATH: ./internal/controller/redpanda - NAME: console PATH: ./internal/controller/console + - NAME: pipeline + PATH: ./internal/controller/pipeline - NAME: v1-manager PATH: ./internal/controller/vectorized - NAME: decommission @@ -93,6 +95,7 @@ tasks: ./config/rbac/itemized/sidecar.yaml \ ./config/rbac/itemized/crd-installation.yaml \ ./config/rbac/itemized/multicluster-manager.yaml \ + ./config/rbac/itemized/pipeline.yaml \ ./config/rbac/itemized/v1-manager.yaml \ ./config/rbac/itemized/v2-manager.yaml \ ./config/rbac/itemized/console.yaml
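The merge semantics described in `renderConnectYAML` (user keys win on conflict; a missing `input.redpanda`/`output.redpanda` block is never synthesized) can be sketched in isolation. This is a standalone illustration that copies the merge logic into a plain `main` package, not the operator's actual package; the maps stand in for a parsed `connect.yaml`.

```go
package main

import "fmt"

// mergeOverlay mirrors mergeRedpandaPlugin: fill in only the keys the user
// did not set on root[section].redpanda; no-op when the block is absent.
func mergeOverlay(root map[string]any, section string, overlay map[string]any) {
	sectionMap, ok := root[section].(map[string]any)
	if !ok {
		return
	}
	plugin, ok := sectionMap["redpanda"].(map[string]any)
	if !ok {
		return
	}
	for k, v := range overlay {
		if _, exists := plugin[k]; !exists {
			plugin[k] = v
		}
	}
}

func main() {
	config := map[string]any{
		// The user already pinned seed_brokers; the overlay must not override it.
		"input": map[string]any{"redpanda": map[string]any{
			"seed_brokers": []any{"user-broker:9092"},
			"topics":       []any{"in"},
		}},
		// No output.redpanda block here, so nothing gets synthesized for it.
		"output": map[string]any{"stdout": map[string]any{}},
	}
	overlay := map[string]any{
		"seed_brokers": []any{"cluster-broker:9092"},
		"tls":          map[string]any{"enabled": true},
	}

	mergeOverlay(config, "input", overlay)
	mergeOverlay(config, "output", overlay)

	in := config["input"].(map[string]any)["redpanda"].(map[string]any)
	fmt.Println(in["seed_brokers"].([]any)[0]) // → user-broker:9092 (user value preserved)
	fmt.Println(in["tls"] != nil)              // → true (unset key filled from overlay)
	_, synthesized := config["output"].(map[string]any)["redpanda"]
	fmt.Println(synthesized) // → false (absent block left absent)
}
```

The same precedence is what makes per-pipeline overrides work: a pipeline pointing its `redpanda` input at a different cluster simply sets `seed_brokers` itself and the operator-resolved values are ignored.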