Conversation
Signed-off-by: Vadim Bauer <vb@container-registry.com>
📝 Walkthrough

This PR introduces a complete Helm chart for the Harbor Next container registry, including deployment manifests for core components (core, registry, portal, jobservice, exporter, trivy), infrastructure templates for ingress/routing/exposure, comprehensive documentation with deployment guides, test suites for validation, a JSON schema for values validation, and CI/CD task automation.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Rationale: Very large scope (100+ new files) with heterogeneous changes spanning Helm templates, Kubernetes manifests, test suites, documentation, and configuration. Requires verification of template logic, conditional rendering across storage backends and exposure methods, comprehensive test coverage, schema correctness, and documentation accuracy. While the changes follow consistent patterns, the breadth demands careful review of helper template interdependencies, manifest resource definitions, and test assertions.
Dependency Review: ✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found. Scanned files: none.
18 issues found across 89 files
Note: This PR contains a large number of files. cubic only reviews up to 75 files per PR, so some files may not have been reviewed.
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="deploy/chart/templates/_helpers-tls.tpl">
<violation number="1" location="deploy/chart/templates/_helpers-tls.tpl:11">
P2: `harbor.component.scheme` always returns `http`, so TLS-enabled installs still configure HTTP for jobservice and trivy probes. Use `.Values.tls.enabled` to switch to `https`.</violation>
</file>
<file name="deploy/chart/docs/guide/nutanix.md">
<violation number="1" location="deploy/chart/docs/guide/nutanix.md:21">
P2: Add `--create-namespace` to the Loki Helm install command so the first monitoring step works on a fresh cluster.</violation>
</file>
<file name="deploy/chart/templates/NOTES.txt">
<violation number="1" location="deploy/chart/templates/NOTES.txt:5">
P2: Add a gateway branch for image push instructions; gateway-only installs currently omit `docker login/tag/push` guidance.</violation>
</file>
<file name="deploy/chart/templates/portal.deployment.yaml">
<violation number="1" location="deploy/chart/templates/portal.deployment.yaml:19">
P2: Add a pod-template checksum annotation for `portal.configmap.yaml` so portal pods roll automatically when config changes.</violation>
</file>
<file name="deploy/chart/example/aws-eks-irsa/README.md">
<violation number="1" location="deploy/chart/example/aws-eks-irsa/README.md:181">
P2: Avoid `docker login -p` in docs; it exposes credentials in process arguments and shell history. Use `--password-stdin` instead.</violation>
</file>
<file name="deploy/chart/docs/guide/suse.md">
<violation number="1" location="deploy/chart/docs/guide/suse.md:100">
P2: Add the required `externalURL` to the values.yaml example; the chart requires it and installs will fail validation without it.</violation>
</file>
<file name="deploy/chart/templates/_helpers.tpl">
<violation number="1" location="deploy/chart/templates/_helpers.tpl:256">
P1: The fallback secret key is deterministically derived from the release name via `sha256sum`, making it trivially computable by anyone who knows the release name. This key encrypts credentials in the database. Consider using the same pattern as other secrets in this chart (`secretKeyHelper` + `randAlphaNum` fallback), which preserves stability across upgrades while generating a truly random key on first install.</violation>
<violation number="2" location="deploy/chart/templates/_helpers.tpl:305">
P2: Template name `imagePullSecret` is missing the `harbor.` namespace prefix, unlike all other templates in this file. Helm template names are global across a release's chart and subcharts, so this risks a name collision (e.g., with the Valkey subchart).</violation>
</file>
<file name="deploy/chart/example/aws-eks-irsa/verify.sh">
<violation number="1" location="deploy/chart/example/aws-eks-irsa/verify.sh:29">
P1: Post-increment counters can terminate the script under `set -e` when incrementing from 0.</violation>
</file>
<file name="deploy/chart/values.schema.json">
<violation number="1" location="deploy/chart/values.schema.json:7">
P2: `database.host` is not actually enforced by the root schema, so invalid values pass schema validation but fail later at template render time.</violation>
</file>
<file name="deploy/chart/templates/expose-route.yaml">
<violation number="1" location="deploy/chart/templates/expose-route.yaml:29">
P1: Allowing configurable Route termination here is unsafe because `passthrough`/`reencrypt` are incompatible with the HTTP-only core backend target (`http-web`). Restrict this route to `edge` termination unless a TLS backend service is added.</violation>
</file>
<file name="deploy/chart/templates/core.secret.yaml">
<violation number="1" location="deploy/chart/templates/core.secret.yaml:25">
P2: Reuse existing tls.key/tls.crt values from the current Secret (like other keys do) so upgrades don’t rotate the token signing certificate unintentionally.</violation>
</file>
<file name="deploy/chart/templates/ingress.secret.yaml">
<violation number="1" location="deploy/chart/templates/ingress.secret.yaml:16">
P2: Guard `ca.crt` rendering for existing secrets; missing `ca.crt` currently renders an invalid value and can break upgrades.</violation>
</file>
<file name="deploy/chart/templates/registry.configmap.yaml">
<violation number="1" location="deploy/chart/templates/registry.configmap.yaml:168">
P2: The middleware config block is unreachable because `harbor.middlware.enabled` always returns false. As written, this feature will never be rendered even when users configure middleware values.</violation>
</file>
<file name="deploy/chart/templates/_helpers-redis.tpl">
<violation number="1" location="deploy/chart/templates/_helpers-redis.tpl:124">
P1: harbor.redis.password always returns empty when Valkey is enabled, so the registry Redis config omits the password even if `valkey.auth.enabled=true`. Authenticated Valkey deployments will fail to connect without a password.</violation>
</file>
<file name="deploy/chart/templates/trivy.secret.yaml">
<violation number="1" location="deploy/chart/templates/trivy.secret.yaml:11">
P2: The Trivy redisURL uses a helper that inserts `$(REDIS_PASSWORD)` when Valkey auth is enabled, but the Trivy container doesn’t set REDIS_PASSWORD. This leaves an unresolved placeholder in the Redis URL and can break Trivy’s Redis connection when auth is on. Add REDIS_PASSWORD to the Trivy secret/env or generate a fully-resolved URL in the secret.</violation>
</file>
<file name="deploy/chart/templates/registry.secret.yaml">
<violation number="1" location="deploy/chart/templates/registry.secret.yaml:10">
P2: When `registry.existingSecret` is set, REGISTRY_HTTP_SECRET is omitted from the chart-managed secret, but the registry pod still sources env vars from that secret. This leaves the registry container without the required HTTP secret (or with a mismatch vs. registryctl), which can break token validation. Populate REGISTRY_HTTP_SECRET from the existing secret or switch the registry envFrom to use the existing secret as well.</violation>
</file>
<file name="deploy/chart/templates/jobservice.secret.yaml">
<violation number="1" location="deploy/chart/templates/jobservice.secret.yaml:15">
P2: `jobservice.existingSecret` currently causes the generated jobservice secret to omit `JOBSERVICE_SECRET`, but the jobservice Deployment always mounts the generated secret. That leaves `JOBSERVICE_SECRET` unset when an existing secret is configured, breaking jobservice auth configuration. Either always populate this key or switch the Deployment/Secret creation to use the existing secret name.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```
{{- .Values.secretKey }}
{{- else }}
{{- /* Generate a deterministic key based on release name */ -}}
{{- $key := printf "%s-harbor-secret-key" .Release.Name | sha256sum | trunc 16 }}
```
P1: The fallback secret key is deterministically derived from the release name via sha256sum, making it trivially computable by anyone who knows the release name. This key encrypts credentials in the database. Consider using the same pattern as other secrets in this chart (secretKeyHelper + randAlphaNum fallback), which preserves stability across upgrades while generating a truly random key on first install.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/templates/_helpers.tpl, line 256:
<comment>The fallback secret key is deterministically derived from the release name via `sha256sum`, making it trivially computable by anyone who knows the release name. This key encrypts credentials in the database. Consider using the same pattern as other secrets in this chart (`secretKeyHelper` + `randAlphaNum` fallback), which preserves stability across upgrades while generating a truly random key on first install.</comment>
<file context>
@@ -0,0 +1,307 @@
+{{- .Values.secretKey }}
+{{- else }}
+{{- /* Generate a deterministic key based on release name */ -}}
+{{- $key := printf "%s-harbor-secret-key" .Release.Name | sha256sum | trunc 16 }}
+{{- $key }}
+{{- end }}
</file context>
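The suggested fix could look roughly like the following sketch, which reuses the key persisted in the live Secret and falls back to `randAlphaNum` on first install. The Secret name (`<release>-core`) and the `secretKey` data key are illustrative assumptions, not confirmed by this diff:

```yaml
{{- define "harbor.secretKey" -}}
{{- if .Values.secretKey -}}
{{- .Values.secretKey -}}
{{- else -}}
{{- /* Reuse the key already stored in the release's Secret, else generate a random one */ -}}
{{- $name := printf "%s-core" (include "harbor.fullname" .) -}}
{{- $existing := lookup "v1" "Secret" .Release.Namespace $name -}}
{{- if and $existing (hasKey ($existing.data | default dict) "secretKey") -}}
{{- index $existing.data "secretKey" | b64dec -}}
{{- else -}}
{{- randAlphaNum 16 -}}
{{- end -}}
{{- end -}}
{{- end -}}
```

This preserves stability across upgrades (the `lookup` finds the previously generated value) while producing a truly random key on first install. Note that `lookup` returns an empty map during `helm template`/`--dry-run`, so dry runs will render a fresh random key.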
```
local desc="$1"; shift
if "$@" >/dev/null 2>&1; then
  echo " PASS: ${desc}"
  ((PASS++))
```
P1: Post-increment counters can terminate the script under set -e when incrementing from 0.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/example/aws-eks-irsa/verify.sh, line 29:
<comment>Post-increment counters can terminate the script under `set -e` when incrementing from 0.</comment>
<file context>
@@ -0,0 +1,123 @@
+ local desc="$1"; shift
+ if "$@" >/dev/null 2>&1; then
+ echo " PASS: ${desc}"
+ ((PASS++))
+ else
+ echo " FAIL: ${desc}"
</file context>
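The failure mode is easy to reproduce: in bash, an arithmetic command whose result is 0 exits with status 1, and `PASS++` yields the *old* value, so the very first increment from 0 kills a `set -e` script. A minimal sketch of the problem and two safe alternatives:

```shell
#!/usr/bin/env bash
# Demonstrates why ((PASS++)) aborts under `set -e` when PASS is 0,
# and two safe replacements. Assumes bash.
set -e

PASS=0
# ((PASS++))        # would exit here: post-increment evaluates to 0 -> status 1

PASS=$((PASS + 1))  # safe: plain assignment always has exit status 0
((++PASS)) || true  # also safe: `|| true` tolerates a zero-valued result

echo "PASS=${PASS}"  # prints PASS=2
```

Either form keeps the counter working; `PASS=$((PASS + 1))` is the most portable since it avoids the arithmetic-command exit status entirely.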
```
    name: {{ include "harbor.fullname" . }}-core
    weight: 100
  tls:
    termination: {{ .Values.expose.route.tls.termination | default "edge" }}
```
P1: Allowing configurable Route termination here is unsafe because passthrough/reencrypt are incompatible with the HTTP-only core backend target (http-web). Restrict this route to edge termination unless a TLS backend service is added.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/templates/expose-route.yaml, line 29:
<comment>Allowing configurable Route termination here is unsafe because `passthrough`/`reencrypt` are incompatible with the HTTP-only core backend target (`http-web`). Restrict this route to `edge` termination unless a TLS backend service is added.</comment>
<file context>
@@ -0,0 +1,32 @@
+ name: {{ include "harbor.fullname" . }}-core
+ weight: 100
+ tls:
+ termination: {{ .Values.expose.route.tls.termination | default "edge" }}
+ insecureEdgeTerminationPolicy: {{ .Values.expose.route.tls.insecureEdgeTerminationPolicy | default "Redirect" }}
+ wildcardPolicy: None
</file context>
```
{{- end -}}

{{- define "harbor.redis.password" -}}
  {{- ternary "" .Values.externalRedis.password (.Values.valkey.enabled) }}
```
P1: harbor.redis.password always returns empty when Valkey is enabled, so the registry Redis config omits the password even if valkey.auth.enabled=true. Authenticated Valkey deployments will fail to connect without a password.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/templates/_helpers-redis.tpl, line 124:
<comment>harbor.redis.password always returns empty when Valkey is enabled, so the registry Redis config omits the password even if `valkey.auth.enabled=true`. Authenticated Valkey deployments will fail to connect without a password.</comment>
<file context>
@@ -0,0 +1,129 @@
+{{- end -}}
+
+{{- define "harbor.redis.password" -}}
+ {{- ternary "" .Values.externalRedis.password (.Values.valkey.enabled) }}
+{{- end -}}
+
</file context>
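A sketch of a corrected helper that honors Valkey auth; the `valkey.auth.enabled`/`valkey.auth.password` paths assume Bitnami-style subchart values and are not confirmed by this diff:

```yaml
{{- define "harbor.redis.password" -}}
{{- if .Values.valkey.enabled -}}
{{- /* Assumed Valkey subchart values layout */ -}}
{{- if .Values.valkey.auth.enabled -}}
{{- .Values.valkey.auth.password -}}
{{- end -}}
{{- else -}}
{{- .Values.externalRedis.password -}}
{{- end -}}
{{- end -}}
```

With this shape, the helper returns empty only when auth is genuinely disabled, rather than whenever the bundled Valkey is in use.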
```
Check if internal TLS is enabled
*/}}
{{- define "harbor.component.scheme" -}}
  {{- printf "http" -}}
```
P2: harbor.component.scheme always returns http, so TLS-enabled installs still configure HTTP for jobservice and trivy probes. Use .Values.tls.enabled to switch to https.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/templates/_helpers-tls.tpl, line 11:
<comment>`harbor.component.scheme` always returns `http`, so TLS-enabled installs still configure HTTP for jobservice and trivy probes. Use `.Values.tls.enabled` to switch to `https`.</comment>
<file context>
@@ -0,0 +1,48 @@
+Check if internal TLS is enabled
+*/}}
+{{- define "harbor.component.scheme" -}}
+ {{- printf "http" -}}
+{{- end }}
+
</file context>
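The suggested fix can be sketched with Sprig's `ternary`, assuming `.Values.tls.enabled` is the chart's internal-TLS toggle as the comment states:

```yaml
{{- define "harbor.component.scheme" -}}
  {{- ternary "https" "http" .Values.tls.enabled -}}
{{- end }}
```

Note `ternary`'s argument order: the true value first, then the false value, then the condition.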
```
{{- if $existingSecret }}
  tls.crt: {{ index $existingSecret.data "tls.crt" }}
  tls.key: {{ index $existingSecret.data "tls.key" }}
  ca.crt: {{ index $existingSecret.data "ca.crt" }}
```
P2: Guard ca.crt rendering for existing secrets; missing ca.crt currently renders an invalid value and can break upgrades.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/templates/ingress.secret.yaml, line 16:
<comment>Guard `ca.crt` rendering for existing secrets; missing `ca.crt` currently renders an invalid value and can break upgrades.</comment>
<file context>
@@ -0,0 +1,24 @@
+{{- if $existingSecret }}
+ tls.crt: {{ index $existingSecret.data "tls.crt" }}
+ tls.key: {{ index $existingSecret.data "tls.key" }}
+ ca.crt: {{ index $existingSecret.data "ca.crt" }}
+{{- else }}
+ {{- $ca := genCA "harbor-ca" 365 }}
</file context>
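A sketch of the suggested guard, using Sprig's `hasKey` so that a Secret without a `ca.crt` entry simply omits the key instead of rendering an empty/invalid value:

```yaml
{{- if $existingSecret }}
  tls.crt: {{ index $existingSecret.data "tls.crt" }}
  tls.key: {{ index $existingSecret.data "tls.key" }}
  {{- if hasKey ($existingSecret.data | default dict) "ca.crt" }}
  ca.crt: {{ index $existingSecret.data "ca.crt" }}
  {{- end }}
{{- end }}
```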
```
    schema1:
      enabled: true

  {{- if (include "harbor.middlware.enabled" .) }}
```
P2: The middleware config block is unreachable because harbor.middlware.enabled always returns false. As written, this feature will never be rendered even when users configure middleware values.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/templates/registry.configmap.yaml, line 168:
<comment>The middleware config block is unreachable because `harbor.middlware.enabled` always returns false. As written, this feature will never be rendered even when users configure middleware values.</comment>
<file context>
@@ -0,0 +1,188 @@
+ schema1:
+ enabled: true
+
+ {{- if (include "harbor.middlware.enabled" .) }}
+ {{- $middleware := .Values.registry.middleware }}
+ {{- $middlewareType := $middleware.type }}
</file context>
```
{{ include "harbor.labels" . | indent 4 }}
type: Opaque
data:
  redisURL: {{ include "harbor.redis.url.trivy" . | b64enc }}
```
P2: The Trivy redisURL uses a helper that inserts $(REDIS_PASSWORD) when Valkey auth is enabled, but the Trivy container doesn’t set REDIS_PASSWORD. This leaves an unresolved placeholder in the Redis URL and can break Trivy’s Redis connection when auth is on. Add REDIS_PASSWORD to the Trivy secret/env or generate a fully-resolved URL in the secret.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/templates/trivy.secret.yaml, line 11:
<comment>The Trivy redisURL uses a helper that inserts `$(REDIS_PASSWORD)` when Valkey auth is enabled, but the Trivy container doesn’t set REDIS_PASSWORD. This leaves an unresolved placeholder in the Redis URL and can break Trivy’s Redis connection when auth is on. Add REDIS_PASSWORD to the Trivy secret/env or generate a fully-resolved URL in the secret.</comment>
<file context>
@@ -0,0 +1,13 @@
+{{ include "harbor.labels" . | indent 4 }}
+type: Opaque
+data:
+ redisURL: {{ include "harbor.redis.url.trivy" . | b64enc }}
+ gitHubToken: {{ .Values.trivy.gitHubToken | default "" | b64enc | quote }}
+{{- end }}
</file context>
```
    {{- include "harbor.componentLabels" (dict "root" . "component" "registry") | nindent 4 }}
type: Opaque
data:
  {{- if not .Values.registry.existingSecret }}
```
P2: When registry.existingSecret is set, REGISTRY_HTTP_SECRET is omitted from the chart-managed secret, but the registry pod still sources env vars from that secret. This leaves the registry container without the required HTTP secret (or with a mismatch vs. registryctl), which can break token validation. Populate REGISTRY_HTTP_SECRET from the existing secret or switch the registry envFrom to use the existing secret as well.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/templates/registry.secret.yaml, line 10:
<comment>When `registry.existingSecret` is set, REGISTRY_HTTP_SECRET is omitted from the chart-managed secret, but the registry pod still sources env vars from that secret. This leaves the registry container without the required HTTP secret (or with a mismatch vs. registryctl), which can break token validation. Populate REGISTRY_HTTP_SECRET from the existing secret or switch the registry envFrom to use the existing secret as well.</comment>
<file context>
@@ -0,0 +1,60 @@
+ {{- include "harbor.componentLabels" (dict "root" . "component" "registry") | nindent 4 }}
+type: Opaque
+data:
+ {{- if not .Values.registry.existingSecret }}
+ REGISTRY_HTTP_SECRET: {{ include "harbor.secretKeyHelper" (dict "key" "REGISTRY_HTTP_SECRET" "data" ($existingSecret.data | default dict)) | default (randAlphaNum 16) | b64enc | quote }}
+ {{- end }}
</file context>
```
  {{- end }}
  # Placeholder to ensure secret is always created
  _PLACEHOLDER: {{ "placeholder" | b64enc | quote }}
  {{- if not .Values.jobservice.existingSecret }}
```
P2: jobservice.existingSecret currently causes the generated jobservice secret to omit JOBSERVICE_SECRET, but the jobservice Deployment always mounts the generated secret. That leaves JOBSERVICE_SECRET unset when an existing secret is configured, breaking jobservice auth configuration. Either always populate this key or switch the Deployment/Secret creation to use the existing secret name.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/templates/jobservice.secret.yaml, line 15:
<comment>`jobservice.existingSecret` currently causes the generated jobservice secret to omit `JOBSERVICE_SECRET`, but the jobservice Deployment always mounts the generated secret. That leaves `JOBSERVICE_SECRET` unset when an existing secret is configured, breaking jobservice auth configuration. Either always populate this key or switch the Deployment/Secret creation to use the existing secret name.</comment>
<file context>
@@ -0,0 +1,21 @@
+ {{- end }}
+ # Placeholder to ensure secret is always created
+ _PLACEHOLDER: {{ "placeholder" | b64enc | quote }}
+ {{- if not .Values.jobservice.existingSecret }}
+ JOBSERVICE_SECRET: {{ include "harbor.secretKeyHelper" (dict "key" "JOBSERVICE_SECRET" "data" ($existingSecret.data | default dict)) | default (randAlphaNum 16) | b64enc | quote }}
+ {{- end }}
</file context>
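One of the two options the comment mentions — always populating the key — could be sketched as below. The `lookup`-based copy from the user-supplied Secret is an assumption (switching the Deployment to mount `jobservice.existingSecret` directly is the other valid fix):

```yaml
  _PLACEHOLDER: {{ "placeholder" | b64enc | quote }}
  {{- if .Values.jobservice.existingSecret }}
  {{- /* Copy JOBSERVICE_SECRET from the user-supplied Secret so the
         chart-managed Secret the Deployment mounts always carries it */ -}}
  {{- $ext := lookup "v1" "Secret" .Release.Namespace .Values.jobservice.existingSecret }}
  JOBSERVICE_SECRET: {{ index (($ext | default dict).data | default dict) "JOBSERVICE_SECRET" | quote }}
  {{- else }}
  JOBSERVICE_SECRET: {{ include "harbor.secretKeyHelper" (dict "key" "JOBSERVICE_SECRET" "data" ($existingSecret.data | default dict)) | default (randAlphaNum 16) | b64enc | quote }}
  {{- end }}
```

Mounting the existing Secret directly is arguably cleaner, since it avoids duplicating the credential into a second Secret object.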
Pull request overview
This PR adds a new Helm chart (deploy/chart/) for the Harbor Next container registry platform, along with chart-level quality tasks (taskfile/helm.yml) integrated into the root Taskfile. The chart is a modern reimplementation designed to replace the upstream harbor-helm chart with a cleaner architecture.
Changes:

- Adds a complete Helm chart under `deploy/chart/` with templates for all Harbor components (core, portal, registry, jobservice, exporter, trivy), supporting Ingress, Gateway API, OpenShift Routes, PodDisruptionBudgets, ServiceMonitors, TLS cert-manager integration, and more
- Adds `taskfile/helm.yml` with chart linting, security scanning (Trivy, Kubescape, kube-linter), and CI quality gate tasks
- Adds example values files for k3d, RKE2, OpenShift, and AWS EKS with IRSA, plus deployment guides
Reviewed changes
Copilot reviewed 88 out of 89 changed files in this pull request and generated 23 comments.
Show a summary per file

| File | Description |
|---|---|
| `taskfile/helm.yml` | New Helm quality tasks (lint, scan, CI); missing TRIVY_VERSION var definition |
| `Taskfile.yml` | Integrates new helm.yml tasks under `helm:` namespace |
| `deploy/chart/Chart.yaml` | Chart metadata, dependencies (Valkey) |
| `deploy/chart/templates/_helpers.tpl` | Core helpers including predictable secret key generation (security issue) |
| `deploy/chart/templates/_helpers-tls.tpl` | TLS helpers; contains `middlware` typo |
| `deploy/chart/templates/_helpers-trace.tpl` | Trace helpers; Jaeger password encoding bug |
| `deploy/chart/templates/_helpers-redis.tpl` | Redis URL and credential helpers |
| `deploy/chart/templates/_helpers-urls.tpl` | Internal service URL helpers |
| `deploy/chart/templates/_helpers-database.tpl` | Database connection helpers |
| `deploy/chart/templates/_helpers-jobservice.tpl` | Jobservice helpers including dead TLS code |
| `deploy/chart/templates/core.*.yaml` | Core component templates; token key pair regenerated on every upgrade |
| `deploy/chart/templates/registry.*.yaml` | Registry templates; filesystem mount applied for non-filesystem storage |
| `deploy/chart/templates/trivy.*.yaml` | Trivy templates; inconsistent whitespace control in service |
| `deploy/chart/templates/servicemonitor.yaml` | Prometheus ServiceMonitor; `honorLabels: false` cannot be set due to `default true` bug |
| `deploy/chart/templates/pdbs.yaml` | PodDisruptionBudgets for all components |
| `deploy/chart/templates/expose-*.yaml` | ClusterIP/NodePort/LoadBalancer/Route exposure templates |
| `deploy/chart/templates/ingress.yaml` | Kubernetes Ingress template |
| `deploy/chart/templates/gateway-httproute.yaml` | Gateway API HTTPRoute |
| `deploy/chart/templates/extraManifests.yaml` | User-defined extra manifests |
| `deploy/chart/tests/*.yaml` | helm-unittest test suites for all components |
| `deploy/chart/example/*.yaml` | Example values for multiple platforms; real hostname/credentials in rke2 example |
| `deploy/chart/example/aws-eks-irsa/README.md` | Real AWS account ID exposed |
| `deploy/chart/docs/guide/*.md` | Deployment guides for various platforms |
| `deploy/chart/REMAINING-GAPS.md` | Documents known feature gaps vs upstream |
```yaml
externalURL: "https://rancher-dev-16gb-fsn1-1.container-registry.com"

# Ingress configuration for nginx ingress controller
ingress:
  enabled: true
  className: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "5"
  core: rancher-dev-16gb-fsn1-1.container-registry.com
  hosts:
    - host: rancher-dev-16gb-fsn1-1.container-registry.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: harbor-tls
      hosts:
        - rancher-dev-16gb-fsn1-1.container-registry.com

# TLS configuration - auto-generate certs
tls:
  enabled: true
  certSource: auto

# Database configuration - use existing PostgreSQL in my-db namespace
database:
  host: postgresql.my-db.svc.cluster.local
  port: 5432
  username: postgres
  password: test1234!
```
In deploy/chart/example/openshift/values.yaml (line 14) and deploy/chart/example/aws-eks-irsa/values-aws-irsa.yaml (line 21), the example files contain harborAdminPassword: "CHANGEME" and harborAdminPassword: "Harbor12345" respectively. While placeholder values in examples are expected, the RKE2 example (deploy/chart/example/rke2-rancher.yaml line 5) exposes a real external hostname (rancher-dev-16gb-fsn1-1.container-registry.com) and a database password (password: test1234! on line 38). Real infrastructure hostnames and credentials should be replaced with clearly fictional placeholders.
```
## Prerequisites

- AWS CLI v2, configured for account `163500494166` (or your target account)
```
In deploy/chart/example/aws-eks-irsa/README.md line 37, the README references a specific AWS account ID: ``- AWS CLI v2, configured for account `163500494166` (or your target account)``. This appears to be a real AWS account ID and should be replaced with a generic placeholder like `<YOUR_ACCOUNT_ID>`.
```diff
- - AWS CLI v2, configured for account `163500494166` (or your target account)
+ - AWS CLI v2, configured for account `<YOUR_ACCOUNT_ID>` (or your target account)
```
```
{{- if not .Values.core.tokenSecretName }}
{{- $ca := genCA "harbor-token-ca" 365 }}
tls.key: {{ .Values.core.tokenKey | default $ca.Key | b64enc | quote }}
tls.crt: {{ .Values.core.tokenCert | default $ca.Cert | b64enc | quote }}
{{- end }}
```
In deploy/chart/templates/core.secret.yaml lines 24-26, a new CA (genCA) is generated on every helm template or helm upgrade run to produce tls.key and tls.crt. Unlike the REGISTRY_HTTP_SECRET and CSRF_KEY which use secretKeyHelper to persist existing values via lookup, the token signing key pair is regenerated on every upgrade. This will invalidate all existing Harbor authentication tokens after every upgrade, requiring all users to log in again. The same lookup-based persistence pattern should be used for the token key pair.
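The lookup-based persistence could be sketched as follows; the Secret name derivation (`<release>-core`) is an assumption and would need to match whatever this template actually names its Secret:

```yaml
{{- if not .Values.core.tokenSecretName }}
{{- $name := printf "%s-core" (include "harbor.fullname" .) }}
{{- $existing := lookup "v1" "Secret" .Release.Namespace $name }}
{{- if and $existing (hasKey ($existing.data | default dict) "tls.key") }}
{{- /* Reuse the token key pair from the live Secret across upgrades */}}
tls.key: {{ index $existing.data "tls.key" }}
tls.crt: {{ index $existing.data "tls.crt" }}
{{- else }}
{{- $ca := genCA "harbor-token-ca" 365 }}
tls.key: {{ .Values.core.tokenKey | default $ca.Key | b64enc | quote }}
tls.crt: {{ .Values.core.tokenCert | default $ca.Cert | b64enc | quote }}
{{- end }}
{{- end }}
```

This only regenerates the CA on first install (or when the Secret is deleted), so existing tokens survive routine upgrades.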
```
{{ if .Values.trivy.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: "{{ template "harbor.trivy" . }}"
  labels:
{{ include "harbor.labels" . | indent 4 }}
spec:
  ports:
    - name: http-trivy
      protocol: TCP
      port: 8080
  selector:
    {{- include "harbor.componentSelectorLabels" (dict "root" . "component" "trivy") | nindent 4 }}
{{ end }}
```
In deploy/chart/templates/trivy.service.yaml, line 1 uses {{ if .Values.trivy.enabled }} (without the dash - after {{) and line 15 uses {{ end }} (also without dash), which means whitespace will not be trimmed. This is inconsistent with the rest of the codebase which consistently uses {{- and -}} for whitespace control. While this won't cause errors in Kubernetes (extra whitespace in YAML is generally fine), it creates extra blank lines in rendered output. The trivy.secret.yaml uses {{- if .Values.trivy.enabled }} correctly.
```
volumeMounts:
  - name: storage
    mountPath: {{ .Values.registry.storage.filesystem.rootdirectory | default "/storage" }}
{{- if and (eq .Values.registry.storage.type "gcs") .Values.registry.storage.gcs.keyfile }}
```
In deploy/chart/templates/registry.deployment.yaml lines 80-81, the storage volume mount path for the registry container uses {{ .Values.registry.storage.filesystem.rootdirectory | default "/storage" }} even when the storage type is not filesystem (e.g., when using S3). When using S3, the filesystem mount would be mounted to /storage unnecessarily. Additionally, if registry.storage.filesystem.rootdirectory is not set in the values (because S3 is configured), this silently uses /storage. The volume mount (and possibly the storage volume in lines 183-188) should be conditional on eq .Values.registry.storage.type "filesystem".
```
path: /metrics
interval: {{ .Values.metrics.serviceMonitor.interval | default "30s" }}
scrapeTimeout: {{ .Values.metrics.serviceMonitor.scrapeTimeout | default "10s" }}
honorLabels: {{ .Values.metrics.serviceMonitor.honorLabels | default true }}
```
In deploy/chart/templates/servicemonitor.yaml, line 34 renders honorLabels as a boolean directly: honorLabels: {{ .Values.metrics.serviceMonitor.honorLabels | default true }}. When the value is true, this renders as honorLabels: true which is correct YAML. However, when the user explicitly sets it to false, {{ false | default true }} will evaluate to true because false is the zero-value and default in Helm treats false as empty. The user cannot override honorLabels to false. Use | default "true" | toString or check with hasKey instead.
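A sketch of the `hasKey`-based fix, which lets an explicit `false` survive because the rendered value is taken verbatim from the user's input instead of being piped through `default`:

```yaml
{{- if hasKey .Values.metrics.serviceMonitor "honorLabels" }}
    honorLabels: {{ .Values.metrics.serviceMonitor.honorLabels }}
{{- else }}
    honorLabels: true
{{- end }}
```

This pitfall applies to every boolean rendered via `default` in Helm: `false` is a Go zero value, so `{{ false | default true }}` yields `true`.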
```
    path: /metrics
    port: {{ include "harbor.metrics.portName" . | quote }}
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /metrics
    port: {{ include "harbor.metrics.portName" . | quote }}
```
In deploy/chart/templates/core.deployment.yaml line 53, the metrics port is named using {{ include "harbor.metrics.portName" . | quote }} which returns either "http-metrics" or "https-metrics" based on TLS. However, the container port name must match what core.service.yaml exposes. The core service (line 18 of core.service.yaml) uses the same harbor.metrics.portName helper, so they should match. But the liveness/readiness probes in the exporter (lines 108-116 of exporter.deployment.yaml) reference the port by name using {{ include "harbor.metrics.portName" . | quote }} — this is valid, but the port name includes quotes from quote making it "http-metrics" (with quotes as part of the value). Verify that these are rendering correctly without the surrounding quotes double-quoting the port name value.
```
{{- printf "http" -}}
{{- end }}

{{- define "harbor.middlware.enabled" -}}
```
The helper function name harbor.middlware.enabled contains a typo — "middlware" should be "middleware". The same typo appears at the call site in registry.configmap.yaml line 168. While this is currently consistent (misspelled in both places), it's still a bug in the API name that should be fixed before any other templates reference the corrected spelling.
```
port:
  targetPort: http-web
```
In deploy/chart/templates/expose-route.yaml, the targetPort is hardcoded to http-web (line 23), which is the port name from core.service.yaml. However, if the port naming changes or the core service configuration changes, this will silently fail. More critically, the OpenShift Route targets the core service, not an expose-service — so if the expose-route is used simultaneously with expose-service, traffic will flow to two different backends. The comment at the top says "Route pointing to the core service", which is intentional, but the targetPort should reference what the core Service actually exposes for consistency with other templates.
```
{{- define "harbor.secretKey" -}}
{{- if .Values.secretKey }}
{{- .Values.secretKey }}
{{- else }}
{{- /* Generate a deterministic key based on release name */ -}}
{{- $key := printf "%s-harbor-secret-key" .Release.Name | sha256sum | trunc 16 }}
{{- $key }}
{{- end }}
```
The harbor.secretKey helper derives the encryption key from .Release.Name via sha256sum and truncation when .Values.secretKey is not set. Because Release.Name is non-secret and typically guessable or observable, an attacker who obtains a database dump or backup can recompute this key and decrypt credentials that Harbor expects to keep confidential at rest. Instead, this key should be a high-entropy secret (e.g., generated randomly once and persisted in a Secret, or required to be supplied explicitly) rather than being deterministically computed from public data.
Actionable comments posted: 4
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (26)
deploy/chart/example/README.md (lines 16-19)
⚠️ Potential issue | 🟠 Major: Fix the chart path in these install commands.
This README sits under `deploy/chart/example/`, so `helm install harbor . -f example/k3d-local.yaml` only works if the reader has already changed into `deploy/chart/`. Copied verbatim from this directory, `.` points at the examples folder and the values file path is wrong, so the documented flow breaks.
Suggested doc fix
```diff
-helm install harbor . -n harbor --create-namespace -f example/k3d-local.yaml
+helm install harbor .. -n harbor --create-namespace -f k3d-local.yaml

-helm install harbor . -n harbor -f example/k3d-local.yaml
+helm install harbor .. -n harbor -f k3d-local.yaml
```

Also applies to: 79-80
deploy/chart/example/openshift/values.yaml (lines 37-43)
⚠️ Potential issue | 🟠 Major: Keep the database password in one place.
`database.password` and `POSTGRESQL_PASSWORD` have to stay identical. If a user edits only one of them, Harbor will come up pointing at a PostgreSQL instance that was initialized with different credentials.
🛠️ Suggested consolidation
```diff
+# Reuse a single value for Harbor and the bootstrap Secret
+databasePassword: &databasePassword CHANGEME
+
 database:
   host: postgres.vad1mo-dev.svc.cluster.local
   port: 5432
   username: harbor
-  password: CHANGEME
+  password: *databasePassword
   database: registry
   sslmode: disable
 ...
 stringData:
   POSTGRESQL_USER: harbor
-  POSTGRESQL_PASSWORD: CHANGEME
+  POSTGRESQL_PASSWORD: *databasePassword
   POSTGRESQL_DATABASE: registry
```

Also applies to: 180-183
deploy/chart/README.md.gotmpl (lines 10-15)
⚠️ Potential issue | 🟠 Major: Document `harborAdminPassword` in the quick-start path.
The chart validation helper currently requires either `harborAdminPassword` or `existingSecretAdminPassword`, but both install snippets and the Required Values table omit it. That leaves the generated README incomplete for first-time installs.
📚 Suggested update
```diff
 helm install my-harbor oci://8gears.container-registry.com/8gcr/chart/harbor-next \
   --set externalURL=https://harbor.example.com \
   --set database.host=my-postgres.example.com \
-  --set database.password=secret
+  --set database.password=secret \
+  --set harborAdminPassword=change-me
 ...
 | `externalURL` | Public URL for Harbor (e.g., `https://harbor.example.com`) |
 | `database.host` | PostgreSQL host |
 | `database.password` | PostgreSQL password (or use `database.existingSecret`) |
+| `harborAdminPassword` | Initial Harbor admin password (or use `existingSecretAdminPassword`) |
```

Also applies to: 130-137, 152-157
deploy/chart/docs/guide/nutanix.md (lines 20-27)
⚠️ Potential issue | 🟠 Major: Add the missing Helm repository setup before these chart installations.
A clean environment cannot resolve `grafana/loki-stack`, `grafana/alloy`, or `ingress-nginx/ingress-nginx` without first adding and refreshing those repositories.
🛠️ Proposed fix
```diff
+helm repo add grafana https://grafana.github.io/helm-charts
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm repo update
+
 helm install loki grafana/loki-stack \
   --namespace monitoring \
   --set grafana.enabled=false \
   --set promtail.enabled=true \
   --set loki.persistence.enabled=true \
   --set loki.persistence.size=10Gi
```

Also applies to: 97-102, 110-115
deploy/chart/templates/pdbs.yaml (lines 25-29)
⚠️ Potential issue | 🟠 Major: PDB will be invalid if neither `maxUnavailable` nor `minAvailable` is set.
pdbconfiguration hasenabled: truebut neithermaxUnavailablenorminAvailableis specified, the resulting PodDisruptionBudget will be missing a required field and will fail validation during apply.🐛 Proposed fix: add a default value
spec: {{- if hasKey .values.pdb "maxUnavailable" }} maxUnavailable: {{ .values.pdb.maxUnavailable }} {{- else if hasKey .values.pdb "minAvailable" }} minAvailable: {{ .values.pdb.minAvailable }} + {{- else }} + maxUnavailable: 1 {{- end }} selector:Alternatively, consider validating in values.schema.json that at least one of these fields is required when
`pdb.enabled` is true.
deploy/chart/templates/trivy.service.yaml (lines 9-12)
🛠️ Refactor suggestion | 🟠 Major: Create a `harbor.trivy.port` helper and use it for port configuration consistency.
The port `8080` is hardcoded in the service file and also duplicated in the `harbor.trivy.url` helper (line 93 of `_helpers-urls.tpl`). Other services (core, portal, registry) use Helm helpers like `harbor.core.port` and `harbor.registry.port` for consistency and maintainability. To follow the same pattern, add a `harbor.trivy.port` helper definition and use it in both the service file and the URL helper. Additionally, explicitly specify `targetPort` to match the approach used in `core.service.yaml` and `registry.service.yaml`.
Implementation steps
- Add to `_helpers-urls.tpl`:

```diff
 {{- define "harbor.trivy.url" -}}
-http://{{ include "harbor.fullname" . }}-trivy:8080
+http://{{ include "harbor.fullname" . }}-trivy:{{ include "harbor.trivy.port" . }}
 {{- end }}
+
+{{/*
+Container port
+*/}}
+{{- define "harbor.trivy.port" -}}
+8080
+{{- end }}
```
- Update `trivy.service.yaml`:

```diff
 ports:
   - name: http-trivy
     protocol: TCP
-    port: 8080
+    port: {{ include "harbor.trivy.port" . }}
+    targetPort: {{ include "harbor.trivy.port" . }}
```
deploy/chart/templates/jobservice.configmap.yaml (lines 16-17)
⚠️ Potential issue | 🟠 Major: Scope the Redis queue namespace per release.
The worker namespace is hardcoded to `"harbor_job_service_namespace"` in both `jobservice.configmap.yaml` (line 17) and `exporter.deployment.yaml`. Two installs that share `externalRedis` will enqueue and consume each other's jobs. This should be derived from the release/chart name.
💡 Suggested changes
In `jobservice.configmap.yaml` (line 17):

```diff
-  namespace: "harbor_job_service_namespace"
+  namespace: {{ printf "%s-jobservice" (include "harbor.fullname" .) | quote }}
```

In `exporter.deployment.yaml`:

```diff
-  value: "harbor_job_service_namespace"
+  value: {{ printf "%s-jobservice" (include "harbor.fullname" .) | quote }}
```
deploy/chart/templates/tls-certificates.yaml (lines 12-16)
⚠️ Potential issue | 🟠 Major: Sanitize wildcard hosts in resource names.
The `replace "." "-"` filter leaves asterisks intact, so a wildcard host like `*.example.com` produces invalid Kubernetes resource names (`*-example-com` violates the DNS-1123 subdomain format). While `dnsNames` correctly accepts the raw host, both `metadata.name` and `spec.secretName` need sanitization. Add validation or normalization to ensure all characters conform to `[a-z0-9-]`.
deploy/chart/templates/jobservice.secret.yaml (lines 15-17)
⚠️ Potential issue | 🟠 Major: `jobservice.existingSecret` is only half-wired.
When this value is set, `JOBSERVICE_SECRET` is omitted here, but `jobservice.deployment.yaml` still points `envFrom.secretRef` at the generated `<fullname>-jobservice` secret. That leaves the pod without this key unless the chart also switches to the user-provided secret or copies the value into the generated one.
deploy/chart/example/aws-eks-irsa/teardown.sh (lines 45-50)
⚠️ Potential issue | 🟠 Major: Don't hide failed RDS deletions.
Both the delete calls and the waiters are wrapped in `|| true`, so deletion protection, dependency errors, or waiter timeouts still let the script continue as if cleanup succeeded. That can leave billable Aurora resources behind while printing `=== Teardown complete ===`.
🤖 Prompt for AI Agents
deploy/chart/templates/exporter.deployment.yaml (lines 19-29)
⚠️ Potential issue | 🟠 Major: Secret changes won't roll the exporter pods.
This pod reads DB and Redis credentials through `secretKeyRef`, but the template has no checksum annotations. Updating those chart-managed secrets will not restart the deployment, so the exporter can keep running with stale env values until someone manually rolls it.
🤖 Prompt for AI Agents
deploy/chart/example/aws-eks-irsa/verify.sh (lines 8-9)
⚠️ Potential issue | 🟠 Major: Add an EXIT trap to ensure port-forward cleanup and move the unguarded docker tag inside a check block.
Line 70's `docker tag` command runs outside `check()` without error suppression. With `set -e` enabled, it will exit immediately on failure without executing the cleanup at line 116, leaving the background port-forward process (`PF_PID`) running. Add a trap handler for EXIT to guarantee cleanup, and wrap the docker tag command in a `check()` call (line 70). This ensures failed verification steps are counted in the results rather than aborting the run.
🤖 Prompt for AI Agents
deploy/chart/templates/_helpers-tls.tpl (lines 10-12)
⚠️ Potential issue | 🟠 Major: The TLS scheme helper always returns HTTP regardless of TLS configuration.
The helper's comment states "Check if internal TLS is enabled" but the implementation at lines 10-12 always returns `http`, ignoring TLS state entirely. This causes the helper to generate incorrect schemes for health probes and configuration in components like Trivy and jobservice when TLS is enabled. The fix should check `.Values.tls.enabled`, which is already the established pattern in the same file's `harbor.metrics.portName` helper.
♻️ Proposed fix
```diff
 {{- define "harbor.component.scheme" -}}
-{{- printf "http" -}}
+{{- if .Values.tls.enabled -}}
+{{- printf "https" -}}
+{{- else -}}
+{{- printf "http" -}}
+{{- end -}}
 {{- end }}
```
deploy/chart/templates/_helpers-tls.tpl (lines 14-16)
⚠️ Potential issue | 🟠 Major: Fix the `harbor.middlware.enabled` helper: rename to `middleware` and output nothing when disabled.
middlware→middleware). More critically, the helper renders the stringfalse, which is non-empty and therefore truthy in Go template conditionals. This means the middleware configuration block inregistry.configmap.yamlis always included regardless of the intent to disable it.Rename the helper to
harbor.middleware.enabled, update the caller inregistry.configmap.yaml, and output nothing (or an empty line) when disabled. Alternatively, follow the pattern used in test data files and check the value directly with{{- if .Values.registry.middleware.enabled }}instead of using this helper.🤖 Prompt for AI Agents
deploy/chart/templates/portal.deployment.yaml (lines 18-22)
⚠️ Potential issue | 🟠 Major: Add a checksum annotation to roll portal pods when nginx.conf changes.
The `portal-config` ConfigMap is mounted via `subPath` at `/etc/nginx/nginx.conf`. Kubernetes does not live-update files mounted with `subPath`, so without a checksum annotation in the pod template, a Helm upgrade can leave old portal pods serving stale NGINX config until restarted manually.
♻️ Proposed fix
```diff
   template:
     metadata:
       annotations:
+        checksum/configmap: {{ include (print $.Template.BasePath "/portal.configmap.yaml") . | sha256sum }}
         {{- with .Values.portal.podAnnotations }}
         {{- toYaml . | nindent 8 }}
         {{- end }}
```
deploy/chart/templates/core.secret.yaml (lines 23-27)
⚠️ Potential issue | 🟠 Major: Token CA regenerates on every template rendering.
When `.Values.core.tokenSecretName` is not provided, the `genCA` function generates a new CA on each `helm template` or `helm upgrade` run. This will cause pods to restart unnecessarily and may invalidate existing tokens.
lookuppattern (similar to line 1) to preserve existing token keys across upgrades:🔧 Proposed fix to preserve token keys across upgrades
+ {{- $existingTokenKey := "" }} + {{- $existingTokenCert := "" }} + {{- if $existingSecret.data }} + {{- $existingTokenKey = index $existingSecret.data "tls.key" | default "" }} + {{- $existingTokenCert = index $existingSecret.data "tls.crt" | default "" }} + {{- end }} {{- if not .Values.core.tokenSecretName }} {{- $ca := genCA "harbor-token-ca" 365 }} - tls.key: {{ .Values.core.tokenKey | default $ca.Key | b64enc | quote }} - tls.crt: {{ .Values.core.tokenCert | default $ca.Cert | b64enc | quote }} + tls.key: {{ .Values.core.tokenKey | default $existingTokenKey | default ($ca.Key | b64enc) | quote }} + tls.crt: {{ .Values.core.tokenCert | default $existingTokenCert | default ($ca.Cert | b64enc) | quote }} {{- end }}🤖 Prompt for AI Agents
deploy/chart/templates/core.deployment.yaml (lines 84-90)
⚠️ Potential issue | 🟠 Major: REDIS_PASSWORD must also be injected when external Redis has authentication configured.
The condition only checks `.Values.valkey.auth.enabled`, but when using external Redis (`.Values.valkey.enabled: false`), authentication still requires the password environment variable. The secret is created correctly, but the deployment fails to inject it.
🔧 Proposed fix to handle external Redis authentication
```diff
-          {{- if .Values.valkey.auth.enabled }}
+          {{- if or .Values.valkey.auth.enabled (and (not .Values.valkey.enabled) .Values.externalRedis.password) }}
           - name: REDIS_PASSWORD
             valueFrom:
               secretKeyRef:
                 name: {{ include "harbor.redis.secretName" . }}
                 key: {{ include "harbor.redis.secretKey" . }}
           {{- end }}
```
deploy/chart/example/aws-eks-irsa/values-aws-irsa.yaml (lines 20-21)
⚠️ Potential issue | 🟠 Major: Avoid a well-known Harbor admin password in the example values.
Anyone applying this file as-is gets a predictable admin credential. Leave this empty and require an override or existing secret, then update `setup.sh` so its login hint stops assuming `Harbor12345`.
deploy/chart/example/aws-eks-irsa/setup.sh (lines 305-310)
⚠️ Potential issue | 🟠 Major: Don't print the Aurora master password to stdout.
This writes a long-lived secret into terminal scrollback and CI logs. If operators need a handoff artifact, store it in a local `0600` file or require them to provide the password explicitly.
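A sketch of the file-based handoff; `DB_MASTER_PASSWORD` is the variable from the original script, and the output path is an assumption:

```shell
# Persist a secret to a file created with mode 0600 instead of echoing it.
write_secret_file() {
  path="$1"
  value="$2"
  # A restrictive umask inside a subshell ensures the file is born 0600.
  ( umask 077; printf '%s\n' "$value" > "$path" )
  echo "Password written to $path (mode 0600); not printed to stdout."
}

# Usage sketch:
#   write_secret_file "./db-master-password" "$DB_MASTER_PASSWORD"
```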
deploy/chart/example/aws-eks-irsa/setup.sh (lines 263-274)
⚠️ Potential issue | 🟠 Major: Phase 6 is not rerunnable.
A second run hits `CREATE USER` again and aborts with `role already exists`, which breaks an otherwise idempotent setup flow. Make the user creation conditional and quote the identifier while you're there.
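The conditional creation can be sketched like this; `harbor_iam` stands in for the script's `${DB_IAM_USER}`, and the `rds_iam` grant is the standard role for RDS IAM authentication:

```sql
DO $$
BEGIN
  -- Only create the role if it does not already exist, so reruns are no-ops.
  IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'harbor_iam') THEN
    CREATE USER "harbor_iam" WITH LOGIN;
  END IF;
END
$$;
GRANT rds_iam TO "harbor_iam";
```

Re-granting an already-granted role is harmless, so the whole block is safe to run repeatedly.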
deploy/chart/templates/registry.deployment.yaml (lines 114-125)
⚠️ Potential issue | 🟠 Major: These secret lookups don't match the chart's values contract.
`deploy/chart/values.yaml` defines `registry.credentials.existingSecret`, not `registry.existingSecret`, so the `REGISTRY_HTTP_SECRET` override here can never be selected. `CORE_SECRET` also hardcodes `secret` and ignores `core.existingSecretKey`, which breaks non-default secret layouts.
deploy/chart/templates/jobservice.configmap-env.yaml (lines 29-33)
🛠️ Refactor suggestion | 🟠 Major: Only emit proxy vars when a proxy URL is configured.
With the current defaults, `jobservice` is already in `proxy.components`, so this block still injects proxy env vars when both proxy URLs are blank. Gate each variable on a non-empty value and only add `NO_PROXY` when a proxy is actually enabled.
Suggested guard
```diff
-  {{- if has "jobservice" .Values.proxy.components }}
-  HTTP_PROXY: "{{ .Values.proxy.httpProxy }}"
-  HTTPS_PROXY: "{{ .Values.proxy.httpsProxy }}"
-  NO_PROXY: "{{ template "harbor.noProxy" . }}"
-  {{- end }}
+  {{- $jobserviceProxyEnabled := has "jobservice" .Values.proxy.components }}
+  {{- if and $jobserviceProxyEnabled .Values.proxy.httpProxy }}
+  HTTP_PROXY: {{ .Values.proxy.httpProxy | quote }}
+  {{- end }}
+  {{- if and $jobserviceProxyEnabled .Values.proxy.httpsProxy }}
+  HTTPS_PROXY: {{ .Values.proxy.httpsProxy | quote }}
+  {{- end }}
+  {{- if and $jobserviceProxyEnabled (or .Values.proxy.httpProxy .Values.proxy.httpsProxy .Values.proxy.noProxy) }}
+  NO_PROXY: {{ include "harbor.noProxy" . | quote }}
+  {{- end }}
```
deploy/chart/example/aws-eks-irsa/setup.sh (lines 245-254)
245-254: ⚠️ Potential issue | 🟠 Major

Script will fail on the 6th and subsequent reruns due to AWS IAM policy version limits.

The code creates a new policy version every time it runs, but AWS managed policies have a hard limit of 5 stored versions. After 5 reruns, the `aws iam create-policy-version` call will fail unless non-default versions are deleted first or version reuse logic is implemented.

Consider updating the policy document only if it has changed, or add logic to delete the oldest non-default version before creating a new one.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/example/aws-eks-irsa/setup.sh` around lines 245 - 254: the script always calls `aws iam create-policy-version` with POLICY_DOC_FINAL and POLICY_ARN, which will fail after 5 created versions. Before creating a new version, fetch the existing versions for POLICY_ARN (use `aws iam list-policy-versions` / `aws iam get-policy-version`), compare the current default policy document to POLICY_DOC_FINAL and skip creation if unchanged; if a new version is needed and there are already 5 versions, delete the oldest non-default version via `aws iam delete-policy-version` before calling `aws iam create-policy-version`. Update the logic around POLICY_DOC_FINAL and the `aws iam create-policy-version` call to implement these checks and deletion.

deploy/chart/example/aws-eks-irsa/setup.sh-115-118 (1)
115-118: ⚠️ Potential issue | 🟠 Major

Add an `aud` condition and narrow the trust policy to specific service accounts.

The wildcard `system:serviceaccount:${NAMESPACE}:*` allows any service account in the namespace to assume this role. AWS IRSA best practices require:

- Add a `StringEquals` condition on the `aud` claim with value `sts.amazonaws.com` to prevent token misuse outside AWS STS
- Replace the wildcard `StringLike` with either exact `StringEquals` matching for a single service account or an enumerated list of allowed service accounts (e.g., `harbor-core`, `harbor-chartmuseum`, etc.)

Example fix:

```json
"Condition": {
  "StringEquals": {
    "${OIDC_PROVIDER}:aud": "sts.amazonaws.com",
    "${OIDC_PROVIDER}:sub": "system:serviceaccount:${NAMESPACE}:harbor-core"
  }
}
```

or with multiple service accounts:

```json
"Condition": {
  "StringEquals": {
    "${OIDC_PROVIDER}:aud": "sts.amazonaws.com"
  },
  "StringLike": {
    "${OIDC_PROVIDER}:sub": [
      "system:serviceaccount:${NAMESPACE}:harbor-core",
      "system:serviceaccount:${NAMESPACE}:harbor-chartmuseum"
    ]
  }
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/example/aws-eks-irsa/setup.sh` around lines 115 - 118: update the IRSA trust policy Condition block: add a `StringEquals` entry for `"${OIDC_PROVIDER}:aud"` with value `"sts.amazonaws.com"` and replace the broad `"${OIDC_PROVIDER}:sub"` wildcard (`"system:serviceaccount:${NAMESPACE}:*"`) with either exact `StringEquals` for a single service account or a `StringLike` array of specific service accounts (e.g., `"system:serviceaccount:${NAMESPACE}:harbor-core"`, `"system:serviceaccount:${NAMESPACE}:harbor-chartmuseum"`), so the Condition contains the aud check and narrowed sub values instead of the namespace-wide wildcard.

deploy/chart/Taskfile.yaml-64-69 (1)
64-69:⚠️ Potential issue | 🟠 Major
`docs:verify` does not validate that `README.md` is up-to-date.

The task runs `helm-docs --dry-run`, which only prints output to stdout without comparing it against the actual `README.md`. This allows CI to pass with stale chart documentation. The proper pattern is already implemented in `taskfile/helm.yml`— use it as a reference, but note that it filters `kroki.io/svgbob` lines before comparing, which your suggested fix omits.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/Taskfile.yaml` around lines 64 - 69: the docs:verify task currently runs `helm-docs --dry-run`, which doesn't compare generated docs to README.md. Change docs:verify to generate the helm-docs output to a temporary file (as done in taskfile/helm.yml), filter out `kroki.io/svgbob` lines from both the generated output and the repo README (so transient svgbob lines don't cause false positives), then diff the filtered generated file against README.md and fail CI if they differ; update the cmds for docs:verify to mirror the helm.yml pattern.

taskfile/helm.yml-11-13 (1)
11-13: ⚠️ Potential issue | 🟠 Major

Add `TRIVY_VERSION` to `vars` or clarify the version inheritance pattern.

Lines 62 and 78 interpolate `{{.TRIVY_VERSION}}`, which is not declared in this file's `vars` section. While the main `Taskfile.yml` loads `TRIVY_VERSION=0.69.3` from `versions.env` and includes this taskfile (making it available via inheritance), this is inconsistent with how `CT_VERSION` and `KUBE_LINTER_VERSION` are handled—both of which define defaults locally. For clarity and self-containment, either add `TRIVY_VERSION` to the `vars` section with a default, or document that it requires the parent dotenv.

Additionally, the `trivy-chart:report` task (lines 74–82) omits `--exit-code 1`, which means it will not fail even when HIGH/CRITICAL findings are present. The `trivy-chart` task correctly includes this flag to gate on findings. If the report variant is intentionally report-only without enforcement, document this; otherwise, add the flag for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@taskfile/helm.yml` around lines 11 - 13, Add a TRIVY_VERSION entry with a sensible default to the vars section so the file is self-contained (mirror how CT_VERSION and KUBE_LINTER_VERSION are declared) and update the trivy-chart:report task to include the same gating flag used by trivy-chart (add "--exit-code 1") unless the intent is to keep it report-only, in which case add a comment documenting that trivy-chart:report intentionally omits "--exit-code 1"; refer to the vars block (TRIVY_VERSION) and the tasks named trivy-chart and trivy-chart:report when making these edits.
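Independent of the exact wording, the self-contained option can be sketched as a minimal `vars` block; the self-referencing `default` pattern is a common go-task idiom, and the pinned value here simply mirrors `versions.env` (both are assumptions, not the repo's actual content):

```yaml
# taskfile/helm.yml - hypothetical vars block with a local fallback.
vars:
  # Inherit TRIVY_VERSION from the parent Taskfile/dotenv when present,
  # otherwise fall back to a pinned local default.
  TRIVY_VERSION: '{{.TRIVY_VERSION | default "0.69.3"}}'
```

With this in place, the taskfile renders a usable version even when invoked directly without the parent dotenv loaded.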
🟡 Minor comments (14)
deploy/chart/example/aws-eks-irsa/README.md-7-27 (1)
7-27: ⚠️ Potential issue | 🟡 Minor

Add a language to this fenced block.

Line 7 opens an unlabeled code fence, which markdownlint already flags. Use `text` here to keep the docs lint-clean.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/example/aws-eks-irsa/README.md` around lines 7 - 27: the fenced code block for the diagram is missing a language label. Update the opening fence (the line that currently contains only ```) to use a language hint of `text` (i.e., replace ``` with ```text) so markdownlint stops flagging the unlabeled code fence and the diagram remains rendered as plain text.

deploy/chart/docs/guide/suse.md-108-112 (1)
108-112: ⚠️ Potential issue | 🟡 Minor

Add fence languages for the plain-text snippets.

Both blocks are currently tripping MD040, so docs lint will stay noisy until they use an explicit language such as `text`.

Also applies to: 138-140

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/docs/guide/suse.md` around lines 108 - 112: the two plain-text fenced code blocks showing the hosts override (the block containing the line "127.0.0.1 harbor.example.com" and the other similar block later) are triggering MD040. Fix by adding an explicit fence language (e.g., change ``` to ```text) for both code fences so the lint stops complaining while keeping the snippet content unchanged.

deploy/chart/docs/guide/suse.md-71-75 (1)
71-75: ⚠️ Potential issue | 🟡 Minor

Use `\q` in the `psql` transcript for broader compatibility.

While `exit` works in PostgreSQL 11+, `\q` is the canonical meta-command that works across all PostgreSQL versions and is the standard documented approach for exiting `psql`.

🐛 Proposed fix

```diff
-postgres=# exit
+postgres=# \q
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/docs/guide/suse.md` around lines 71 - 75: the psql transcript uses the shell command `exit`; replace that with the canonical psql meta-command `\q` so the example works across all PostgreSQL versions—update the snippet that shows "postgres=# exit" to "postgres=# \q" in the suse.md guide (the psql transcript block).

deploy/chart/templates/_helpers-redis.tpl-77-79 (1)
77-79: ⚠️ Potential issue | 🟡 Minor

Leading whitespace in URL helper output.

The `harbor.redis.url.core` helper has a leading space before the `{{ include ... }}`, which will be included in the rendered output, potentially causing issues when the URL is used in configuration files.

💡 Suggested fix to trim whitespace

```diff
 {{- define "harbor.redis.url.core" -}}
- {{ include "harbor.redis.url" . }}/0?idle_timeout_seconds=30
+{{- include "harbor.redis.url" . }}/0?idle_timeout_seconds=30
 {{- end -}}
```

The same pattern appears in `harbor.redis.url.harbor` (line 99), `harbor.redis.url.registry` (line 107), and `harbor.redis.url.jobservice` (line 111).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/templates/_helpers-redis.tpl` around lines 77 - 79: the helpers output a leading space because there is whitespace before the include. Update each helper (harbor.redis.url.core, harbor.redis.url.harbor, harbor.redis.url.registry, harbor.redis.url.jobservice) to trim that whitespace so the rendered URL has no leading space—either remove the leading space character before the include or use Helm whitespace control (e.g., change the opening tag to trim left) so the include begins immediately with the URL and then append the path/query (/0?idle_timeout_seconds=30).

deploy/chart/templates/_helpers-jobservice.tpl-45-51 (1)
45-51: ⚠️ Potential issue | 🟡 Minor

Missing validation for `jobservice.secretName` when `certSource` is "secret".

When `tls.certSource` equals `"secret"`, the helper returns `.Values.jobservice.secretName` directly. If this value is not configured, it will render empty, potentially causing deployment failures.

💡 Suggested fix with validation

```diff
 {{- define "harbor.jobservice.secretName" -}}
 {{- if eq .Values.tls.certSource "secret" -}}
-  {{- .Values.jobservice.secretName -}}
+  {{- required "jobservice.secretName is required when tls.certSource is 'secret'" .Values.jobservice.secretName -}}
 {{- else -}}
 {{- printf "%s-jobservice-internal-tls" (include "harbor.fullname" .) -}}
 {{- end -}}
 {{- end -}}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/templates/_helpers-jobservice.tpl` around lines 45 - 51: the helper "harbor.jobservice.secretName" should validate `.Values.jobservice.secretName` when `.Values.tls.certSource` equals "secret". Update the template so that inside the eq branch it calls Helm's `required` helper (e.g. `required "jobservice.secretName must be set when tls.certSource=secret" .Values.jobservice.secretName`) instead of returning the raw value, so the chart fails fast with a clear message (leave the else branch that uses printf and `include "harbor.fullname"` unchanged).

deploy/chart/templates/_helpers-redis.tpl-63-72 (1)
63-72: ⚠️ Potential issue | 🟡 Minor

Same potential nil reference in `harbor.redis.url`.

Similar to `harbor.redis.secretKey`, this helper accesses `.Values.valkey.auth.enabled` without guarding on `.Values.valkey.enabled`.

💡 Suggested fix

```diff
 {{- define "harbor.redis.url" -}}
 {{- $root := . -}}
 {{- $host := include "harbor.redis.host" $root -}}
 {{- $port := include "harbor.redis.port" $root -}}
-{{- if .Values.valkey.auth.enabled -}}
+{{- if and .Values.valkey.enabled .Values.valkey.auth.enabled -}}
 {{- printf "redis://:$(REDIS_PASSWORD)@%s:%s" $host $port -}}
 {{- else -}}
 {{- printf "redis://%s:%s" $host $port -}}
 {{- end -}}
 {{- end }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/templates/_helpers-redis.tpl` around lines 63 - 72: the harbor.redis.url helper reads `.Values.valkey.auth.enabled` without first ensuring ValKey is enabled, risking a nil reference. Update the conditional to guard on `.Values.valkey.enabled` and only then test `.Values.valkey.auth.enabled` before using the auth branch, keeping the rest of the logic that uses `include "harbor.redis.host"` and `include "harbor.redis.port"` unchanged.

deploy/chart/templates/NOTES.txt-27-40 (1)
27-40: ⚠️ Potential issue | 🟡 Minor

Docker push instructions missing for gateway exposure.

The docker login/tag/push instructions are only rendered when `ingress.enabled` is true. Users deploying with `gateway.enabled` (Kubernetes Gateway API) won't see these instructions.

💡 Suggested fix to add gateway support

```diff
 3. To push images:
 {{- if .Values.ingress.enabled }}
 {{- if gt (len .Values.ingress.hosts) 0 }}
 {{- range .Values.ingress.hosts }}
   docker login {{ .host }}
   docker tag myimage:latest {{ .host }}/library/myimage:latest
   docker push {{ .host }}/library/myimage:latest
 {{- end }}
 {{- else }}
   docker login {{ (urlParse .Values.externalURL).host }}
   docker tag myimage:latest {{ (urlParse .Values.externalURL).host }}/library/myimage:latest
   docker push {{ (urlParse .Values.externalURL).host }}/library/myimage:latest
 {{- end }}
+{{- else if .Values.gateway.enabled }}
+{{- range .Values.gateway.hostnames }}
+  docker login {{ . }}
+  docker tag myimage:latest {{ . }}/library/myimage:latest
+  docker push {{ . }}/library/myimage:latest
+{{- end }}
 {{- end }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/templates/NOTES.txt` around lines 27 - 40: the NOTES template only emits docker login/tag/push steps when ingress.enabled is true; extend the conditional to also handle gateway.enabled and its hosts. Check whether `.Values.ingress.enabled` with non-empty `.Values.ingress.hosts` or `.Values.gateway.enabled` with non-empty gateway hostnames applies, iterate the appropriate hosts (range over the gateway hostnames when the gateway is enabled) to render docker login/tag/push, and keep the existing fallback that uses `(urlParse .Values.externalURL).host` when neither set of hosts is present; update the template conditionals and range blocks accordingly.

deploy/chart/templates/_helpers-redis.tpl-52-58 (1)
52-58: ⚠️ Potential issue | 🟡 Minor

`harbor.redis.secretKey` should check `valkey.enabled` before accessing `valkey.auth.enabled` for consistency with other helpers.

All similar helpers (`harbor.redis.host`, `harbor.redis.port`, `harbor.redis.secretName`, `harbor.redis.scheme`) check `.Values.valkey.enabled` first, then handle external Redis as a fallback. The `harbor.redis.secretKey` helper breaks this pattern by only checking `.Values.valkey.auth.enabled`, returning "valkey-password" even when ValKey is disabled. While calls to this helper are currently guarded in deployments, the helper's internal logic should follow the established pattern for consistency and future-proofing.

Suggested fix

```diff
 {{- define "harbor.redis.secretKey" -}}
-{{- if .Values.valkey.auth.enabled -}}
+{{- if and .Values.valkey.enabled .Values.valkey.auth.enabled -}}
 valkey-password
 {{- else -}}
 REDIS_PASSWORD
 {{- end -}}
 {{- end }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/templates/_helpers-redis.tpl` around lines 52 - 58: the helper harbor.redis.secretKey currently checks `.Values.valkey.auth.enabled` directly and can return "valkey-password" even when ValKey is overall disabled. Update the helper to first check `.Values.valkey.enabled` (like harbor.redis.host/port/secretName/scheme) and only then inspect `.Values.valkey.auth.enabled`, otherwise fall back to the external Redis default key (e.g., REDIS_PASSWORD); ensure the logic mirrors the pattern used by the other harbor.redis.* helpers so the helper behaves consistently whether ValKey is enabled or not.

deploy/chart/example/rke2-rancher.yaml-2-2 (1)
2-2: ⚠️ Potential issue | 🟡 Minor

Incorrect path in usage comment.

The usage comment references `examples/rke2-rancher.yaml`, but the file is located at `example/rke2-rancher.yaml` (singular "example").

Suggested fix

```diff
 # Harbor Helm Chart - RKE2/Rancher Values
-# Usage: helm install harbor . -n harbor -f examples/rke2-rancher.yaml
+# Usage: helm install harbor . -n harbor -f example/rke2-rancher.yaml
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/example/rke2-rancher.yaml` at line 2: the usage comment at the top currently refers to "examples/rke2-rancher.yaml" but the file is actually named "example/rke2-rancher.yaml". Update the usage comment string (the line containing 'helm install harbor . -n harbor -f examples/rke2-rancher.yaml') to use the singular "example" so the documented install command matches the real file name.

deploy/chart/templates/portal.service.yaml-14-16 (1)
14-16: ⚠️ Potential issue | 🟡 Minor

Rename portal port helpers for consistency with the core pattern.

The helper naming in portal is reversed compared to core: `harbor.portal.port` returns 80 (service port) while `harbor.portal.service.port` returns 8080 (container port). This contradicts the semantic convention used in `core.service.yaml`, where `harbor.core.service.port` is the service port and `harbor.core.port` is the container port. Although the actual port assignment in the manifest works correctly, aligning the helper names would improve clarity. Consider renaming the portal helpers to match the core pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/templates/portal.service.yaml` around lines 14 - 16: rename the portal helper functions to match the core pattern, so the helper returning the service port is named "harbor.portal.service.port" and the helper returning the container (target) port is named "harbor.portal.port". Update the helper definitions accordingly (swap the values so service port = 80, container port = 8080) and update every usage in templates (e.g., in portal.service.yaml where `include "harbor.portal.port"` and `include "harbor.portal.service.port"` are currently used) so the original numeric mapping is preserved under the corrected semantic names.

deploy/chart/Chart.yaml-37-38 (1)
37-38: ⚠️ Potential issue | 🟡 Minor

Malformed GitHub URL for the Source Helm Chart link.

The URL `https://github.com/container-registry/harbor-next/deploy/chart/` is missing the path segment required to navigate to a directory on GitHub. It should include `/tree/main/` (or the appropriate branch) before the directory path.

Proposed fix

```diff
 - name: Source Helm Chart
-  url: https://github.com/container-registry/harbor-next/deploy/chart/
+  url: https://github.com/container-registry/harbor-next/tree/main/deploy/chart
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/Chart.yaml` around lines 37 - 38: update the malformed Source Helm Chart URL by inserting the GitHub tree segment for the branch (e.g., add "/tree/main/" before "deploy/chart/") so the URL points to the repository directory (replace "https://github.com/container-registry/harbor-next/deploy/chart/" with "https://github.com/container-registry/harbor-next/tree/main/deploy/chart" or the correct branch name).

deploy/chart/README.md-5-5 (1)
5-5: ⚠️ Potential issue | 🟡 Minor

Hyphenate `cloud-native`.

This is user-facing docs text, and the current wording will trip grammar/spell checks.

✏️ Suggested wording

```diff
-A modern, production-ready Helm chart for [Harbor Next](https://github.com/container-registry/harbor-next) - the cloud native container registry for Kubernetes.
+A modern, production-ready Helm chart for [Harbor Next](https://github.com/container-registry/harbor-next) - the cloud-native container registry for Kubernetes.
```

As per coding guidelines, "Spell checking must pass on staged `.md` and `.yml` files via lefthook git hook before commit".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/README.md` at line 5: update the description phrase "cloud native container registry" to use the hyphenated adjective "cloud-native container registry"; locate the sentence that currently reads "A modern, production-ready Helm chart for Harbor Next - the cloud native container registry for Kubernetes." and replace "cloud native" with "cloud-native" so spelling/grammar checks and lefthook hooks pass.

deploy/chart/example/aws-eks-irsa/verify.sh-55-77 (1)
55-77: ⚠️ Potential issue | 🟡 Minor

Use an environment variable for the Harbor admin password instead of hardcoding it.

The verify script hardcodes `Harbor12345` in the `docker login` command. Use the pattern from the OpenShift example instead: `echo "${HARBOR_ADMIN_PASSWORD}" | docker login ... --password-stdin`. The password can be exported from `values-aws-irsa.yaml` (`harborAdminPassword`) or injected as a secret via `existingSecretAdminPassword`. The current port-forward approach via `svc/${RELEASE_NAME}-core` is the intended design for this example and is correct.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/example/aws-eks-irsa/verify.sh` around lines 55 - 77: replace the hardcoded Harbor password in the docker login step with an environment variable: read HARBOR_ADMIN_PASSWORD (exported from values-aws-irsa.yaml or injected via existingSecretAdminPassword) and pipe it into docker login using `--username admin --password-stdin` against "localhost:${LOCAL_PORT}" (keep existing NAMESPACE/RELEASE_NAME/LOCAL_PORT usage and port-forward logic around PF_PID). Ensure the script fails visibly if HARBOR_ADMIN_PASSWORD is not set and remove the literal "Harbor12345" from the docker login invocation.

deploy/chart/values.yaml-51-53 (1)
51-53: ⚠️ Potential issue | 🟡 Minor

The documented default doesn't match the actual value.

This field defaults to `""`, but the comment advertises `Harbor12345`. Helm-generated docs will publish the wrong default and nudge users toward a well-known password.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy/chart/values.yaml` around lines 51 - 53, The comment for harborAdminPassword is wrong: the key harborAdminPassword currently defaults to an empty string but the comment claims "Harbor12345"; update the documentation to match the actual default by changing the comment/default text to "" (or explicitly state "no default — must be set") next to the harborAdminPassword entry so generated Helm docs don't advertise the incorrect default; locate the harborAdminPassword key in values.yaml and update the surrounding comment text accordingly.
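For illustration, a sketch of a comment that matches the empty default; the wording and the `# --` helm-docs comment convention are assumptions, not the chart's current text:

```yaml
# -- Initial password of the Harbor admin user. Empty by default; supply a
# strong value at install time rather than relying on a well-known password.
harborAdminPassword: ""
```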
```bash
check() {
  local desc="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo " PASS: ${desc}"
    ((PASS++))
  else
    echo " FAIL: ${desc}"
    ((FAIL++))
  fi
```
🧩 Analysis chain
🏁 Script executed:
# First, examine the verify.sh script
head -50 deploy/chart/example/aws-eks-irsa/verify.sh
🏁 Script executed:
# Test the Bash behavior claim about ((PASS++)) under set -e
bash -c '
set -e
PASS=0
echo "Testing ((PASS++)) when PASS=0:"
((PASS++))
echo "This should not print if ((PASS++)) exits the script"
'
🏁 Script executed:
# Also test the alternative syntax
bash -c '
set -e
PASS=0
echo "Testing ((++PASS)) when PASS=0:"
((++PASS))
echo "After ((++PASS)): PASS=$PASS"
'
🏁 Script executed:
# Test return value behavior
bash -c '
PASS=0
((PASS++))
echo "Return value of ((PASS++)) when PASS=0: $?"
PASS=0
((++PASS))
echo "Return value of ((++PASS)) when PASS=0: $?"
'
`((PASS++))` and `((FAIL++))` exit immediately on the first check under `set -euo pipefail`.

Post-increment arithmetic evaluates to the previous value: with `PASS=0`, `((PASS++))` evaluates to 0, so the arithmetic command exits with status 1 and `set -e` terminates the script. The first check therefore always aborts the run. Use `((++PASS))` or `PASS=$((PASS + 1))` instead.
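The difference can be reproduced outside the chart with plain bash; the snippet below is a standalone demonstration, not part of verify.sh:

```shell
# ((PASS++)) evaluates to the old value 0; a zero result means exit status 1,
# so `set -e` aborts the inner shell before the echo runs.
bash -c 'set -e; PASS=0; ((PASS++)); echo reached' || echo "aborted by set -e"

# Pre-increment evaluates to 1 (exit status 0), so it is safe under set -e.
bash -c 'set -e; PASS=0; ((++PASS)); echo "pre-increment: PASS=$PASS"'

# Plain assignment never fails, regardless of the value.
bash -c 'set -e; PASS=0; PASS=$((PASS + 1)); echo "assignment: PASS=$PASS"'
```

Running it prints "aborted by set -e" for the first variant and `PASS=1` for the two safe forms.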
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy/chart/example/aws-eks-irsa/verify.sh` around lines 25 - 33, The
check() helper uses post-increment ((PASS++)) and ((FAIL++)) which under set
-euo pipefail can cause the script to exit because the arithmetic expression
returns the previous value; update the increments inside check() to use
pre-increment ((++PASS)) and ((++FAIL)) or an explicit assignment
(PASS=$((PASS+1)), FAIL=$((FAIL+1))) so the command returns a non-zero-safe exit
status; locate the check() function and replace the two occurrences of PASS++
and FAIL++ accordingly.
```
{{- define "harbor.secretKey" -}}
{{- if .Values.secretKey }}
{{- .Values.secretKey }}
{{- else }}
{{- /* Generate a deterministic key based on release name */ -}}
{{- $key := printf "%s-harbor-secret-key" .Release.Name | sha256sum | trunc 16 }}
{{- $key }}
{{- end }}
```
Don't derive secretKey from the release name.
This fallback is fully predictable: anyone who knows the Helm release name can recreate the same value. For auth/encryption material, please require an explicit key or persist a randomly generated one and reuse it across upgrades.
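One common alternative is sketched below; the Secret name, key, and use of the `harbor.fullname` helper are illustrative assumptions, not the chart's actual resources. It uses Helm's `lookup` to reuse a previously generated value and only calls `randAlphaNum` on first install, so the key is unpredictable yet stable across upgrades:

```yaml
{{- /* Sketch only: persist a random key in a dedicated Secret. */}}
{{- $name := printf "%s-secret-key" (include "harbor.fullname" .) }}
{{- $existing := lookup "v1" "Secret" .Release.Namespace $name }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $name }}
type: Opaque
data:
  {{- if $existing }}
  {{- /* Reuse the already-persisted (base64-encoded) value. */}}
  secretKey: {{ index $existing.data "secretKey" }}
  {{- else }}
  secretKey: {{ randAlphaNum 16 | b64enc }}
  {{- end }}
```

Note that `lookup` returns no objects during `helm template` and client-side dry-runs, so requiring an explicit `.Values.secretKey` via `required` remains the simplest guarantee.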
🧰 Tools
🪛 smarty-lint (0.3.3)
[warn] 251-251: Unnecessary trailing space (internalUnnecessaryTrailingSpace)
[error] 252-252: Invalid expression " ." / "." in extended AAUONMS expression (internalInvalid)
[error] 253-253: Invalid expression "." in extended AAUONMS expression (internalInvalid, twice)
[warn] 255-255: Unnecessary trailing space (internalUnnecessaryTrailingSpace, seven occurrences)
[error] 256-256: Invalid expression " :" in extended AAUONMS expression (internalInvalid)
[error] 256-256: Invalid expression " ." / "." / " | " in assignment (internalInvalid)
[warn] 256-256: Unnecessary trailing space (internalUnnecessaryTrailingSpace, twice)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy/chart/templates/_helpers.tpl` around lines 251 - 258, The template
"harbor.secretKey" currently derives a predictable key from .Release.Name (using
.Release.Name | sha256sum | trunc 16), which is insecure; change it to require
an explicit .Values.secretKey or read an existing persisted Secret instead of
computing one deterministically: remove the deterministic fallback in the helm
tpl and either (a) make the template return a fail/required error when
.Values.secretKey is empty, or (b) look up a Kubernetes Secret (via the lookup
function) and use its value if present, and if not present create a separate
Secret resource that generates a cryptographically random key once and persists
it for future upgrades; reference .Values.secretKey and the "harbor.secretKey"
template when implementing this change.
```yaml
ports:
  - name: http
    port: {{ .Values.expose.clusterIP.ports.http }}
    targetPort: {{ include "harbor.core.port" . }}
    protocol: TCP
  {{- if .Values.tls.enabled }}
  - name: https
    port: {{ .Values.expose.clusterIP.ports.https }}
    targetPort: {{ include "harbor.core.port" . }}
    protocol: TCP
  {{- end }}
```
🧩 Analysis chain
🏁 Script executed:
# First, let's read the full expose-service.yaml file
cat -n "deploy/chart/templates/expose-service.yaml"
🏁 Script executed:
# Look for the harbor.core.port template definition
find . -type f -name "*.yaml" -o -name "*.tpl" | head -20
rg "harbor\.core\.port" --type yaml
🏁 Script executed:
# Check the values.yaml or values-schema to understand port configuration
find . -type f -name "values*.yaml" | head -10
🏁 Script executed:
# Find the harbor.core.port template definition
rg "define.*harbor\.core\.port|harbor\.core\.port" deploy/chart/templates/_helpers*.tpl
🏁 Script executed:
# Check values.yaml to understand the configuration
grep -A 10 "^expose:" deploy/chart/values.yaml | head -50
🏁 Script executed:
# Look for tls configuration
grep -A 10 "^tls:" deploy/chart/values.yaml | head -50
🏁 Script executed:
# Search for port configurations
grep -B 2 -A 2 "httpport\|CORE_LOCAL_URL" deploy/chart/templates/core.configmap.yaml
🏁 Script executed:
# Get the full definition of harbor.core.port
cat -n deploy/chart/templates/_helpers-urls.tpl | grep -A 5 "define.*harbor\.core\.port"
🏁 Script executed:
# Also check the core deployment to see what ports it actually listens on
grep -B 5 -A 5 "ports:" deploy/chart/templates/core.deployment.yaml | head -40
🏁 Script executed:
# Check the core configmap for complete port configuration
grep -B 2 -A 2 "httpport\|httpsport" deploy/chart/templates/core.configmap.yaml
🏁 Script executed:
# Check if Harbor core has TLS configuration for port 8080
grep -r "tls\|https\|ssl" deploy/chart/templates/core.deployment.yaml deploy/chart/templates/core.configmap.yaml | head -30
🏁 Script executed:
# Check what happens when tls.enabled is true - is there a separate TLS port?
grep -B 5 -A 10 "tls.enabled" deploy/chart/templates/core.deployment.yaml deploy/chart/templates/core.configmap.yaml
🏁 Script executed:
# Look for any TLS configuration in the core configmap or deployment
cat -n deploy/chart/templates/core.configmap.yaml | head -100
🏁 Script executed:
# Check if there's any TLS listener or HTTPS port config in core
grep -i "tlsport\|httpsport\|8443" deploy/chart/templates/core.configmap.yaml deploy/chart/templates/core.deployment.yaml
🏁 Script executed:
# Look for any tls.enabled conditional in core templates
grep -B 3 -A 3 "tls.enabled" deploy/chart/templates/core.deployment.yamlRepository: container-registry/harbor-next
Length of output: 56
🏁 Script executed:
# Check if Harbor core itself handles TLS or relies on ingress/loadbalancer
grep -r "tls\|https\|ssl" deploy/chart/templates/core.* | grep -i "port\|listen\|8443"Repository: container-registry/harbor-next
Length of output: 56
🏁 Script executed:
# Check the TLS helper template to see if it's used for core
cat -n deploy/chart/templates/_helpers-tls.tplRepository: container-registry/harbor-next
Length of output: 1641
🏁 Script executed:
# Look for any documentation or comments about TLS configuration
grep -r "TLS\|HTTPS" deploy/chart/values.yaml | head -20Repository: container-registry/harbor-next
Length of output: 770
🏁 Script executed:
# Check if Harbor core actually supports TLS listener
grep -B 5 -A 5 "tls" deploy/chart/templates/core.deployment.yamlRepository: container-registry/harbor-next
Length of output: 432
🏁 Script executed:
# Check if there's any TLS/HTTPS listener configuration in the core application
grep -i "listenaddr\|tlsaddr\|serveTLS\|8443" deploy/chart/templates/core.configmap.yaml deploy/chart/templates/core.deployment.yamlRepository: container-registry/harbor-next
Length of output: 56
🏁 Script executed:
# Look at the actual values to see what ports are expected
grep -B 5 -A 15 "expose:" deploy/chart/values.yaml | grep -A 50 "ports:"Repository: container-registry/harbor-next
Length of output: 242
🏁 Script executed:
# Check documentation or README about TLS configuration
find . -name "README*" -o -name "*.md" | xargs grep -l "TLS\|tls.enabled" 2>/dev/null | head -5Repository: container-registry/harbor-next
Length of output: 197
🏁 Script executed:
# Verify what the tls.enabled flag actually does for the expose service
grep -B 10 -A 10 "tls.enabled" deploy/chart/templates/expose-service.yamlRepository: container-registry/harbor-next
Length of output: 1909
🏁 Script executed:
# Check the REMAINING-GAPS.md to see if there are known issues about this
cat deploy/chart/REMAINING-GAPS.md | head -100Repository: container-registry/harbor-next
Length of output: 3083
🏁 Script executed:
# Search for any issues or discussions about TLS configuration in the codebase
grep -r "tls.*enabled" deploy/chart/README.md | head -20Repository: container-registry/harbor-next
Length of output: 360
🏁 Script executed:
# Final verification: check if Harbor core has any HTTPS listener capability
grep -i "https\|tls.*listen\|8443" deploy/chart/templates/core.deployment.yaml deploy/chart/templates/core.configmap.yamlRepository: container-registry/harbor-next
Length of output: 164
🏁 Script executed:
# Final check: look for any environment variables or config that enables TLS listener in core
grep -i "TLS_\|HTTPS_\|CERT_" deploy/chart/templates/core.configmap.yaml | grep -v "PROXY\|SSLMODE\|DATABASE"Repository: container-registry/harbor-next
Length of output: 56
🏁 Script executed:
# Check if there's a separate https port defined anywhere for the core component
grep -r "8443\|httpsport" deploy/chart/templates/Repository: container-registry/harbor-next
Length of output: 56
Don't expose HTTPS on the same backend port as HTTP.
All three service variants (ClusterIP, NodePort, LoadBalancer) map both port 80 and port 443 to harbor.core.port (8080). Harbor core has no HTTPS listener; it only accepts HTTP on port 8080. When tls.enabled=true, the Service still exposes port 443 but forwards it to port 8080, creating a protocol mismatch. This only works when TLS is terminated upstream, at an ingress controller or an external load balancer. If the Service is accessed directly (especially as a LoadBalancer without external TLS termination), TLS clients will fail the handshake.
Either:
- Remove the HTTPS port when core has no TLS listener, or
- Configure core to listen on a distinct HTTPS port and route accordingly.
Also applies to: 62-78, 88-98
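The first option can be sketched as a guard in the Service template. This is only an illustration of the shape of the fix, not the chart's actual code: the expose.clusterIP.ports.* paths follow the values referenced above, while the core.httpsPort value and the idea of gating on it are assumptions.

```yaml
ports:
  - name: http
    port: {{ .Values.expose.clusterIP.ports.http }}
    targetPort: {{ include "harbor.core.port" . }}
  {{- /* Only publish 443 when the core backend actually has a TLS listener.
         core.httpsPort is a hypothetical value; without it, the https port
         is omitted instead of being routed to the HTTP-only 8080. */}}
  {{- if and .Values.tls.enabled .Values.core.httpsPort }}
  - name: https
    port: {{ .Values.expose.clusterIP.ports.https }}
    targetPort: {{ .Values.core.httpsPort }}
  {{- end }}
```

With this guard, `helm template` with tls.enabled=true and no core.httpsPort renders a Service that exposes only port 80, which matches what the backend can actually serve.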
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy/chart/templates/expose-service.yaml` around lines 49 - 59, The service
currently maps HTTPS port to the same backend port returned by include
"harbor.core.port" which is HTTP-only; update the template so the HTTPS service
port is either omitted when the core backend has no TLS listener (i.e. when
.Values.tls.enabled is true but no HTTPS backend is configured) or routed to a
distinct HTTPS backend port (e.g. use a new value/include like
.Values.core.httpsPort or include "harbor.core.httpsPort" for targetPort)
instead of include "harbor.core.port"; apply this conditional change wherever
HTTPS is declared (the ClusterIP/NodePort/LoadBalancer service blocks that
reference .Values.tls.enabled, .Values.expose.*.ports.https and include
"harbor.core.port") so HTTP and HTTPS do not forward to the same backend port
unless the backend actually supports HTTPS.
{{- $secretName := include "harbor.ingress.secret" . }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace $secretName }}
{{- $certHost := include "harbor.ingress.primaryHost" . }}
🧩 Analysis chain
🏁 Script executed:
# First, locate and read the ingress.secret.yaml file
find . -name "ingress.secret.yaml" -type f
Length of output: 118
🏁 Script executed:
# Also search for the ingress.yaml file to understand the context better
find . -name "ingress.yaml" -type f
Length of output: 253
🏁 Script executed:
# Look for _helpers.tpl or similar files that might define the helper functions
find . -path "*/chart/templates/*" -name "*helper*" -o -path "*/chart/templates/*" -name "_*"
Length of output: 380
🏁 Script executed:
# Read the full ingress.secret.yaml file
cat -n ./deploy/chart/templates/ingress.secret.yaml
Length of output: 1105
🏁 Script executed:
# Read the helpers files to find the relevant functions
# Start with _helpers.tpl
cat -n ./deploy/chart/templates/_helpers.tpl | head -100
Length of output: 3866
🏁 Script executed:
# Search for harbor.ingress in the helpers files
rg "harbor\.ingress" ./deploy/chart/templates/ -A 5
Length of output: 2987
🏁 Script executed:
# Also read the ingress.yaml to see multi-host settings
cat -n ./deploy/chart/templates/ingress.yaml | head -80
Length of output: 2993
🏁 Script executed:
# Verify the Values schema to confirm ingress.hosts can be a list with multiple items
rg "ingress:" ./deploy/chart/values.yaml -A 30 | head -60
Length of output: 940
Autogenerated ingress cert only covers the primary hostname.
At line 19, genSignedCert is called with a single SAN: (list $certHost), where $certHost is the primary host from harbor.ingress.primaryHost. However, the chart allows configuring multiple hosts via .Values.ingress.hosts and .Values.ingress.tls[].hosts (lines 18-22 reference this configuration). Any additional hostname configured in the Ingress will fail TLS validation since it is not in the certificate's SAN list.
Either include all configured ingress hosts in the SAN list, or validate that auto-generation is only used with a single host.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy/chart/templates/ingress.secret.yaml` around lines 2 - 4, The
autogenerated cert currently passes only the primary host ($certHost) to
genSignedCert, which will miss additional configured hostnames; update the
template to collect all hosts from .Values.ingress.hosts and from every entry in
.Values.ingress.tls[].hosts (or fail validation) and pass that complete list of
SANs into genSignedCert (replace the single (list $certHost) argument), or
alternatively add a check that ensures auto-generation runs only when there is
exactly one ingress host and error out otherwise; reference genSignedCert,
$certHost, .Values.ingress.hosts and .Values.ingress.tls in your change.
Summary
Related Issues
Type of Change
fix: / feat: / feat!: or fix!: / docs: / refactor: / ci: or build: / chore:
Release Notes
Testing
Checklist
git commit -s
Summary by CodeRabbit
Release Notes
New Features
Documentation