
LivenessProbes configuration breaks statefulset #1827

@71g3pf4c3

Description

Report

If the perconaxtradbcluster.spec.haproxy.livenessProbes.httpGet field is set, the generated StatefulSet contains a livenessProbe with both the default exec handler and the user-supplied httpGet handler. A probe may specify only one handler, so this breaks the StatefulSet spec.
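
The minimal fragment that triggers it (extracted from the full manifest below):

spec:
  haproxy:
    livenessProbes:
      httpGet:
        path: /healthz
        port: 8080
        scheme: HTTP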

More about the problem

PerconaXtraDBCluster manifest (just https://github.com/percona/percona-xtradb-cluster-operator/blob/v1.15.0/deploy/cr.yaml with an added httpGet probe):

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
  finalizers:
    - percona.com/delete-pxc-pods-in-order
spec:
  crVersion: 1.15.0
  tls:
    enabled: true
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: disabled
    schedule: "0 4 * * *"
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.36-28.1
    autoRecovery: true
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 6G
    gracePeriod: 600
  haproxy:
    livenessProbes:
      httpGet:
        path: /healthz
        port: 8080
        scheme: HTTP
    enabled: true
    size: 3
    image: percona/haproxy:2.8.5
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  proxysql:
    enabled: false
    size: 3
    image: percona/proxysql2:2.5.5
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 2G
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  logcollector:
    enabled: true
    image: percona/percona-xtradb-cluster-operator:1.15.0-logcollector-fluentbit3.1.4
    resources:
      requests:
        memory: 100M
        cpu: 200m
  pmm:
    enabled: false
    image: percona/pmm-client:2.42.0
    serverHost: monitoring-service
    resources:
      requests:
        memory: 150M
        cpu: 300m
  backup:
    image: percona/percona-xtradb-cluster-operator:1.15.0-pxc8.0-backup-pxb8.0.35
    pitr:
      enabled: false
      storageName: STORAGE-NAME-HERE
      timeBetweenUploads: 60
      timeoutSeconds: 60
    storages:
      s3-us-west:
        type: s3
        verifyTLS: true
        s3:
          bucket: S3-BACKUP-BUCKET-NAME-HERE
          credentialsSecret: my-cluster-name-backup-s3
          region: us-west-2
      azure-blob:
        type: azure
        azure:
          credentialsSecret: azure-secret
          container: test
      fs-pvc:
        type: filesystem
        volume:
          persistentVolumeClaim:
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 6G
    schedule:
      - name: "daily-backup"
        schedule: "0 0 * * *"
        keep: 5
        storageName: fs-pvc

Generated haproxy StatefulSet snippet:

        livenessProbe:
          exec:
            command:
            - /opt/percona/haproxy_liveness_check.sh
          failureThreshold: 4
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
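
Kubernetes allows at most one handler (exec, httpGet, tcpSocket, or grpc) per probe, so the API server rejects this spec. A sketch of the presumably intended result, keeping only the user-supplied handler together with the operator's default timing fields shown above:

        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          failureThreshold: 4
          initialDelaySeconds: 60
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5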

Steps to reproduce

  1. Install kind
  2. Install pxc-operator
  3. Apply the manifest above

Versions

  1. Kubernetes 1.29.2 (kind)
  2. Operator 1.15.0
  3. Database percona/percona-xtradb-cluster:8.0.36-28.1

Anything else?

No response
