
bug: victoria-logs-cluster vlstorage.existingClaim doesn't work #2388

@sahilarora535

Description

Chart name and version
chart: victoria-logs-cluster
version: v0.10.0

Describe the bug
I am using victoria-logs-cluster with two vlstorage pods. The chart exposes a key, vlstorage.persistentVolume.existingClaim. When this key is set, the expectation is that the supplied persistent volume claim is used instead of creating a new one. When it is unset, the Helm chart creates two PVCs, <pvc-name>-0 and <pvc-name>-1.

I created the persistent volume claims myself following the same naming scheme, then set vlstorage.persistentVolume.existingClaim: "victoria-logs-cluster-vlstorage-pvc", but the vlstorage pods don't come up.

Events:
  Type     Reason             Age                 From                Message
  ----     ------             ----                ----                -------
  Warning  FailedScheduling   35m                 default-scheduler   0/12 nodes are available: persistentvolumeclaim "victoria-logs-cluster-vlstorage-pvc" not found. preemption: 0/12 nodes are available: 12 Preemption is not helpful for scheduling.
  Warning  FailedScheduling   20m (x3 over 30m)   default-scheduler   0/12 nodes are available: persistentvolumeclaim "victoria-logs-cluster-vlstorage-pvc" not found. preemption: 0/12 nodes are available: 12 Preemption is not helpful for scheduling.

This is happening because I created two PVCs:

victoria-logs-cluster-vlstorage-pvc-0
victoria-logs-cluster-vlstorage-pvc-1

Instead of following the StatefulSet indexing when mounting the existing claim, the chart uses the claim name as-is, and the scheduler is unable to find a PVC with that exact name.
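Concretely, the mismatch can be sketched as follows (an assumption about what the chart renders when existingClaim is set, not verified against the template source; names mirror the ones above):

```yaml
# Presumed rendered StatefulSet volume when existingClaim is set:
# every replica points at the same, non-indexed claim name.
volumes:
  - name: vlstorage-volume
    persistentVolumeClaim:
      claimName: victoria-logs-cluster-vlstorage-pvc   # no -0 / -1 suffix

# What actually exists in the namespace:
#   victoria-logs-cluster-vlstorage-pvc-0
#   victoria-logs-cluster-vlstorage-pvc-1
# hence the scheduler's "persistentvolumeclaim ... not found" events.
```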

This looks like a gap in the Helm chart. It would be greatly helpful if this could be fixed.

Custom values
Please provide only custom values (excluding default ones):

vlstorage:
  persistentVolume:
    # -- Create/use a Persistent Volume Claim for the vlstorage component; uses an emptyDir if false
    enabled: true
    name: vlstorage-volume

    # -- Array of access modes. Must match those of existing PV or dynamic provisioner. Details are [here](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
    accessModes:
      - ReadWriteOnce
    # -- Persistent volume annotations
    annotations: {}
    # -- Persistent volume labels
    labels: {}
    # -- Storage class name. Will be empty if not set
    storageClassName: ""
    # -- Existing Claim name. Requires vlstorage.persistentVolume.enabled: true. If defined, the PVC must be created manually before the volume will be bound
    existingClaim: "victoria-logs-cluster-vlstorage-pvc"
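As a possible workaround (an assumption on my part, not verified against this chart): a plain persistentVolumeClaim volume cannot interpolate the pod ordinal, but the StatefulSet controller only provisions a PVC from volumeClaimTemplates when one with the expected name does not already exist. So instead of setting existingClaim, pre-created PVCs could be named to match the template's naming convention, <template-name>-<statefulset-name>-<ordinal>, and the StatefulSet would bind to them. A hypothetical manifest (the exact StatefulSet name depends on the chart's fullname template):

```yaml
# Hypothetical: leave existingClaim unset and pre-create one PVC per
# replica, named to match what volumeClaimTemplates would generate.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vlstorage-volume-victoria-logs-cluster-vlstorage-0  # <claim>-<sts>-<ordinal>
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""   # must match the template's storage class
  resources:
    requests:
      storage: 10Gi      # must match the size requested by the chart
```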
