
unused cleans disks that are not mounted right now, but are still assigned to a PV in k8s #82

@fedordikarev

Description


I'm not sure if this is easy to do, or even possible with the current architecture of the tool, but we had some incidents due to the following behaviour:

  1. A k8s StatefulSet was scaled down.
  2. As a result, the underlying disks were unmounted, while the PVs referring to them were still in place.
  3. An engineer from the team ran the unused tool and removed the disks that were currently unmounted.
  4. After a scale-up event, pods couldn't start because the disks referred to by the existing PVs no longer existed.

This may be less of an issue after k8s 1.27 and the StatefulSet persistentVolumeClaimRetentionPolicy, but should we add some extra checks (in the tool or maybe externally) to verify whether any PV still refers to a disk before deleting it?
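A minimal sketch of the kind of external pre-delete check this asks for, using client-go: list all PersistentVolumes in the cluster and skip any candidate disk whose in-tree PD name or CSI volume handle is still referenced. This is not part of unused; the kubeconfig path, the candidate disk names, and the function names are assumptions for illustration.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// referencedDisks returns the set of disk identifiers that PVs in the cluster still point to.
func referencedDisks(ctx context.Context, client kubernetes.Interface) (map[string]bool, error) {
	refs := map[string]bool{}
	pvs, err := client.CoreV1().PersistentVolumes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	for _, pv := range pvs.Items {
		// In-tree GCE volumes reference the disk by its PD name.
		if pv.Spec.GCEPersistentDisk != nil {
			refs[pv.Spec.GCEPersistentDisk.PDName] = true
		}
		// CSI volumes carry the provider disk ID in the volume handle.
		if pv.Spec.CSI != nil {
			refs[pv.Spec.CSI.VolumeHandle] = true
		}
	}
	return refs, nil
}

func main() {
	// Hypothetical inputs: a kubeconfig path and a list of disks reported as unmounted/unused.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	candidates := []string{"disk-a", "disk-b"}

	refs, err := referencedDisks(context.Background(), client)
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range candidates {
		if refs[d] {
			fmt.Printf("skipping %s: still referenced by a PersistentVolume\n", d)
			continue
		}
		fmt.Printf("%s looks safe to delete\n", d)
	}
}
```

A check like this would have caught steps 3-4 above: the unmounted disks were still listed in PV specs, so they would have been skipped rather than deleted.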
