
Importing/Mounting pre-existing volumes in linstor/DRBD #660

Open
bernardgut opened this issue May 6, 2024 · 2 comments

@bernardgut

Hello

Let's say I run a Talos cluster with some Piraeus/LINSTOR/DRBD storage backed by ZFS_THIN datasets (or LVM PVs, for that matter). Let's say I recreate the cluster for some reason (I lost the Piraeus operator state). The storage datasets (PVs) are still there on the nodes' physical disks. How would you "import" them back into the Piraeus/LINSTOR/DRBD CSI stack (if that is possible at all)?

I will give you a concrete example:

This morning I nuked a cluster. Upon recreation the cluster is "clean" (`k linstor resource list-volumes` returns null). On two of the nodes I have:

```
/ # zfs list
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
zpool-1                                                 25.5G  22.5G    96K  /zpool-1
zpool-1/pvc-4888c2c6-f3c9-4ffa-b2de-ee2498055ef2_00000  21.9G  22.5G  21.9G  -
```

Is there any "easy" way for the operator to recreate automagically the linstor resources associated with this pvc such that I can then bind it to an existing workload ? How would you go on about this ?

Thanks
B.

@WanzenBug
Member

Is there any "easy" way for the operator to recreate automagically the linstor resources associated with this pvc such that I can then bind it to an existing workload.

The answer is no. If you nuke the kube API, LINSTOR will lose its state. You could create a script that tries to find the "most plausible" state by looking at the existing backing devices and DRBD metadata (i.e. all zvols and LVs will be named "pvc-<UUID>_00000").
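For the discovery half, a minimal sketch using the pool name from this thread; the DRBD minor is a guess (LINSTOR allocates minors starting at 1000 by default), and the `drbdmeta` invocation should be checked against your DRBD version:

```
# Discovery sketch -- pool, dataset, and device names here are assumptions.
# Surviving ZFS backing volumes (LINSTOR names them pvc-<UUID>_00000):
zfs list -H -o name -r zpool-1 | grep -E '/pvc-.*_00000$'

# ...or, on LVM-backed nodes, the surviving logical volumes:
lvs --noheadings -o lv_name | grep 'pvc-.*_00000'

# Dump the DRBD metadata of an (unattached) backing device to recover
# resource and peer information:
drbdmeta /dev/drbd1000 v09 \
  /dev/zvol/zpool-1/pvc-4888c2c6-f3c9-4ffa-b2de-ee2498055ef2_00000 internal dump-md
```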

If you then use that collected information to make the appropriate API requests to LINSTOR, you could get LINSTOR back to nearly the original state. Afterwards you would just have to translate the LINSTOR resources into PVs and PVCs, which should be relatively straightforward.
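For the "API requests" half, a rough sketch of the shape such calls could take via the `linstor` client, assuming a node `talos-node-1` and a ZFS_THIN storage pool `zfs-thin-pool` that the operator has already re-registered (both names are illustrative). LINSTOR may refuse to create a resource whose backing zvol already exists, so treat this as an outline to test on a throwaway resource first:

```
# Hypothetical recreation sketch -- verify each call against your LINSTOR version.
RES=pvc-4888c2c6-f3c9-4ffa-b2de-ee2498055ef2
linstor resource-definition create "$RES"
linstor volume-definition create "$RES" 20GiB   # must match the original volume size
linstor resource create talos-node-1 "$RES" --storage-pool zfs-thin-pool
```

For the PV/PVC translation, the LINSTOR CSI driver's volume handle should be the LINSTOR resource name, so a statically provisioned PV along these lines could work; the storage class, size, and fsType are assumptions to adapt, and your setup may require additional volumeAttributes:

```
# Hypothetical static PV for a recovered resource; adapt names and sizes.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-4888c2c6-f3c9-4ffa-b2de-ee2498055ef2
spec:
  capacity:
    storage: 20Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: linstor-zfs                  # assumption: your StorageClass
  csi:
    driver: linstor.csi.linbit.com
    volumeHandle: pvc-4888c2c6-f3c9-4ffa-b2de-ee2498055ef2   # LINSTOR resource name
    fsType: ext4
EOF
```

A PVC can then pre-bind to this PV via its spec.volumeName.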

@bernardgut
Author

bernardgut commented May 7, 2024

I see. I guess at that point it is just easier to do something along the lines of:

  1. Create a new PVC along with the new workload
  2. Log into one of the nodes and run `zfs send originpool/old-pvc@snapshot | zfs receive destinationpool/new-pvc`
  3. Let DRBD do its thing

That should work, right?
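For what it's worth, a minimal sketch of steps 2 and 3, run on a node that holds both datasets. The old dataset name is from this thread; `<NEW-UUID>` is a placeholder for the freshly created PVC's volume, and the DRBD handling at the end is an assumption (with internal metadata, the receive clobbers the new volume's DRBD metadata), so this wants a dry run on scratch data first:

```
# Steps 2-3 sketch -- DRBD must not be using the new backing zvol meanwhile.
zfs snapshot zpool-1/pvc-4888c2c6-f3c9-4ffa-b2de-ee2498055ef2_00000@migrate
zfs send zpool-1/pvc-4888c2c6-f3c9-4ffa-b2de-ee2498055ef2_00000@migrate \
  | zfs receive -F zpool-1/pvc-<NEW-UUID>_00000

# With internal DRBD metadata the receive destroys the new volume's metadata,
# so it likely needs recreating before the other replicas can resync:
drbdadm create-md pvc-<NEW-UUID>          # hypothetical resource name
drbdadm invalidate-remote pvc-<NEW-UUID>  # make this node the sync source
```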
