Discussion: Replica placement strategy #467
Comments
I don't think anyone actually used it; I certainly have never seen it in action. But: there is the possibility to implement a "custom" strategy in LINSTOR by way of setting the
Interesting. I started to go down the path of writing my own operator using the Python API, but https://github.com/LINBIT/linstor-server/blob/55d2909657d05cfc086e72d45dbbb6cac05d6345/docs/rest_v1_openapi.yaml#L3983 is one of the few references I can find. I'll start playing with it and see what I can uncover.
Are you sure this is actually used? I can't find anything where this script is actually called / executed.
Should be somewhere in here: https://github.com/LINBIT/linstor-server/blob/master/controller/src/main/java/com/linbit/linstor/core/apicallhandler/controller/autoplacer/PreSelector.java. Again, I can't confirm if it even works.
With the introduction of the LinstorNodeConnection feature in the recent release, I'd like to revisit this issue to discuss its potential applicability for our use-case. In scenarios where we are running a database, it would be highly beneficial to have a synchronous replica within the same zone for quick recovery, along with an additional Disaster Recovery (DR) replica in a different zone for enhanced resilience without sacrificing write performance. As I initially pointed out, the current placement options don't cover this. I also noticed that the Autoplacer/PreSelectScript feature, which we discussed as a possible workaround, appears to have been removed from the codebase. Are there any plans to reintroduce this functionality or something similar? If this discussion would be more appropriate in the linstor-server repository, I'm happy to open an issue there.
Yeah, moving the discussion to the linstor-server repo would be good. Having a good use case/example/scenario to explain the feature also helps :)
This perhaps is not a question for the piraeus-operator per se, but more about LINSTOR in general; I'm posting here as others might find it useful. Please let me know if it would be better suited somewhere else.
We are attempting to run a large stretched cluster over multiple regions in which we use the Piraeus operator to provide a solid storage foundation for HA and disaster recovery.
The cluster is stretched over the following regions:
Desired Result
For most of our workloads we would want to have a placement count of 3, where the replicas should be placed in the following way:
linstor.csi.linbit.com/replicasOnSame and linstor.csi.linbit.com/replicasOnDifferent are missing the flexibility to allow for this exact configuration. At least there is not one that I can think of directly.
Possible workaround 1
storage-group=eu-a
storage-group=eu-a
I guess this would work, but we would always have to create groups of three nodes. Mistakes can be made quite easily during node configuration.
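As a rough sketch, workaround 1 could be expressed as a StorageClass along these lines. The storage-group key mirrors the aux prop shown above; the exact parameter spellings and the StorageClass name are assumptions to verify against the linstor-csi documentation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-storage-group-replicated   # hypothetical name
provisioner: linstor.csi.linbit.com
parameters:
  # place 3 replicas
  linstor.csi.linbit.com/placementCount: "3"
  # all replicas land on nodes that share the same value
  # for the storage-group aux prop (e.g. storage-group=eu-a)
  linstor.csi.linbit.com/replicasOnSame: "storage-group"
```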
Possible workaround 2
Only use
linstor.csi.linbit.com/replicasOnSame: "topology.kubernetes.io/zone"
Write an operator that will create extra replicas in the same region but in a different zone. This basically will use the command
I guess the Linstor Python API could be used for this. If the affinity controller is used I believe this should work when the original datacenter goes down?
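A minimal sketch of what such an operator's core could look like, using the python-linstor high-level `Resource` API. All node/region/zone names are hypothetical, and whether `Resource.diskful()` is the right call for your LINSTOR version should be verified against the python-linstor docs:

```python
def pick_dr_node(nodes, primary_zone, region):
    """Pick a node in the same region but a different zone than the primary.

    `nodes` is a list of dicts like {"name": ..., "region": ..., "zone": ...},
    e.g. built from node aux props / Kubernetes topology labels.
    Returns the node name, or None if no candidate exists.
    """
    for node in nodes:
        if node["region"] == region and node["zone"] != primary_zone:
            return node["name"]
    return None


def add_dr_replica(controller_uri, resource_name, node_name):
    """Create an extra diskful replica on the chosen node.

    Roughly equivalent to `linstor resource create <node> <resource>`.
    Requires python-linstor (pip install python-linstor).
    """
    import linstor  # imported lazily; needs a reachable controller

    rsc = linstor.Resource(resource_name, uri=controller_uri)
    rsc.diskful(node_name)  # place a diskful replica on that node
```

An operator loop could then watch for new PVCs, look up where the primary replicas landed, and call `add_dr_replica` with the node returned by `pick_dr_node`. The affinity controller should still be able to reschedule pods onto the extra replica if the original zone goes down, as suggested above.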
Conclusion
Maybe there is a simpler solution which I haven't thought of yet. Hoping to get some ideas from others :)