Support for other operators using the NodeMaintenance resource. #12
Adding a timeout buffer for each node maintenance invocation, e.g. creating a CR to initiate maintenance with an optional timeout field, might be a reasonable thing to do, not only for storage nodes but for general use.
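If the API went that way, a minimal sketch of the spec might look like the following. This is an illustration only: the Timeout field name and type are assumptions, not part of the current CRD, and the NodeName/Reason fields should be checked against the CRD version actually deployed.

```go
// Hypothetical sketch: NodeMaintenanceSpec extended with an optional
// timeout field, as proposed above. NodeName and Reason mirror the
// fields the operator's CRD is understood to carry; Timeout is an
// assumption and does not exist in the current API.
package v1beta1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NodeMaintenanceSpec defines the desired state of NodeMaintenance.
type NodeMaintenanceSpec struct {
	// Node to put into maintenance mode.
	NodeName string `json:"nodeName"`
	// Free-form reason for the maintenance.
	Reason string `json:"reason,omitempty"`
	// Timeout (hypothetical) after which consumers may treat the
	// maintenance as expired. Optional; nil means no timeout.
	Timeout *metav1.Duration `json:"timeout,omitempty"`
}
```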
Not sure I completely understand the question here. If the intention is to create the CRD without deploying the operator itself (e.g. the controller), then yes, it is possible.
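For a consumer that wants to tolerate the operator's absence, one hedged way to probe for the CRD is via the discovery API. A sketch follows; the group/version string is an assumption and should be taken from the CRD manifest shipped with the node-maintenance-operator release in use.

```go
// Hypothetical sketch: detect whether the NodeMaintenance CRD is
// installed, independent of whether the operator's controller is
// actually running.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeMaintenanceCRDInstalled(kubeconfig string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return false, err
	}
	// Group/version assumed here; adjust to the deployed CRD.
	_, err = dc.ServerResourcesForGroupVersion("nodemaintenance.kubevirt.io/v1beta1")
	if errors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	installed, err := nodeMaintenanceCRDInstalled("")
	fmt.Println(installed, err)
}
```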
Rather than a timeout, I was thinking of a user estimate of how long the maintenance was going to be, so that we can decide whether to wait for the node (with its disks) to come up or recreate the data elsewhere from other replicas. I'm thinking of consuming the CRD as a general way to signal NodeMaintenance for our storage operator, with or without the operator. Preferably, the node maintenance operator would also be deployed alongside it, but I'm imagining our consumption of it wouldn't be affected by its absence.
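To make the wait-versus-rebuild decision concrete, here is a sketch of the consumer-side logic, assuming a hypothetical estimated-duration field on the CR and an operator-specific rebuild threshold. Both names are illustrative; neither exists in the current CRD.

```go
// Hypothetical sketch of consumer-side logic: given a NodeMaintenance
// CR carrying an estimated duration, decide whether to wait for the
// node (and its local disks) to return, or to rebuild the data from
// replicas elsewhere.
package main

import (
	"fmt"
	"time"
)

// maintenancePlan is the action a storage operator takes for a node
// under maintenance.
type maintenancePlan int

const (
	waitForNode maintenancePlan = iota // keep data in place, wait
	rebuildData                        // recreate replicas elsewhere
)

// planForMaintenance chooses a plan: short maintenances are cheaper
// to wait out; long or unbounded ones justify rebuilding immediately.
func planForMaintenance(estimated *time.Duration, rebuildThreshold time.Duration) maintenancePlan {
	if estimated == nil {
		// No estimate provided: assume the worst and rebuild.
		return rebuildData
	}
	if *estimated <= rebuildThreshold {
		return waitForNode
	}
	return rebuildData
}

func main() {
	est := 10 * time.Minute
	fmt.Println(planForMaintenance(&est, 30*time.Minute) == waitForNode) // true
}
```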
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
I understand this is still an interesting feature.
/remove-lifecycle stale
I was wondering if there was any good reason other operators couldn't respond to the node maintenance CRD and make decisions about what to do with resources tied to the node (like local storage).
The real-life example I was considering would require adding an optional field to the MaintenanceCRD estimating how long the maintenance will last.
Flow: