
Support for other operators using the NodeMaintenance resource. #12

Open
rohantmp opened this issue Apr 25, 2019 · 13 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@rohantmp

rohantmp commented Apr 25, 2019

I was wondering if there was any good reason other operators couldn't respond to the node maintenance CRD and make decisions about what to do with resources tied to the node (like local storage).

The real-life example I was considering would require us to add an optional field to the NodeMaintenance CRD where the length of the maintenance is estimated.

Flow:

  • A storage operator has replicated data on the nodes and wants to put a node in maintenance.
  • User puts a node into Maintenance by creating a NodeMaintenance object.
  • In the NM object, the user estimates how many minutes the Node is going to be in maintenance for.
  • The storage on that node goes down. The storage operator must choose whether to recreate the replicated data elsewhere (to maintain the # of replicas) or to wait for the node to come up (see the sketch after this list).
    • If the estimated maintenance time > (the estimated time to recreate the replicated data + SOME_OFFSET), then recreate the data elsewhere
    • else, wait for the node to come up.
      • if the node does not come up within that time limit, recreate the data elsewhere.
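
A minimal sketch of that decision in Go (the field names, the 10-minute offset, and the helper are all hypothetical illustrations from this issue, not part of the actual NodeMaintenance API):

```go
package main

import (
	"fmt"
	"time"
)

// replicaOffset is a hypothetical safety buffer added on top of the
// estimated time to recreate the replicated data.
const replicaOffset = 10 * time.Minute

// shouldRecreate returns true when the node is expected to be away
// longer than rebuilding the replicas elsewhere would take.
func shouldRecreate(estimatedMaintenance, estimatedRecreate time.Duration) bool {
	return estimatedMaintenance > estimatedRecreate+replicaOffset
}

func main() {
	maintenance := 2 * time.Hour // user's estimate from the NodeMaintenance object
	recreate := 30 * time.Minute // operator's estimate for rebuilding replicas

	if shouldRecreate(maintenance, recreate) {
		fmt.Println("recreate the replicated data elsewhere")
	} else {
		fmt.Println("wait for the node; recreate if the estimate is exceeded")
	}
}
```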
@rohantmp
Author

  • Can we include features in the CRD for consumption by other controllers?
  • Can we deploy the CRD developed here without its controller? (to avoid a dependency)

@yanirq
Contributor

yanirq commented Apr 28, 2019

Can we include features in the CRD for consumption by other controllers?

Adding a timeout buffer for each node maintenance invocation (e.g. creating a CR to initiate maintenance with an optional timeout field) might be a reasonable thing to do, not only for storage nodes but for general use.
@aglitke @rmohr @MarSik - any thoughts here ?
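
As a rough sketch of what such an optional field could look like on the spec (kubebuilder-style Go types; the EstimatedDurationMinutes name is made up for illustration, and the surrounding fields only loosely mirror the existing spec):

```go
package nodemaintenance

// NodeMaintenanceSpec defines the desired state of NodeMaintenance.
type NodeMaintenanceSpec struct {
	// Node name to put into maintenance.
	NodeName string `json:"nodeName"`
	// Reason for the maintenance.
	Reason string `json:"reason,omitempty"`
	// EstimatedDurationMinutes is the user's estimate of how long the
	// node will stay in maintenance. Optional; other controllers (e.g.
	// a storage operator) may read it to decide whether to wait for
	// the node or to rebuild data elsewhere.
	EstimatedDurationMinutes *int32 `json:"estimatedDurationMinutes,omitempty"`
}
```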

Can we deploy the CRD developed here without its controller? (to avoid a dependency)

Not sure I completely understand the question here. If the intention is to create the CRD without deploying the operator itself (i.e. the controller), then yes, it is possible.
What dependency are you trying to avoid?

@rohantmp
Author

Rather than a timeout, I was thinking of a user estimate of how long the maintenance is going to last, so that we can decide whether to wait for the node (with its disks) to come up or to recreate the data elsewhere from other replicas.

I'm thinking of consuming the CRD as a general way to signal node maintenance to our storage operator, with or without the operator. Preferably, the node maintenance operator would also be deployed alongside it, but I'm imagining our consumption of the CRD wouldn't be affected by the operator's absence.
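
One way to make that work is to probe for the CRD at startup and only wire up the watch if it is served. A minimal sketch with client-go discovery (the group/version string is an assumption about how the CRD is served; adjust to the real one):

```go
package detect

import (
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// nodeMaintenanceServed reports whether the NodeMaintenance kind is
// served by the cluster, so a consuming operator can skip its watch
// when neither the CRD nor its controller is installed.
func nodeMaintenanceServed(cfg *rest.Config) (bool, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return false, err
	}
	// Assumed group/version; replace with whatever the CRD actually serves.
	list, err := dc.ServerResourcesForGroupVersion("kubevirt.io/v1alpha1")
	if err != nil {
		// Treat "group/version not served" as "not installed".
		return false, nil
	}
	for _, r := range list.APIResources {
		if r.Kind == "NodeMaintenance" {
			return true, nil
		}
	}
	return false, nil
}
```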

@kubevirt-bot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 28, 2019
@yanirq
Contributor

yanirq commented Jul 28, 2019

/remove-lifecycle stale

@kubevirt-bot kubevirt-bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 28, 2019
@kubevirt-bot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 26, 2019
@yanirq
Contributor

yanirq commented Oct 27, 2019

/remove-lifecycle stale

@kubevirt-bot kubevirt-bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 27, 2019
@kubevirt-bot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 25, 2020
@kubevirt-bot
Contributor

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

@kubevirt-bot kubevirt-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 24, 2020
@MarSik

MarSik commented Feb 24, 2020

/remove-lifecycle stale

@MarSik

MarSik commented Feb 24, 2020

/remove-lifecycle rotten

@kubevirt-bot kubevirt-bot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 24, 2020
@kubevirt-bot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 24, 2020
@slintes
Contributor

slintes commented Jun 3, 2020

I understand this is still an interesting feature

/remove-lifecycle stale
/lifecycle frozen

@kubevirt-bot kubevirt-bot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 3, 2020