allow scaling custom resource #2833
base: master
Conversation
// we must block until all started informers' caches were synced
_ = fac.WaitForCacheSync(f.stopChan)
The reason for blocking here is that the command calls the internal/dao/registry::Meta.loadCRDs method during initialization (as part of the init of the whole app), and if we don't block here, an incomplete list of CRDs gets read (and in theory other resources have the same problem...).
But blocking also creates an additional problem: in my cluster, which is under a slightly higher load, syncing a total of 96 CRDs takes about ~2s.
I haven't thought of a good way to solve this (or I'm missing something). Does anyone have any ideas?
@wjiec This could indeed be a problem. As you've noted, any cluster with lots of resources will become a problem.
The idea here is to leverage k9s' eventual consistency so we would not have to block until all caches are updated.
I think we could perhaps lazily evaluate the CRD instead, to gauge whether it is scalable once the user actually decides to view it.
Would this make sense?
@derailed I've also considered checking whether the CRD is scalable when the view is created, but consider a scenario where k9s needs to display a scalable CRD resource as soon as it starts up: at that point the cache may not have finished syncing, so factory.Get
will either return a NotFound error or will need to block until that CRD finishes syncing. That doesn't seem like an optimal approach in this case either.
The current logic decides whether to wait for the cache to finish syncing based on the wait
parameter, which I see is false in most cases; only a few special cases have wait = true
, which I think might be acceptable?
Or we could make the key binding dynamic: when the CRD sync is done, we update the view to add the scale command.
What do you think?
@wjiec Thank you for taking this on Jayson!
This PR implements the features described in #2779.