
allow scaling custom resource #2833

Open · wants to merge 1 commit into base: master
Conversation

wjiec (Contributor) commented on Aug 19, 2024:

This PR implements the features described in #2779.

Comment on lines +145 to +146:

```go
// we must block until all started informers' caches have synced
_ = fac.WaitForCacheSync(f.stopChan)
```
wjiec (Contributor, Author) commented:

The reason for blocking here is that the internal/dao/registry::Meta.loadCRDs method is called during command initialization (as part of initializing the whole app); if we don't block here, an incorrect list of CRDs gets read (and in theory, other resources would have the same problem...).

But this also creates an additional problem: in my cluster, which runs under a slightly higher load, syncing a total of 96 CRDs takes about ~2s.

I haven't come up with a good way to solve this (or maybe I'm missing something). Does anyone have any ideas?

derailed (Owner) commented:

@wjiec This could indeed be a problem. As you've noted, any cluster with lots of resources will run into it. The idea here is to leverage k9s's eventual consistency so we would not have to block until all caches are updated.

I think we could perhaps lazily evaluate the CRD instead, to gauge whether it is scalable once the user actually decides to view it. Would this make sense?
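A minimal sketch of what that lazy check could look like, assuming the view can fetch the cached apiextensions CRD object when it is opened; isScalable is an illustrative name, not an existing k9s function:

```go
import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// isScalable reports whether any version of the CRD exposes a scale
// subresource. Hypothetical helper, evaluated only when the user opens
// the CRD's view rather than for every CRD at startup.
func isScalable(crd *apiextv1.CustomResourceDefinition) bool {
	for _, v := range crd.Spec.Versions {
		if v.Subresources != nil && v.Subresources.Scale != nil {
			return true
		}
	}
	return false
}
```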

wjiec (Contributor, Author) commented:

@derailed I've also considered checking whether the CRD is scalable when the view is created, but consider this scenario: if k9s needs to display a scalable CRD resource as soon as it starts up, the cache may not have finished syncing at that point, so factory.Get will either return a NotFound error or have to block until that CRD has synced. That doesn't seem like an optimal approach either.

The current logic decides whether to wait for the cache to finish syncing based on the wait parameter, which I see is false in most cases; only a few special cases pass wait = true. I think that might be acceptable? (Rough sketch below.)
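An illustrative sketch of that wait-gated lookup; the factory shape and the forResource/namespaced helpers are assumptions, not the actual k9s API:

```go
import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/tools/cache"
)

// Get looks up a resource from the informer cache, blocking on cache
// sync only when the caller opts in via wait.
func (f *Factory) Get(gvr schema.GroupVersionResource, path string, wait bool) (runtime.Object, error) {
	inf := f.forResource(gvr) // hypothetical: returns an informers.GenericInformer
	if wait && !cache.WaitForCacheSync(f.stopChan, inf.Informer().HasSynced) {
		return nil, fmt.Errorf("cache sync failed for %s", gvr)
	}
	ns, name := namespaced(path) // hypothetical "ns/name" splitter
	return inf.Lister().ByNamespace(ns).Get(name)
}
```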

Or we could make the key binding dynamic: once the CRD sync is done, we update the view to add the scale command (see the sketch below).
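A rough sketch of that dynamic-binding idea, assuming a tview-style UI loop like k9s uses; crdSynced and bindScaleKey are hypothetical names:

```go
// Once the CRD informer reports synced, refresh the key bindings on the
// UI thread so the scale command shows up without blocking startup.
go func() {
	if cache.WaitForCacheSync(stopCh, crdSynced) {
		app.QueueUpdateDraw(func() {
			view.bindScaleKey()
		})
	}
}()
```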

What do you think?

derailed (Owner) left a review comment:

@wjiec Thank you for taking this on Jayson!

