Consider rewriting scraper goroutine scheduling #778
Comments
Sounds good. We really need to consider and test this properly before committing, so when do we plan to start?
I don't think this is necessarily blocked by #777, as they are pretty independent changes. I think we can start doing performance testing.
OK, let me take a look and try to do it.
I don't think this is possible with the current storage implementation.
We could check for leaks with: https://github.com/uber-go/goleak
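A minimal sketch of how goleak could be wired into the scraper tests; the test names here are illustrative, not existing metrics-server tests:

```go
package scraper_test

import (
	"testing"

	"go.uber.org/goleak"
)

// TestMain fails the package's tests if any goroutines they started are
// still running when the tests finish.
func TestMain(m *testing.M) {
	goleak.VerifyTestMain(m)
}

// Alternatively, a single test can check for leaks on its own:
func TestScrapeDoesNotLeakGoroutines(t *testing.T) {
	defer goleak.VerifyNone(t)
	// ... start and stop the scraper here ...
}
```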
/assign
I have been working on this issue recently, and I have a few questions.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten
This should implement
/remove-lifecycle frozen
/lifecycle frozen
Sorry, I delayed this issue due to lack of time and other things. I modified it at
The Metrics Server scraper is responsible for parallelizing metric collection. Every cycle it lists all the nodes in the cluster and creates a goroutine to scrape each node. Each goroutine waits a random amount of time (to avoid scraping every node at the same moment), collects metrics, and is then removed. Problems with this approach:
What would you like to be added:
Instead of listing nodes every cycle and churning goroutines, we should maintain a goroutine per node and use informer event handlers to add and remove goroutines as nodes are added and removed. A similar approach is used by the Prometheus server.
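A rough sketch of what this could look like, assuming client-go shared informers; the scraper type, newScraper, and scrapeLoop below are illustrative names, not the actual metrics-server code:

```go
package scraper

import (
	"context"
	"sync"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
)

// scraper keeps one long-lived goroutine per node instead of spawning a
// fresh set of goroutines every scrape cycle.
type scraper struct {
	mu      sync.Mutex
	cancels map[string]context.CancelFunc // node name -> stop function
}

func newScraper() *scraper {
	return &scraper{cancels: make(map[string]context.CancelFunc)}
}

// Run registers informer event handlers that start a scrape loop when a
// node appears and stop it when the node is deleted.
func (s *scraper) Run(ctx context.Context, factory informers.SharedInformerFactory) {
	nodeInformer := factory.Core().V1().Nodes().Informer()
	nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			node := obj.(*corev1.Node)
			s.mu.Lock()
			defer s.mu.Unlock()
			if _, ok := s.cancels[node.Name]; ok {
				return // already scraping this node
			}
			nodeCtx, cancel := context.WithCancel(ctx)
			s.cancels[node.Name] = cancel
			go s.scrapeLoop(nodeCtx, node.Name)
		},
		DeleteFunc: func(obj interface{}) {
			node, ok := obj.(*corev1.Node)
			if !ok {
				return // tombstone handling omitted for brevity
			}
			s.mu.Lock()
			defer s.mu.Unlock()
			if cancel, ok := s.cancels[node.Name]; ok {
				cancel()
				delete(s.cancels, node.Name)
			}
		},
	})
}

// scrapeLoop would periodically collect metrics from a single node until
// its context is cancelled; the actual collection logic is omitted here.
func (s *scraper) scrapeLoop(ctx context.Context, nodeName string) {
	<-ctx.Done()
}
```

Stopping each per-node goroutine through context cancellation ties shutdown to node deletion and to overall scraper shutdown, which also makes leak checks (e.g. with goleak, mentioned above) straightforward.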
Things to consider:
Why is this needed:
I have listed some problems above, but we should test this properly before committing to it. We should create benchmarks and validate scraper and storage performance before merging any code, since in the worst case we could end up with more complicated code that does not address any of the problems.
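As a starting point for such benchmarks, a self-contained sketch of measuring the cost of the current spawn-per-node-per-cycle pattern; the node count and the no-op scrape body are placeholders for the real scraper and storage code:

```go
package scraper_test

import (
	"sync"
	"testing"
)

// BenchmarkGoroutinePerNodePerCycle approximates the goroutine churn of the
// current design: every cycle, spawn one goroutine per node and wait for all
// of them to finish.
func BenchmarkGoroutinePerNodePerCycle(b *testing.B) {
	const nodeCount = 1000 // e.g. simulate a 1000-node cluster
	for i := 0; i < b.N; i++ {
		var wg sync.WaitGroup
		wg.Add(nodeCount)
		for n := 0; n < nodeCount; n++ {
			go func() {
				defer wg.Done()
				// a real benchmark would scrape a fake node and store the result here
			}()
		}
		wg.Wait()
	}
}
```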
/cc @yangjunmyfm192085 @dgrisonnet
/kind feature