
rewriting scraper goroutine scheduling #972

Conversation

yangjunmyfm192085
Contributor

What this PR does / why we need it:
Consider rewriting the scraper goroutine scheduling. We should test it properly before committing to it: create benchmarks and validate scraper and storage performance before merging any code, since in the worst case we could end up with more complicated code without addressing any of the problems.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #778
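
For readers skimming the review below, here is a minimal, self-contained sketch of the per-node scheduling idea under discussion: one goroutine per node, driven by a ticker and a stop channel. The names loosely follow the snippets quoted later in this thread (isWorking, stop, metric resolution ticker), but this is only an illustration of the approach, not the code in this PR:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// nodeScrapers illustrates per-node scrape scheduling: each node gets its own
// goroutine with a ticker, instead of one global scrape pass over all nodes.
type nodeScrapers struct {
	mu        sync.Mutex
	isWorking map[string]bool
	stop      map[string]chan struct{}
	interval  time.Duration
}

// Add starts a scrape loop for the node unless one is already running.
func (m *nodeScrapers) Add(node string, scrape func(node string)) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.isWorking[node] {
		return
	}
	stop := make(chan struct{})
	m.isWorking[node] = true
	m.stop[node] = stop
	go func() {
		ticker := time.NewTicker(m.interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				scrape(node)
			case <-stop:
				return
			}
		}
	}()
}

// Delete stops the scrape loop for the node, if any.
func (m *nodeScrapers) Delete(node string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.isWorking[node] {
		close(m.stop[node])
		delete(m.isWorking, node)
		delete(m.stop, node)
	}
}

func main() {
	m := &nodeScrapers{
		isWorking: map[string]bool{},
		stop:      map[string]chan struct{}{},
		interval:  100 * time.Millisecond,
	}
	m.Add("node-a", func(n string) { fmt.Println("scraping", n) })
	time.Sleep(350 * time.Millisecond)
	m.Delete("node-a")
}
```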

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Feb 25, 2022
@yangjunmyfm192085 yangjunmyfm192085 force-pushed the rewritingscraper branch 4 times, most recently from f7ec7d3 to a579b29 Compare February 25, 2022 08:34
@yangjunmyfm192085
Contributor Author

yangjunmyfm192085 commented Mar 1, 2022

I did a comparative performance test for the modification of metrics-server.
environment:

  • kind v1.23.0
  • 6 workers, 1 control-plane (limited by resources, I could create at most 6 workers)
  • 1000 pods
  • running watch kubectl top pods --all-namespaces

results:
cpu (v0.6.1: 20 mcore; after modification: 19 mcore -- measured with kubectl top pod):
v0.6.1
pprof-cpu1.tar.gz

after modification
pprof-cpu2.tar.gz

memory (v0.6.1: 45Mi; after modification: 40Mi -- measured with kubectl top pod):
v0.6.1
pprof-heap1.tar.gz

after modification
pprof-heap2.tar.gz

The number of nodes may be too small; from these test results there is no obvious difference in CPU and memory usage.
/cc @serathius @dgrisonnet

nodeLister: nodeLister,
kubeletClient: client,
scrapeTimeout: scrapeTimeout,
func NewManageNodeScraper(client client.KubeletMetricsGetter, scrapeTimeout time.Duration, metricResolution time.Duration, store storage.Storage) *manageNodeScraper {
Contributor

@serathius serathius Mar 1, 2022


Renaming to ManagedNodeScraper is not needed; I don't think we will want to have both implementations in the same codebase.

Contributor Author


ok, I will update it.

m.nodeLock.Lock()
defer m.nodeLock.Unlock()
if working, exists := m.isWorking[node.UID]; exists && working {
klog.V(1).ErrorS(fmt.Errorf("Scrape in node is already running"), "", "node", klog.KObj(node))
Contributor


You should not call verbose V(1) with ErrorS.

Let's change it to an Info log.
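
A minimal sketch of the change being suggested, wrapped in a hypothetical helper so it compiles on its own (the message text is an assumption, not the final wording):

```go
package scraper

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
)

// logAlreadyScraping is a hypothetical helper showing the suggested change:
// report the "already running" condition via InfoS at verbosity 1 instead of
// pairing V(1) with ErrorS.
func logAlreadyScraping(node *corev1.Node) {
	klog.V(1).InfoS("Scrape is already running for node", "node", klog.KObj(node))
}
```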

ticker := time.NewTicker(m.metricResolution)
defer ticker.Stop()

res, _ := m.ScrapeData(node)
Contributor


Why return error from ScrapeData if it's silenced everywhere?

Contributor Author


I will update it.

}
}()
if _, exists := m.isWorking[node.UID]; !exists || !m.isWorking[node.UID] {
Contributor


Suggested change
if _, exists := m.isWorking[node.UID]; !exists || !m.isWorking[node.UID] {
if !m.isWorking[node.UID] {
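
The suggestion relies on the fact that indexing a Go map with a missing key returns the zero value (false for bool), so the explicit existence check is redundant. A tiny self-contained illustration (the map and key here are made up for the example):

```go
package main

import "fmt"

func main() {
	isWorking := map[string]bool{}

	// A missing key yields the zero value, false, so both forms agree.
	_, exists := isWorking["node-a"]
	longForm := !exists || !isWorking["node-a"]
	shortForm := !isWorking["node-a"]

	fmt.Println(longForm, shortForm) // prints: true true
}
```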

startTime := myClock.Now()
baseCtx, cancel := context.WithCancel(context.Background())
Contributor


Why use WithCancel if cancel isn't used anywhere? Also, two lines below you use WithTimeout, which also gives you a cancel.
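
If the only requirement is a deadline, a single context.WithTimeout already returns the cancel function that should be deferred; a minimal sketch under that assumption (the timeout value and the placeholder work are made up):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// One context with a deadline is enough; there is no need to wrap it
	// in an extra WithCancel. Releasing resources via defer cancel() is required.
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	select {
	case <-time.After(10 * time.Millisecond): // placeholder for the scrape call
		fmt.Println("scrape finished")
	case <-ctx.Done():
		fmt.Println("scrape timed out:", ctx.Err())
	}
}
```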

requestTotal.WithLabelValues("false").Inc()
return nil, err
}
requestTotal.WithLabelValues("true").Inc()
return ms, nil
res := &storage.MetricsBatch{
Contributor


This merging is not needed at all.

klog.V(6).InfoS("Storing metrics from node", "node", klog.KObj(node))
m.storage.Store(res)
}
func (m *manageNodeScraper) UpdateNodeScraper(node *corev1.Node) error {
Contributor


Not sure how this differs from AddNodeScraper

Contributor Author


I have defined three functions, AddNodeScraper/UpdateNodeScraper/DeleteNodeScraper, based on the three node events handled in pkg/server/config.go, but I am not sure whether we need to handle the update event. If not, UpdateNodeScraper should be deleted; it is similar to AddNodeScraper.

func (m *manageNodeScraper) DeleteNodeScraper(node *corev1.Node) {
m.nodeLock.Lock()
defer m.nodeLock.Unlock()
if working, exists := m.isWorking[node.UID]; exists && working {
Contributor


Suggested change
if working, exists := m.isWorking[node.UID]; exists && working {
if m.isWorking[node.UID] {

Comment on lines 214 to 215
if _, exists := m.stop[node.UID]; exists {
m.stop[node.UID] <- struct{}{}
Contributor


Suggested change
if _, exists := m.stop[node.UID]; exists {
m.stop[node.UID] <- struct{}{}
if stop, exists := m.stop[node.UID]; exists {
stop <- struct{}{}

@@ -69,8 +72,8 @@ func (s *nodeStorage) GetMetrics(nodes ...*corev1.Node) ([]metrics.NodeMetrics,
}

func (s *nodeStorage) Store(batch *MetricsBatch) {
lastNodes := make(map[string]MetricsPoint, len(batch.Nodes))
prevNodes := make(map[string]MetricsPoint, len(batch.Nodes))
lastNodes := make(map[string]MetricsPoint, len(batch.Nodes)+len(s.last))
Contributor


No need to allocate space for batch.Nodes as the size of data stored should stay around the same level.

@@ -96,6 +99,17 @@ func (s *nodeStorage) Store(batch *MetricsBatch) {
}
}
}
newTimeStamp := time.Now()
Contributor


This code is pretty inefficient. Every time we scrape a node, we scan all nodes to check whether their metrics are outdated. This gives us O(n^2) time and memory complexity to scrape and store metrics from n nodes.

If this was not picked up in the tests you prepared, that means they were not accurate. Please look into writing a benchmark for this.
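
Not the actual metrics-server benchmark, but a self-contained sketch of the shape such a benchmark could take; toyPoint, toyStore, and makeBatch are stand-ins for the real MetricsPoint/MetricsBatch/nodeStorage types, and the real version would call nodeStorage.Store with batches of increasing node counts:

```go
package storage_test

import (
	"fmt"
	"testing"
	"time"
)

// toyPoint and toyStore stand in for the real MetricsPoint/nodeStorage types.
type toyPoint struct {
	ts       time.Time
	cpu, mem uint64
}

type toyStore struct{ last map[string]toyPoint }

func (s *toyStore) Store(batch map[string]toyPoint) {
	for name, p := range batch {
		s.last[name] = p
	}
}

func makeBatch(n int) map[string]toyPoint {
	batch := make(map[string]toyPoint, n)
	for i := 0; i < n; i++ {
		batch[fmt.Sprintf("node-%d", i)] = toyPoint{ts: time.Now(), cpu: 1, mem: 1}
	}
	return batch
}

// Running Store for increasing node counts makes super-linear behaviour
// (such as the O(n^2) rescan described above) visible as a growing
// per-node cost in ns/op.
func BenchmarkStore(b *testing.B) {
	for _, nodes := range []int{10, 100, 1000, 5000} {
		b.Run(fmt.Sprintf("nodes=%d", nodes), func(b *testing.B) {
			batch := makeBatch(nodes)
			s := &toyStore{last: map[string]toyPoint{}}
			b.ReportAllocs()
			b.ResetTimer()
			for i := 0; i < b.N; i++ {
				s.Store(batch)
			}
		})
	}
}
```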

Contributor Author


OK, I will optimize it.

Contributor


Let's focus on creating a benchmark that would detect the difference.

@serathius
Contributor

Comparing the current code with these results makes me think that we need to write a proper benchmark to reliably measure improvement.

Note: I treat this PR as a proof of concept that can be used to compare performance. @yangjunmyfm192085 no need to update/fix the tests.

/cc @shuaich

@k8s-ci-robot
Contributor

@serathius: GitHub didn't allow me to request PR reviews from the following users: shuaich.

Note that only kubernetes-sigs members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

Comparing the current code with these results makes me think that we need to write a proper benchmark to reliably measure improvement.

Note: I treat this PR as a proof of concept that can be used to compare performance. @yangjunmyfm192085 no need to update/fix the tests.

/cc @shuaich

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@yangjunmyfm192085 yangjunmyfm192085 force-pushed the rewritingscraper branch 4 times, most recently from ca43dde to 6f2b94e Compare March 7, 2022 12:08
Contributor Author

@yangjunmyfm192085 yangjunmyfm192085 left a comment


/test pull-metrics-server-test-e2e

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 9, 2022
@k8s-ci-robot k8s-ci-robot removed the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Mar 9, 2022
@yangjunmyfm192085
Contributor Author

In PR #993 and PR #972 I have added a benchmark for scrape; please help review it. If the benchmarks are OK, I'll do a comparison test.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 25, 2022
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 27, 2022
Contributor Author

@yangjunmyfm192085 yangjunmyfm192085 left a comment


/retest

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 4, 2022
@yangjunmyfm192085
Contributor Author

/remove-lifecycle stale
Waiting to find a good way to compare performance before and after the modification.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 4, 2022
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 16, 2022
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 17, 2022
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: yangjunmyfm192085
Once this PR has been reviewed and has the lgtm label, please assign s-urbaniak for approval by writing /assign @s-urbaniak in a comment. For more information see: The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 16, 2022
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 10, 2023
@k8s-ci-robot
Contributor

@yangjunmyfm192085: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 9, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jasonliu747

/reopen
/remove-lifecycle rotten

@k8s-ci-robot
Contributor

@jasonliu747: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen
/remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 20, 2023

func (s *server) tick(ctx context.Context, startTime time.Time) {
s.tickStatusMux.Lock()
s.tickLastStart = startTime


Correct me if I am wrong.
In this implementation, probeMetricCollectionTimely() will always return an error, right?

Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Consider rewriting scraper goroutine scheduling
6 participants