
ddtrace/tracer: report number of instrumentations used as health metric #3021

Draft · wants to merge 7 commits into main from apm-rd/health-metrics
Conversation

@hannahkm (Contributor) commented Dec 9, 2024

What does this PR do?

Creates and reports a new health metric, datadog.tracer.instrumentations, that counts the number of contribs the user has imported.

The metric carries two tags, instrumentation and version. The instrumentation tag holds the name of the contrib the user has imported, for example chi or http. The version tag holds the version of the contrib package imported into their code.
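As a rough illustration of the metric shape described above, the sketch below builds the instrumentation/version tag pairs for a set of detected contribs. The integration type and the instrumentationTags helper are hypothetical stand-ins, not dd-trace-go's actual types or API:

```go
package main

import "fmt"

// integration models a contrib package detected at tracer startup.
// The type and its fields are illustrative, not dd-trace-go internals.
type integration struct {
	name    string // e.g. "chi" or "http"
	version string // version of the imported contrib package
}

// instrumentationTags builds the tag set for the proposed
// datadog.tracer.instrumentations health metric: one tag for the
// contrib's name and one for its version.
func instrumentationTags(i integration) []string {
	return []string{
		"instrumentation:" + i.name,
		"version:" + i.version,
	}
}

func main() {
	detected := []integration{
		{name: "chi", version: "v5.0.12"},
		{name: "http", version: "v1.60.0"},
	}
	for _, i := range detected {
		// In the PR, each detected contrib would be reported once
		// with its own tag pair.
		fmt.Println(instrumentationTags(i))
	}
}
```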

Motivation

This information currently appears in the startup logs, but we also want to expose it to our support teams without the extra step of requesting those logs from the customer.

Reviewer's Checklist

  • Changed code has unit tests for its functionality at or near 100% coverage.
  • System-Tests covering this feature have been added and enabled with the va.b.c-dev version tag.
  • There is a benchmark for any new code, or changes to existing code.
  • If this interacts with the agent in a new way, a system test has been added.
  • Add an appropriate team label so this PR gets put in the right place for the release notes.
  • Non-trivial go.mod changes, e.g. adding new modules, are reviewed by @DataDog/dd-trace-go-guild.
  • For internal contributors, a matching PR should be created to the v2-dev branch and reviewed by @DataDog/apm-go.

Unsure? Have a question? Request a review!

@pr-commenter bot commented Dec 9, 2024

Benchmarks

Benchmark execution time: 2024-12-17 21:50:35

Comparing candidate commit 0327e10 in PR branch apm-rd/health-metrics with baseline commit 7f02289 in branch main.

Found 1 performance improvements and 0 performance regressions! Performance is the same for 58 metrics, 0 unstable metrics.

scenario:BenchmarkSetTagStringer-24

  • 🟩 execution_time [-5.395ns; -3.105ns] or [-3.744%; -2.154%]

@hannahkm hannahkm marked this pull request as ready for review December 10, 2024 22:10
@hannahkm hannahkm requested a review from a team as a code owner December 10, 2024 22:10
@hannahkm hannahkm requested a review from mtoffl01 December 10, 2024 22:10
@hannahkm hannahkm marked this pull request as draft December 12, 2024 18:06
@datadog-datadog-prod-us1 bot commented Dec 12, 2024

Datadog Report

Branch report: apm-rd/health-metrics
Commit report: 9a8c842
Test service: dd-trace-go

✅ 0 Failed, 5113 Passed, 70 Skipped, 2m 27s Total Time

@mtoffl01 (Contributor) left a comment:

2 comments.

  1. I thought the tag name we agreed upon was integration, not instrumentation 😆 . Whatever it is, it should be consistent between your PR and mine. That way, users can more effectively correlate their data.
  2. I wonder if iterating over all integrations (for name, conf := range c.integrations) will introduce a performance cost that is.. not worth it. We can get this information from startup logs (Limitations: We need to solicit these logs from the customer and they can be disabled [rare]) and we can get this information from telemetry (Limitations: .... Metabase is not the easiest to use...). I'm not even sure that a "metric" is the right data type to support this information. Curious what other team members, e.g. @darccio or @rodfalcon think about this.

ddtrace/tracer/tracer.go (review thread: outdated, resolved)
@hannahkm (Contributor, Author) replied:

@mtoffl01 Ah, you're right. All these I-words look the same 😵 . Let me adapt that.

Regarding your second point, I was hoping that, by putting the for loop into newTracer, we could reduce how much this is getting run (i.e. once, when the tracer starts for the first time, I think). But I agree that there's probably a very slim margin of profit that we could get from this, especially given that all of this is in the startup logs already.
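The one-time startup loop discussed here could be sketched as below. This is only an illustration of the idea (iterate c.integrations once, when the tracer first starts); the config type, its integrations field, and reportIntegrations are hypothetical, and the real PR would emit a statsd metric rather than print:

```go
package main

import (
	"fmt"
	"sync"
)

// config stands in for the tracer configuration; integrations maps a
// contrib name to its configuration. Field names are illustrative.
type config struct {
	integrations map[string]struct{ version string }
}

var reportOnce sync.Once

// reportIntegrations iterates the enabled contribs exactly once,
// mirroring the idea of paying the loop's cost a single time when the
// tracer starts. It returns how many integrations were reported on
// this call (0 on every call after the first).
func reportIntegrations(c *config) int {
	n := 0
	reportOnce.Do(func() {
		for name, conf := range c.integrations {
			n++
			// The real PR would emit a health metric here instead.
			fmt.Printf("instrumentation:%s version:%s\n", name, conf.version)
		}
	})
	return n
}

func main() {
	c := &config{integrations: map[string]struct{ version string }{
		"chi":  {version: "v5.0.12"},
		"http": {version: "v1.60.0"},
	}}
	fmt.Println("reported:", reportIntegrations(c)) // prints "reported: 2"
	fmt.Println("reported:", reportIntegrations(c)) // no-op: prints "reported: 0"
}
```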

I'm not even sure that a "metric" is the right data type to support this information.

Yeah... is this even a health metric? 🤔
