
Added KafkaMetrics internal support with MetricsReporter #815

Closed
wants to merge 1 commit

Conversation

LeonDaniel

No description provided.

@ivantopo
Contributor

Hey @LeonDaniel, thanks a lot for the effort!

I would like to propose a change of direction for this PR: instead of sampling the pre-calculated metric values using the MetricsReporter interface, why don't we instrument Kafka's Sensor class and update the corresponding Kamon metrics on every call to Sensor.record()?

The reasoning for the proposal is that we are losing all the juicy details of the data by only sampling the aggregated values. For example, see the Join Sensor initialization from Kafka:

            this.joinSensor = metrics.sensor("join-latency");
            this.joinSensor.add(metrics.metricName("join-time-avg",
                this.metricGrpName,
                "The average time taken for a group rejoin"), new Avg());
            this.joinSensor.add(metrics.metricName("join-time-max",
                this.metricGrpName,
                "The max time taken for a group rejoin"), new Max());
            this.joinSensor.add(createMeter(metrics, metricGrpName, "join", "group joins"));

It creates four metrics for the same data:

  • join-time-avg
  • join-time-max
  • join-rate
  • join-total

But in Kamon, all of those individual metrics would usually be covered by a single histogram-backed metric. The entire distribution of values is captured, so we get the min, max, all percentiles, and so on. The rate and total metrics can be derived from the count of recordings in the histogram. We would be getting a lot more output from the same input data.
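To make the comparison concrete, here is a minimal plain-Scala sketch (no Kamon dependency; the latency values are made-up illustrative numbers) showing how avg, max, total, and any percentile all fall out of one collection of raw recordings, which is exactly what a histogram of `Sensor.record()` values would hold:

```scala
// Raw join latencies in ms, as Sensor.record() would see them.
// One set of raw values subsumes Kafka's four pre-aggregated join metrics.
object HistogramSketch {
  val recordings: Vector[Double] = Vector(12.0, 15.0, 9.0, 30.0, 11.0)

  def avg: Double = recordings.sum / recordings.size // join-time-avg
  def max: Double = recordings.max                   // join-time-max
  def total: Int  = recordings.size                  // join-total (rate = total / window)

  // Bonus: any percentile, which the pre-aggregated metrics cannot give you.
  def percentile(p: Double): Double = {
    val sorted = recordings.sorted
    val idx    = (math.ceil(p * sorted.size).toInt - 1).max(0).min(sorted.size - 1)
    sorted(idx)
  }

  def main(args: Array[String]): Unit =
    println(s"avg=$avg max=$max total=$total p50=${percentile(0.5)}")
}
```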

I think the goal from the Kafka folks was to expose pre-computed stats that can be used exactly as they are, but the Kamon way is more about collecting the raw data in the best possible form and then shipping it somewhere else to get the stats.

Would you like to give these changes a try? I'm all in for having a more in-depth chat about it and guiding you through the implementation :)
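For orientation, the proposed direction might look roughly like the sketch below. This is NOT working code: `org.apache.kafka.common.metrics.Sensor` and its `record()` method are real, and the `onType(...).advise(...)` shape follows Kanela's `InstrumentationBuilder`, but the exact advice wiring, imports, and the metric name `kafka.sensor.recorded-values` are assumptions for illustration only. A real module would derive a proper metric name and tags from each sensor.

```scala
import kamon.Kamon
import kanela.agent.api.instrumentation.InstrumentationBuilder
import kanela.agent.libs.net.bytebuddy.asm.Advice

class SensorInstrumentation extends InstrumentationBuilder {
  // Intercept every Sensor.record() call instead of sampling
  // the pre-aggregated values through MetricsReporter.
  onType("org.apache.kafka.common.metrics.Sensor")
    .advise(method("record"), classOf[SensorRecordAdvice])
}

object SensorRecordAdvice {
  @Advice.OnMethodEnter
  def enter(@Advice.Argument(0) value: Double): Unit =
    // Hypothetical metric name; a real implementation would use the sensor's name.
    Kamon.histogram("kafka.sensor.recorded-values").withoutTags().record(value.toLong)
}
```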

@leon-daniel
Contributor

Sounds like a good idea. I think I understand the general direction as it is more in tune with the rest of the instrumentation modules.
I am a bit unfamiliar with the Kanela InstrumentationBuilder part, but I am willing to give it a try. When you have some time for a quick sync, I am available for a Google Meet video call (at [email protected]). I can set up the meeting, but I don't have your Google email address. Let me know what time works for you. Thank you!

@LeonDaniel
Author

I will close this PR and continue the work here: #820

@LeonDaniel LeonDaniel closed this Jul 29, 2020