Conversation

@kkondaka kkondaka commented Dec 1, 2025

Description

Improved performance of the PrometheusTimeSeries class by:

  • removing repeated map flattening, which was called multiple times for different attribute maps
    and created a new HashMap each time
  • removing string concatenations in loops
  • eliminating redundant Label creation

Also added a negative-credentials-check integration test.
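A minimal sketch of the first optimization, assuming a lazily cached flattened map (all names here are illustrative, not the actual Data Prepper code): flatten the nested attribute map once and reuse the result, rather than rebuilding a new HashMap on every call.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: caches the flattened attribute map so repeated
// lookups do not rebuild a new HashMap each time. Names are hypothetical.
class AttributeFlattener {
    private final Map<String, Object> nested;
    private Map<String, String> flattenedCache; // computed lazily, then reused

    AttributeFlattener(final Map<String, Object> nested) {
        this.nested = nested;
    }

    Map<String, String> getFlattened() {
        if (flattenedCache == null) { // flatten only once
            flattenedCache = new HashMap<>();
            flatten("", nested, flattenedCache);
        }
        return flattenedCache;
    }

    @SuppressWarnings("unchecked")
    private static void flatten(final String prefix, final Map<String, Object> source,
                                final Map<String, String> target) {
        for (Map.Entry<String, Object> entry : source.entrySet()) {
            final String key = prefix.isEmpty() ? entry.getKey() : prefix + "." + entry.getKey();
            final Object value = entry.getValue();
            if (value instanceof Map) {
                flatten(key, (Map<String, Object>) value, target);
            } else {
                target.put(key, String.valueOf(value));
            }
        }
    }
}
```

Callers that previously triggered a fresh flattening pass per attribute map now share one cached map instance.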

Issues Resolved

Resolves #[Issue number to be closed when this PR is merged]

Check List

  • [x] New functionality includes testing.
  • [ ] New functionality has a documentation issue. Please link to it in this PR.
    • [ ] New functionality has javadoc added
  • [x] Commits are signed with a real name per the DCO

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

public List<TimeSeries> getTimeSeriesList() {
    return timeSeriesList;
}

private int estimateLabelSize(String name, String value) {
    return name.length() + value.length() + 8; // Approximate protobuf overhead
}
Member

Make 8 a constant and name it APPROXIMATE_PROTOBUF_OVERHEAD.
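A minimal sketch of the suggested change (the constant name comes from the comment; the surrounding class is hypothetical):

```java
// Sketch of the reviewer's suggestion: name the magic number.
class LabelSizeEstimator {
    // Approximate per-label protobuf overhead: field tags and length prefixes.
    private static final int APPROXIMATE_PROTOBUF_OVERHEAD = 8;

    static int estimateLabelSize(final String name, final String value) {
        return name.length() + value.length() + APPROXIMATE_PROTOBUF_OVERHEAD;
    }
}
```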

size += label.toByteArray().length;
Sample sample = Sample.newBuilder().setValue(sampleValue).setTimestamp(timestamp).build();
size += sample.toByteArray().length;
size += estimateLabelSize(labelName, labelValue) + 16; // Sample overhead
Member

Again, make 16 into a constant named SAMPLE_OVERHEAD. Better yet, is there a way to derive it as a constant?
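One way to derive it, sketched below on the assumption that the remote-write Sample message has field 1 as a double value (fixed64) and field 2 as an int64 timestamp (varint): compute the exact wire size from the protobuf encoding rules instead of hard-coding 16.

```java
// Sketch: derive the Sample wire size from protobuf encoding rules rather
// than hard-coding 16. Assumes the remote-write Sample schema:
//   field 1: double value (fixed64), field 2: int64 timestamp (varint).
class SampleSizeEstimator {
    // Number of bytes a non-negative long occupies as a protobuf varint.
    static int varintSize(final long n) {
        return (64 - Long.numberOfLeadingZeros(n | 1) + 6) / 7;
    }

    static int sampleWireSize(final long timestampMillis) {
        final int valueField = 1 + 8;                               // 1 tag byte + fixed64 double
        final int timestampField = 1 + varintSize(timestampMillis); // 1 tag byte + varint
        return valueField + timestampField;
    }
}
```

For millisecond timestamps of the current era the varint takes 6 bytes, so the total is 16, matching the hard-coded value; smaller timestamps encode smaller.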

return timestamp;
}
public List<TimeSeries> getTimeSeriesList() { return timeSeriesList; }
public long getTimeStamp() { return timestamp; }
Member

"Timestamp" is a single word, so this should be getTimestamp().

public List<TimeSeries> getTimeSeriesList() {
    return timeSeriesList;
}

private int estimateLabelSize(String name, String value) {
    return name.length() + value.length() + 8; // Approximate protobuf overhead
}
Member

Are you looking for length or size? This may not give you the actual value you are looking for.

I appreciate the desire to cut the extra buffer. But, the calculation may be incorrect. Could you add to the Remote.WriteRequest.Builder and get the value at that point to avoid double buffering?

Collaborator Author

The 8-byte overhead accounts for the on-wire overhead of the labels: protobuf field tags and length markers. In any case, it is only an approximation.
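On the length-vs-size question above: `String.length()` counts UTF-16 code units, while protobuf encodes strings as UTF-8 bytes, so the estimate undercounts non-ASCII label values. A small sketch of a byte-accurate variant (class name hypothetical):

```java
import java.nio.charset.StandardCharsets;

// Sketch: String.length() counts UTF-16 code units; protobuf strings are
// length-prefixed UTF-8, so the byte count can be larger for non-ASCII text.
class Utf8Size {
    static int utf8Length(final String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }
}
```

For example, `"µs".length()` is 2 but its UTF-8 encoding is 3 bytes. Note that `getBytes` allocates a temporary array; a counting loop over code points would avoid that if this sat on a hot path.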

Sample sample = Sample.newBuilder().setValue(sampleValue).setTimestamp(timestamp).build();
size += sample.toByteArray().length;
final String labelValue, final Double sampleValue) {
size += estimateLabelSize(NAME_LABEL, metricName) + estimateLabelSize(labelName, labelValue) + 16;
Member

What is 16 for? Similar to my other comments, make this a constant and give it a helpful name. e.g. INDIVIDUAL_TIME_SERIES_OVERHEAD.

@dlvenable
Member

These are good performance improvements.

You should include a JMH test. See data-prepper-expression and http-source for examples of doing this.

@kkondaka
Collaborator Author

kkondaka commented Dec 2, 2025

@dlvenable I will add a JMH benchmark in a separate PR.
