Eventually visible benchmark #7700
Codecov Report ✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##               main    #7700   +/-  ##
=========================================
  Coverage     90.12%   90.13%
- Complexity     7187     7192     +5
=========================================
  Files           814      814
  Lines         21700    21713    +13
  Branches       2123     2127     +4
=========================================
+ Hits          19557    19570    +13
  Misses         1477     1477
  Partials        666      666
So, non-volatile is the fastest, but we know it may not satisfy "eventually visible", correct? Did your benchmark measure "time to visibility" across multiple threads, or just throughput with different implementations?
The "eventual visibility" implementations rely on a non-volatile access counter to be "eventual", though that's also what destroys the performance.
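For readers following along, a minimal sketch of what a counter-based "eventually visible" flag might look like (illustrative only, not the PR's actual code): a plain cached field is served on the fast path, and a plain access counter forces a volatile re-read every N calls, bounding staleness per reader thread.

```java
// Hypothetical sketch, not opentelemetry-java source: an "eventually visible"
// boolean that only pays for a volatile read once every REFRESH_INTERVAL calls.
class EventuallyVisibleBoolean {
    private static final int REFRESH_INTERVAL = 100;

    private volatile boolean source;  // writers publish updates here
    private boolean cached;           // plain field: fast reads, may be stale
    private int accessCount;          // plain counter: bounds staleness per reader

    EventuallyVisibleBoolean(boolean initial) {
        this.source = initial;
        this.cached = initial;
    }

    void set(boolean value) {
        source = value;
    }

    boolean get() {
        // Every REFRESH_INTERVAL reads, do one volatile read to pick up changes;
        // otherwise return the possibly-stale cached value.
        if (++accessCount >= REFRESH_INTERVAL) {
            accessCount = 0;
            cached = source;
        }
        return cached;
    }
}
```

Note the trade-off the comment describes: the counter increment and branch are exactly what keeps the JIT from collapsing the reads, which is where the cost comes from.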
I guess I'm asking what the "score" represents in the benchmarks... time to visibility?
Ah, the number of nanoseconds to perform 100 boolean reads on the same thread. This is where non-volatile shines, because the JIT compiler can optimize it to do a single memory read.
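The shape being measured is presumably something like the loop below (an assumed reconstruction, not the benchmark source). With a plain field the JIT is free to hoist the read out of the loop, since no action in the loop can change it from this thread's point of view; a volatile field must be re-read on every iteration.

```java
// Illustrative reconstruction of "100 boolean reads on the same thread".
class ReadLoop {
    boolean plainFlag = true;             // plain read: eligible for hoisting by the JIT
    volatile boolean volatileFlag = true; // volatile read: must be performed each iteration

    int countPlain() {
        int trues = 0;
        for (int i = 0; i < 100; i++) {
            if (plainFlag) {  // may compile down to a single load before the loop
                trues++;
            }
        }
        return trues;
    }

    int countVolatile() {
        int trues = 0;
        for (int i = 0; i < 100; i++) {
            if (volatileFlag) {  // 100 real loads; also inhibits other loop optimizations
                trues++;
            }
        }
        return trues;
    }
}
```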
If I'm reading the results correctly, non-volatile is much, much faster than everything else. This is what I suspected based on research about the perf penalty of the volatile keyword, but hadn't tested myself. I'm reluctant to add this penalty when only a small percentage of users will ever take advantage of the dynamism that requires it. The penalty has to be paid on the hot path of metrics, logs, and traces. See my old big post on metric systems for some ballpark figures on time to record measurements; I think the volatile keyword moves the needle in a meaningful way: https://opentelemetry.io/blog/2024/java-metric-systems-compared/#metrics-primer
We could make it a setting. I.e., when you initialize the SDK, indicate whether you intend to use dynamic config. If so, we substitute an implementation that guarantees eventual consistency. If not, we use an implementation which is fast and doesn't waste perf checking for config changes that will never come.
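A rough sketch of that idea (all names are illustrative, not the SDK's API): pick the flag implementation once at initialization, so users who never touch dynamic config never pay for visibility guarantees.

```java
// Hypothetical sketch of the proposed setting, not opentelemetry-java API.
interface Flag {
    boolean get();
    void set(boolean value);
}

final class VolatileFlag implements Flag {
    private volatile boolean value;  // updates are immediately visible to readers
    public boolean get() { return value; }
    public void set(boolean v) { value = v; }
}

final class PlainFlag implements Flag {
    private boolean value;  // no cross-thread visibility guarantee; fine if config never changes
    public boolean get() { return value; }
    public void set(boolean v) { value = v; }
}

final class Flags {
    // Chosen once at SDK initialization based on the user's stated intent.
    static Flag create(boolean dynamicConfigEnabled, boolean initial) {
        Flag flag = dynamicConfigEnabled ? new VolatileFlag() : new PlainFlag();
        flag.set(initial);
        return flag;
    }
}
```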
Do you really believe that the nanoseconds of overhead would be noticeable? What is the sense of adding a feature for which there is no guarantee that it will work? I suggest checking the speedup factor between non-volatile and volatile in an end-to-end scenario (e.g. emitting a log record on a logger that is disabled).
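The end-to-end scenario being suggested looks roughly like this (an illustrative stand-in, not the SDK's logger): the disabled path does nothing but read the flag and branch, so swapping the field between volatile and plain isolates exactly the cost under debate.

```java
// Illustrative stand-in for "emitting a log record on a disabled logger".
final class GuardedLogger {
    private volatile boolean enabled;  // the field under test: make it plain to compare

    int emitted;  // counts records that passed the guard (for illustration only)

    void setEnabled(boolean e) { enabled = e; }

    void log(String message) {
        if (!enabled) {
            return;  // disabled fast path: cost is roughly one flag read plus a branch
        }
        emitted++;  // stand-in for building and exporting the record
    }
}
```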
Keep in mind what JMH prints out. Firstly, I didn't try to run these or attempt to verify why the numbers are what they are.
Yes, especially for metrics. Perf arguments are brought up as reasons not to use otel. As for the "guaranteed to work" argument:
For transparency: open-telemetry/opentelemetry-specification#4645 (comment)
Investigating implementation options for open-telemetry/opentelemetry-specification#4645
TL;DR - I couldn't conjure up an implementation that satisfies "eventually visible" that was faster than just using volatile (immediately visible)
Java 17 Results
BooleanStateBenchmark
Java 24 Results
BooleanStateBenchmark