Rework and publish metric benchmarks #8000
base: main
Conversation
```diff
 with:
   tool: 'jmh'
-  output-file-path: sdk/trace/build/jmh-result.json
+  output-file-path: sdk/all/build/jmh-result.json
```
@tylerbenson this benchmark-action only allows a single output file path. This means we need all the published benchmarks to be in a single module, so that we can run them with a single `java -jar *-jmh.jar ...` command. I think the opentelemetry-sdk artifact is a good spot for this.

This turns out to be a useful constraint, as I think it will be nice to have all the public benchmarks colocated.
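As a sketch, the workflow change amounts to pointing the action's single output path at the colocated module. The step name and action version below are assumptions, not taken from this PR; only `tool: 'jmh'` and the `output-file-path` value come from the diff:

```yaml
# Hypothetical fragment of the benchmark workflow: benchmark-action
# accepts exactly one output-file-path, so every published benchmark
# must write into this single JMH result file.
- name: Store benchmark result
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'jmh'
    output-file-path: sdk/all/build/jmh-result.json
```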
LGTM!
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@             Coverage Diff             @@
##               main    #8000    +/-  ##
=========================================
  Coverage     90.13%   90.14%
- Complexity     7469     7471     +2
=========================================
  Files           833      833
  Lines         22523    22524     +1
  Branches       2234     2235     +1
=========================================
+ Hits          20301    20304     +3
+ Misses         1517     1515     -2
  Partials        705      705
```

☔ View full report in Codecov by Sentry.
tylerbenson left a comment:
While you're at it, you might consider squashing the commit history for the results branch, as it continues to grow with each run, especially since this change will invalidate prior results.
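One way to do that is to recreate the results branch from an orphan commit. This is a sketch only; the branch name `benchmarks` is an assumption, not something stated in this PR:

```shell
# Hypothetical sketch: collapse the results branch to a single commit.
# Assumes the published results live on a branch named "benchmarks".
git checkout benchmarks
git checkout --orphan benchmarks-squashed   # same working tree, no history
git commit -m "Squash benchmark result history"
git branch -M benchmarks-squashed benchmarks
git push --force origin benchmarks          # rewrite the remote branch
```

The orphan checkout keeps the current tree and index but starts an unborn branch, so the follow-up commit becomes a new root with no parents.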
As mentioned in #7986, I've been working through some ideas to improve the performance of the metric SDK under high contention.
To illustrate the impact of these changes, I've reworked `MetricsBenchmark` to include dimensions that impact record performance. The set of dimensions that play some role include:

That forms 2 × 2 × 2 × 2 × 2 × 2 × 5 = 320 unique test cases, which is just impractical. So I narrowed it down to the most meaningful dimensions:
With these eliminated, we're down to 2 × 2 × 2 × 5 = 40 test cases, which is more reasonable.
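The arithmetic above can be checked with a quick stdlib sketch of how JMH `@Param` dimensions multiply into distinct benchmark configurations. The dimension names and values below are illustrative placeholders, not the benchmark's actual parameters:

```java
// Sketch: the number of distinct JMH benchmark configurations is the
// product of the sizes of its @Param dimensions.
public class ParamSpace {

  // Number of distinct configurations = product of dimension sizes.
  static int countCombinations(String[][] dimensions) {
    int combinations = 1;
    for (String[] dimension : dimensions) {
      combinations *= dimension.length;
    }
    return combinations;
  }

  public static void main(String[] args) {
    // Hypothetical retained dimensions: three two-valued, one five-valued.
    String[][] retained = {
      {"cumulative", "delta"},     // e.g. aggregation temporality
      {"counter", "histogram"},    // e.g. instrument type
      {"reusable", "immutable"},   // e.g. attributes strategy
      {"1", "2", "4", "8", "16"},  // e.g. a five-valued dimension
    };
    System.out.println(countCombinations(retained)); // 2 * 2 * 2 * 5 = 40
  }
}
```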
I'm also using this as an opportunity to finish what @tylerbenson started and get into the routine of running benchmarks on each change on dedicated hardware, and publishing the results on https://open-telemetry.github.io/opentelemetry-java/benchmarks/
The unfinished problem was that the benchmarks in this repo are micro benchmarks. They're not very meaningful for end users and may even do more harm than good. What we need is a curated set of somewhat high-level benchmarks, intentionally built to demonstrate and report on the types of performance characteristics that matter to end users.
This revamped `MetricRecordBenchmark` is the first of these. I will follow up with dedicated benchmarks for other areas:

For reference, here are the results of the revamped `MetricRecordBenchmark` on my machine: