
[BENCHMARK] Add support for opensearch-benchmark's compare feature #4837

Closed
rishabh6788 opened this issue Jul 10, 2024 · 2 comments
Labels: enhancement (New Enhancement)

rishabh6788 (Collaborator)

Is your feature request related to a problem? Please describe

As of today we use opensearch-benchmark's execute-test API to run performance benchmark tests on a nightly as well as an ad-hoc basis.
We are also adding support to run performance benchmarks on OpenSearch pull requests.

So, while a developer is working on their changes, apart from getting performance benchmark numbers published on the PR, they might also want to know how their changes compare against a baseline.

Describe the solution you'd like

Fortunately, opensearch-benchmark's compare command solves this problem for us. It only requires the baseline and contender test-execution IDs as input parameters and publishes a detailed comparison result.
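
For reference, a minimal invocation might look like the sketch below. The two IDs are placeholders for test-execution IDs produced by earlier execute-test runs, and the exact flag names should be confirmed against the opensearch-benchmark version in use:

```sh
# Sketch: compare a contender test run against a baseline run.
# <baseline-test-execution-id> and <contender-test-execution-id> are
# placeholders for IDs returned by earlier execute-test runs.
opensearch-benchmark compare \
  --baseline=<baseline-test-execution-id> \
  --contender=<contender-test-execution-id>
```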

We need to add this feature to the existing benchmark workflow and add a job in Jenkins, similar to benchmark-test, to execute this workflow on an ad-hoc basis as well as on pull requests.

Describe alternatives you've considered

No response

Additional context

No response

rishabh6788 added the enhancement (New Enhancement) and untriaged (Issues that have not yet been triaged) labels on Jul 10, 2024
rishabh6788 (Collaborator, Author)

Assigning this to @OVI3D0

OVI3D0 (Member) commented Jul 15, 2024

Commenting for assignment
