Summary
To track query engine performance and avoid performance regressions, we should have a generic microbenchmark framework that runs with each nightly release.
Microbenchmarks are intended as a routine way to catch performance regressions and play a supplementary role in performance engineering. Compared with e2e benchmarks, microbenchmarks rely on static data and aim to catch regressions on each release (or commit) by running the same queries against different storage backends.
Each performance test should be able to target different storage backends for a given number of iterations, and should scale up to cluster-level tests.
A performance benchmark should behave like the following pseudocode:
```
def bench_s3(i: iteration, r: reference) {
    c = init_query_cluster(s3)
    execute_query(i)
    collect_metrics(c)
    report compare_with(r)
}
```
Output:
Query metrics for each benchmark query
Generic summarizations
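A minimal sketch of that flow in plain Rust, assuming hypothetical `init_query_cluster` / `execute_query` helpers as placeholders for the real engine entry points (the stubs below only measure elapsed time):

```rust
use std::time::{Duration, Instant};

// Hypothetical storage backends the benchmark can target.
enum Backend {
    S3,
    LocalFs,
}

// Placeholder for bringing up (or connecting to) a query cluster on the backend.
fn init_query_cluster(_backend: &Backend) {}

// Placeholder for running one benchmark query and returning its latency.
fn execute_query(_sql: &str) -> Duration {
    let start = Instant::now();
    // The real implementation would dispatch the query to the cluster here.
    start.elapsed()
}

// Run `iterations` rounds of a query, report per-query metrics, and compare
// the average latency with a reference value from a previous release.
fn bench_backend(backend: Backend, iterations: u32, reference: Duration) {
    init_query_cluster(&backend);
    let total: Duration = (0..iterations)
        .map(|_| execute_query("SELECT * FROM bench_table"))
        .sum();
    let avg = total / iterations;
    println!("avg latency: {avg:?}, reference: {reference:?}");
}

fn main() {
    bench_backend(Backend::S3, 100, Duration::from_millis(50));
    bench_backend(Backend::LocalFs, 100, Duration::from_millis(20));
}
```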
Possible implementation
Adopt a CI tool such as Cirrus CI, which provides a consistent container environment running on k8s, to run the benchmarks in a uniform way.
Criterion.rs (https://bheisler.github.io/criterion.rs/book/criterion_rs.html) may be a good library for implementing the microbenchmarks.
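As a rough illustration (not a committed design), the pseudocode above could map onto Criterion.rs roughly like this; `run_query_against`, the query text, and the backend names are hypothetical placeholders for the real query-engine entry points:

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};

// Hypothetical helper that dispatches a query to the given storage backend.
fn run_query_against(backend: &str, sql: &str) {
    let _ = (backend, sql);
}

// One Criterion benchmark group, parameterized over storage backends so the
// same query is measured against each backend.
fn bench_select_all(c: &mut Criterion) {
    let mut group = c.benchmark_group("select_all");
    for backend in ["s3", "local_fs"] {
        group.bench_with_input(BenchmarkId::from_parameter(backend), &backend, |b, backend| {
            b.iter(|| run_query_against(backend, "SELECT * FROM bench_table"));
        });
    }
    group.finish();
}

criterion_group!(benches, bench_select_all);
criterion_main!(benches);
```

Criterion can also save a named baseline (`cargo bench -- --save-baseline <name>`) and compare later runs against it (`-- --baseline <name>`), which would cover the `compare_with(r)` step in the pseudocode.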
Should fix:
#3084
Reference:
http://www.vldb.org/pvldb/vol13/p3285-gruenheid.pdf