feat: Record Arrow FFI metrics #1128

Status: Closed · 2 commits

Conversation

@andygrove (Member) commented on Nov 29, 2024:

Which issue does this PR close?

N/A

Rationale for this change

This is a subset of #1111, separated out to make reviews easier.

What changes are included in this PR?

Record time spent performing Arrow FFI to transfer batches between JVM and Rust code.

Note that these timings won't be fully exposed to Spark UI until we merge #1111.

How are these changes tested?

@andygrove marked this pull request as ready for review on November 29, 2024 17:32
@@ -88,6 +90,7 @@ impl ScanExec {
) -> Result<Self, CometError> {
let metrics_set = ExecutionPlanMetricsSet::default();
let baseline_metrics = BaselineMetrics::new(&metrics_set, 0);
let arrow_ffi_time = MetricBuilder::new(&metrics_set).subset_time("arrow_ffi_time", 0);
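For readers unfamiliar with DataFusion's metrics API, here is a minimal sketch of how a subset_time metric such as arrow_ffi_time is typically created and consumed: the builder returns a Time, and a scoped timer guard accumulates elapsed time for whatever section it wraps. The ScanMetrics struct and timed_transfer helper below are hypothetical illustrations, not code from this PR.

```rust
use datafusion::physical_plan::metrics::{
    BaselineMetrics, ExecutionPlanMetricsSet, MetricBuilder, Time,
};

// Hypothetical holder mirroring the fields created in the diff above.
struct ScanMetrics {
    baseline_metrics: BaselineMetrics,
    arrow_ffi_time: Time,
}

impl ScanMetrics {
    fn new(metrics_set: &ExecutionPlanMetricsSet, partition: usize) -> Self {
        Self {
            baseline_metrics: BaselineMetrics::new(metrics_set, partition),
            arrow_ffi_time: MetricBuilder::new(metrics_set)
                .subset_time("arrow_ffi_time", partition),
        }
    }

    // Hypothetical call site: time a closure that performs the JVM -> Rust
    // batch transfer. The elapsed time is added to arrow_ffi_time when the
    // scoped timer guard is dropped.
    fn timed_transfer<T>(&self, transfer: impl FnOnce() -> T) -> T {
        let timer = self.arrow_ffi_time.timer();
        let result = transfer();
        drop(timer); // stop the timer and record the elapsed nanoseconds
        result
    }
}
```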
@andygrove (Member, Author):
I see now that this isn't just FFI time. It is the cost of calling CometBatchIterator.next(), so it includes the cost of that method getting the next input batch as well as the FFI export cost ...

@andygrove (Member, Author):

Moving this to draft for now while I think about this more

Contributor:

I'd be astonished if arrow_ffi has a substantial cost. It is, after all, zero-copy.

@andygrove (Member, Author):

It is zero-copy for the data buffers, but the schema does get serialized with each batch. However, it does not look to be an issue after all.

Contributor:

> but the schema does get serialized with each batch

fair point
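To illustrate the schema point above with a generic arrow-rs sketch (this uses the arrow crate's ffi module directly and is not Comet's actual JVM transfer path): exporting an array over the C Data Interface produces both an FFI_ArrowArray, which points at the existing buffers zero-copy, and an FFI_ArrowSchema, which is a freshly serialized description of the type, so the schema portion is paid for every exported batch.

```rust
use arrow::array::{Array, ArrayData, Int32Array};
use arrow::ffi::{from_ffi, to_ffi};

fn roundtrip_one_batch() -> Result<(), arrow::error::ArrowError> {
    let batch_column = Int32Array::from(vec![1, 2, 3]);

    // Data buffers are shared zero-copy via FFI_ArrowArray, but an
    // FFI_ArrowSchema describing the type is produced for every export.
    let (ffi_array, ffi_schema) = to_ffi(&batch_column.into_data())?;

    // The importing side reconstructs ArrayData from both structs.
    let imported: ArrayData = unsafe { from_ffi(ffi_array, &ffi_schema)? };
    assert_eq!(imported.len(), 3);
    Ok(())
}
```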

@comphead (Contributor) left a review comment:

LGTM, thanks @andygrove.
One thing I'd like to mention: are you planning to keep this permanently, or to enable these internal metrics based on some Spark config key, so that resources are spent on metrics only when they are really needed?

@andygrove marked this pull request as draft on December 2, 2024 18:27
@andygrove closed this on Dec 2, 2024
@andygrove deleted the arrow-ffi-metric branch on January 14, 2025 18:37