[SPARK-24117][SQL] Unified the getSizePerRow #21189
Conversation
Test build #89955 has finished for PR 21189 at commit
@@ -178,7 +179,7 @@ class MemoryDataWriter(partition: Int, outputMode: OutputMode)
 * Used to query the data that has been written into a [[MemorySinkV2]].
 */
case class MemoryPlanV2(sink: MemorySinkV2, override val output: Seq[Attribute]) extends LeafNode {
  private val sizePerRow = output.map(_.dataType.defaultSize).sum
I wouldn't think it's possible.
  sink.addBatch(1, 4 to 6)
  plan.invalidateStatsCache()
- assert(plan.stats.sizeInBytes === 24)
+ assert(plan.stats.sizeInBytes === 72)
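The jump from 24 to 72 follows from adding a fixed per-row overhead to the size estimate. Assuming a single Int column (defaultSize = 4) and six rows in total across the two batches, the arithmetic can be sketched as follows (the object and value names here are hypothetical, for illustration only):

```scala
// Hypothetical arithmetic behind the updated assertion, assuming a single
// Int column (defaultSize = 4) and two batches of three rows each.
object StatsArithmetic {
  val rows = 6               // rows 1 to 3 plus rows 4 to 6
  val oldSizePerRow = 4      // column defaultSize only
  val newSizePerRow = 4 + 8  // column defaultSize plus 8-byte row overhead

  def main(args: Array[String]): Unit = {
    println(rows * oldSizePerRow) // 24, the old expected value
    println(rows * newSizePerRow) // 72, the new expected value
  }
}
```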
MemorySinkV2 is mainly for testing. I think the stats changes will not impact anything, right? @tdas @jose-torres
It shouldn't impact anything, but abstractly it seems strange that this unification would cause the stats to change? What are we doing differently to cause this, and how confident are we this won't happen to production sinks?
It seems we previously forgot to count the per-row object overhead (8 bytes) in the memory stream.
SGTM then
- val childRowSize = p.child.output.map(_.dataType.defaultSize).sum + 8
- val outputRowSize = p.output.map(_.dataType.defaultSize).sum + 8
+ val childRowSize = EstimationUtils.getSizePerRow(p.child.output)
+ val outputRowSize = EstimationUtils.getSizePerRow(p.output)
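The unified helper replaces the repeated `defaultSize` sum with a single shared computation. A minimal, self-contained sketch of what it computes is shown below; the `Attribute` case class here is a simplified stand-in for Catalyst's class, not Spark's actual API:

```scala
// A self-contained sketch of the unified per-row size estimate.
// Attribute is a simplified stand-in for Catalyst's Attribute, which
// carries a DataType with a defaultSize.
object GetSizePerRowSketch {
  final case class Attribute(name: String, defaultSize: Int)

  // Mirrors the shape of EstimationUtils.getSizePerRow: the sum of the
  // columns' default sizes plus an 8-byte per-row object overhead.
  def getSizePerRow(output: Seq[Attribute]): Long =
    8 + output.map(_.defaultSize.toLong).sum

  def main(args: Array[String]): Unit = {
    val output = Seq(Attribute("id", 8), Attribute("flag", 1))
    println(getSizePerRow(output)) // prints 17 (8 + 8 + 1)
  }
}
```

Centralizing the estimate this way keeps the 8-byte overhead consistent across every call site instead of relying on each caller to remember the `+ 8`.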
LGTM
Test build #90365 has finished for PR 21189 at commit
thanks, merging to master!
What changes were proposed in this pull request?
This PR unifies `getSizePerRow`, because `getSizePerRow` is used in many places.
How was this patch tested?
Existing tests