[SPARK-38124][SQL][SS] Introduce StatefulOpClusteredDistribution and apply to stream-stream join #35419
@@ -90,6 +90,34 @@ case class ClusteredDistribution(
   }
 }

+/**
+ * Represents the requirement of distribution on the stateful operator.
+ *
+ * Each partition in a stateful operator initializes state store(s), which are independent from
+ * the state store(s) in other partitions. Since it is not possible to repartition the data in a
+ * state store, Spark should make sure the physical partitioning of the stateful operator is
+ * unchanged across Spark versions. Violation of this requirement may bring a silent correctness
+ * issue.
+ *
+ * Since this distribution relies on [[HashPartitioning]] on the physical partitioning of the
+ * stateful operator, only [[HashPartitioning]] can satisfy this distribution.
Review comment: Do we want to also explain briefly that this only applies to …

Reply: Where to leave a comment is the issue. It is unlikely that we look at StatefulOpClusteredDistribution often - we would probably get more attention by putting a comment on every stateful operator wherever ClusteredDistribution is used. Totally redundant, but it gives a sign of warning whenever someone tries to change it.

Reply: Yeah, either works for me. The comment is also non-blocking for this PR, as it is an improvement to the documentation.
+ */
+case class StatefulOpClusteredDistribution(
Review comment: I like the new name :) thanks for making it more specific. Do we also need to update …
Review comment: Each input must follow the required distribution provided by the stateful operator to respect the requirement of state partitioning. State partitioning is first class, so even if both sides of the streaming join are co-partitioned, Spark must perform a shuffle if they don't match the state partitioning. (If that was the previous behavior, we broke something at some point.)

Reply: Actually, I think this PR will skip the shuffle if both sides of a streaming join are co-partitioned. In … In step 2) we'd only consider … I'm not quite sure about this. Shouldn't we retain the behavior before #32875? Quoting the comment from @cloud-fan: … If we respected co-partitioning and avoided the shuffle before #32875 but start shuffling after this PR, I think an issue similar to the one described in that comment can happen.

Reply: For stream-stream join, once each input satisfies its required "hash" distribution, the inputs will be co-partitioned. Stream-stream join must guarantee this. The problem was brought up because ClusteredDistribution has a much more relaxed requirement; what we really need to require for "any" stateful operator, including stream-stream join, is that for all children a specific tuple with a specific grouping key must be bound to a deterministic partition "ID", which only HashClusteredDistribution could guarantee.

Reply: I think one behavior difference between this PR and the state before #32875 is that, previously, we'd also check …

Reply: HashClusteredDistribution also has a requirement on the number of partitions, so step 1) should fulfill it.

Reply: Yes, …

Reply: I see, it should be good then!
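The co-partitioning guarantee discussed above can be sketched with a toy model. This is a minimal Python illustration (the codebase itself is Scala, and Spark's real `HashPartitioning` uses Murmur3, not the character-sum hash used here): if both join inputs are partitioned by the same deterministic function of the join key with the same partition count, any key present on both sides lands in the same partition index on both sides.

```python
# Toy stand-in for hash partitioning: a transparent, stable hash
# (sum of character codes), NOT Spark's Murmur3-based HashPartitioning.
def partition_id(key: str, num_partitions: int) -> int:
    return sum(ord(c) for c in key) % num_partitions

NUM_PARTITIONS = 8
left = [("user1", "clickA"), ("user2", "clickB"), ("user3", "clickC")]
right = [("user1", "buyA"), ("user3", "buyC")]

def partitioned(rows):
    # Group rows into partitions the way a hash shuffle would.
    parts = {}
    for key, value in rows:
        parts.setdefault(partition_id(key, NUM_PARTITIONS), []).append((key, value))
    return parts

left_parts = partitioned(left)
right_parts = partitioned(right)

# Co-partitioning: every join key present on both sides sits in the same
# partition id on both sides, so the join needs no extra shuffle.
common = {k for k, _ in left} & {k for k, _ in right}
for key in common:
    (left_pid,) = [p for p, rows in left_parts.items() if any(k == key for k, _ in rows)]
    (right_pid,) = [p for p, rows in right_parts.items() if any(k == key for k, _ in rows)]
    assert left_pid == right_pid
```

The guarantee holds only because both sides use the identical hash function and partition count, which is exactly what requiring the same hash distribution on both children enforces.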
+    expressions: Seq[Expression],
+    requiredNumPartitions: Option[Int] = None) extends Distribution {
+  require(
+    expressions != Nil,
+    "The expressions for hash of a StatefulOpClusteredDistribution should not be Nil. " +
+      "An AllTuples should be used to represent a distribution that only has " +
+      "a single partition.")
+
+  override def createPartitioning(numPartitions: Int): Partitioning = {
+    assert(requiredNumPartitions.isEmpty || requiredNumPartitions.get == numPartitions,
Review comment: Hmm, is there any chance we specify an empty …

Reply: In the sense of "defensive programming", it shouldn't. I just didn't change the implementation of HashClusteredPartition, but now I think it's worth doing.
+      s"This StatefulOpClusteredDistribution requires ${requiredNumPartitions.get} " +
+        s"partitions, but the actual number of partitions is $numPartitions.")
+    HashPartitioning(expressions, numPartitions)
+  }
+}
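The guard in `createPartitioning` can be modeled outside Spark. The following Python sketch (hypothetical names; the real code is the Scala above) shows the two checks: non-empty expressions at construction, and a pinned partition count that must match when a partitioning is created.

```python
# Simplified stand-in for StatefulOpClusteredDistribution's guards.
from typing import Optional, Sequence, Tuple, List

class ToyStatefulDistribution:
    def __init__(self, expressions: Sequence[str],
                 required_num_partitions: Optional[int] = None):
        # Mirrors the require(expressions != Nil, ...) check.
        if not expressions:
            raise ValueError("expressions must not be empty; "
                             "use AllTuples for a single-partition distribution")
        self.expressions: List[str] = list(expressions)
        self.required_num_partitions = required_num_partitions

    def create_partitioning(self, num_partitions: int) -> Tuple[List[str], int]:
        # Mirrors the assert on requiredNumPartitions.
        assert (self.required_num_partitions is None
                or self.required_num_partitions == num_partitions), (
            f"requires {self.required_num_partitions} partitions, "
            f"got {num_partitions}")
        # Stands in for HashPartitioning(expressions, numPartitions).
        return (self.expressions, num_partitions)

dist = ToyStatefulDistribution(["key"], required_num_partitions=200)
assert dist.create_partitioning(200) == (["key"], 200)

# Mismatching the pinned partition count trips the assertion.
try:
    dist.create_partitioning(100)
    raised = False
except AssertionError:
    raised = True
assert raised
```

For a stateful operator the pinned count comes from the checkpointed `stateInfo.numPartitions`, which is why a mismatch here must be a hard failure rather than a silent repartition.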

 /**
  * Represents data where tuples have been ordered according to the `ordering`
  * [[Expression Expressions]]. Its requirement is defined as the following:
@@ -200,6 +228,11 @@ case object SinglePartition extends Partitioning {
  * Represents a partitioning where rows are split up across partitions based on the hash
  * of `expressions`. All rows where `expressions` evaluate to the same values are guaranteed to be
  * in the same partition.
+ *
+ * Since [[StatefulOpClusteredDistribution]] relies on this partitioning and Spark requires
+ * stateful operators to retain the same physical partitioning during the lifetime of the query
+ * (including restart), the result of evaluating `partitionIdExpression` must be unchanged
+ * across Spark versions. Violation of this requirement may bring a silent correctness issue.
Review comment (on lines +237 to +238): Shall we enforce this assumption in a unit test as well? e.g. in …

Reply: We have a test verifying this, although it is not exhaustive.
(spark/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala, lines 574 to 605 in 2e703ae)
If we want to be exhaustive, I can make a combination of repartitions which would not have triggered a shuffle with hash partitioning against the joining keys if stream-stream join used ClusteredDistribution. It may not be exhaustive for future-proofing indeed. Instead, if we are pretty sure StatefulOpClusteredDistribution works as expected, we can simply check the required child distribution of the physical plan of stream-stream join, and additionally check that the output partitioning of each child is HashPartitioning with the joining keys (this effectively verifies StatefulOpClusteredDistribution indeed).

Reply: Oh, actually I was referring to the assumption: … It would be just to add some logic to check …

Reply: We checked HashPartitioning and partitionExpression here - the remaining piece is partitionIdExpression, which is the implementation of HashPartitioning. That said, it would be nice to have a separate test against HashPartitioning if we don't have one. Could you please check and craft one if we don't?

Reply: Sure, I can add one later this week.
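The version-stability test discussed above follows a standard pattern: pin the hash function's output for fixed inputs so any future change fails loudly. The Python sketch below uses FNV-1a as a stand-in (Spark's `partitionIdExpression` actually pins Murmur3; the constants here are the published FNV-1a test vectors, not Spark's values), but the testing pattern is the same.

```python
# FNV-1a 32-bit, used here purely as a stand-in for partitionIdExpression.
def fnv1a_32(data: bytes) -> int:
    h = 0x811C9DC5  # offset basis
    for byte in data:
        h ^= byte
        h = (h * 0x01000193) & 0xFFFFFFFF  # FNV prime, mod 2^32
    return h

def partition_id(key: str, num_partitions: int) -> int:
    return fnv1a_32(key.encode("utf-8")) % num_partitions

# Pin the digests once. If a later version changes the hash in any way, these
# assertions fail instead of silently re-routing state keys to new partitions.
assert fnv1a_32(b"") == 0x811C9DC5
assert fnv1a_32(b"a") == 0xE40C292C
assert fnv1a_32(b"foobar") == 0xBF9CF968
```

A real Spark test would do the analogous thing: evaluate `HashPartitioning.partitionIdExpression` on fixed rows and compare against hard-coded expected partition ids.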
  */
 case class HashPartitioning(expressions: Seq[Expression], numPartitions: Int)
   extends Expression with Partitioning with Unevaluable {
@@ -211,6 +244,10 @@ case class HashPartitioning(expressions: Seq[Expression], numPartitions: Int)
   override def satisfies0(required: Distribution): Boolean = {
     super.satisfies0(required) || {
       required match {
+        case h: StatefulOpClusteredDistribution =>
+          expressions.length == h.expressions.length && expressions.zip(h.expressions).forall {
+            case (l, r) => l.semanticEquals(r)
+          }
         case ClusteredDistribution(requiredClustering, _) =>
           expressions.forall(x => requiredClustering.exists(_.semanticEquals(x)))
         case _ => false
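The difference between the two branches of `satisfies0` is the crux of this PR. A minimal Python model (an assumption-laden sketch: `semanticEquals` is approximated by plain string equality, whereas real Catalyst expressions are trees) makes the contrast concrete.

```python
def satisfies_stateful_op(part_exprs, required):
    # StatefulOpClusteredDistribution branch: exact same expressions,
    # same count, same order.
    return len(part_exprs) == len(required) and all(
        l == r for l, r in zip(part_exprs, required))

def satisfies_clustered(part_exprs, required_clustering):
    # ClusteredDistribution branch: every partitioning expression merely has
    # to appear somewhere in the required clustering (subset semantics).
    return all(x in required_clustering for x in part_exprs)

# HashPartitioning(a, b) satisfies the stateful requirement only exactly:
assert satisfies_stateful_op(["a", "b"], ["a", "b"])
assert not satisfies_stateful_op(["b", "a"], ["a", "b"])  # reordering rejected
assert not satisfies_stateful_op(["a"], ["a", "b"])       # subset rejected

# ...whereas ClusteredDistribution accepts reorderings and subsets, which is
# why it is too relaxed to pin state partitioning.
assert satisfies_clustered(["b", "a"], ["a", "b"])
assert satisfies_clustered(["a"], ["a", "b"])
```

The relaxed subset rule is what allows, say, a partitioning on a single key to satisfy a grouping on two keys - fine for a stateless aggregate, hazardous once per-partition state files are involved.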
@@ -185,8 +185,8 @@ case class StreamingSymmetricHashJoinExec(
   val nullRight = new GenericInternalRow(right.output.map(_.withNullability(true)).length)

   override def requiredChildDistribution: Seq[Distribution] =
-    ClusteredDistribution(leftKeys, stateInfo.map(_.numPartitions)) ::
-    ClusteredDistribution(rightKeys, stateInfo.map(_.numPartitions)) :: Nil
+    StatefulOpClusteredDistribution(leftKeys, stateInfo.map(_.numPartitions)) ::
+    StatefulOpClusteredDistribution(rightKeys, stateInfo.map(_.numPartitions)) :: Nil
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. There is other There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Please refer the long comment thread #35419 (comment) We have to fix them, but we should have a plan to avoid introducing "silently broken" on existing queries. We need more time to think through how to address the "already broken" thing. They seem to be broken from their introduction (Spark 2.2+), so it could be possible someone is even leveraging the relaxed requirement as a "feature", despite it would be very risky if they tried to adjust partitioning by theirselves. Even for this case we can't simply break their query. I'll create a new JIRA ticket, and/or initiate discussion thread in dev@ regarding this. I need some time to build a plan (with options) to address this safely. There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more.
Suppose they have a stream of event logs having (userId, time, blabla), and do time-window aggregation like this:
groupBy won't trigger shuffle for various output partitionings of df, since streaming aggregation requires ClusteredDistribution. The thing is, it could be from the intention to 1) reduce shuffle in any way, or 2) try to control the partitioning to deal with skew. (I can't easily think of skew from applying hash function against "grouping keys + time window", but once they see it, they will try to fix it. ...Technically saying, they must not try to fix it as state partitioning will be no longer the same with operator's partitioning...) Both are very risky (as of now, changing the partitioning during query lifetime would lead to correctness issue), but it's still from users' intention and they already did it anyway so we can't simply enforce the partitioning and silently break this again. Furthermore, we seem to allow data source to produce output partitioning by itself, which can satisfy ClusteredDistribution. This is still very risky for stateful operator's perspective, but once the output partitioning is guaranteed to be not changed, it's still a great change to reduce (unnecessary) shuffle. |
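The scenario described here can be illustrated numerically. The Python sketch below uses a deliberately transparent toy hash (sum of character codes, so the partition ids are checkable by hand; Spark itself hashes the grouping expressions with Murmur3). Partitioning by `userId` alone co-locates every `(userId, window)` group, so the relaxed ClusteredDistribution is satisfied - yet the resulting partition ids disagree with those produced by hashing the full `(userId, window)` key, which is the layout the state store files would have been written with.

```python
def pid(parts, n):
    # Toy hash over the joined key parts; NOT Spark's Murmur3.
    return sum(ord(c) for c in "|".join(parts)) % n

N = 8

# Partitioned by userId only: both windows of user1 necessarily share one
# partition, so every (userId, window) group is fully co-located.
assert pid(["user1"], N) == pid(["user1"], N) == 0

# But the ids disagree with hashing the full grouping key (userId, window):
assert pid(["user1", "0"], N) == 4
assert pid(["user1", "1"], N) == 5
```

So a query whose upstream repartitioned by `userId` would pass the ClusteredDistribution check while its state lookups expect the partition ids in the second pair of assertions - the mismatch the stricter distribution is designed to rule out.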

   override def output: Seq[Attribute] = joinType match {
     case _: InnerLike => left.output ++ right.output
@@ -29,7 +29,7 @@ import org.apache.spark.sql.catalyst.InternalRow
 import org.apache.spark.sql.catalyst.expressions._
 import org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection
 import org.apache.spark.sql.catalyst.plans.logical.EventTimeWatermark
-import org.apache.spark.sql.catalyst.plans.physical.{AllTuples, ClusteredDistribution, Distribution, Partitioning}
+import org.apache.spark.sql.catalyst.plans.physical.{AllTuples, Distribution, Partitioning, StatefulOpClusteredDistribution}
 import org.apache.spark.sql.catalyst.streaming.InternalOutputModes._
 import org.apache.spark.sql.errors.QueryExecutionErrors
 import org.apache.spark.sql.execution._
@@ -337,7 +337,7 @@ case class StateStoreRestoreExec(
     if (keyExpressions.isEmpty) {
       AllTuples :: Nil
     } else {
-      ClusteredDistribution(keyExpressions, stateInfo.map(_.numPartitions)) :: Nil
+      StatefulOpClusteredDistribution(keyExpressions, stateInfo.map(_.numPartitions)) :: Nil
Review comment: I am wondering if this change could introduce an extra shuffle for streaming aggregates. Previously the operator required …

For example: … and the query plan: …

One can argue that the previous behavior for streaming aggregates is not wrong: as long as all rows for the same keys are colocated in the same partition, …

Reply: But we don't define "repartition just before a stateful operator" as an "unsupported operation across the query lifetime", no? The thing is that once the query has run, the partitioning of a stateful operator must not be changed during its lifetime. Since we don't store the partitioning information of stateful operators in the checkpoint, we have no way around it other than enforcing the partitioning of the stateful operator to be the "one" we basically expect. As I said in #32875, there is room for improvement, but the improvement effort must come after we fix this issue.

Reply: I ran the query in … I agree with you @HeartSaVioR. I want to raise the concern that this might change the query plan for certain streaming aggregate queries (like the synthetic query above), and it could break existing state stores when running with the upcoming Spark 3.3 code, based on my limited understanding.

Reply: Unfortunately yes. We may need to craft some tools to analyze the state and repartition it if the partitioning is already messed up. But leaving this as it is would bring more chances for users' state to become indeterministic.

Reply: That is the problem we have. We didn't disallow the case where it brings a silent correctness issue.

Reply: This is just a synthetic example I composed to verify my theory. But I think it might break more cases, such as …

Reply: Even nowadays multiple stateful operators don't work properly due to the global watermark, so we don't need to worry about the partitioning between stateful operators. We just need to worry about the partitioning between the upstream (in most cases non-stateful) operators and the stateful operator.
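The failure mode being debated - a changed key-to-partition mapping silently "losing" checkpointed state - can be simulated in a few lines. This Python sketch uses a toy hash (not Spark's), with two schemes standing in for "before" and "after" an unsupported repartition across a query restart.

```python
def pid_v1(key, n):
    # Partitioning scheme in effect when run 1 wrote its state.
    return sum(ord(c) for c in key) % n

def pid_v2(key, n):
    # The "changed" scheme after an (unsupported) repartition on restart.
    return (sum(ord(c) for c in key) + 1) % n

N = 4

# Run 1: per-partition state stores written under scheme v1.
state = {"user1": 10, "user2": 20}  # e.g. running counts per key
stores = {}
for key, count in state.items():
    stores.setdefault(pid_v1(key, N), {})[key] = count

# Run 2: after the partitioning changed, lookups use scheme v2.
def lookup(key):
    return stores.get(pid_v2(key, N), {}).get(key)

# user1's running count has silently "vanished" for the operator, even though
# it still exists in one of the state stores on disk.
assert lookup("user1") is None
assert any("user1" in store for store in stores.values())
```

Since the checkpoint records no partitioning metadata, nothing detects this at runtime - the aggregate simply restarts user1's count from zero, which is exactly the silent correctness issue enforcing `StatefulOpClusteredDistribution` is meant to prevent.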
     }
   }
@@ -496,7 +496,7 @@ case class StateStoreSaveExec(
     if (keyExpressions.isEmpty) {
       AllTuples :: Nil
     } else {
-      ClusteredDistribution(keyExpressions, stateInfo.map(_.numPartitions)) :: Nil
+      StatefulOpClusteredDistribution(keyExpressions, stateInfo.map(_.numPartitions)) :: Nil
     }
   }
@@ -573,7 +573,8 @@ case class SessionWindowStateStoreRestoreExec(
   }

   override def requiredChildDistribution: Seq[Distribution] = {
-    ClusteredDistribution(keyWithoutSessionExpressions, stateInfo.map(_.numPartitions)) :: Nil
+    StatefulOpClusteredDistribution(keyWithoutSessionExpressions,
+      stateInfo.map(_.numPartitions)) :: Nil
   }

   override def requiredChildOrdering: Seq[Seq[SortOrder]] = {
@@ -684,7 +685,7 @@ case class SessionWindowStateStoreSaveExec(
   override def outputPartitioning: Partitioning = child.outputPartitioning

   override def requiredChildDistribution: Seq[Distribution] = {
-    ClusteredDistribution(keyExpressions, stateInfo.map(_.numPartitions)) :: Nil
+    StatefulOpClusteredDistribution(keyExpressions, stateInfo.map(_.numPartitions)) :: Nil
   }

   override def shouldRunAnotherBatch(newMetadata: OffsetSeqMetadata): Boolean = {
@@ -742,7 +743,7 @@ case class StreamingDeduplicateExec(

   /** Distribute by grouping attributes */
   override def requiredChildDistribution: Seq[Distribution] =
-    ClusteredDistribution(keyExpressions, stateInfo.map(_.numPartitions)) :: Nil
+    StatefulOpClusteredDistribution(keyExpressions, stateInfo.map(_.numPartitions)) :: Nil

   override protected def doExecute(): RDD[InternalRow] = {
     metrics // force lazy init at driver
Review comment: nit: do we need to put "structured streaming" before "stateful operator"?

Reply: I think "stateful" already conveys the streaming context, but it's no big deal if we repeat it here.