[SPARK-38124][SQL][SS] Introduce StatefulOpClusteredDistribution and apply to stream-stream join #35419

Closed
wants to merge 6 commits
@@ -90,6 +90,37 @@ case class ClusteredDistribution(
}
}

/**
 * Represents the requirement of distribution on the stateful operator in Structured Streaming.
 *
 * Each partition in a stateful operator initializes state store(s), which are independent of the
 * state store(s) in other partitions. Since it is not possible to repartition the data in a state
 * store, Spark should make sure the physical partitioning of the stateful operator is unchanged
 * across Spark versions. Violation of this requirement may lead to silent correctness issues.
 *
 * Since this distribution relies on [[HashPartitioning]] for the physical partitioning of the
 * stateful operator, only [[HashPartitioning]] (and HashPartitioning in
 * [[PartitioningCollection]]) can satisfy this distribution.
 */
case class StatefulOpClusteredDistribution(
Member:

I like the new name :) thanks for making it more specific.

Do we also need to update HashShuffleSpec so that two HashPartitionings can be compatible with each other when checking against StatefulOpClusteredDistributions? This is the previous behavior, where Spark would avoid a shuffle if both sides of the streaming join are co-partitioned.

Contributor Author (@HeartSaVioR, Feb 7, 2022):

> Do we also need to update HashShuffleSpec so that two HashPartitionings can be compatible with each other when checking against StatefulOpClusteredDistributions? This is the previous behavior, where Spark would avoid a shuffle if both sides of the streaming join are co-partitioned.

Each input must follow the required distribution provided by the stateful operator to respect the requirement of state partitioning. State partitioning is first class, so even if both sides of the streaming join are co-partitioned, Spark must perform a shuffle if they don't match the state partitioning. (If that was the previous behavior, we broke something at some point.)

Member:

Actually, I think this PR will skip the shuffle if both sides of a streaming join are co-partitioned. In EnsureRequirements, we currently do two main things:

  1. check if output partitioning can satisfy the required distribution
  2. if there are two children, check if they are compatible with each other, and insert shuffle if not.

In step 2) we only consider ClusteredDistribution at the moment, so in the case of StatefulOpClusteredDistribution this step is simply skipped. Consequently, Spark will skip the shuffle whenever step 1) alone succeeds.
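The two steps above can be modeled with a toy sketch. This is a hedged illustration, not Spark's actual EnsureRequirements rule: the helper names (`satisfies`, `plan_exchanges`) and the tuple encoding of a partitioning are made up for the example, and the strict equality check stands in for StatefulOpClusteredDistribution's matching.

```python
def satisfies(partitioning, required):
    # Stand-in for the strict StatefulOpClusteredDistribution check:
    # same expressions, same order, same number of partitions.
    return partitioning == required

def plan_exchanges(children, required_dists):
    plan = []
    for child, req in zip(children, required_dists):
        # Step 1: shuffle any child whose output partitioning
        # fails its own requirement.
        plan.append(child if satisfies(child, req) else ("shuffle", req))
    # Step 2 (compatibility between the two children) only applies to
    # ClusteredDistribution, so for StatefulOpClusteredDistribution
    # it is skipped entirely.
    return plan

left = (("a", "b"), 200)   # already hash-partitioned on the join keys
right = (("a", "b"), 100)  # wrong partition count, so it needs a shuffle
req = (("a", "b"), 200)
print(plan_exchanges([left, right], [req, req]))
# -> [(('a', 'b'), 200), ('shuffle', (('a', 'b'), 200))]
```

The point of the sketch is that with a strict per-child requirement, step 1) alone already decides where shuffles go, so no pairwise compatibility check is needed.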

> State partitioning is first class, so even if both sides of the streaming join are co-partitioned, Spark must perform a shuffle if they don't match the state partitioning.

I'm not quite sure about this. Shouldn't we retain the behavior before #32875? Quoting the comment from @cloud-fan:

> I think this is kind of a potential bug. Let's say we have 2 tables that can report hash partitioning optionally (e.g. controlled by a flag). Assume a streaming query is first run with the flag off, which means the tables do not report hash partitioning; then Spark will add shuffles before the stream-stream join, and the join state (streaming checkpoint) is partitioned by Spark's murmur3 hash function. Then we restart the streaming query with the flag on, and the 2 tables report hash partitioning (not the same as Spark's murmur3). Spark will not add shuffles before the stream-stream join this time, which leads to wrong results, because the left/right join child is not co-partitioned with the join state from the previous run.

If we respected co-partitioning and avoided the shuffle before #32875 but start shuffling after this PR, I think an issue similar to the one described in that comment can happen?

Contributor Author:

> 1. check if output partitioning can satisfy the required distribution

For a stream-stream join, once each input satisfies its required "hash" distribution, the two inputs will be co-partitioned. The stream-stream join must guarantee this.
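A quick way to see why satisfying the per-child hash requirement implies co-partitioning: both sides use the same expressions, the same partition count, and the same deterministic hash function, so equal join keys land in the same partition id on both sides. A minimal Python sketch, where the built-in `hash` stands in for Spark's Murmur3 (deterministic within one process, which is all the example needs):

```python
def partition_id(key, num_partitions):
    # Stand-in for Pmod(Murmur3Hash(expressions), Literal(numPartitions));
    # any deterministic hash demonstrates the property.
    return hash(key) % num_partitions

num_partitions = 4
left_keys = [(1, 2), (3, 4), (5, 6)]
right_keys = [(5, 6), (1, 2), (3, 4)]  # same join keys, different order

# Both sides satisfy the same hash distribution independently, yet equal
# keys map to identical partition ids, i.e. the sides are co-partitioned.
left_parts = {k: partition_id(k, num_partitions) for k in left_keys}
right_parts = {k: partition_id(k, num_partitions) for k in right_keys}
assert left_parts == right_parts
```

This is also why the strict distribution can drop the pairwise compatibility check: co-partitioning falls out of each child's requirement on its own.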

Contributor Author (@HeartSaVioR, Feb 8, 2022):

The problem came up because ClusteredDistribution has a much more relaxed requirement. What we really need to require for "any" stateful operator, including stream-stream join, is that across all children a tuple with a specific grouping key must be bound to a deterministic partition ID, which only HashClusteredDistribution could guarantee.

Member:

I think one behavioral difference between this PR and the state before #32875 is that, previously, we'd also check spark.sql.shuffle.partitions and insert a shuffle if there wasn't enough parallelism from the input. However, this PR doesn't do that, since it skips step 2) above.

Contributor Author (@HeartSaVioR, Feb 8, 2022):

HashClusteredDistribution also has a requirement on the number of partitions, so step 1) should fulfill it.

Contributor:

Yes, StatefulOpClusteredDistribution is very strict and requires numPartitions as well. I don't think we need an extra co-partitioning check for it.
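The difference in strictness can be sketched as follows. This is a hedged Python stand-in for the Scala `satisfies0` checks, with expressions modeled as plain strings and semantic equality reduced to `==`:

```python
def satisfies_stateful_op(part_exprs, required_exprs, part_n, required_n):
    # StatefulOpClusteredDistribution: exact expression list (same length,
    # positional semantic equality) AND the exact number of partitions.
    return (part_n == required_n
            and len(part_exprs) == len(required_exprs)
            and all(p == r for p, r in zip(part_exprs, required_exprs)))

def satisfies_clustered(part_exprs, required_clustering):
    # ClusteredDistribution: every partitioning expression merely has to
    # appear somewhere in the required clustering; a subset is enough.
    return all(p in required_clustering for p in part_exprs)

assert satisfies_clustered(["b"], ["a", "b"])                  # relaxed: accepted
assert not satisfies_stateful_op(["b"], ["a", "b"], 200, 200)  # strict: rejected
assert satisfies_stateful_op(["a", "b"], ["a", "b"], 200, 200)
assert not satisfies_stateful_op(["a", "b"], ["a", "b"], 100, 200)  # count matters
```

The relaxed check is what allowed a child partitioned only on a subset of the keys (or with a different partition count) to slip through, which is exactly what state partitioning cannot tolerate.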

Member:

I see, it should be good then!

    expressions: Seq[Expression],
    _requiredNumPartitions: Int) extends Distribution {
  require(
    expressions != Nil,
    "The expressions for hash of a StatefulOpClusteredDistribution should not be Nil. " +
      "An AllTuples should be used to represent a distribution that only has " +
      "a single partition.")

  override val requiredNumPartitions: Option[Int] = Some(_requiredNumPartitions)

  override def createPartitioning(numPartitions: Int): Partitioning = {
    assert(_requiredNumPartitions == numPartitions,
      s"This StatefulOpClusteredDistribution requires ${_requiredNumPartitions} " +
        s"partitions, but the actual number of partitions is $numPartitions.")
    HashPartitioning(expressions, numPartitions)
  }
}

/**
* Represents data where tuples have been ordered according to the `ordering`
* [[Expression Expressions]]. Its requirement is defined as the following:
@@ -200,6 +231,11 @@ case object SinglePartition extends Partitioning {
* Represents a partitioning where rows are split up across partitions based on the hash
* of `expressions`. All rows where `expressions` evaluate to the same values are guaranteed to be
* in the same partition.
*
 * Since [[StatefulOpClusteredDistribution]] relies on this partitioning and Spark requires
 * stateful operators to retain the same physical partitioning during the lifetime of the query
 * (including restarts), the result of evaluating `partitionIdExpression` must be unchanged
 * across Spark versions. Violation of this requirement may lead to silent correctness issues.
Comment on lines +237 to +238
Contributor:

Shall we enforce this assumption in a unit test as well, e.g. in StreamingJoinSuite? It's great to highlight it in the comment here, but people always forget, and a unit test will fail loudly when we introduce any invalid change.

Contributor Author (@HeartSaVioR, Feb 8, 2022):

We have a test for verifying this, although it is not exhaustive.

```scala
test("streaming join should require HashClusteredDistribution from children") {
  val input1 = MemoryStream[Int]
  val input2 = MemoryStream[Int]
  val df1 = input1.toDF.select('value as 'a, 'value * 2 as 'b)
  val df2 = input2.toDF.select('value as 'a, 'value * 2 as 'b).repartition('b)
  val joined = df1.join(df2, Seq("a", "b")).select('a)
  testStream(joined)(
    AddData(input1, 1.to(1000): _*),
    AddData(input2, 1.to(1000): _*),
    CheckAnswer(1.to(1000): _*),
    Execute { query =>
      // Verify the query plan
      def partitionExpressionsColumns(expressions: Seq[Expression]): Seq[String] = {
        expressions.flatMap {
          case ref: AttributeReference => Some(ref.name)
        }
      }
      val numPartitions = spark.sqlContext.conf.getConf(SQLConf.SHUFFLE_PARTITIONS)
      assert(query.lastExecution.executedPlan.collect {
        case j @ StreamingSymmetricHashJoinExec(_, _, _, _, _, _, _, _,
          ShuffleExchangeExec(opA: HashPartitioning, _, _),
          ShuffleExchangeExec(opB: HashPartitioning, _, _))
            if partitionExpressionsColumns(opA.expressions) === Seq("a", "b") &&
              partitionExpressionsColumns(opB.expressions) === Seq("a", "b") &&
              opA.numPartitions == numPartitions && opB.numPartitions == numPartitions => j
      }.size == 1)
    })
}
```

If we want to be exhaustive, I can build a combination of repartitions that would not have triggered a shuffle against the joining keys if the stream-stream join used ClusteredDistribution. Even that may not be exhaustive enough to be future-proof, though.

Instead, if we are pretty sure StatefulOpClusteredDistribution works as expected, we can simply check the required child distribution of the physical plan of the stream-stream join, and additionally check that the output partitioning of each child is HashPartitioning with the joining keys (this effectively verifies StatefulOpClusteredDistribution).

Contributor:

Oh, actually I was referring to the assumption:

> HashPartitioning.partitionIdExpression has to be exactly Pmod(new Murmur3Hash(expressions), Literal(numPartitions)).

It would just be adding some logic to check opA/opB.partitionIdExpression for the opA/opB at lines 598/599. I can also do it later if it's not clear to you.
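For intuition on the invariant being discussed, here is a hedged Python sketch of the partition-id computation only. It is not Spark's actual implementation: the real Murmur3 hash is omitted, and `pmod` below just mirrors Pmod's sign behavior (result takes the sign of the divisor), which is what keeps ids in [0, numPartitions) even when the JVM's 32-bit hash value is negative:

```python
def pmod(a, n):
    # Pmod-style positive modulo: in Java, -7 % 4 == -3, so a plain "%"
    # on a negative hash would produce an invalid partition id there.
    # (Python's "%" already behaves this way for positive n.)
    r = a % n
    return r if (r == 0 or (r > 0) == (n > 0)) else r + n

def partition_id(hash_value, num_partitions):
    # Models Pmod(Murmur3Hash(expressions), Literal(numPartitions));
    # only the id computation is shown, not the hash itself.
    return pmod(hash_value, num_partitions)

assert partition_id(-7, 4) == 1   # negative hash still maps into [0, 4)
assert partition_id(7, 4) == 3
assert all(0 <= partition_id(h, 4) < 4 for h in range(-100, 100))
```

Any change to either the hash function or this modulo behavior across Spark versions would silently remap keys to different partitions than the ones holding their state, which is exactly what the test assertion would catch.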

Contributor Author:

We checked HashPartitioning and the partition expressions here; the remaining piece is partitionIdExpression, which is the implementation detail of HashPartitioning.

That said, it would be nice to have a separate test against HashPartitioning if we don't have one. Could you please check, and craft one if we don't have it?

Contributor:

Sure I can add one later this week.

*/
case class HashPartitioning(expressions: Seq[Expression], numPartitions: Int)
extends Expression with Partitioning with Unevaluable {
@@ -211,6 +247,10 @@ case class HashPartitioning(expressions: Seq[Expression], numPartitions: Int)
  override def satisfies0(required: Distribution): Boolean = {
    super.satisfies0(required) || {
      required match {
        case h: StatefulOpClusteredDistribution =>
          expressions.length == h.expressions.length && expressions.zip(h.expressions).forall {
            case (l, r) => l.semanticEquals(r)
          }
        case ClusteredDistribution(requiredClustering, _) =>
          expressions.forall(x => requiredClustering.exists(_.semanticEquals(x)))
        case _ => false
@@ -185,8 +185,8 @@ case class StreamingSymmetricHashJoinExec(
  val nullRight = new GenericInternalRow(right.output.map(_.withNullability(true)).length)

  override def requiredChildDistribution: Seq[Distribution] =
-   ClusteredDistribution(leftKeys, stateInfo.map(_.numPartitions)) ::
-   ClusteredDistribution(rightKeys, stateInfo.map(_.numPartitions)) :: Nil
+   StatefulOpClusteredDistribution(leftKeys, getStateInfo.numPartitions) ::
+   StatefulOpClusteredDistribution(rightKeys, getStateInfo.numPartitions) :: Nil

  override def output: Seq[Attribute] = joinType match {
    case _: InnerLike => left.output ++ right.output
@@ -571,7 +571,7 @@ class StreamingInnerJoinSuite extends StreamingJoinSuite {
      CheckNewAnswer((5, 10, 5, 15, 5, 25)))
  }

-  test("streaming join should require HashClusteredDistribution from children") {
+  test("streaming join should require StatefulOpClusteredDistribution from children") {
    val input1 = MemoryStream[Int]
    val input2 = MemoryStream[Int]