[SPARK-38124][SQL][SS] Introduce StatefulOpClusteredDistribution and apply to stream-stream join #35419
Conversation
…apply to all stateful operators
TODO:
Implementation-wise, the PR is ready to review.
Thanks @HeartSaVioR for the PR!
Sorry for my late response on the original PR: I was working on a PR but haven't figured out a way to test it.
 * Since this distribution relies on [[HashPartitioning]] on the physical partitioning of the
 * stateful operator, only [[HashPartitioning]] can satisfy this distribution.
 */
case class StatefulOpClusteredDistribution(
I like the new name :) thanks for making it more specific.
Do we also need to update HashShuffleSpec so that two HashPartitionings can be compatible with each other when checking against StatefulOpClusteredDistributions? This is the previous behavior, where Spark would avoid shuffle if both sides of the streaming join are co-partitioned.
Do we also need to update HashShuffleSpec so that two HashPartitionings can be compatible with each other when checking against StatefulOpClusteredDistributions? this is the previous behavior where Spark would avoid shuffle if both sides of the streaming join are co-partitioned.
Each input must follow the required distribution provided by the stateful operator to respect the requirement of state partitioning. State partitioning is the first-class concern here, so even if both sides of the streaming join are co-partitioned, Spark must perform a shuffle if they don't match the state partitioning. (If that was the previous behavior, we broke something at some point.)
Actually, I think this PR will skip shuffle if both sides of a streaming join are co-partitioned. In EnsureRequirements, we currently mainly do two things:
- check if output partitioning can satisfy the required distribution
- if there are two children, check if they are compatible with each other, and insert shuffle if not.
In step 2) we only consider ClusteredDistribution at the moment, so in the case of StatefulOpClusteredDistribution this step is simply skipped. Consequently, Spark will skip the shuffle whenever step 1) alone is satisfied.
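To make the two steps concrete, here is a minimal, hypothetical sketch of step 1 only (the method name and structure are made up for illustration; the real EnsureRequirements rule also reuses existing shuffles and applies the step-2 compatibility check, which is limited to ClusteredDistribution):

```scala
import org.apache.spark.sql.catalyst.plans.physical.{Distribution, UnspecifiedDistribution}
import org.apache.spark.sql.execution.SparkPlan
import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec
import org.apache.spark.sql.internal.SQLConf

// Step 1 only: shuffle any child whose output partitioning does not already satisfy
// the distribution its parent requires. With StatefulOpClusteredDistribution this is
// the only check that runs, since step 2 only applies to ClusteredDistribution.
def enforceStepOne(children: Seq[SparkPlan], required: Seq[Distribution]): Seq[SparkPlan] =
  children.zip(required).map {
    case (child, UnspecifiedDistribution) => child
    case (child, dist) if child.outputPartitioning.satisfies(dist) => child
    case (child, dist) =>
      val numPartitions =
        dist.requiredNumPartitions.getOrElse(SQLConf.get.numShufflePartitions)
      ShuffleExchangeExec(dist.createPartitioning(numPartitions), child)
  }
```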
State partitioning is the first class, so even both sides of the streaming join are co-partitioned, Spark must perform shuffle if they don't match with state partitioning.
I'm not quite sure about this. Shouldn't we retain the behavior before #32875? Quoting the comment from @cloud-fan:
I think this is kind of a potential bug. Let's say that we have 2 tables that can report hash partitioning optionally (e.g. controlled by a flag). Assume a streaming query is first run with the flag off, which means the tables do not report hash partitioning; then Spark will add shuffles before the stream-stream join, and the join state (streaming checkpoint) is partitioned by Spark's murmur3 hash function. Then we restart the streaming query with the flag on, and the 2 tables report hash partitioning (not the same as Spark's murmur3). Spark will not add shuffles before the stream-stream join this time, and this leads to wrong results, because the left/right join child is not co-partitioned with the join state from the previous run.
If we respected co-partitioning and avoided the shuffle before #32875 but start shuffling after this PR, I think a similar issue to the one described in the comment can happen?
- check if output partitioning can satisfy the required distribution
For stream-stream join, once each input satisfies its required "hash" distribution, they will be co-partitioned. Stream-stream join must guarantee this.
The problem was brought up because ClusteredDistribution has a much more relaxed requirement; what we really need to require for "any" stateful operator, including stream-stream join, is that for all children, a tuple with a specific grouping key must be bound to a deterministic partition ID, which only HashClusteredDistribution could guarantee.
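As a rough, hypothetical illustration of that requirement (made-up values; the expression mirrors HashPartitioning.partitionIdExpression, i.e. Pmod(new Murmur3Hash(expressions), Literal(numPartitions)), which is discussed later in this thread):

```scala
import org.apache.spark.sql.catalyst.expressions.{Literal, Murmur3Hash, Pmod}

// The partition ID a stateful operator relies on is pmod(murmur3(groupingKeys), numPartitions),
// so every child must hash exactly the same key expressions with the same numPartitions.
val numPartitions = 5
val groupingKey = Literal(42) // stand-in for a grouping key value
val partitionId = Pmod(new Murmur3Hash(Seq(groupingKey)), Literal(numPartitions)).eval()
// The same key always maps to the same partition ID; hashing a different (sub)set of keys,
// which ClusteredDistribution would accept, can place that key in a different partition.
```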
I think one behavior difference between this PR and the state before #32875 is that, previously, we'd also check spark.sql.shuffle.partitions and insert shuffle if there's not enough parallelism from the input. However, this PR doesn't do that since it skips the step 2) above.
HashClusteredDistribution also has a requirement of the number of partitions, so step 1) should fulfill it.
Yes, StatefulOpClusteredDistribution is very strict and requires numPartitions as well. I don't think we need an extra co-partition check for it.
I see, it should be good then!
@@ -337,7 +337,7 @@ case class StateStoreRestoreExec(
     if (keyExpressions.isEmpty) {
       AllTuples :: Nil
     } else {
-      ClusteredDistribution(keyExpressions, stateInfo.map(_.numPartitions)) :: Nil
+      StatefulOpClusteredDistribution(keyExpressions, stateInfo.map(_.numPartitions)) :: Nil
I am wondering if this change could introduce an extra shuffle for streaming aggregate. Previously the operator required ClusteredDistribution, and right now it requires StatefulOpClusteredDistribution/HashClusteredDistribution.

ClusteredDistribution is more relaxed than HashClusteredDistribution in the sense that a HashPartitioning(c1) can satisfy ClusteredDistribution(c1, c2), but cannot satisfy HashClusteredDistribution(c1, c2). In short, ClusteredDistribution allows the child to be hash-partitioned on a subset of the required keys. So for aggregate, if the plan is already shuffled on a subset of the group-by columns, Spark will not add a shuffle again before the group-by.
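A minimal sketch of that satisfaction difference at the expression level, with hypothetical attributes c1 and c2 standing in for the columns above (the strict case is noted in a comment, since HashClusteredDistribution is being renamed by this PR); the full streaming example follows below:

```scala
import org.apache.spark.sql.catalyst.expressions.AttributeReference
import org.apache.spark.sql.catalyst.plans.physical.{ClusteredDistribution, HashPartitioning}
import org.apache.spark.sql.types.IntegerType

// Hypothetical attributes standing in for c1 and c2.
val c1 = AttributeReference("c1", IntegerType)()
val c2 = AttributeReference("c2", IntegerType)()

// A child that is already hash-partitioned on a subset of the group-by keys.
val childPartitioning = HashPartitioning(Seq(c1), 5)

// Relaxed requirement: satisfied, so no extra shuffle is added before the aggregate.
val relaxedOk = childPartitioning.satisfies(ClusteredDistribution(Seq(c1, c2))) // true

// Strict requirement (HashClusteredDistribution / StatefulOpClusteredDistribution):
// only HashPartitioning on exactly (c1, c2) with a matching number of partitions
// satisfies it, so the partitioning above would force an extra shuffle on (c1, c2).
```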
For example:
MemoryStream[(Int, Int)].toDF()
.repartition($"_1")
.groupBy($"_1", $"_2")
.agg(count("*"))
.as[(Int, Int, Long)]
and the query plan:
WriteToDataSourceV2 org.apache.spark.sql.execution.streaming.sources.MicroBatchWrite@5940f7c2, org.apache.spark.sql.execution.datasources.v2.DataSourceV2Strategy$$Lambda$1940/1200613952@4861dac3
+- *(4) HashAggregate(keys=[_1#588, _2#589], functions=[count(1)], output=[_1#588, _2#589, count(1)#596L])
+- StateStoreSave [_1#588, _2#589], state info [ checkpoint = file:/private/var/folders/y5/hnsw8mz93vs57ngcd30y6y9c0000gn/T/streaming.metadata-0d7cb004-92dd-4b0d-9d90-5a65c0d2934c/state, runId = 68598bd1-cf35-4bf7-a167-5f73dc9f4d84, opId = 0, ver = 0, numPartitions = 5], Complete, 0, 1
+- *(3) HashAggregate(keys=[_1#588, _2#589], functions=[merge_count(1)], output=[_1#588, _2#589, count#663L])
+- StateStoreRestore [_1#588, _2#589], state info [ checkpoint = file:/private/var/folders/y5/hnsw8mz93vs57ngcd30y6y9c0000gn/T/streaming.metadata-0d7cb004-92dd-4b0d-9d90-5a65c0d2934c/state, runId = 68598bd1-cf35-4bf7-a167-5f73dc9f4d84, opId = 0, ver = 0, numPartitions = 5], 1
+- *(2) HashAggregate(keys=[_1#588, _2#589], functions=[merge_count(1)], output=[_1#588, _2#589, count#663L])
+- *(2) HashAggregate(keys=[_1#588, _2#589], functions=[partial_count(1)], output=[_1#588, _2#589, count#663L])
+- Exchange hashpartitioning(_1#588, 5), REPARTITION_BY_COL, [id=#2008]
+- *(1) Project [_1#588, _2#589]
+- MicroBatchScan[_1#588, _2#589] MemoryStreamDataSource
One can argue that the previous behavior for streaming aggregate is not wrong. As long as all rows for the same keys are colocated in the same partition, StateStoreRestore/Save should output the correct answer for streaming aggregate. If we make the change here, I assume one extra shuffle on ($"_1", $"_2") would be introduced, and it might yield incorrect results when running the new query plan against the existing state store?
But we don't define "repartition just before stateful operator" as "unsupported operation across query lifetime", no?
The thing is that once the query is run, the partitioning of the stateful operator must not change during its lifetime. Since we don't store the partitioning information for the stateful operator in the checkpoint, we have no way around it other than enforcing the partitioning of the stateful operator to be the "one" we basically expect.
As I said in #32875, there is room for improvement, but that effort must come after we fix this issue.
But we don't define "repartition just before stateful operator" as "unsupported operation across query lifetime", no?
I ran the query in StreamingAggregationSuite.scala and it seems fine. I briefly checked UnsupportedOperationChecker.scala and didn't find we disallow "repartition just before stateful operator".
The thing is that once the query is run, the partitioning of the stateful operator must not change during its lifetime. Since we don't store the partitioning information for the stateful operator in the checkpoint, we have no way around it other than enforcing the partitioning of the stateful operator to be the "one" we basically expect.
I agree with you @HeartSaVioR. I want to raise a concern that this might change the query plan for certain streaming aggregate queries (such as the synthetic query above), and it could break the existing state store when running with the upcoming Spark 3.3 code, based on my limited understanding.
If we make the change here, I assume one extra shuffle on ($"_1", $"_2") would be introduced, and it might yield incorrect results when running the new query plan against the existing state store?
Unfortunately yes. We may need to craft some tools to analyze the state and repartition it if the partitioning is already messed up. But leaving this as it is would bring more chances for users' state to become nondeterministic.
But we don't define "repartition just before stateful operator" as "unsupported operation across query lifetime", no?
I ran the query in StreamingAggregationSuite.scala and it seems fine. I briefly checked UnsupportedOperationChecker.scala and didn't find we disallow "repartition just before stateful operator".
That is the problem we have. We didn't disallow the case where it brings a silent correctness issue.
That is the problem we have. We didn't disallow the case where it brings a silent correctness issue.
This is just a synthetic example I composed to verify my theory. But I think it might break in more cases, such as GROUP BY c1, c2 after JOIN ON c1. I am trying to say it would break queries which are partitioned on a subset of the group-by keys.
Even nowadays multiple stateful operators don't work properly due to the global watermark, so we don't need to worry about the partitioning between stateful operators. We just need to worry about the partitioning between the upstream (in most cases non-stateful) operators and the stateful operator.
I see the concern about fixing stateful operators due to the state in existing queries. Stream-stream join has been using HashClusteredDistribution, so it shouldn't suffer from this long-standing problem. So if we want to split the problem between stream-stream join and the others, I'll move out the changes on the other stateful operators, and there we also have to come up with a plan for how to deal with state in existing queries. Would that work for all?
+1, SGTM.
11eabf0 to d0aa192
Just updated the JIRA ticket, PR description, and PR code diff to only contain the change for stream-stream join. Now this PR is effectively a partial revert of SPARK-35703, which hasn't been released. That said, this PR doesn't bring any breaking change. I'll file another JIRA ticket for dealing with the other stateful operators.
 * (including restart), the result of evaluation on `partitionIdExpression` must be unchanged
 * across Spark versions. Violation of this requirement may bring silent correctness issue.
shall we enforce this assumption in a unit test as well? e.g. in StreamingJoinSuite. It's great to highlight it in the comment here, but people always forget, and a unit test will fail loudly when we introduce any invalid change.
We have a test for verifying this, although it is not exhaustive.
spark/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala
Lines 574 to 605 in 2e703ae
test("streaming join should require HashClusteredDistribution from children") { | |
val input1 = MemoryStream[Int] | |
val input2 = MemoryStream[Int] | |
val df1 = input1.toDF.select('value as 'a, 'value * 2 as 'b) | |
val df2 = input2.toDF.select('value as 'a, 'value * 2 as 'b).repartition('b) | |
val joined = df1.join(df2, Seq("a", "b")).select('a) | |
testStream(joined)( | |
AddData(input1, 1.to(1000): _*), | |
AddData(input2, 1.to(1000): _*), | |
CheckAnswer(1.to(1000): _*), | |
Execute { query => | |
// Verify the query plan | |
def partitionExpressionsColumns(expressions: Seq[Expression]): Seq[String] = { | |
expressions.flatMap { | |
case ref: AttributeReference => Some(ref.name) | |
} | |
} | |
val numPartitions = spark.sqlContext.conf.getConf(SQLConf.SHUFFLE_PARTITIONS) | |
assert(query.lastExecution.executedPlan.collect { | |
case j @ StreamingSymmetricHashJoinExec(_, _, _, _, _, _, _, _, | |
ShuffleExchangeExec(opA: HashPartitioning, _, _), | |
ShuffleExchangeExec(opB: HashPartitioning, _, _)) | |
if partitionExpressionsColumns(opA.expressions) === Seq("a", "b") | |
&& partitionExpressionsColumns(opB.expressions) === Seq("a", "b") | |
&& opA.numPartitions == numPartitions && opB.numPartitions == numPartitions => j | |
}.size == 1) | |
}) | |
} |
If we want to be exhaustive, I can build a combination of repartitions which would not have triggered a shuffle (hash partitioning against the joining keys) if stream-stream join used ClusteredDistribution. It may not be exhaustive enough to be future-proof, indeed.
Instead, if we are pretty sure StatefulOpClusteredDistribution works as expected, we can simply check the required child distribution of the physical plan of stream-stream join, and additionally check that the output partitioning of each child is HashPartitioning with the joining keys (this effectively verifies StatefulOpClusteredDistribution).
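A hypothetical sketch of that alternative check, reusing the scope of the test quoted above (the query handle and most imports come from that test; StatefulOpClusteredDistribution would additionally need to be imported, and the assertions are illustrative rather than final test code):

```scala
val join = query.lastExecution.executedPlan.collectFirst {
  case j: StreamingSymmetricHashJoinExec => j
}.get

// The operator itself must require the stateful distribution from both children...
assert(join.requiredChildDistribution.forall(
  _.isInstanceOf[StatefulOpClusteredDistribution]))

// ...and each child's output partitioning must actually be hash partitioning
// (on the joining keys, which can be checked the same way as in the test above).
assert(join.children.forall(_.outputPartitioning.isInstanceOf[HashPartitioning]))
```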
Oh, actually I was referring to the assumption: HashPartitioning.partitionIdExpression has to be exactly Pmod(new Murmur3Hash(expressions), Literal(numPartitions)). It would just be a matter of adding some logic to check opA/opB.partitionIdExpression for the opA/opB at Line 598/599. I can also do it later if it's not clear to you.
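For reference, a hedged sketch of what that extra assertion might look like, reusing opA/opB from the test quoted above (illustrative only, not the final test code):

```scala
import org.apache.spark.sql.catalyst.expressions.{Literal, Murmur3Hash, Pmod}

// Pin down the exact partition ID expression so that any future change to
// HashPartitioning's hashing fails this test loudly.
assert(opA.partitionIdExpression.semanticEquals(
  Pmod(new Murmur3Hash(opA.expressions), Literal(opA.numPartitions))))
assert(opB.partitionIdExpression.semanticEquals(
  Pmod(new Murmur3Hash(opB.expressions), Literal(opB.numPartitions))))
```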
We checked HashPartitioning and the partition expressions here - the remaining piece is partitionIdExpression, which is part of the implementation of HashPartitioning.
That said, it would be nice to have a separate test against HashPartitioning if we don't have one. Could you please check and craft one if we don't?
Sure I can add one later this week.
 * Spark versions. Violation of this requirement may bring silent correctness issue.
 *
 * Since this distribution relies on [[HashPartitioning]] on the physical partitioning of the
 * stateful operator, only [[HashPartitioning]] can satisfy this distribution.
do we want to also explain briefly this only applies to StreamingSymmetricHashJoinExec now, and the challenge to apply it to other stateful operators? Maybe we can also file a JIRA for other stateful operators and leave a TODO here.
Where to leave the comment is the issue. It is unlikely that we look at StatefulOpClusteredDistribution often - it probably has a better chance of getting attention if we put a comment on every stateful operator wherever ClusteredDistribution is used. Totally redundant, but it gives a warning sign whenever someone tries to change it.
Yeah either works for me. The comment is also non-blocking for this PR, as this is an improvement for documentation.
d0aa192 to d722b2c
 * Spark versions. Violation of this requirement may bring silent correctness issue.
 *
 * Since this distribution relies on [[HashPartitioning]] on the physical partitioning of the
 * stateful operator, only [[HashPartitioning]] can satisfy this distribution.
only [[HashPartitioning]] can satisfy this distribution.
-> only [[HashPartitioning]], and [[PartitioningCollection]] of [[HashPartitioning]], can satisfy this distribution?
LGTM, thanks @HeartSaVioR for the fix!
In the meanwhile, I'll think about how to deal with the other stateful operators that have existing state. One rough idea is to add the ability to validate the state partitioning against the child's output partitioning, although that is not a "future-proof" validation.
LGTM, thanks @HeartSaVioR !
"a single partition.") | ||
|
||
override def createPartitioning(numPartitions: Int): Partitioning = { | ||
assert(requiredNumPartitions.isEmpty || requiredNumPartitions.get == numPartitions, |
Hmm, is there any chance we specify empty requiredNumPartitions?
In the sense of "defensive programming", it shouldn't. I just didn't change the implementation inherited from HashClusteredDistribution, but now I think it's worth doing.
Looks good. Only a few questions.
@@ -90,6 +90,35 @@ case class ClusteredDistribution(
  }
}

/**
 * Represents the requirement of distribution on the stateful operator.
nit: do we need to put "structured streaming" before "stateful operator"?
I think "stateful" is already representing the streaming context, but no big deal if we repeat here.
StatefulOpClusteredDistribution(leftKeys, stateInfo.map(_.numPartitions)) ::
  StatefulOpClusteredDistribution(rightKeys, stateInfo.map(_.numPartitions)) :: Nil
There are other ClusteredDistribution usages in statefulOperators - do we need to update them too? As they are also stateful operators, do they also need the strict partitioning requirement?
Please refer to the long comment thread: #35419 (comment)
We have to fix them, but we should have a plan to avoid silently breaking existing queries. We need more time to think through how to address the "already broken" part. They seem to have been broken since their introduction (Spark 2.2+), so it is possible someone is even leveraging the relaxed requirement as a "feature", despite it being very risky if they tried to adjust the partitioning by themselves. Even in that case we can't simply break their query.
I'll create a new JIRA ticket, and/or initiate a discussion thread in dev@ regarding this. I need some time to build a plan (with options) to address this safely.
it could be possible someone is even leveraging the relaxed requirement as a "feature"
Suppose they have a stream of event logs having (userId, time, blabla), and do time-window aggregation like this:
df
  .withWatermark("time", "1 hour")
  .groupBy(col("userId"), window(col("time"), "10 minutes"))
  .agg(count("*"))
The groupBy won't trigger a shuffle for various output partitionings of df, since streaming aggregation requires ClusteredDistribution. The thing is, that could be intentional: 1) to reduce shuffles in any way, or 2) to try to control the partitioning to deal with skew. (I can't easily think of skew arising from applying a hash function to "grouping keys + time window", but once users see it, they will try to fix it. Technically speaking, they must not try to fix it, as the state partitioning would no longer be the same as the operator's partitioning...)
Both are very risky (as of now, changing the partitioning during the query lifetime would lead to a correctness issue), but it still comes from the user's intention and they already did it anyway, so we can't simply enforce the partitioning and silently break this again.
Furthermore, we seem to allow a data source to produce its own output partitioning, which can satisfy ClusteredDistribution. This is still very risky from the stateful operator's perspective, but as long as the output partitioning is guaranteed not to change, it's still a great chance to reduce (unnecessary) shuffles.
(Just speaking hypothetically; a stateful operator has to require a specific output partitioning once the state is built, so it's unlikely that we can leverage the partitioning of the data source. We may find a way later, but not now.)
Thanks @HeartSaVioR. LGTM
LGTM
Thanks! Merging to master.
…distribution of stateful operator

What changes were proposed in this pull request?
This PR proposes to add context about the current challenge in fixing the distribution of stateful operators, even though the distribution is sort of "broken" now. This PR addresses the review comment #35419 (comment).

Why are the changes needed?
In SPARK-38124 we figured out an existing long-standing problem in stateful operators, but it is not easy to fix since the fix may break existing queries if it is not carefully designed. Anyone should also be very careful when touching the required distribution. We want to document this explicitly to help others be careful whenever they work around this part of the codebase.

Does this PR introduce any user-facing change?
No.

How was this patch tested?
Code comment only changes.

Closes #35512 from HeartSaVioR/SPARK-38124-followup.
Authored-by: Jungtaek Lim <[email protected]>
Signed-off-by: Jungtaek Lim <[email protected]>
…tioning requirement

What changes were proposed in this pull request?
This is a followup of #35419 (comment), to add a unit test hardening the assumptions on SS partitioning and distribution requirements:
* Check that `HashPartitioning.partitionIdExpression` has exactly the expected format
* Check all different kinds of `Partitioning` against `StatefulOpClusteredDistribution`

Also add a minor comment for `StatefulOpClusteredDistribution`, as `SinglePartition` can also satisfy the distribution.

Why are the changes needed?
Document our assumptions about SS in code as a unit test, so that next time we introduce an intrusive code change, the unit test can save us by failing loudly.

Does this PR introduce any user-facing change?
No.

How was this patch tested?
The added unit test itself.

Closes #35529 from c21/partition-test.
Authored-by: Cheng Su <[email protected]>
Signed-off-by: Jungtaek Lim <[email protected]>
What changes were proposed in this pull request?
This PR revives HashClusteredDistribution and renames it to StatefulOpClusteredDistribution so that the rationale of the distribution is clear from the name. Renaming is safe because this class no longer needs to be a general one - in SPARK-35703 we moved the usages of HashClusteredDistribution out to ClusteredDistribution; stateful operators are the exceptions.

Only HashPartitioning with the same expressions and number of partitions can satisfy StatefulOpClusteredDistribution. That said, we cannot modify HashPartitioning unless we clone HashPartitioning and assign the clone to StatefulOpClusteredDistribution.

This PR documents the expectation of stateful operators on partitioning in the classdoc of StatefulOpClusteredDistribution.

This PR also changes stream-stream join to use StatefulOpClusteredDistribution instead of ClusteredDistribution. This effectively reverts a part of SPARK-35703, which hasn't been shipped in any release. This PR doesn't deal with other stateful operators since that has been a long-standing issue (probably since Spark 2.2.0) and we need a plan for dealing with existing state.

Why are the changes needed?
Spark does not guarantee stable physical partitioning for stateful operators across the query lifetime, and due to the relaxed distribution requirement it is hard to know what the current physical partitioning of the state would be.
(We expect hash partitioning with the grouping keys, but ClusteredDistribution does not "guarantee" that partitioning. It is much more relaxed.)
This PR enforces the physical partitioning of stream-stream join operators to be hash partitioning on the grouping keys, which is our general expectation of state store partitioning.
Does this PR introduce any user-facing change?
No, since SPARK-35703 hasn't been shipped to any release yet.
How was this patch tested?
Existing tests.