Corrected windows scripts to match their shell counterparts. #239
Conversation
Can one of the admins verify this patch?
Jenkins, test this please
@doctapp good catches, thanks!
Merged build triggered.
Merged build started.
Merged build finished.
One or more automated tests failed
Jenkins, test this please.
Jenkins, test this.
I am merging this in, since this change doesn't affect any of the tests that Jenkins runs.
I merged it in, but then realized that this was unnecessary. This PR was against the scala-2.10 branch, which we don't use any more. In fact, the sbt/sbt.cmd script no longer exists in Spark master, as we dropped support for building on Windows, and the Scala version is correct in spark-class2.cmd on master.
Applied from incubator-spark:scala-2.10

Author: Martin Tapp <[email protected]>

Closes #239 from doctapp/scala-2.10 and squashes the following commits:

4b6a983 [Martin Tapp] Corrected windows scripts to match their shell counterparts.
Hi @doctapp, the scala-2.10 branch is no longer maintained. Do you mind closing this?
## What changes were proposed in this pull request?

Ports the fix in apache@1a3f5f8 to Kafka 0.8, since the code for Kafka 0.10 and 0.8 is identical.

## How was this patch tested?

The unit tests in open source were also ported.

Author: Burak Yavuz <[email protected]>

Closes apache#239 from brkyvz/SPARK-19517.
…red-by-default clusters (apache#239)
…ck (apache#239)

* Set datastore version for terraform trove job by querying devstack
…n't be optimized

### What changes were proposed in this pull request?

Eg:

```scala
sql("create view t(c1, c2) as values (0, 1), (0, 2), (1, 2)")
sql("select c1, c2, (select count(*) cnt from t t2 where t1.c1 = t2.c1 " +
  "having cnt = 0) from t t1").show()
```

The error thrown:

```
[PLAN_VALIDATION_FAILED_RULE_IN_BATCH] Rule org.apache.spark.sql.catalyst.optimizer.RewriteCorrelatedScalarSubquery in batch Operator Optimization before Inferring Filters generated an invalid plan: The plan becomes unresolved:
'Project [toprettystring(c1#224, Some(America/Los_Angeles)) AS toprettystring(c1)#238, toprettystring(c2#225, Some(America/Los_Angeles)) AS toprettystring(c2)#239, toprettystring(cnt#246L, Some(America/Los_Angeles)) AS toprettystring(scalarsubquery(c1))#240]
+- 'Project [c1#224, c2#225, CASE WHEN isnull(alwaysTrue#245) THEN 0 WHEN NOT (cnt#222L = 0) THEN null ELSE cnt#222L END AS cnt#246L]
   +- 'Join LeftOuter, (c1#224 = c1#224#244)
      :- Project [col1#226 AS c1#224, col2#227 AS c2#225]
      :  +- LocalRelation [col1#226, col2#227]
      +- Project [cnt#222L, c1#224#244, cnt#222L, c1#224, true AS alwaysTrue#245]
         +- Project [cnt#222L, c1#224 AS c1#224#244, cnt#222L, c1#224]
            +- Aggregate [c1#224], [count(1) AS cnt#222L, c1#224]
               +- Project [col1#228 AS c1#224]
                  +- LocalRelation [col1#228, col2#229]

The previous plan:
Project [toprettystring(c1#224, Some(America/Los_Angeles)) AS toprettystring(c1)#238, toprettystring(c2#225, Some(America/Los_Angeles)) AS toprettystring(c2)#239, toprettystring(scalar-subquery#223 [c1#224 && (c1#224 = c1#224#244)], Some(America/Los_Angeles)) AS toprettystring(scalarsubquery(c1))#240]
:  +- Project [cnt#222L, c1#224 AS c1#224#244]
:     +- Filter (cnt#222L = 0)
:        +- Aggregate [c1#224], [count(1) AS cnt#222L, c1#224]
:           +- Project [col1#228 AS c1#224]
:              +- LocalRelation [col1#228, col2#229]
+- Project [col1#226 AS c1#224, col2#227 AS c2#225]
   +- LocalRelation [col1#226, col2#227]
```

The cause of the error is an unresolved expression in the `Join` node generated by subquery decorrelation: `duplicateResolved` on the `Join` is false, meaning the join's left and right sides output the same `Attribute` (in this example, `c1#224`). The right-side `c1#224` attribute is produced from the HAVING inputs, which are wrong. The problem only occurs when the subquery contains a HAVING clause. This PR also includes some code formatting fixes.

### Why are the changes needed?

Fix a subquery bug on a single table when a HAVING clause is used.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Added a new test.

Closes #41347 from Hisoka-X/SPARK-43838_subquery_having.

Lead-authored-by: Jia Fan <[email protected]>
Co-authored-by: Wenchen Fan <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
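The invariant the validator trips on, `duplicateResolved`, requires the left and right children of a `Join` to produce disjoint attribute IDs. A minimal self-contained sketch of that check, using hypothetical simplified `Attribute` and `Join` types (not Catalyst's real classes), illustrates why reusing `c1#224` on both sides leaves the plan unresolved while aliasing to a fresh ID (`c1#224#244`) does not:

```scala
// Hypothetical, simplified model of Catalyst attributes; the real
// Spark classes carry much more state, but the ID check is the same idea.
case class Attribute(name: String, exprId: Long)

case class Join(leftOutput: Set[Attribute], rightOutput: Set[Attribute]) {
  // duplicateResolved holds when the two sides share no expression IDs.
  def duplicateResolved: Boolean =
    leftOutput.map(_.exprId).intersect(rightOutput.map(_.exprId)).isEmpty
}

object Demo extends App {
  val c1 = Attribute("c1", 224)
  // The buggy decorrelation reused c1#224 on both join sides...
  val bad = Join(Set(c1), Set(c1))
  // ...whereas a correct rewrite aliases the right side to a fresh ID.
  val good = Join(Set(c1), Set(Attribute("c1", 244)))
  println(bad.duplicateResolved)  // false -> plan stays unresolved
  println(good.duplicateResolved) // true
}
```

In Catalyst, the fresh ID comes from aliasing (`c1#224 AS c1#224#244`), which is exactly what the corrected rewrite in this PR produces for the HAVING inputs.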
Applied from incubator-spark:scala-2.10