[SPARK-17409] [SQL] Do Not Optimize Query in CTAS More Than Once #15048
Conversation
Test build #65217 has finished for PR 15048 at commit
```diff
@@ -37,7 +38,9 @@ case class CreateTable(tableDesc: CatalogTable, mode: SaveMode, query: Option[Lo

   override def output: Seq[Attribute] = Seq.empty[Attribute]

-  override def children: Seq[LogicalPlan] = query.toSeq
+  override def children: Seq[LogicalPlan] = Seq.empty[LogicalPlan]
```
extend LeafNode?
Yeah. : )
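For context, here is a minimal self-contained sketch of the `LeafNode` idea. The types below (`LogicalPlan`, `LeafNode`, `transform`) are simplified stand-ins for illustration, not Spark's actual classes: a leaf node has no children, so tree transformations, and therefore optimizer rules, never descend into the stored query.

```scala
// Toy stand-ins, for illustration only: not Spark's real classes.
abstract class LogicalPlan {
  def children: Seq[LogicalPlan]
  def withNewChildren(newChildren: Seq[LogicalPlan]): LogicalPlan

  // transform visits this node and recurses into `children` only,
  // so anything kept outside `children` is invisible to rules
  def transform(rule: PartialFunction[LogicalPlan, LogicalPlan]): LogicalPlan = {
    val afterChildren = withNewChildren(children.map(_.transform(rule)))
    rule.applyOrElse(afterChildren, identity[LogicalPlan])
  }
}

abstract class LeafNode extends LogicalPlan {
  final override def children: Seq[LogicalPlan] = Nil
  override def withNewChildren(newChildren: Seq[LogicalPlan]): LogicalPlan = this
}

// As a leaf, CreateTable keeps `query` where `transform` cannot reach it:
// optimizer rules can no longer rewrite the stored query.
case class CreateTable(tableName: String, query: Option[LogicalPlan]) extends LeafNode
```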
@gatorsmile so should we check all commands? It might also be an idea to have
@hvanhovell Sure, will do it. Thanks!
Test build #65233 has finished for PR 15048 at commit
```diff
@@ -68,7 +68,7 @@ class ResolveDataSource(sparkSession: SparkSession) extends Rule[LogicalPlan] {
 /**
  * Preprocess some DDL plans, e.g. [[CreateTable]], to do some normalization and checking.
```
We should update the comment to say that this rule will also analyze the query. (We may also want to update the rule name.)
Sure, let me do it now. Thanks!
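A hedged sketch of the rule shape under discussion, again with hypothetical simplified types rather than Spark's real `Rule[LogicalPlan]` machinery: since the query is no longer a child, no other analyzer rule will reach it, so the preprocessing rule analyzes it explicitly while leaving it unoptimized.

```scala
// Hypothetical simplified types, for illustration only.
sealed trait Plan { def resolved: Boolean }
case class UnresolvedQuery(sql: String) extends Plan { val resolved = false }
case class AnalyzedQuery(sql: String) extends Plan { val resolved = true }
case class CreateTable(table: String, query: Option[Plan]) extends Plan {
  // the DDL node is resolved only once its embedded query is
  def resolved: Boolean = query.forall(_.resolved)
}

object PreprocessDDL {
  // stand-in for running the full analyzer on the embedded query
  private def analyze(p: Plan): Plan = p match {
    case UnresolvedQuery(sql) => AnalyzedQuery(sql)
    case other                => other
  }

  // Analyze the query here, during analysis, and nowhere else:
  // it ends up analyzed but never optimized.
  def apply(plan: Plan): Plan = plan match {
    case c @ CreateTable(_, Some(q)) if !q.resolved => c.copy(query = Some(analyze(q)))
    case other                                      => other
  }
}
```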
Test build #65283 has finished for PR 15048 at commit
thanks, merging to master!
### What changes were proposed in this pull request?

As explained in apache#14797:

> Some analyzer rules make assumptions about logical plans, and the optimizer may break those assumptions; we should not pass an optimized query plan into QueryExecution (it will be analyzed again), otherwise we may hit some weird bugs. For example, we have a rule for decimal calculation that promotes the precision before binary operations and uses PromotePrecision as a placeholder to indicate that the rule should not apply twice. But an optimizer rule removes this placeholder, breaking the assumption; the rule is then applied twice and causes a wrong result.

We should not optimize the query in CTAS more than once. For example,

```Scala
spark.range(99, 101).createOrReplaceTempView("tab1")
val sqlStmt = "SELECT id, cast(id as long) * cast('1.0' as decimal(38, 18)) as num FROM tab1"
sql(s"CREATE TABLE tab2 USING PARQUET AS $sqlStmt")
checkAnswer(spark.table("tab2"), sql(sqlStmt))
```

Before this PR, the results do not match:

```
== Results ==
!== Correct Answer - 2 ==       == Spark Answer - 2 ==
![100,100.000000000000000000]   [100,null]
 [99,99.000000000000000000]     [99,99.000000000000000000]
```

After this PR, the results match:

```
+---+----------------------+
|id |num                   |
+---+----------------------+
|99 |99.000000000000000000 |
|100|100.000000000000000000|
+---+----------------------+
```

In this PR, we do not treat the `query` in CTAS as a child. Thus, the `query` will not be optimized when optimizing the CTAS statement. However, we still need to analyze it to normalize and verify the CTAS in the Analyzer. We do that in the analyzer rule `PreprocessDDL`, because so far only this rule needs the analyzed plan of the `query`.

### How was this patch tested?

Added a test.

Author: gatorsmile <[email protected]>

Closes apache#15048 from gatorsmile/ctasOptimized.
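To make the failure mode concrete, here is a toy Scala model of the mechanism described above. All names are hypothetical; this is not Spark's actual `DecimalPrecision` code. An analyzer rule that must run exactly once protects itself with a placeholder, an optimizer rule strips the placeholder, and re-analyzing the optimized plan then applies the analyzer rule a second time, corrupting the result in the same way as the `[100,null]` row above.

```scala
// All names hypothetical; a toy model, not Spark's actual DecimalPrecision.
object DoubleAnalysisDemo {
  sealed trait Expr
  case class Dec(precision: Int, scale: Int) extends Expr
  case class Promoted(child: Expr) extends Expr // placeholder, like PromotePrecision
  case object NullResult extends Expr           // overflow, like the null above

  val MaxPrecision = 38

  // "Analyzer rule": widens once, then wraps in Promoted so it must not rerun.
  def promote(e: Expr): Expr = e match {
    case Dec(p, s) =>
      if (p + 18 > MaxPrecision) NullResult // a second promotion overflows
      else Promoted(Dec(p + 18, s + 18))
    case other => other
  }

  // "Optimizer rule" that strips the placeholder, breaking the assumption.
  def strip(e: Expr): Expr = e match {
    case Promoted(child) => child
    case other           => other
  }

  def main(args: Array[String]): Unit = {
    val analyzed  = promote(Dec(20, 0)) // Promoted(Dec(38, 18)): correct
    val optimized = strip(analyzed)     // Dec(38, 18), now unprotected
    println(promote(optimized))         // NullResult: the wrong [100,null] row
  }
}
```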
@gatorsmile We should also backport this to branch 2.0, right?
@gatorsmile Also, does it affect
Yeah, we should backport it to 2.0. And yeah, it affects both data source tables and Hive serde tables. To fix it in Spark 2.0, we need to rewrite the fix, since Spark 2.0 does not have a unified logical plan, afaik. Let me submit a PR to backport it.
Thanks! btw, does this patch cover hive tables?
Also, another good test for this is
Without this fix, you will have an exception like
Also, can we add a test for hive tables?
Yeah, based on my understanding, it should cover Hive serde tables. I will submit a PR to make sure of it and also include the test case you provided above. Thank you!
…15048

### What changes were proposed in this pull request?

This PR is to backport #15048 and #15459. However, in 2.0, we do not have a unified logical node `CreateTable`, and the analyzer rule `PreWriteCheck` is also different. To minimize the code changes, this PR adds a new rule, `AnalyzeCreateTableAsSelect`. Please treat it as a new PR to review. Thanks!

As explained in #14797:

> Some analyzer rules make assumptions about logical plans, and the optimizer may break those assumptions; we should not pass an optimized query plan into QueryExecution (it will be analyzed again), otherwise we may hit some weird bugs. For example, we have a rule for decimal calculation that promotes the precision before binary operations and uses PromotePrecision as a placeholder to indicate that the rule should not apply twice. But an optimizer rule removes this placeholder, breaking the assumption; the rule is then applied twice and causes a wrong result.

We should not optimize the query in CTAS more than once. For example,

```Scala
spark.range(99, 101).createOrReplaceTempView("tab1")
val sqlStmt = "SELECT id, cast(id as long) * cast('1.0' as decimal(38, 18)) as num FROM tab1"
sql(s"CREATE TABLE tab2 USING PARQUET AS $sqlStmt")
checkAnswer(spark.table("tab2"), sql(sqlStmt))
```

Before this PR, the results do not match:

```
== Results ==
!== Correct Answer - 2 ==       == Spark Answer - 2 ==
![100,100.000000000000000000]   [100,null]
 [99,99.000000000000000000]     [99,99.000000000000000000]
```

After this PR, the results match:

```
+---+----------------------+
|id |num                   |
+---+----------------------+
|99 |99.000000000000000000 |
|100|100.000000000000000000|
+---+----------------------+
```

In this PR, we do not treat the `query` in CTAS as a child. Thus, the `query` will not be optimized when optimizing the CTAS statement. However, we still need to analyze it to normalize and verify the CTAS in the Analyzer. We do that in the analyzer rule `PreprocessDDL`, because so far only this rule needs the analyzed plan of the `query`.

### How was this patch tested?

Author: gatorsmile <[email protected]>

Closes #15502 from gatorsmile/ctasOptimize2.0.
… Once

### What changes were proposed in this pull request?

This follow-up PR addresses the [comment](apache#15048). We added two test cases based on the suggestion from yhuai. One is a new test case that uses the `saveAsTable` API to create a data source table; the other is for CTAS on a Hive serde table.

Note: No need to backport this PR to 2.0. Will submit a new PR to backport the whole fix with new test cases to Spark 2.0.

### How was this patch tested?

N/A

Author: gatorsmile <[email protected]>

Closes apache#15459 from gatorsmile/ctasOptimizedTestCases.
## What changes were proposed in this pull request?

We could get incorrect results by running DecimalPrecision twice. This PR resolves the original issue found in apache#15048 and apache#14797. After this PR, it becomes easier to change it back to using `children` instead of `innerChildren`.

## How was this patch tested?

The existing test.

Author: gatorsmile <[email protected]>

Closes apache#20000 from gatorsmile/keepPromotePrecision.
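For context on the `children` versus `innerChildren` trade-off mentioned here: `children` is what tree transformations recurse into, while `innerChildren` is typically only consulted when rendering the plan, e.g. for explain-style output. A minimal sketch under that assumption, using simplified stand-in types rather than Spark's actual `TreeNode`:

```scala
// Simplified stand-in, not Spark's TreeNode: shows the display-only role
// of innerChildren versus the transformable children.
abstract class TreeNode {
  def children: Seq[TreeNode] = Nil      // recursed into by transformations
  def innerChildren: Seq[TreeNode] = Nil // rendered in plan strings only

  def nodeName: String = getClass.getSimpleName.stripSuffix("$")

  // Explain-style rendering walks both, so the query stays visible
  // in output even though no transformation can rewrite it.
  def treeString(indent: Int = 0): String =
    ((" " * indent + nodeName) +:
      (children ++ innerChildren).map(_.treeString(indent + 2))).mkString("\n")
}

case class Query() extends TreeNode
case class CreateTableNode(query: Option[TreeNode]) extends TreeNode {
  override def innerChildren: Seq[TreeNode] = query.toSeq
}

// CreateTableNode(Some(Query())).treeString() prints both nodes,
// but a transform over `children` would never touch Query.
```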