[SPARK-21603][SQL] Whole-stage codegen can be much slower than when it is disabled if the generated function is too long #18810
Changes from 18 commits
@@ -572,6 +572,14 @@ object SQLConf {
        "disable logging or -1 to apply no limit.")
      .createWithDefault(1000)

  val WHOLESTAGE_MAX_LINES_PER_FUNCTION = buildConf("spark.sql.codegen.maxLinesPerFunction")
    .internal()
    .doc("The maximum lines of a single Java function generated by whole-stage codegen. " +
      "When the generated function exceeds this threshold, " +
      "whole-stage codegen is deactivated for this subtree of the current query plan.")
    .intConf
    .createWithDefault(1500)
Review discussion on the default value:

- Would it be possible to explain why 1500 is a good value as the default?
- I'm not confident about this default value. Is it too small?
- I tend not to change the current behavior of whole-stage codegen; this might suddenly stop user code from running with whole-stage codegen unintentionally. Shall we make …
- Is this value (…
- I think it applies to other Java programs using the Java HotSpot VM.
- Or maybe …
- @gatorsmile, which do you think is better to use for the default value, 1500 or Int.Max?
- I think that this value depends on what code is generated by whole-stage codegen for each query. In other words, when the Java bytecode per line is larger than 6 (=…
- @eatoncys Let us do it in a more conservative way.
- @kiszk, you're right, it depends on how much bytecode there is per line.
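The arithmetic behind this discussion can be sanity-checked. A sketch (the 8000 figure is HotSpot's default `-XX:HugeMethodLimit`, above which a method is never JIT-compiled; the bytes-per-line figures are the thread's rough assumptions, not measured values):

```scala
// Back-of-the-envelope check of the review discussion above.
// HotSpot refuses to JIT-compile methods whose bytecode exceeds
// -XX:HugeMethodLimit (8000 bytes by default).
val hugeMethodLimitBytes = 8000

// Lines of generated Java source after which an average function would cross
// the JIT limit, for an assumed bytecode-bytes-per-line ratio.
def breakEvenLines(assumedBytesPerLine: Double): Int =
  (hugeMethodLimitBytes / assumedBytesPerLine).toInt
```

At roughly 5.3 bytes of bytecode per line the break-even is about 1500 lines, the proposed default; at 6 bytes per line it drops to about 1333, which is the caveat raised in the thread.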
  val FILES_MAX_PARTITION_BYTES = buildConf("spark.sql.files.maxPartitionBytes")
    .doc("The maximum number of bytes to pack into a single partition when reading files.")
    .longConf
@@ -1014,6 +1022,8 @@ class SQLConf extends Serializable with Logging {

  def loggingMaxLinesForCodegen: Int = getConf(CODEGEN_LOGGING_MAX_LINES)

  def maxLinesPerFunction: Int = getConf(WHOLESTAGE_MAX_LINES_PER_FUNCTION)

  def tableRelationCacheSize: Int =
    getConf(StaticSQLConf.FILESOURCE_TABLE_RELATION_CACHE_SIZE)
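The `ConfigEntry`/`getConf` pairing added above follows a simple pattern: the entry owns its key and default, and the conf object falls back to that default when the key is unset. A toy model (illustrative classes, not Spark's real `ConfigBuilder` machinery):

```scala
// Toy stand-in for a typed config entry: it carries its key and its default.
case class IntConfEntry(key: String, default: Int)

// Toy stand-in for SQLConf: explicit settings win, otherwise the entry's
// default is returned.
class ToyConf {
  private val settings = scala.collection.mutable.Map[String, String]()
  def set(key: String, value: String): Unit = settings(key) = value
  def getConf(entry: IntConfEntry): Int =
    settings.get(entry.key).map(_.toInt).getOrElse(entry.default)
}
```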
@@ -370,6 +370,14 @@ case class WholeStageCodegenExec(child: SparkPlan) extends UnaryExecNode with CodegenSupport {

  override def doExecute(): RDD[InternalRow] = {
    val (ctx, cleanedSource) = doCodeGen()
    if (ctx.isTooLongGeneratedFunction) {
      logWarning("Found too long generated codes and JIT optimization might not work: " +
        "whole-stage codegen disabled for this plan. " +
        "You can change the config spark.sql.codegen.maxLinesPerFunction " +
        "to adjust the function length limit:\n" +
        s"$treeString")
Review discussion:

- This could be very big. Please follow what was done in #18658.
- @gatorsmile, thank you for the review. The treeString does not contain the generated code; it only contains the tree string of the physical plan, like the one below: …
      return child.execute()
    }
Review discussion on testing:

- We need to add a test in which we create a query with a long generated function, and check that whole-stage codegen is disabled for it.
- I think it can be tested by the "max function length of wholestagecodegen" case added in AggregateBenchmark.scala, thanks.
- AggregateBenchmark is more like a benchmark than a test; it won't run every time. We need a test to prevent regressions brought by future changes.
- @viirya, it is hard for me to check whether whole-stage codegen is disabled or not; would you give me some suggestions, thanks.
- We can check if there is a …
- Ok. I'll take a look later. Thanks.
- There are multiple ways to verify it. One solution is using SQL metrics.
- I am fine with your proposed way, but it needs to be simplified.
- @gatorsmile Do you mean …
- Yes.
    // try to compile and fallback if it failed
    try {
      CodeGenerator.compile(cleanedSource)
@@ -20,6 +20,7 @@ package org.apache.spark.sql.execution

import org.apache.spark.sql.{Column, Dataset, Row}
import org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute
import org.apache.spark.sql.catalyst.expressions.{Add, Literal, Stack}
import org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext
import org.apache.spark.sql.execution.aggregate.HashAggregateExec
import org.apache.spark.sql.execution.joins.BroadcastHashJoinExec
import org.apache.spark.sql.execution.joins.SortMergeJoinExec

@@ -149,4 +150,60 @@ class WholeStageCodegenSuite extends SparkPlanTest with SharedSQLContext {
      assert(df.collect() === Array(Row(1), Row(2)))
    }
  }
  def genGroupByCodeGenContext(caseNum: Int): CodegenContext = {
    val caseExp = (1 to caseNum).map { i =>
      s"case when id > $i and id <= ${i + 1} then 1 else 0 end as v$i"
    }.toList
    val keyExp = List(
      "id",
      "(id & 1023) as k1",
      "cast(id & 1023 as double) as k2",
      "cast(id & 1023 as int) as k3")

    val ds = spark.range(10)
Review discussion:

- Also add …
- Ok, I have modified it as you suggested; would you review it again, thanks.
      .selectExpr(keyExp ::: caseExp: _*)
      .groupBy("k1", "k2", "k3")
      .sum()
    val plan = ds.queryExecution.executedPlan

    val wholeStageCodeGenExec = plan.find(p => p match {
      case wp: WholeStageCodegenExec => wp.child match {
        case hp: HashAggregateExec if (hp.child.isInstanceOf[ProjectExec]) => true
        case _ => false
      }
      case _ => false
    })

    assert(wholeStageCodeGenExec.isDefined)
    wholeStageCodeGenExec.get.asInstanceOf[WholeStageCodegenExec].doCodeGen()._1
  }
  test("SPARK-21603 check there is a too long generated function") {
    withSQLConf(SQLConf.WHOLESTAGE_MAX_LINES_PER_FUNCTION.key -> "1500") {
      val ctx = genGroupByCodeGenContext(30)
      assert(ctx.isTooLongGeneratedFunction === true)
    }
  }

  test("SPARK-21603 check there is not a too long generated function") {
    withSQLConf(SQLConf.WHOLESTAGE_MAX_LINES_PER_FUNCTION.key -> "1500") {
      val ctx = genGroupByCodeGenContext(1)
      assert(ctx.isTooLongGeneratedFunction === false)
    }
  }

  test("SPARK-21603 check there is not a too long generated function when threshold is Int.Max") {
    withSQLConf(SQLConf.WHOLESTAGE_MAX_LINES_PER_FUNCTION.key -> Int.MaxValue.toString) {
      val ctx = genGroupByCodeGenContext(30)
      assert(ctx.isTooLongGeneratedFunction === false)
    }
  }

  test("SPARK-21603 check there is a too long generated function when threshold is 0") {
    withSQLConf(SQLConf.WHOLESTAGE_MAX_LINES_PER_FUNCTION.key -> "0") {
      val ctx = genGroupByCodeGenContext(1)
      assert(ctx.isTooLongGeneratedFunction === true)
    }
  }
}
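The tests above rely on withSQLConf to scope each threshold override to a single test. A generic sketch of that set/run/restore pattern (a stand-in, not Spark's implementation):

```scala
import scala.collection.mutable

// Override a key for the duration of the body, then restore the previous
// state, so one test's threshold ("0", "1500", Int.MaxValue.toString)
// cannot leak into the next test.
def withTempConf[T](conf: mutable.Map[String, String], key: String, value: String)(body: => T): T = {
  val previous = conf.get(key)
  conf(key) = value
  try body
  finally previous match {
    case Some(v) => conf(key) = v   // restore the old value
    case None    => conf -= key     // the key was unset before; unset it again
  }
}
```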
Review discussion:

- Add one more empty line.
- Ok, added, thanks.