[SPARK-23179][SQL] Support option to throw exception if overflow occurs during Decimal arithmetic #20350
@@ -1074,6 +1074,16 @@ object SQLConf {
      .booleanConf
      .createWithDefault(true)

  val DECIMAL_OPERATIONS_NULL_ON_OVERFLOW =
    buildConf("spark.sql.decimalOperations.nullOnOverflow")
overflow can happen with non-decimal operations, do we need a new config? cc @JoshRosen

Thanks for taking a look at this @cloud-fan! Yes, that case (non-decimal) is handled in #21599. I'd say that, in the non-decimal case, the situation is pretty different. Overflow in decimal operations is already handled by Spark today, by turning the overflowing result into NULL. In non-decimal operations we instead return a wrong value (the Java way). So IMHO the current non-decimal behavior doesn't make any sense at all (considering this is SQL and not a low-level language like Java/Scala), and keeping that behavior makes no sense either (we already discussed this in that PR, actually).

A DB does not have to follow the SQL standard completely in every corner. The current behavior in Spark is by design and I don't think it's nonsense. I do agree that it's a valid requirement that some users want overflow to fail, but it should be protected by a config. My question is whether we need one config for overflow, or two configs for decimal and non-decimal.
I am sorry, but I don't really agree with you on this. I see the discussion is a bit OT, but I'd just like to explain the reasons for my opinion. SQL is a declarative language, and here we are coupling the result/behavior to the specific execution language we are using. Spark is cross-language, but for arithmetic operations overflow works in the very peculiar way of the language we use, which is:

So there is no Spark user other than Scala/Java ones who might understand the behavior Spark has in those cases. Sorry for being a bit OT, anyway.
Yes, this is the main point here. IMHO, I'd prefer 2 configs, because when the config is turned off the behavior is completely different: in one case it returns null, in the other we return wrong results. But I also see the value in reducing the number of configs as much as possible, since their number is already pretty big. So I'd prefer 2 configs, but if you and the community think 1 is better, I can update the PR to make this config more generic. Thanks for your feedback and the discussion!

For now, I think separate flags are okay. Here's why:

I'm interested in whichever option allows us to make incremental progress by getting this merged (even if flagged off by default) so that we can rely on this functionality being available in 3.x instead of having to maintain it indefinitely in our own fork (with all of the associated long-term maintenance and testing burdens).

One follow-up question regarding flag naming: is "overflow" the most precise term for the change made here? Or does this flag also change behavior in precision-loss scenarios? Maybe I'm getting tripped up on terminology here, since insufficient precision to represent small fractional quantities is essentially an "overflow" of the digit space reserved to represent the fractional part.

Thanks for your comments @JoshRosen.
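As an aside for readers, here is a small, self-contained Scala sketch (not part of the PR; the object name is made up) illustrating the asymmetry discussed above: integral overflow silently wraps around on the JVM, while a decimal value whose integral digits do not fit the target precision is what Spark currently maps to NULL.

object OverflowContrast {
  def main(args: Array[String]): Unit = {
    // Integral overflow: the JVM wraps around and yields a "wrong" value.
    val wrapped = Long.MaxValue + 1L // == Long.MinValue
    // Decimal "overflow": the value has more integral digits than DECIMAL(38, 2)
    // can hold; Spark currently turns such results into NULL.
    val tooBig = BigDecimal("1e40")
    val integralDigits = tooBig.precision - tooBig.scale // 41 digits left of the point
    val fitsDecimal38_2 = integralDigits <= 38 - 2
    println(s"wrapped = $wrapped, fits DECIMAL(38,2): $fitsDecimal38_2")
  }
}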
      .internal()
      .doc("When true (default), if an overflow on a decimal occurs, then NULL is returned. " +
        "Spark's older versions and Hive behave in this way. If turned to false, SQL ANSI 2011 " +
        "specification, will be followed instead: an arithmetic exception is thrown. This is " +
        "what most of the SQL databases do.")
Tiny nit: "If turned to false, SQL ANSI 2011 specification, will be followed instead" should be "If turned to false, SQL ANSI 2011 specification will be followed instead" (no comma after "specification").
      .booleanConf
      .createWithDefault(true)

  val SQL_STRING_REDACTION_PATTERN =
    ConfigBuilder("spark.sql.redaction.string.regex")
      .doc("Regex to decide which parts of strings produced by Spark contain sensitive " +
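As a quick orientation, here is a minimal sketch of how this new entry could be read at runtime, assuming this PR is applied (SQLConf.get resolves the active session's configuration; the snippet itself is illustrative and not part of the change):

import org.apache.spark.sql.internal.SQLConf

// Read the flag; with the default (true), decimal overflow keeps producing NULL.
def nullOnOverflow: Boolean =
  SQLConf.get.getConf(SQLConf.DECIMAL_OPERATIONS_NULL_ON_OVERFLOW)

The PR also adds the convenience accessor decimalOperationsNullOnOverflow, shown in the next hunk.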
@@ -1453,6 +1463,8 @@ class SQLConf extends Serializable with Logging {

  def decimalOperationsAllowPrecisionLoss: Boolean = getConf(DECIMAL_OPERATIONS_ALLOW_PREC_LOSS)

  def decimalOperationsNullOnOverflow: Boolean = getConf(DECIMAL_OPERATIONS_NULL_ON_OVERFLOW)

  def continuousStreamingExecutorQueueSize: Int = getConf(CONTINUOUS_STREAMING_EXECUTOR_QUEUE_SIZE)

  def continuousStreamingExecutorPollIntervalMs: Long =
@@ -21,6 +21,7 @@ import java.lang.{Long => JLong}
import java.math.{BigInteger, MathContext, RoundingMode}

import org.apache.spark.annotation.InterfaceStability
import org.apache.spark.internal.Logging
import org.apache.spark.sql.AnalysisException

/**

@@ -32,7 +33,7 @@ import org.apache.spark.sql.AnalysisException
 * - Otherwise, the decimal value is longVal / (10 ** _scale)
 */
@InterfaceStability.Unstable
final class Decimal extends Ordered[Decimal] with Serializable {
final class Decimal extends Ordered[Decimal] with Serializable with Logging {
  import org.apache.spark.sql.types.Decimal._

  private var decimalVal: BigDecimal = null
@@ -237,14 +238,26 @@ final class Decimal extends Ordered[Decimal] with Serializable {
  /**
   * Create new `Decimal` with given precision and scale.
   *
   * @return a non-null `Decimal` value if successful or `null` if overflow would occur.
   * @return a non-null `Decimal` value if successful. Otherwise, if `nullOnOverflow` is true, null
   *         is returned; if `nullOnOverflow` is false, an `ArithmeticException` is thrown.
   */
  private[sql] def toPrecision(
      precision: Int,
      scale: Int,
      roundMode: BigDecimal.RoundingMode.Value = ROUND_HALF_UP): Decimal = {
      roundMode: BigDecimal.RoundingMode.Value = ROUND_HALF_UP,
      nullOnOverflow: Boolean = true): Decimal = {
    val copy = clone()
    if (copy.changePrecision(precision, scale, roundMode)) copy else null
    if (copy.changePrecision(precision, scale, roundMode)) {
      copy
    } else {
      val message = s"$toDebugString cannot be represented as Decimal($precision, $scale)."
      if (nullOnOverflow) {
        logWarning(s"$message NULL is returned.")
I am not sure if we should log this message. If we hit this often we'll end up with huge logs.

If we hit it often, the result we get is quite useless. I added it only to notify the user of something which is an unexpected/undesired situation and currently happens silently. I think it is bad that the user cannot tell whether a NULL is the result of an operation involving NULLs or the result of an overflow.

I agree that a result becomes less useful if we return nulls often. My problem is more that if we process a million non-convertible decimals we log the same message a million times, which is going to cause a significant regression. Moreover, this is logged on the executor, and an end user typically does not look at those logs (there is also no reason to do so, since the job does not throw an error). My suggestion would be to not log at all, or to log just once. I prefer not to log at all.

I see your point, and I agree with you. But I wanted to keep some trace of what was happening. What about using DEBUG as the log level? In that case most of the time we are not logging anything, but if we want to check whether an overflow is happening we can. What do you think?

I am ok with using debug/trace level logging. Can you make sure we do not construct the message unless we are logging or throwing the exception (changing
        null
      } else {
        throw new ArithmeticException(message)
      }
    }
  }

  /**
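To make the logging concern discussed above concrete, here is a hedged sketch of what the agreed-upon change could look like: debug-level logging and a lazily built message. It mirrors the toPrecision method shown in this diff and assumes it sits inside the Decimal class with Logging mixed in; it is not a verbatim copy of the final code.

  private[sql] def toPrecision(
      precision: Int,
      scale: Int,
      roundMode: BigDecimal.RoundingMode.Value = ROUND_HALF_UP,
      nullOnOverflow: Boolean = true): Decimal = {
    val copy = clone()
    if (copy.changePrecision(precision, scale, roundMode)) {
      copy
    } else {
      // `def` keeps the happy path allocation-free: the string is only built
      // when we actually log (debug enabled, via logDebug's by-name argument) or throw.
      def message = s"$toDebugString cannot be represented as Decimal($precision, $scale)."
      if (nullOnOverflow) {
        logDebug(s"$message NULL is returned.")
        null
      } else {
        throw new ArithmeticException(message)
      }
    }
  }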
@@ -49,7 +49,6 @@ select 1e35 / 0.1;

-- arithmetic operations causing a precision loss are truncated
select 123456789123456789.1234567890 * 1.123456789123456789;
select 0.001 / 9876543210987654321098765432109876543.2
I think it is missing a `;` here.

yes, unfortunately I missed it somehow previously...
-- return NULL instead of rounding, according to old Spark versions' behavior
set spark.sql.decimalOperations.allowPrecisionLoss=false;
@@ -75,6 +74,30 @@ select 1e35 / 0.1;

-- arithmetic operations causing a precision loss return NULL
select 123456789123456789.1234567890 * 1.123456789123456789;
select 0.001 / 9876543210987654321098765432109876543.2

-- throw an exception instead of returning NULL, according to SQL ANSI 2011
set spark.sql.decimalOperations.nullOnOverflow=false;

-- test decimal operations
select id, a+b, a-b, a*b, a/b from decimals_test order by id;

-- test operations between decimals and constants
select id, a*10, b/10 from decimals_test order by id;

-- test operations on constants
select 10.3 * 3.0;
select 10.3000 * 3.0;
select 10.30000 * 30.0;
select 10.300000000000000000 * 3.000000000000000000;
select 10.300000000000000000 * 3.0000000000000000000;

-- arithmetic operations causing an overflow throw exception
select (5e36 + 0.1) + 5e36;
select (-4e36 - 0.1) - 7e36;
select 12345678901234567890.0 * 12345678901234567890.0;
select 1e35 / 0.1;

-- arithmetic operations causing a precision loss throw exception
select 123456789123456789.1234567890 * 1.123456789123456789;

drop table decimals_test;
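For readers following along outside of the SQL test file, here is a hedged Scala sketch of the same behavior from an application, assuming a SparkSession named spark and this PR applied; depending on where the expression is evaluated, the ArithmeticException may arrive wrapped in another exception, so the catch below is deliberately broad.

// Default behaviour: decimal overflow yields NULL.
spark.conf.set("spark.sql.decimalOperations.nullOnOverflow", "true")
spark.sql("select 12345678901234567890.0 * 12345678901234567890.0").show()

// ANSI-like behaviour: the same query now fails instead of returning NULL.
spark.conf.set("spark.sql.decimalOperations.nullOnOverflow", "false")
try {
  spark.sql("select 12345678901234567890.0 * 12345678901234567890.0").show()
} catch {
  case e: Throwable => println(s"overflow surfaced as an error: ${e.getMessage}")
}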
Why are you not just calling Decimal.toPrecision here? There seems to be very little value in code generating this (no specialization).