[SPARK-24924][SQL] Add mapping for built-in Avro data source #21878
Conversation
cc @gengliangwang and @gatorsmile
Yea, I was looking at exactly the same problem. LGTM
Test build #93575 has finished for PR 21878 at commit
Test build #93577 has finished for PR 21878 at commit
@@ -635,12 +637,6 @@ object DataSource extends Logging {
"Hive built-in ORC data source must be used with Hive support enabled. " +
"Please use the native ORC data source by setting 'spark.sql.orc.impl' to " +
"'native'")
} else if (provider1.toLowerCase(Locale.ROOT) == "avro" ||
provider1 == "com.databricks.spark.avro") {
throw new AnalysisException(
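The mapping this PR adds can be pictured with a small, self-contained sketch. The map shape and the built-in class names below are illustrative assumptions based on this discussion, not the exact `DataSource` internals:

```scala
// Hedged sketch of the backward-compatibility idea in this PR: legacy
// third-party provider names map to built-in implementations. The entries
// here are illustrative, not the exact contents of DataSource's map.
val backwardCompatibilityMap: Map[String, String] = Map(
  "com.databricks.spark.csv"  -> "org.apache.spark.sql.execution.datasources.csv.CSVFileFormat",
  "com.databricks.spark.avro" -> "org.apache.spark.sql.avro.AvroFileFormat"
)

// Resolve a user-supplied provider name, falling back to the name itself
// when no backward-compatibility entry exists.
def lookupProvider(provider: String): String =
  backwardCompatibilityMap.getOrElse(provider, provider)
```

With this in place, a user who still writes `format("com.databricks.spark.avro")` is transparently routed to the built-in Avro data source instead of hitting an error.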
Should we show a message to the user about loading the built-in spark-avro jar?
Actually, I think it would be okay. If the user provides `avro`, it will show an error like:
17/05/10 09:47:44 WARN DataSource: Multiple sources found for csv (org.apache.spark.sql.execution.datasources.csv.CSVFileFormat,
com.databricks.spark.csv.DefaultSource15), defaulting to the internal datasource (org.apache.spark.sql.execution.datasources.csv.CSVFileFormat).
in most cases (see #17916)
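The "multiple sources found" behavior in that warning can be sketched roughly as follows. This is a simplified illustration of the defaulting rule, not Spark's actual lookup code, and the package-prefix check is an assumption:

```scala
// Simplified illustration of the behavior in the warning above: when both an
// internal and an external implementation match a short name, warn and
// default to the internal data source. The prefix test is illustrative.
def pickAmongMultiple(shortName: String, matches: Seq[String]): String = {
  val internal = matches.filter(_.startsWith("org.apache.spark.sql.execution.datasources."))
  if (matches.size > 1 && internal.size == 1) {
    println(s"Multiple sources found for $shortName (${matches.mkString(", ")}), " +
      s"defaulting to the internal datasource (${internal.head}).")
    internal.head
  } else {
    matches.head
  }
}
```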
No, I mean by default the avro package is not loaded. E.g. if we start spark-shell without loading the jar and use format("avro"), it will show the error "Failed to find data source: avro. Please find an Avro package at http://spark.apache.org/third-party-projects.html".
Eh, if users were using the external Avro package, they will likely hit that error if they upgrade Spark directly.
Otherwise, users will see the release note that the Avro package is included in 2.4.0, and they will not provide this jar.
If users miss this release note, they will try to explicitly provide the third-party jar, which will give the error message above.
FWIW, if it's a fully qualified path, the third-party jar will still be used in theory.
Did I misunderstand or miss something maybe?
Or is the Avro jar meant to be separately distributed? I thought it'd be included within Spark.
I totally agree with the mapping; we should do it.
The comment here is about when Spark can't find any Avro package: we should show a message about loading the spark-avro jar (org.apache.spark.sql.avro).
Different from CSV, the spark-avro package is not loaded by default within Spark (at least when I tried spark-shell).
Ah, I see. Okay. Then, how about this: I assume the documentation about Avro will be added to Spark for 2.4.0 soon. When it's done, we add a message here for Avro, like "please see the documentation to use Spark's Avro package"?
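The kind of provider-specific hint being proposed could look like the following sketch. The exact wording, and the idea of special-casing `avro` in the "failed to find" path, are assumptions from this thread, not the final message Spark ships:

```scala
import java.util.Locale

// Hedged sketch of a provider-specific "failed to find" hint, as proposed
// in this thread. The message wording is hypothetical.
def failedToFindHint(provider: String): String =
  provider.toLowerCase(Locale.ROOT) match {
    case "avro" =>
      s"Failed to find data source: $provider. Avro is built in but not " +
      "loaded by default; please see the Spark documentation on deploying " +
      "the Avro data source."
    case _ =>
      s"Failed to find data source: $provider. Please find packages at " +
      "http://spark.apache.org/third-party-projects.html"
  }
```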
Looks like the same thing could already happen with Kafka too.
Thanks @gengliangwang for clarifying this.
LGTM.
Merged to master.
Thank you, @gengliangwang and @HyukjinKwon. For the message, I follow the existing way of using Apache Spark
@gengliangwang. I assumed that you are the best person to add
This PR aims at the following. 1. Like the `com.databricks.spark.csv` mapping, we had better map `com.databricks.spark.avro` to the built-in Avro data source. 2. Remove the incorrect error message, `Please find an Avro package at ...`. Pass the newly added tests. Author: Dongjoon Hyun <[email protected]> Closes apache#21878 from dongjoon-hyun/SPARK-24924. (cherry picked from commit 58353d7)
What changes were proposed in this pull request?
This PR aims at the following.
1. Like the `com.databricks.spark.csv` mapping, we had better map `com.databricks.spark.avro` to the built-in Avro data source.
2. Remove the incorrect error message, `Please find an Avro package at ...`.
How was this patch tested?
Pass the newly added tests.