
[SPARK-24924][SQL] Add mapping for built-in Avro data source #21878

Closed
wants to merge 2 commits

Conversation

dongjoon-hyun
Member

What changes were proposed in this pull request?

This PR aims to do the following:

  1. Like the `com.databricks.spark.csv` mapping, we had better map `com.databricks.spark.avro` to the built-in Avro data source.
  2. Remove the incorrect error message, `Please find an Avro package at ...`.
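The alias in item 1 can be sketched as a lookup table, the way legacy CSV provider names already resolve to the built-in source. This is a minimal illustration, not Spark's actual internals: the object and method names here are hypothetical, and the target class names are as of Spark 2.4.

```scala
// Hypothetical sketch of a backward-compatibility mapping for provider names.
// The built-in implementation class names are as of Spark 2.4.
object ProviderMapping {
  private val backwardCompatibilityMap: Map[String, String] = Map(
    "com.databricks.spark.csv"  -> "org.apache.spark.sql.execution.datasources.csv.CSVFileFormat",
    "com.databricks.spark.avro" -> "org.apache.spark.sql.avro.AvroFileFormat"
  )

  // Fall through to the original provider string when no alias matches,
  // so fully qualified third-party class names keep working.
  def lookup(provider: String): String =
    backwardCompatibilityMap.getOrElse(provider, provider)
}
```

With a table like this, `format("com.databricks.spark.avro")` transparently resolves to the built-in source, while an unrelated provider string passes through unchanged.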

How was this patch tested?

Pass the newly added tests.

@dongjoon-hyun
Member Author

cc @gengliangwang and @gatorsmile

@HyukjinKwon left a comment


Yea, I was looking into exactly the same problem. LGTM

@SparkQA

SparkQA commented Jul 26, 2018

Test build #93575 has finished for PR 21878 at commit d95ba40.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Jul 26, 2018

Test build #93577 has finished for PR 21878 at commit d2759cc.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@@ -635,12 +637,6 @@ object DataSource extends Logging {
"Hive built-in ORC data source must be used with Hive support enabled. " +
"Please use the native ORC data source by setting 'spark.sql.orc.impl' to " +
"'native'")
} else if (provider1.toLowerCase(Locale.ROOT) == "avro" ||
provider1 == "com.databricks.spark.avro") {
throw new AnalysisException(

Should we show message to user for loading the built-in spark-avro jar?


Actually, I think it would be okay. If users provide avro, it will show a warning like:

17/05/10 09:47:44 WARN DataSource: Multiple sources found for csv (org.apache.spark.sql.execution.datasources.csv.CSVFileFormat,
com.databricks.spark.csv.DefaultSource15), defaulting to the internal datasource (org.apache.spark.sql.execution.datasources.csv.CSVFileFormat).

in most cases (see #17916)

@gengliangwang Jul 26, 2018

No, I mean by default the avro package is not loaded. E.g. If we start spark-shell without loading the jar, then it will show error "Failed to find data source: avro. Please find an Avro package at http://spark.apache.org/third-party-projects.html" if we use format("avro").


Eh, if users were using the external Avro, they will likely meet the error if they directly upgrade Spark.
Otherwise, users will see the release note that the Avro package is included in 2.4.0, and they will not provide this jar.
If users miss this release note, then they will try to explicitly provide the third-party jar, which will give the error message above.

FWIW, if it's a fully qualified path, the third-party jar will still be used in theory.

Did I misunderstand or miss something maybe?


Or is the Avro jar meant to be separately distributed? I thought it'd be included within Spark.


I totally agree with the mapping; we should do it.
The comment here is about when Spark can't find any Avro package: we should show a message for loading the spark-avro jar (org.apache.spark.sql.avro).
Different from CSV, the package spark-avro is not loaded by default within Spark (at least as I tried in spark-shell).


Ah, I see. Okay. Then, how about this: I assume the documentation about Avro will be added to Spark for 2.4.0 soon. When it's done, we add a message here for Avro like "please see the documentation to use Spark's Avro package"?


Looks like the same thing could already happen with Kafka too.


Thanks @gengliangwang for clarifying this.

@gengliangwang
Member

LGTM.
Let's add a message for how to load the Avro package when the documentation for 2.4 is done :)

@HyukjinKwon
Member

Merged to master.

@asfgit asfgit closed this in 58353d7 Jul 26, 2018
@dongjoon-hyun
Member Author

Thank you, @gengliangwang and @HyukjinKwon .

For the message, I follow the way an Apache Spark external module is used; e.g., we need to specify Kafka like the following. Avro is also designed as an external module. We should follow that.

./bin/spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.1
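By the same convention, loading the Avro module would presumably look like this (the artifact coordinate below is an assumption for Spark 2.4 with Scala 2.11; verify it against the release documentation):

```shell
# Assumed coordinate for the built-in external Avro module (Spark 2.4 / Scala 2.11).
./bin/spark-shell --packages org.apache.spark:spark-avro_2.11:2.4.0
```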

@gengliangwang . I assume you are the best person to add the Avro documentation. If you need me to do something, please feel free to let me know. :)

@dongjoon-hyun dongjoon-hyun deleted the SPARK-24924 branch July 26, 2018 16:28
otterc pushed a commit to linkedin/spark that referenced this pull request Mar 22, 2023
This PR aims to do the following:
1. Like `com.databricks.spark.csv` mapping, we had better map `com.databricks.spark.avro` to built-in Avro data source.
2. Remove incorrect error message, `Please find an Avro package at ...`.

Pass the newly added tests.

Author: Dongjoon Hyun <[email protected]>

Closes apache#21878 from dongjoon-hyun/SPARK-24924.

(cherry picked from commit 58353d7)
4 participants