Changes for 0.17.0 release
srowen committed Sep 7, 2023
1 parent 994e357 commit b2611bd
Showing 2 changed files with 7 additions and 20 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -16,15 +16,15 @@ You can link against this library in your program at the following coordinates:
```
groupId: com.databricks
artifactId: spark-xml_2.12
-version: 0.16.0
+version: 0.17.0
```
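
For an sbt build, these coordinates map to a dependency declaration along the following lines (a minimal sketch assuming Scala 2.12, matching the `spark-xml_2.12` artifact; Spark itself is provided separately):

```scala
// build.sbt (sketch): %% appends the Scala binary version, yielding spark-xml_2.12
libraryDependencies += "com.databricks" %% "spark-xml" % "0.17.0"
```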

## Using with Spark shell

This package can be added to Spark using the `--packages` command line option. For example, to include it when starting the spark shell:

```
-$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-xml_2.12:0.16.0
+$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-xml_2.12:0.17.0
```

## Features
@@ -399,7 +399,7 @@ Automatically infer schema (data types)
```R
library(SparkR)

-sparkR.session("local[4]", sparkPackages = c("com.databricks:spark-xml_2.12:0.16.0"))
+sparkR.session("local[4]", sparkPackages = c("com.databricks:spark-xml_2.12:0.17.0"))

df <- read.df("books.xml", source = "xml", rowTag = "book")

@@ -411,7 +411,7 @@ You can manually specify schema:
```R
library(SparkR)

-sparkR.session("local[4]", sparkPackages = c("com.databricks:spark-xml_2.12:0.16.0"))
+sparkR.session("local[4]", sparkPackages = c("com.databricks:spark-xml_2.12:0.17.0"))
customSchema <- structType(
structField("_id", "string"),
structField("author", "string"),
19 changes: 3 additions & 16 deletions build.sbt
@@ -2,7 +2,7 @@ import com.typesafe.tools.mima.core.MissingClassProblem

name := "spark-xml"

-version := "0.16.0"
+version := "0.17.0"

organization := "com.databricks"

@@ -81,21 +81,8 @@ fork := true
// Prints JUnit tests in output
Test / testOptions := Seq(Tests.Argument(TestFrameworks.JUnit, "-v"))

-mimaPreviousArtifacts := Set("com.databricks" %% "spark-xml" % "0.15.0")
+mimaPreviousArtifacts := Set("com.databricks" %% "spark-xml" % "0.16.0")

mimaBinaryIssueFilters ++= {
-  import com.typesafe.tools.mima.core.DirectMissingMethodProblem
-  import com.typesafe.tools.mima.core.ProblemFilters.exclude
-  Seq(
-    "com.databricks.spark.xml.util.CompressionCodecs",
-    "com.databricks.spark.xml.util.CompressionCodecs$",
-    "com.databricks.spark.xml.util.DropMalformedMode",
-    "com.databricks.spark.xml.util.DropMalformedMode$",
-    "com.databricks.spark.xml.util.FailFastMode",
-    "com.databricks.spark.xml.util.FailFastMode$",
-    "com.databricks.spark.xml.util.ParseMode",
-    "com.databricks.spark.xml.util.ParseMode$",
-    "com.databricks.spark.xml.util.PermissiveMode",
-    "com.databricks.spark.xml.util.PermissiveMode$"
-  ).map(exclude[MissingClassProblem](_))
+  Seq()
}
