[SPARK-23435][SPARKR][TESTS][2.4] Update testthat to >= 2.0.0
- Update `testthat` to >= 2.0.0
- Replace `testthat:::run_tests` with `testthat:::test_package_dir`
- Add trivial assertions to tests that have no expectations, so they are not reported as skipped (a sketch follows this list).
- Update related docs.
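
For the third bullet, the pattern is the following minimal sketch (abridged from the changed test files): `testthat` >= 2.0.0 reports a `test_that()` block that makes no expectations as skipped, so such blocks now end with a trivial, always-true expectation.

```r
library(testthat)

test_that("utility function can be called", {
  # ... exercise SparkR code whose calls return nothing worth asserting on ...
  # Without at least one expectation, testthat >= 2.0.0 would report this
  # test as skipped, so finish with a trivial assertion that always passes.
  expect_true(TRUE)
})
```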

`testthat` version has been frozen by [SPARK-22817](https://issues.apache.org/jira/browse/SPARK-22817) / apache#20003, but 1.0.2 is pretty old, and we shouldn't keep things in this state forever.

No user-facing changes.

- Existing CI pipeline:
     - Windows build on AppVeyor, R 3.6.2, testthat 2.3.1
     - Linux build on Jenkins, R 3.1.x, testthat 1.0.2

- Additional builds with testthat 2.3.1 using [sparkr-build-sandbox](https://github.com/zero323/sparkr-build-sandbox) on c7ed64a:

   R 3.4.4 (image digest ec9032f8cf98)

   ```
   docker pull zero323/sparkr-build-sandbox:3.4.4
   docker run zero323/sparkr-build-sandbox:3.4.4 zero323 --branch SPARK-23435 --commit c7ed64a --public-key https://keybase.io/zero323/pgp_keys.asc
   ```

   R 3.5.3 (image digest 0b1759ee4d1d)

   ```
   docker pull zero323/sparkr-build-sandbox:3.5.3
   docker run zero323/sparkr-build-sandbox:3.5.3 zero323 --branch SPARK-23435 --commit c7ed64a --public-key https://keybase.io/zero323/pgp_keys.asc
   ```

   and R 3.6.2 (image digest 6594c8ceb72f)

   ```
   docker pull zero323/sparkr-build-sandbox:3.6.2
   docker run zero323/sparkr-build-sandbox:3.6.2 zero323 --branch SPARK-23435 --commit c7ed64a --public-key https://keybase.io/zero323/pgp_keys.asc
   ```

   Corresponding [asciicasts](https://asciinema.org/) are available under DOI 10.5281/zenodo.3629431

     [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3629431.svg)](https://doi.org/10.5281/zenodo.3629431)

   (a bit too large to burden asciinema.org with, but they can be played locally via `asciinema play`).

----------------------------

Continued from apache#27328

Closes apache#27359 from zero323/SPARK-23435.

Authored-by: zero323 <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
zero323 authored and HyukjinKwon committed Jan 29, 2020
1 parent ad9f578 commit 5385c4a
Showing 8 changed files with 31 additions and 15 deletions.
4 changes: 4 additions & 0 deletions R/pkg/tests/fulltests/test_context.R
@@ -93,6 +93,7 @@ test_that("rdd GC across sparkR.stop", {
countRDD(rdd3)
countRDD(rdd4)
sparkR.session.stop()
expect_true(TRUE)
})

test_that("job group functions can be called", {
@@ -105,6 +106,7 @@ test_that("job group functions can be called", {
suppressWarnings(cancelJobGroup(sc, "groupId"))
suppressWarnings(clearJobGroup(sc))
sparkR.session.stop()
expect_true(TRUE)
})

test_that("job description and local properties can be set and got", {
@@ -143,6 +145,7 @@ test_that("utility function can be called", {
sparkR.sparkContext(master = sparkRTestMaster)
setLogLevel("ERROR")
sparkR.session.stop()
expect_true(TRUE)
})

test_that("getClientModeSparkSubmitOpts() returns spark-submit args from whitelist", {
@@ -246,4 +249,5 @@ test_that("SPARK-25234: parallelize should not have integer overflow", {
# 47000 * 47000 exceeds integer range
parallelize(sc, 1:47000, 47000)
sparkR.session.stop()
expect_true(TRUE)
})
2 changes: 2 additions & 0 deletions R/pkg/tests/fulltests/test_includePackage.R
@@ -39,6 +39,7 @@ test_that("include inside function", {
data <- lapplyPartition(rdd, generateData)
actual <- collectRDD(data)
}
expect_true(TRUE)
})

test_that("use include package", {
@@ -55,6 +56,7 @@ test_that("use include package", {
data <- lapplyPartition(rdd, generateData)
actual <- collectRDD(data)
}
expect_true(TRUE)
})

sparkR.session.stop()
1 change: 1 addition & 0 deletions R/pkg/tests/fulltests/test_sparkSQL.R
@@ -1409,6 +1409,7 @@ test_that("column operators", {
c5 <- c2 ^ c3 ^ c4
c6 <- c2 %<=>% c3
c7 <- !c6
expect_true(TRUE)
})

test_that("column functions", {
1 change: 1 addition & 0 deletions R/pkg/tests/fulltests/test_textFile.R
@@ -75,6 +75,7 @@ test_that("several transformations on RDD created by textFile()", {
collectRDD(rdd)

unlink(fileName)
expect_true(TRUE)
})

test_that("textFile() followed by a saveAsTextFile() returns the same content", {
23 changes: 17 additions & 6 deletions R/pkg/tests/run-all.R
@@ -20,7 +20,6 @@ library(SparkR)

# SPARK-25572
if (identical(Sys.getenv("NOT_CRAN"), "true")) {

# Turn all warnings into errors
options("warn" = 2)

@@ -60,11 +59,23 @@ if (identical(Sys.getenv("NOT_CRAN"), "true")) {
if (identical(Sys.getenv("NOT_CRAN"), "true")) {
# set random seed for predictable results. mostly for base's sample() in tree and classification
set.seed(42)
# for testthat 1.0.2 later, change reporter from "summary" to default_reporter()
testthat:::run_tests("SparkR",
file.path(sparkRDir, "pkg", "tests", "fulltests"),
NULL,
"summary")

# TODO (SPARK-30663) To be removed once testthat 1.x is removed from all builds
if (grepl("^1\\..*", packageVersion("testthat"))) {
# testthat 1.x
test_runner <- testthat:::run_tests
reporter <- "summary"

} else {
# testthat >= 2.0.0
test_runner <- testthat:::test_package_dir
reporter <- testthat::default_reporter()
}

test_runner("SparkR",
file.path(sparkRDir, "pkg", "tests", "fulltests"),
NULL,
reporter)
}

SparkR:::uninstallDownloadedSpark()
7 changes: 3 additions & 4 deletions appveyor.yml
@@ -42,10 +42,9 @@ install:
# Install maven and dependencies
- ps: .\dev\appveyor-install-dependencies.ps1
# Required package for R unit tests
- cmd: R -e "install.packages(c('knitr', 'rmarkdown', 'devtools', 'e1071', 'survival'), repos='https://cloud.r-project.org/')"
# Here, we use the fixed version of testthat. For more details, please see SPARK-22817.
- cmd: R -e "devtools::install_version('testthat', version = '1.0.2', repos='https://cloud.r-project.org/')"
- cmd: R -e "packageVersion('knitr'); packageVersion('rmarkdown'); packageVersion('testthat'); packageVersion('e1071'); packageVersion('survival')"
- cmd: R -e "install.packages(c('knitr', 'rmarkdown', 'e1071', 'survival', 'arrow'), repos='https://cloud.r-project.org/')"
- cmd: R -e "install.packages(c('crayon', 'praise', 'R6', 'testthat'), repos='https://cloud.r-project.org/')"
- cmd: R -e "packageVersion('knitr'); packageVersion('rmarkdown'); packageVersion('testthat'); packageVersion('e1071'); packageVersion('survival'); packageVersion('arrow')"

build_script:
- cmd: mvn -DskipTests -Psparkr -Phive package
3 changes: 1 addition & 2 deletions docs/README.md
@@ -22,9 +22,8 @@ $ sudo gem install jekyll jekyll-redirect-from pygments.rb
$ sudo pip install Pygments
# Following is needed only for generating API docs
$ sudo pip install sphinx pypandoc mkdocs
$ sudo Rscript -e 'install.packages(c("knitr", "devtools", "rmarkdown"), repos="https://cloud.r-project.org/")'
$ sudo Rscript -e 'install.packages(c("knitr", "devtools", "testthat", "rmarkdown"), repos="https://cloud.r-project.org/")'
$ sudo Rscript -e 'devtools::install_version("roxygen2", version = "5.0.1", repos="https://cloud.r-project.org/")'
$ sudo Rscript -e 'devtools::install_version("testthat", version = "1.0.2", repos="https://cloud.r-project.org/")'
```

Note: If you are on a system with both Ruby 1.9 and Ruby 2.0 you may need to replace gem with gem2.0.
5 changes: 2 additions & 3 deletions docs/building-spark.md
@@ -58,7 +58,7 @@ This will build Spark distribution along with Python pip and R packages. For mor
You can specify the exact version of Hadoop to compile against through the `hadoop.version` property.
If unset, Spark will build against Hadoop 2.6.X by default.

You can enable the `yarn` profile and optionally set the `yarn.version` property if it is different
You can enable the `yarn` profile and optionally set the `yarn.version` property if it is different
from `hadoop.version`.

Examples:
@@ -236,8 +236,7 @@ The run-tests script also can be limited to a specific Python version or a speci

To run the SparkR tests you will need to install the [knitr](https://cran.r-project.org/package=knitr), [rmarkdown](https://cran.r-project.org/package=rmarkdown), [testthat](https://cran.r-project.org/package=testthat), [e1071](https://cran.r-project.org/package=e1071) and [survival](https://cran.r-project.org/package=survival) packages first:

Rscript -e "install.packages(c('knitr', 'rmarkdown', 'devtools', 'e1071', 'survival'), repos='https://cloud.r-project.org/')"
Rscript -e "devtools::install_version('testthat', version = '1.0.2', repos='https://cloud.r-project.org/')"
Rscript -e "install.packages(c('knitr', 'rmarkdown', 'devtools', 'testthat', 'e1071', 'survival'), repos='https://cloud.r-project.org/')"

You can run just the SparkR tests using the command:

