
[R-package] Add sparse feature contribution predictions #5108

Merged: 23 commits into microsoft:master on Jun 17, 2022

Conversation

david-cortes
Contributor

ref #4982

This PR adds a missing C-level function for predictions on sparse inputs to the R interface.

Docs are not updated, in order to avoid merge conflicts with follow-up PRs.
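
For context, a minimal sketch of the kind of call this enables, using the agaricus data bundled with the package (illustrative only; the returned class is what the PR is expected to produce, not a documented guarantee):

library(lightgbm)
data(agaricus.train, package = "lightgbm")
data(agaricus.test, package = "lightgbm")

# train a small model on the sparse (dgCMatrix) agaricus data
bst <- lightgbm::lightgbm(
    data = agaricus.train$data
    , label = agaricus.train$label
    , objective = "binary"
    , nrounds = 5L
)

# with the new C-level function exposed, feature contributions for a
# sparse input can be computed without densifying it first
contribs <- predict(bst, agaricus.test$data, predcontrib = TRUE)
class(contribs)  # expected: a sparse matrix class such as "dgCMatrix"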

Collaborator

@jameslamb left a comment

Thanks very much for this!

To fix errors like this in CI:

── 1. Error (test_Predictor.R:85:5): start_iteration works correctly ───────────
Error in `predictor$predict(data = data, start_iteration = start_iteration, 
    num_iteration = num_iteration, rawscore = rawscore, predleaf = predleaf, 
    predcontrib = predcontrib, header = header, reshape = reshape)`: object 'LGBM_BoosterPredictSparseOutput_R' not found
Backtrace:
 1. stats::predict(bst, test$data, predcontrib = TRUE)
      at test_Predictor.R:85:4
 2. lightgbm:::predict.lgb.Booster(bst, test$data, predcontrib = TRUE)
 3. object$predict(...)
 4. predictor$predict(...)

── 2. Error (test_Predictor.R:130:5): Feature contributions from sparse inputs p
Error in `predictor$predict(data = data, start_iteration = start_iteration, 
    num_iteration = num_iteration, rawscore = rawscore, predleaf = predleaf, 
    predcontrib = predcontrib, header = header, reshape = reshape)`: object 'LGBM_BoosterPredictSparseOutput_R' not found
Backtrace:
 1. stats::predict(bst, Xcsc, predcontrib = TRUE)
      at test_Predictor.R:130:4
 2. lightgbm:::predict.lgb.Booster(bst, Xcsc, predcontrib = TRUE)
 3. object$predict(...)
 4. predictor$predict(...)

I think you need to add LGBM_BoosterPredictSparseOutput_R() in this registration table:

static const R_CallMethodDef CallEntries[] = {

@david-cortes
Contributor Author

Added.

Collaborator

@jameslamb left a comment

Looks great to me! Just left two small suggestions.

Can you please merge in the latest changes from master? After that, I'll run this project's comment-triggered valgrind and Solaris tests.

I'd also like a second opinion from a maintainer who has more experience with C/C++. @shiyu1994 and/or @guolinke, can you help review this PR?


Xspv <- as(X[1L, , drop = FALSE], "sparseVector")
pred_spv <- predict(bst, Xspv, predcontrib = TRUE)
expect_s4_class(pred_spv, "dsparseVector")
Collaborator

Beyond just testing the type of the returned objects, can you also please add assertions that the predicted values are the same for all of these cases, and that they're the same as those predicted for a regular R matrix?

Those .Call() calls involve passing a lot of positional arguments with similar values, so such assertions would give us greater confidence that this is working correctly.
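
For illustration, a hedged sketch of the kind of assertions being requested, reusing the bst and X objects from the surrounding test; the conversions, reshape usage, and tolerances are illustrative, not the final test code:

# reference values from a dense R matrix
pred_dense <- predict(bst, as.matrix(X), predcontrib = TRUE, reshape = TRUE)

# the same rows in sparse representations
pred_csc <- predict(bst, as(X, "CsparseMatrix"), predcontrib = TRUE)
pred_csr <- predict(bst, as(as(X, "CsparseMatrix"), "RsparseMatrix"), predcontrib = TRUE)
pred_spv <- predict(bst, as(X[1L, , drop = FALSE], "sparseVector"), predcontrib = TRUE)

# the values, not just the classes, should agree with the dense result
expect_equal(unname(as.matrix(pred_csc)), unname(pred_dense))
expect_equal(unname(as.matrix(pred_csr)), unname(pred_dense))
expect_equal(as.numeric(pred_spv), unname(pred_dense[1L, ]))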

Contributor Author

I thought missing data was handled the same way as in xgboost, which would mean predictions for sparse inputs should differ from those for dense inputs.

Collaborator

I believe the sparse and dense data structures here are just different representations in memory of the exact same matrices, and that specialized methods for them in LightGBM are just intended to allow that sparse data to stay sparse throughout training + scoring.

And I believe that's not directly related to the handling of missing data (which is described in more detail in the discussion at #2921 (comment) and at https://lightgbm.readthedocs.io/en/latest/Advanced-Topics.html?highlight=missing#missing-value-handle).

Consider the following example:

library(lightgbm)
library(Matrix)

set.seed(708L)

data("EuStockMarkets")

stockDF <- as.data.frame(EuStockMarkets)
feature_names <- c("SMI", "CAC", "FTSE")
target_name <- "DAX"

# randomly set a portion of each feature to NA or 0
for (col_name in feature_names) {
    stockDF[
        sample(
            x = seq_len(nrow(stockDF))
            , size = as.integer(0.01 * nrow(stockDF))
            , replace = FALSE
        )
        , col_name
    ] <- NA_real_
    stockDF[
        sample(
            x = seq_len(nrow(stockDF))
            , size = as.integer(0.01 * nrow(stockDF))
            , replace = FALSE
        )
        , col_name
    ] <- 0.0
}

X_mat <- data.matrix(stockDF[, feature_names])
y <- stockDF[[target_name]]
X_dgCMatrix <- as(X_mat, "dgCMatrix")

bst_mat <- lightgbm::lightgbm(
    data = X_mat
    , label = y
    , objective = "regression"
    , nrounds = 10L
)

bst_dgCMatrix <- lightgbm::lightgbm(
    data = X_dgCMatrix
    , label = y
    , objective = "regression"
    , nrounds = 10L
)


# predicted values don't depend on input type from training time or the type of newdata
preds_mat_mat <- predict(bst_mat, X_mat)
preds_dgCMatrix_mat <- predict(bst_mat, X_dgCMatrix)
preds_mat_dgCMatrix <- predict(bst_dgCMatrix, X_mat)
preds_dgCMatrix_dgCMatrix <- predict(bst_dgCMatrix, X_dgCMatrix)

stopifnot(
    all(
        all(preds_mat_mat == preds_dgCMatrix_mat)
        , all(preds_dgCMatrix_mat == preds_mat_dgCMatrix)
        , all(preds_mat_dgCMatrix == preds_dgCMatrix_dgCMatrix)
    )
)

If you find a case where this is not true and LightGBM is creating different predictions for sparse and dense representations of the same data, I'd consider that a bug worth addressing.

@shiyu1994 @guolinke @StrikerRUS please correct me if I've misspoken.

Contributor Author

@david-cortes commented Apr 4, 2022

Good to know; it would be useful to have that in the docs, since xgboost works differently (it treats non-present sparse entries as missing instead of as zeros) and one might assume both libraries behave the same way.

Collaborator

Oh interesting, I did not know that. https://xgboost.readthedocs.io/en/stable/faq.html#why-do-i-see-different-results-with-sparse-and-dense-data

“Sparse” elements are treated as if they were “missing” by the tree booster, and as zeros by the linear booster. For tree models, it is important to use consistent data formats during training and scoring.


would be useful to have that in the docs

LightGBM's documentation does already describe this behavior directly. Please see https://lightgbm.readthedocs.io/en/latest/Advanced-Topics.html#missing-value-handle

  • LightGBM uses NA (NaN) to represent missing values by default. Change it to use zero by setting zero_as_missing=true
  • When zero_as_missing=false (default), the unrecorded values in sparse matrices (and LightSVM) are treated as zeros.
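
As an illustrative sketch of where that switch lives, reusing the X_dgCMatrix and y objects from the earlier EuStockMarkets example (not code from this PR):

# default behaviour: NA/NaN is missing, unstored sparse entries count as zeros;
# zero_as_missing=true flips that for zeros and unstored entries
bst_zero_missing <- lightgbm::lightgbm(
    data = X_dgCMatrix
    , label = y
    , params = list(
        objective = "regression"
        , zero_as_missing = TRUE
    )
    , nrounds = 10L
)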

@jameslamb requested review from shiyu1994 and guolinke on April 2, 2022
@jameslamb
Collaborator

jameslamb commented Apr 4, 2022

/gha run r-solaris

Workflow Solaris CRAN check has been triggered! 🚀
https://github.com/microsoft/LightGBM/actions/runs/2086975644

solaris-x86-patched: https://builder.r-hub.io/status/lightgbm_3.3.2.99.tar.gz-f68864de656e497890f659ea1d7a1c83
solaris-x86-patched-ods: https://builder.r-hub.io/status/lightgbm_3.3.2.99.tar.gz-42d22bc664b9485fa8d7c2a982d84405
Reports also have been sent to LightGBM public e-mail: https://yopmail.com?lightgbm_rhub_checks
Status: success ✔️.

@jameslamb
Collaborator

jameslamb commented Apr 4, 2022

/gha run r-valgrind

Workflow R valgrind tests has been triggered! 🚀
https://github.com/microsoft/LightGBM/actions/runs/2086975964

Status: success ✔️.

@david-cortes
Contributor Author

Since PR #4977 is going to be merged, I've now modified this PR to also keep row names when applicable, and added a comparison against dense inputs in the tests.

@david-cortes
Contributor Author

Failing CI checks are not related to this PR:
https://github.com/microsoft/LightGBM/runs/5819840473?check_suite_focus=true

@jameslamb
Collaborator

failing CI checks are not related to this PR

I agree, we see this occasionally with our CI jobs that use Homebrew to set up LaTeX.

LaTeX errors when creating PDF version.
This typically indicates Rd problems.
LaTeX errors found:
! LaTeX Error: File `inconsolata.sty' not found.

Type X to quit or <RETURN> to proceed,
or enter new name. (Default extension: sty)

! Emergency stop.
<read *> 
         
l.287 ^^M
         

Will manually re-run them.

@jameslamb self-requested a review on April 9, 2022
Collaborator

@jameslamb left a comment

This is great, thanks so much! I left a few more suggested changes.

I'm very happy that the Solaris and valgrind tests are passing 🎉

Since this PR introduces so much new C++ code, I'd like my review to not be the only one that counts towards a merge. @shiyu1994 @guolinke could one of you please help review this as well?

Comment on lines +229 to +231
if (NROW(row.names(data))) {
    out@Dimnames[[1L]] <- row.names(data)
}
Collaborator

Can you please test this behavior, by adding these new combinations of predcontrib = TRUE + new {Matrix} classes to the tests from #4977?

# sparse matrix with row names

# sparse matrix without row names

Every PR adding new behavior to the package should include tests on that behavior, to catch unexpected issues with the implementation and to prevent future development from accidentally breaking that behavior.
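
For reference, a hedged sketch of roughly what's being asked for here; the object names mirror the existing tests but are assumptions, not the final test code:

test_that("predcontrib keeps row names for sparse inputs", {
    Xcsc <- as(X, "CsparseMatrix")

    # sparse matrix with row names
    rownames(Xcsc) <- paste0("row_", seq_len(nrow(Xcsc)))
    pred_named <- predict(bst, Xcsc, predcontrib = TRUE)
    expect_identical(rownames(pred_named), rownames(Xcsc))

    # sparse matrix without row names
    rownames(Xcsc) <- NULL
    pred_unnamed <- predict(bst, Xcsc, predcontrib = TRUE)
    expect_null(rownames(pred_unnamed))
})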

Contributor Author

Those are already tested in the tests from the PR for row names.

Collaborator

They are not. The links in my comment point to tests on "CsparseMatrix" objects, but if you click them you won't see tests on the types referenced in this PR: "dsparseMatrix", "dsparseVector", "dgRMatrix", "dgCMatrix".

Is there something I've misunderstood about the relationship between these classes?

Contributor Author

They are not. The links in my comment point to tests on "CsparseMatrix" objects, but if you click them you won't see tests on the types referenced in this PR: "dsparseMatrix", "dsparseVector", "dgRMatrix", "dgCMatrix".

Is there something I've misunderstood about the relationship between these classes?

There's a class hierarchy...

Collaborator

Please tell us specifically what you mean by "there's a class hierarchy", and why it means that you don't want to add the tests I'm asking you to add.

Contributor Author

dgCMatrix is a subclass of CsparseMatrix, which is a subclass of sparseMatrix, and so on. Classes like dsparseMatrix are abstract.
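
A quick way to see this with {Matrix} (output noted in the comments is indicative):

library(Matrix)

X <- as(matrix(rnorm(20L), ncol = 4L), "CsparseMatrix")

class(X)                          # "dgCMatrix" -- the concrete class
is(X)                             # includes "CsparseMatrix", "dsparseMatrix", "sparseMatrix", ...

isVirtualClass("dgCMatrix")       # FALSE: concrete, instantiable
isVirtualClass("dsparseMatrix")   # TRUE:  abstract (virtual) class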

Collaborator

@jameslamb commented Apr 13, 2022

Thanks for that. I'm still struggling to understand how that means that the tests I'm asking for shouldn't be added.

Consider the case added in this PR beginning with } else if (inherits(data, "dgRMatrix")) {. It doesn't contain a return() statement, so at the end of the if / else if block, the "possibly add row names" code (the line this comment thread is on) will run.

If someone were to add return(out) on line 193, that "possibly add row names" code wouldn't be reached. I think it's desirable for a test to fail in that case, to inform us that adding that return statement is a breaking change that causes row names to not be set on the predictions.

The tests I linked test a regular dense R matrix and a CsparseMatrix. As the example below shows, that means they don't currently cover the cases where the input to predict() is a "dgRMatrix" or a "dsparseVector".

library(Matrix)

# the first batch of test cases use a regular R dense matrix
X <- matrix(rnorm(100), ncol = 4)
inherits(X, "dgRMatrix")       # FALSE
inherits(X, "dsparseMatrix")  # FALSE
inherits(X, "dsparseVector")  # FALSE
inherits(X, "dgCMatrix")       # FALSE

# the second batch of test cases converts that to a CsparseMatrix
Xcsc <- as(X, "CsparseMatrix")
inherits(Xcsc, "dgRMatrix")       # FALSE
inherits(Xcsc, "dsparseMatrix")  # TRUE
inherits(Xcsc, "dsparseVector")  # FALSE
inherits(Xcsc, "dgCMatrix")       # TRUE

Contributor Author

Ok, if you really want to have tests for everything, I've added a test for CSR matrices. A sparse vector representing a single row cannot have row names, so I left that case out of the tests.

Collaborator

if you really want to have tests for everything

Thank you, yes I do. The more of this project's behaviors are reflected in tests, the less likely it is that future changes silently break that behavior. This project is too large for all of these concerns to just be kept in maintainers' heads and enforced through PR comments.


@david-cortes
Contributor Author

Failing CI checks like this are definitely not from this PR: https://github.com/microsoft/LightGBM/runs/6029069414?check_suite_focus=true

@jameslamb
Collaborator

Failing CI checks like this are definitely not from this PR

Yep, a recent security patch to git broke some of our containerized CI jobs (#5151). Once #5152 is merged, they should be fixed.

@david-cortes
Contributor Author

The linter raised a curious comment:

warning: Function "cbind" is undesirable. As an alternative, cbind is an unsafe way to build up a data frame. merge() or direct column assignment is preferred..

It says that "cbind is an unsafe way to build up a data frame", but the code in question is not building a data.frame, nor is it meant to.

It then suggests merge(), which does not perform the same operation as cbind().

Then it suggests direct column assignment, which will not work when the left-hand side is a matrix rather than a data.frame.

The linter thus leaves one without efficient options for concatenating matrices or other non-data-frame objects.

@jameslamb self-requested a review on May 22, 2022
Collaborator

@jameslamb left a comment

Thanks very much for adding tests. I left one more suggestion for those tests...otherwise, I approve of this from the R side.

Before merging though, I would still like some help from @guolinke or @shiyu1994 to review the changes in lightgbm_R.cpp (#5108 (review)).


The linter raised a curious comment ... The linter thus leaves one without efficient options for concatenating matrices

Since you haven't mentioned the specific place where you wanted to use cbind() and what you did instead, it's not possible for maintainers to judge whether whatever pattern you used instead is less efficient than using cbind().

The warning you've mentioned was added a while ago in this project, to nudge contributors who were not familiar with R into using safer patterns when creating data frames. If you have a specific proposal for why that restriction should be removed, with a specific example of a place in {lightgbm}'s code that would benefit from the use of cbind(), I'd be happy to consider removing that restriction.


X_wrong <- X[, c(1L:10L, 1L:10L)]
X_wrong <- as(X_wrong, "CsparseMatrix")
expect_error(predict(bst, X_wrong, predcontrib = TRUE))
Collaborator

For these expect_error() calls, can you please use the regexp argument, to be sure that these tests are matching the specific error message they're intended to catch?

Similar to these test cases:

expect_error({
    bst <- lgb.train(
        data = dtrain
        , params = list(
            objective_type = "not_a_real_objective"
            , verbosity = VERBOSITY
        )
    )
}, regexp = "Unknown objective type name: not_a_real_objective")

with pytest.raises(lgb.basic.LightGBMError,
                   match="Cannot find parser class 'dummy', please register first or check config format"):
    data.construct()

We've found that approach to error-catching tests useful to prevent the case where tests silently pass in the presence of other, unexpected errors happening on the codepaths the test touches.

@david-cortes
Contributor Author

The linter raised a curious comment ... The linter thus leaves one without efficient options for concatenating matrices

Since you haven't mentioned the specific place where you wanted to use cbind() and what you did instead, it's not possible for maintainers to judge whether whatever pattern you used instead is less efficient than using cbind().

The warning you've mentioned was added a while ago in this project, to nudge contributors who were not familiar with R into using safer patterns when creating data frames. If you have a specific proposal for why that restriction should be removed, with a specific example of a place in {lightgbm}'s code that would benefit from the use of cbind(), I'd be happy to consider removing that restriction.

Any usage of cbind() with CSC matrices would be more efficient and take fewer lines of code than creating an empty matrix and assigning columns from the objects being concatenated. I don't see any use for that right now, but something like a test checking that an error is thrown for inputs of incorrect shape would benefit from calling cbind().
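
As a hedged illustration of that contrast, with made-up {Matrix} objects (none of this is code from the PR):

library(Matrix)

X1 <- rsparsematrix(nrow = 100L, ncol = 5L, density = 0.1)
X2 <- rsparsematrix(nrow = 100L, ncol = 3L, density = 0.1)

# one call, result stays sparse (dgCMatrix), no intermediate dense copy
X_both <- cbind(X1, X2)

# the linter's suggested alternatives: merge() doesn't apply here, and direct
# column assignment requires pre-allocating, which densifies the result
X_dense <- matrix(0.0, nrow = nrow(X1), ncol = ncol(X1) + ncol(X2))
X_dense[, seq_len(ncol(X1))] <- as.matrix(X1)
X_dense[, ncol(X1) + seq_len(ncol(X2))] <- as.matrix(X2)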

@jameslamb
Collaborator

@shiyu1994 @guolinke can you please provide a review?

@guolinke
Collaborator

C++ part looks good to me

Collaborator

@jameslamb left a comment

Thanks for looking @guolinke !

Ok @david-cortes , I think we can move forward with this! Since it's been a few weeks since the last commit on this PR, can you please update it to the latest state of master? If CI passes after that, I'll merge this up.

Thanks for all the hard work!

@david-cortes
Contributor Author

Updated.

@david-cortes
Contributor Author

The linter is interpreting regular expressions as file paths:

[[1]]
/home/runner/work/LightGBM/LightGBM/R-package/tests/testthat/test_Predictor.R:167:71: warning: [non_portable_path] Use file.path() to construct portable file paths.
    expect_error(predict(bst, X_wrong, predcontrib = TRUE), regexp = "input data has \\d+ columns")
                                                                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~

@jameslamb
Collaborator

The linter is interpreting regular expressions as file paths:

I noticed something similar when upgrading to {lintr} 3.0 in #5294. In that PR, I used a # nolint comment. But for this PR, could you please just replace \\d+ with the literal number of columns expected? Since you'll have to touch that line anyway, you might as well make the test a bit stricter.
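
Assuming X_wrong keeps the 20-column construction shown earlier (X[, c(1L:10L, 1L:10L)]), the updated assertion would be along these lines; the literal count is an inference from that snippet, not from the final test data:

expect_error(
    predict(bst, X_wrong, predcontrib = TRUE)
    , regexp = "input data has 20 columns"
)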

@david-cortes
Contributor Author

Updated.

@jameslamb merged commit 6f92d47 into microsoft:master on Jun 17, 2022
@jameslamb mentioned this pull request on Oct 7, 2022
@github-actions

This pull request has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.

@github-actions bot locked as resolved and limited conversation to collaborators on Aug 19, 2023