[R-package] Add sparse feature contribution predictions #5108
Conversation
Thanks very much for this!
To fix errors like this in CI:
── 1. Error (test_Predictor.R:85:5): start_iteration works correctly ───────────
Error in `predictor$predict(data = data, start_iteration = start_iteration,
num_iteration = num_iteration, rawscore = rawscore, predleaf = predleaf,
predcontrib = predcontrib, header = header, reshape = reshape)`: object 'LGBM_BoosterPredictSparseOutput_R' not found
Backtrace:
1. stats::predict(bst, test$data, predcontrib = TRUE)
at test_Predictor.R:85:4
2. lightgbm:::predict.lgb.Booster(bst, test$data, predcontrib = TRUE)
3. object$predict(...)
4. predictor$predict(...)
── 2. Error (test_Predictor.R:130:5): Feature contributions from sparse inputs p
Error in `predictor$predict(data = data, start_iteration = start_iteration,
num_iteration = num_iteration, rawscore = rawscore, predleaf = predleaf,
predcontrib = predcontrib, header = header, reshape = reshape)`: object 'LGBM_BoosterPredictSparseOutput_R' not found
Backtrace:
1. stats::predict(bst, Xcsc, predcontrib = TRUE)
at test_Predictor.R:130:4
2. lightgbm:::predict.lgb.Booster(bst, Xcsc, predcontrib = TRUE)
3. object$predict(...)
4. predictor$predict(...)
I think you need to add LGBM_BoosterPredictSparseOutput_R()
in this registration table:
LightGBM/R-package/src/lightgbm_R.cpp
Line 937 in 60244e4
static const R_CallMethodDef CallEntries[] = {

Added.
Looks great to me! Just left two small suggestions.
Can you please merge in the latest changes from master? After that, I'll run this project's comment-triggered valgrind and Solaris tests.
I'd also like a second opinion from a maintainer who's more experienced with C/C++. @shiyu1994 and/or @guolinke can you help review this PR?
Xspv <- as(X[1L, , drop = FALSE], "sparseVector")
pred_spv <- predict(bst, Xspv, predcontrib = TRUE)
expect_s4_class(pred_spv, "dsparseVector")
Beyond just testing the type of the returned objects, can you also please add assertions that the predicted values are the same for all of these cases, and that they're the same as those predicted for a regular R matrix?
Those .Call() calls involve passing a lot of positional arguments with similar values, so such assertions would give us greater confidence that this is working correctly.
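A minimal sketch of the kind of assertion being requested (the dataset and hyperparameters here are arbitrary, chosen only for illustration, and exact return shapes depend on the `predict()` arguments in the version at hand): contributions predicted from a sparse input should match those from the equivalent dense matrix.

```r
library(lightgbm)
library(Matrix)

# train a small model on a built-in dataset, purely for illustration
X <- data.matrix(mtcars[, -1L])
y <- mtcars[["mpg"]]
bst <- lightgbm(
  data = X
  , label = y
  , params = list(objective = "regression")
  , nrounds = 5L
  , verbose = -1L
)

# contributions from the dense matrix, reshaped to [nrow, ncol + 1]
pred_dense <- predict(bst, X, predcontrib = TRUE, reshape = TRUE)

# contributions from the same data stored sparsely; with this PR the
# result comes back as a sparse matrix, so densify it before comparing
pred_csc <- predict(bst, as(X, "CsparseMatrix"), predcontrib = TRUE)

stopifnot(isTRUE(all.equal(
  unname(as.matrix(pred_csc))
  , unname(pred_dense)
)))
```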
I thought missing data was handled the same way as in xgboost, which would mean predictions for sparse inputs should differ from those for dense inputs.
I believe the sparse and dense data structures here are just different representations in memory of the exact same matrices, and that specialized methods for them in LightGBM are just intended to allow that sparse data to stay sparse throughout training + scoring.
And I believe that's not directly related to the handling of missing data (which is described in more detail in the discussion at #2921 (comment) and at https://lightgbm.readthedocs.io/en/latest/Advanced-Topics.html?highlight=missing#missing-value-handle).
Consider the following example:
library(lightgbm)
library(Matrix)
set.seed(708L)
data("EuStockMarkets")
stockDF <- as.data.frame(EuStockMarkets)
feature_names <- c("SMI", "CAC", "FTSE")
target_name <- "DAX"
# randomly set a portion of each feature to NA or 0
for (col_name in feature_names) {
stockDF[
sample(
x = seq_len(nrow(stockDF))
, size = as.integer(0.01 * nrow(stockDF))
, replace = FALSE
)
, col_name
] <- NA_real_
stockDF[
sample(
x = seq_len(nrow(stockDF))
, size = as.integer(0.01 * nrow(stockDF))
, replace = FALSE
)
, col_name
] <- 0.0
}
X_mat <- data.matrix(stockDF[, feature_names])
y <- stockDF[[target_name]]
X_dgCMatrix <- as(X_mat, "dgCMatrix")
bst_mat <- lightgbm::lightgbm(
data = X_mat
, label = y
, objective = "regression"
, nrounds = 10L
)
bst_dgCMatrix <- lightgbm::lightgbm(
data = X_dgCMatrix
, label = y
, objective = "regression"
, nrounds = 10L
)
# predicted values don't depend on input type from training time or the type of newdata
preds_mat_mat <- predict(bst_mat, X_mat)
preds_dgCMatrix_mat <- predict(bst_mat, X_dgCMatrix)
preds_mat_dgCMatrix <- predict(bst_dgCMatrix, X_mat)
preds_dgCMatrix_dgCMatrix <- predict(bst_dgCMatrix, X_dgCMatrix)
stopifnot(
all(
all(preds_mat_mat == preds_dgCMatrix_mat)
, all(preds_dgCMatrix_mat == preds_mat_dgCMatrix)
, all(preds_mat_dgCMatrix == preds_dgCMatrix_dgCMatrix)
)
)
If you find a case where this is not true and LightGBM is creating different predictions for sparse and dense inputs, I'd consider that a bug worth addressing.
@shiyu1994 @guolinke @StrikerRUS please correct me if I've misspoken.
Good to know. It would be useful to have that in the docs, since xgboost works differently (it treats non-present sparse entries as missing rather than as zeros) and one might assume both libraries behave the same way.
Oh interesting, I did not know that. https://xgboost.readthedocs.io/en/stable/faq.html#why-do-i-see-different-results-with-sparse-and-dense-data
“Sparse” elements are treated as if they were “missing” by the tree booster, and as zeros by the linear booster. For tree models, it is important to use consistent data formats during training and scoring.
would be useful to have that in the docs
LightGBM's documentation does already describe this behavior directly. Please see https://lightgbm.readthedocs.io/en/latest/Advanced-Topics.html#missing-value-handle
- LightGBM uses NA (NaN) to represent missing values by default. Change it to use zero by setting zero_as_missing=true.
- When zero_as_missing=false (default), the unrecorded values in sparse matrices (and LightSVM) are treated as zeros.
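The effect of those two bullets can be seen by training on data that contains zeros; a hedged sketch (the parameter name is from the LightGBM docs, the dataset and hyperparameters are arbitrary):

```r
library(lightgbm)

X <- data.matrix(mtcars[, -1L])
set.seed(708L)
X[sample(length(X), 50L)] <- 0.0   # inject some zeros
y <- mtcars[["mpg"]]

# default: zero entries (and unrecorded sparse entries) are the value 0.0
bst_default <- lightgbm(
  data = X
  , label = y
  , params = list(objective = "regression", zero_as_missing = FALSE)
  , nrounds = 5L
  , verbose = -1L
)

# zeros treated as missing (NA) instead
bst_zero_na <- lightgbm(
  data = X
  , label = y
  , params = list(objective = "regression", zero_as_missing = TRUE)
  , nrounds = 5L
  , verbose = -1L
)

# the learned trees route the zero entries differently under the two
# settings, so predictions from the two models can legitimately differ
```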
/gha run r-solaris
Workflow Solaris CRAN check has been triggered! 🚀 solaris-x86-patched: https://builder.r-hub.io/status/lightgbm_3.3.2.99.tar.gz-f68864de656e497890f659ea1d7a1c83

/gha run r-valgrind
Workflow R valgrind tests has been triggered! 🚀 Status: success ✔️.
Since PR #4977 is going to be merged, I've now modified this PR to also keep row names when applicable, and added a comparison against dense inputs in the tests.

Failing CI checks are not related to this PR:

I agree, we see this occasionally with our CI jobs that use Homebrew to set up LaTeX. Will manually re-run them.
This is great, thanks so much! I left a few more suggested changes.
I'm very happy that the Solaris and valgrind tests are passing 🎉
Since this PR introduces so much new C++ code, I'd like my review to not be the only one that counts towards a merge. @shiyu1994 @guolinke could one of you please help review this as well?
if (NROW(row.names(data))) {
  out@Dimnames[[1L]] <- row.names(data)
}
Can you please test this behavior, by adding these new combinations of predcontrib = TRUE + new {Matrix} classes to the tests from #4977?

# sparse matrix with row names
# sparse matrix without row names
Every PR adding new behavior to the package should include tests on that behavior, to catch unexpected issues with the implementation and to prevent future development from accidentally breaking that behavior.
Those are already tested in the tests from the PR for row names.
They are not. The links in my comment point to tests on "CsparseMatrix" objects, but if you click them you won't see tests on the types referenced in this PR: "dsparseMatrix", "dsparseVector", "dgRMatrix", "dgCMatrix".
Is there something I've misunderstood about the relationship between these classes?
There's a class hierarchy...
Please tell us specifically what you mean by "there's a class hierarchy", and why it means that you don't want to add the tests I'm asking you to add.
dgCMatrix is a subclass of CsparseMatrix, which is a subclass of sparseMatrix, and so on. Classes like dsparseMatrix are abstract.
Thanks for that. I'm still struggling to understand how that means that the tests I'm asking for shouldn't be added.
Consider the case added in this PR beginning with } else if (inherits(data, "dgRMatrix")) {. It doesn't contain a return() statement, so at the end of the if - else if block, the "possibly add row names" code (the line this comment thread is on) will run.
If someone were to add return(out) on line 193, that "possibly add row names" code wouldn't be reached. I think it's desirable for a test to fail in that case, to inform us that adding that return statement is a breaking change that causes row names to not be set on the predictions.
The tests I linked test a regular dense R matrix and a CsparseMatrix. As the example below shows, that means they don't currently cover the cases where the input to predict() is a "dgRMatrix" or a "dsparseVector".
library(Matrix)
# the first batch of test cases use a regular R dense matrix
X <- matrix(rnorm(100), ncol = 4)
inherits(X, "dgRMatrix") # FALSE
inherits(X, "dsparseMatrix") # FALSE
inherits(X, "dsparseVector") # FALSE
inherits(X, "dgCMatrix") # FALSE
# the second batch of test cases converts that to a CsparseMatrix
Xcsc <- as(X, "CsparseMatrix")
inherits(Xcsc, "dgRMatrix") # FALSE
inherits(Xcsc, "dsparseMatrix") # TRUE
inherits(Xcsc, "dsparseVector") # FALSE
inherits(Xcsc, "dgCMatrix") # TRUE
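Extending that example to the remaining classes named above (a sketch using only {Matrix} coercions):

```r
library(Matrix)

X <- matrix(rnorm(100L), ncol = 4L)

# a row-compressed sparse matrix is a "dgRMatrix", not a "dgCMatrix"
Xcsr <- as(as(X, "CsparseMatrix"), "RsparseMatrix")
inherits(Xcsr, "dgRMatrix")      # TRUE
inherits(Xcsr, "dgCMatrix")      # FALSE

# a sparse vector is neither kind of sparse matrix
Xspv <- as(X[1L, ], "sparseVector")
inherits(Xspv, "dsparseVector")  # TRUE
inherits(Xspv, "dsparseMatrix")  # FALSE
```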
Ok, if you really want to have tests for everything, I've added a test for CSR matrices. A vector representing a single row cannot have row names, so I left that out of the tests.
if you really want to have tests for everything
Thank you, yes I do. The more of this project's behaviors are reflected in tests, the less likely it is that future changes silently break that behavior. This project is too large for all of these concerns to just be kept in maintainers' heads and enforced through PR comments.
Co-authored-by: James Lamb <[email protected]>
Failing CI checks like this are definitely not from this PR: https://github.com/microsoft/LightGBM/runs/6029069414?check_suite_focus=true
The linter raised a curious comment:
It says that "cbind is an unsafe way to build up a data frame", but the code is not building a data.frame, nor is it meant to do so. It then suggests direct column assignment, which will not work if the LHS is a matrix instead of a data.frame. The linter thus leaves one without efficient options for concatenating matrices or other non-data-frame objects.
Thanks very much for adding tests. I left one more suggestion for those tests...otherwise, I approve of this from the R side.
Before merging though, I would still like some help from @guolinke or @shiyu1994 to review the changes in lightgbm_R.cpp (#5108 (review)).

The linter raised a curious comment ... The linter thus leaves one out of efficient options for concatenating matrices

Since you haven't mentioned the specific place where you wanted to use cbind() and what you did instead, it's not possible for maintainers to judge whether whatever pattern you used instead is less efficient than using cbind().
The warning you've mentioned was added a while ago in this project, to nudge contributors who were not familiar with R into using safer patterns when creating data frames. If you have a specific proposal for why that restriction should be removed, with a specific example of a place in {lightgbm}'s code that would benefit from the use of cbind(), I'd be happy to consider removing that restriction.
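For context, the distinction the two of them are debating can be illustrated like this (a sketch; the linter's advice targets data frames only):

```r
# cbind() is the idiomatic way to append a column to a matrix...
m <- matrix(1:6, nrow = 3L)
m2 <- cbind(m, extra = 7:9)
stopifnot(ncol(m2) == 3L)

# ...while the linter's suggested replacement, direct column
# assignment, only applies to data frames
df <- data.frame(a = 1:3)
df[["extra"]] <- 7:9
stopifnot(ncol(df) == 2L)
```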
X_wrong <- X[, c(1L:10L, 1L:10L)]
X_wrong <- as(X_wrong, "CsparseMatrix")
expect_error(predict(bst, X_wrong, predcontrib = TRUE))
For these expect_error() calls, can you please use argument regexp to be sure that these tests are matching the specific error message they're intended to catch?
Similar to these test cases:
LightGBM/R-package/tests/testthat/test_basic.R
Lines 721 to 729 in c000b8c
expect_error({
  bst <- lgb.train(
    data = dtrain
    , params = list(
      objective_type = "not_a_real_objective"
      , verbosity = VERBOSITY
    )
  )
}, regexp = "Unknown objective type name: not_a_real_objective")
LightGBM/tests/python_package_test/test_basic.py
Lines 602 to 604 in c000b8c
with pytest.raises(lgb.basic.LightGBMError,
                   match="Cannot find parser class 'dummy', please register first or check config format"):
    data.construct()
We've found that approach to error-catching tests useful to prevent the case where tests silently pass in the presence of other, unexpected errors happening on the codepaths the test touches.
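A sketch of the difference (the error message here is hypothetical, standing in for whatever predict() actually raises):

```r
library(testthat)

raise_mismatch <- function() {
  # hypothetical error, for illustration only
  stop("number of columns does not match training data")
}

# passes for ANY error, even an unrelated one a refactor might introduce
expect_error(raise_mismatch())

# only passes when the intended message is raised
expect_error(raise_mismatch(), regexp = "does not match training data")
```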
@shiyu1994 @guolinke can you please provide a review?

C++ part looks good to me
Thanks for looking @guolinke!
Ok @david-cortes, I think we can move forward with this! Since it's been a few weeks since the last commit on this PR, can you please update it to the latest state of master? If CI passes after that, I'll merge this up.
Thanks for all the hard work!
Updated.
The linter is interpreting regular expressions as file paths:

I noticed something similar upgrading to
Updated.
This pull request has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.
ref #4982
This PR adds a missing C-level function for predictions on sparse inputs to the R interface.
Docs are not updated here, in order to avoid merge conflicts with follow-up PRs.