create sequential context testing utility function #220
Conversation
A few style comments, but overall this looks great. I think we should also have a notebook demonstrating the expected behavior. Can you make a tutorial notebook for the website that shows this?
Check out this pull request on ReviewNB to see visual diffs and provide feedback on Jupyter Notebooks. Powered by ReviewNB.
It would be good to start with a quick explanation of why sequential testing is useful (we want to find which features are the most context-dependent and which contexts are the most important) and how we actually achieve this.
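As a rough illustration of what "sequential" means here, a minimal sketch (assuming a generic fit-and-test routine; fit_and_get_pvals is a hypothetical stand-in, not the library's actual API):

# Sketch: test each context variable one at a time ("univariate") by fitting a
# contextualized model with only that context column and recording the predictor
# p-values it produces. fit_and_get_pvals is a hypothetical placeholder callable.
def sequential_context_test_sketch(fit_and_get_pvals, C, X, Y):
    results = {}
    for j in range(C.shape[1]):
        C_single = C[:, j:j + 1]  # keep only context column j
        results[j] = fit_and_get_pvals(C_single, X, Y)
    return results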
This output should be silenced; I'm not sure why capture is not working here. This could be a fix (see Lightning-AI/pytorch-lightning#6341):

import logging
logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)
It's odd that so many of these show up as 0.90909. I wonder if this is a bug. It would also be nice to show at least one significantly context-dependent predictor, since all of these are p > 0.05 without correcting for FDR. You might need to simulate data to show this.
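One way to simulate such data, as a minimal sketch (the generative form and variable names here are illustrative assumptions, not the notebook's actual setup), is to make one predictor's effect on Y depend strongly on a context variable:

import numpy as np

# Illustrative simulation: X[:, 0] has a fixed effect on Y, while the effect of
# X[:, 1] varies linearly with the context C, so X[:, 1] should show up as a
# strongly context-dependent predictor in a sequential context test.
rng = np.random.default_rng(0)
n = 1000
C = rng.normal(size=(n, 1))              # one context variable
X = rng.normal(size=(n, 2))              # two predictors
effect_0 = 1.0                           # context-independent effect of X[:, 0]
effect_1 = 2.0 * C[:, 0]                 # effect of X[:, 1] varies with context
Y = (effect_0 * X[:, 0] + effect_1 * X[:, 1] + 0.1 * rng.normal(size=n)).reshape(-1, 1)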
Looking good! Mainly style comments on the code for documentation. More critically, we will need to update the notebook to have an example where we recover a significantly context-dependent predictor. The pvals in the existing notebook are odd, and none are significant.
contextualized/analysis/pvals.py
Outdated
@@ -165,3 +166,66 @@ def calc_heterogeneous_predictor_effects_pvals(model, C, **kwargs):
        ]
    )
    return pvals


def test_sequential_contexts(model_constructor, C, X, Y, **kwargs):
Can you add this to docs/source/analysis.rst to include it in the summary table and on the documentation page, and then rebuild the docs after updating the type hints and docstring?
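For reference, the kind of annotated signature and docstring being asked for might look something like this (a sketch only; the parameter and return types are assumptions about the intended interface, not the merged implementation):

from typing import Callable
import numpy as np
import pandas as pd

# Hypothetical illustration of type hints and a docstring for the new utility;
# the actual function in contextualized/analysis/pvals.py may differ.
def test_sequential_contexts(
    model_constructor: Callable,
    C: pd.DataFrame,
    X: np.ndarray,
    Y: np.ndarray,
    **kwargs,
) -> pd.DataFrame:
    """Test each context variable sequentially, one at a time.

    Fits a contextualized model per single context column and collects the
    resulting predictor p-values, to identify which contexts matter most and
    which predictors are most context-dependent.
    """
    raise NotImplementedError  # placeholder body in this sketch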
Update Aaron's branch with main updates
This should almost certainly be significant because of our simulation setup. I've run this simulation and I get the p-val at the lower bound of our range because the context dependence is so strong/pure. It would be nice to get similar results here.
Implements the utility function for sequential univariate testing of contexts. For issue #204.