
create sequential context testing utility function #220

Merged
merged 10 commits into from
Feb 11, 2024

Conversation

aaron10l (Contributor)

Implements the utility function for sequential univariate testing of contexts. For issue #204.
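The core idea (testing one context feature at a time for whether it modulates the X-to-Y relationship) can be sketched outside the library with a simple permutation test. `sequential_context_pvals` below is a hypothetical stand-in for illustration only; the PR's actual `test_sequential_contexts` fits contextualized models rather than a linear interaction:

```python
import numpy as np

def sequential_context_pvals(C, X, Y, n_perms=200, seed=0):
    """Permutation-test each context feature alone for whether it
    modulates the slope of X on Y. Illustrative stand-in, not the
    PR's implementation."""
    rng = np.random.default_rng(seed)
    pvals = []
    for j in range(C.shape[1]):
        c = C[:, j]

        def interaction_coef(ctx):
            # OLS fit of Y ~ 1 + ctx + X + ctx*X; return |interaction coef|
            A = np.column_stack([np.ones_like(ctx), ctx, X[:, 0], ctx * X[:, 0]])
            return abs(np.linalg.lstsq(A, Y, rcond=None)[0][3])

        obs = interaction_coef(c)
        null = [interaction_coef(rng.permutation(c)) for _ in range(n_perms)]
        # add-one correction keeps p-values strictly above zero
        pvals.append((1 + sum(n >= obs for n in null)) / (1 + n_perms))
    return pvals
```

On simulated data where only the first context feature drives the slope, the first p-value comes out small and the second does not.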

@cnellington (Owner) left a comment

A few style comments, but overall this looks great. I think we should also have a notebook demonstrating expected behavior; can you make a tutorial notebook for the website that shows this?

contextualized/analysis/pvals.py (outdated, resolved)
contextualized/analysis/pvals.py (resolved)
contextualized/analysis/utils.py (outdated, resolved)
@cnellington cnellington changed the base branch from dev to main December 29, 2023 23:14

@@ -0,0 +1,3260 @@
{
@cnellington (Owner) · Jan 9, 2024

It would be good to start with a quick explanation of why sequential testing is useful (we want to find which features are the most context-dependent and which contexts are the most important), and how we actually achieve this.



@@ -0,0 +1,3260 @@
{
@cnellington (Owner) · Jan 9, 2024

This output should be silenced; not sure why capture isn't working here. This could be a fix: Lightning-AI/pytorch-lightning#6341

```python
import logging
logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)
```



@@ -0,0 +1,3260 @@
{
@cnellington (Owner) · Jan 9, 2024

It's odd that so many of these show up as 0.90909. I wonder if this is a bug. It would also be nice to show at least one significantly context-dependent predictor, since all of these are p > 0.05 without correcting for FDR. You might need to simulate data to show this.
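As a hedged sketch of the kind of simulation the reviewer suggests (all names and coefficients here are illustrative, not from the PR), one could generate data where only the first context feature drives the X-to-Y slope, so a context-dependence test should flag it clearly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
C = rng.normal(size=(n, 2))   # two context features
X = rng.normal(size=(n, 1))   # one predictor
# slope on X depends strongly on context feature 0 and not at all on feature 1
slope = 1.0 + 3.0 * C[:, 0]
Y = slope * X[:, 0] + 0.1 * rng.normal(size=n)

# sanity check: a context-interaction model fits far better than a fixed-slope model
A0 = np.column_stack([np.ones(n), X[:, 0]])
A1 = np.column_stack([np.ones(n), X[:, 0], C[:, 0] * X[:, 0]])
r0 = Y - A0 @ np.linalg.lstsq(A0, Y, rcond=None)[0]
r1 = Y - A1 @ np.linalg.lstsq(A1, Y, rcond=None)[0]
```

With context dependence this strong, the p-value for feature 0 should sit at the lower bound of the test's range, while feature 1 stays nonsignificant.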



@cnellington (Owner) left a comment

Looking good! Mainly style comments on the code for documentation. More critically, we will need to update the notebook to have an example where we recover a significantly context-dependent predictor. The pvals in the existing notebook are odd, and none are significant.

contextualized/analysis/pvals.py (outdated, resolved)
@@ -165,3 +166,66 @@ def calc_heterogeneous_predictor_effects_pvals(model, C, **kwargs):
]
)
return pvals


def test_sequential_contexts(model_constructor, C, X, Y, **kwargs):
@cnellington (Owner)

Can you add this to docs/source/analysis.rst to include it in the summary table and in the documentation page, and then rebuild the docs after updating the type hints and docstring?
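For reference, the autosummary entry for the new function might look like the following, assuming docs/source/analysis.rst follows the usual Sphinx autosummary pattern (the exact layout of that file is an assumption, not shown in this thread):

```rst
.. autosummary::
   :toctree: generated

   contextualized.analysis.pvals.test_sequential_contexts
```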

@@ -0,0 +1,463 @@
{
@cnellington (Owner) · Feb 11, 2024

This should almost certainly be significant because of our simulation setup. I've run this simulation and I get the p-val at the lower bound of our range because the context dependence is so strong/pure. It would be nice to get similar results here.



@cnellington cnellington merged commit af62b4c into cnellington:main Feb 11, 2024
2 participants