[ENH] New diagnostics module, with post-meta-analysis Jackknife method #592
Conversation
Here's an example output, based on the ALE done in the manuscript, here.
IBMA estimators need masker and dataset attributes to work with the Jackknife method.
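As a small illustration of that requirement (hypothetical helper and attribute names, not the actual NiMARE check), a Jackknife-style diagnostic could verify those attributes on the fitted estimator before doing any work:

```python
def check_jackknife_inputs(estimator):
    """Illustrative guard: the jackknife needs the fitted estimator to expose
    the masker and dataset it was fitted with (attribute names assumed here)."""
    missing = [name for name in ("masker", "dataset") if getattr(estimator, name, None) is None]
    if missing:
        raise ValueError(
            f"Estimator is missing attribute(s) required by the Jackknife: {', '.join(missing)}"
        )
```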
Just came across a potential bug in the updated example: there are some negative values in the table, which shouldn't be possible with ALE meta-analyses, since ALE maps only contain positive values. EDIT: The issue was that the Jackknife in that example was called twice, on different MetaResults. Those MetaResults contained references to the original ALE, which was a single Estimator object fitted to two different Datasets, so both MetaResults ended up pointing at the most recent version of the ALE (including its dataset and inputs_ attributes). This should be fixed in 4529cca.
… object. This was causing problems when the same Estimator was fitted to multiple Datasets: the MetaResult's estimator attribute would end up with the updated inputs_ and Dataset from whichever Dataset was fitted last.
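A minimal sketch of the kind of fix described here (illustrative class and attribute names, not the exact NiMARE change): have the MetaResult keep its own copy of the Estimator, so that re-fitting the same Estimator object to another Dataset cannot retroactively change an earlier result.

```python
from copy import deepcopy


class MetaResult:
    """Simplified stand-in for a meta-analysis result container (illustrative)."""

    def __init__(self, estimator, maps=None):
        # Deep-copy the estimator so later fits of the same object to other
        # Datasets do not overwrite this result's recorded inputs_/dataset.
        self.estimator = deepcopy(estimator)
        self.maps = maps if maps is not None else {}
```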
Now there's a problem with the peaks2maps test... EDIT: And now it seems fine 🤷
Codecov Report

```diff
@@            Coverage Diff             @@
##             main     #592      +/-   ##
==========================================
- Coverage   85.84%   85.72%   -0.13%
==========================================
  Files          39       40       +1
  Lines        4288     4384      +96
==========================================
+ Hits         3681     3758      +77
- Misses        607      626      +19
```

Continue to review full report at Codecov.
```python
# For the private method that does the work of the analysis, we have a lot of constant
# parameters, and only one that varies (the ID of the study being removed),
# so we use functools.partial.
transform_method = partial(
    self._transform,
    all_ids=meta_ids,
    dset=dset,
    estimator=estimator,
    target_value_map=target_value_map,
    stat_values=stat_values,
    original_masker=original_masker,
    cluster_masker=cluster_masker,
)
with Pool(self.n_cores) as p:
    jackknife_results = list(tqdm(p.imap(transform_method, meta_ids), total=len(meta_ids)))
```
This seems like a much cleaner approach than what I've been doing for Monte Carlo FWE correction in the CBMA classes. However, one thing I'm not sure about is whether using `Pool` with one process is the same as running the same function without multiprocessing at all. @jdkent, do you know?
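One common way to sidestep that concern (a general-purpose sketch under assumed names, not necessarily how NiMARE resolves it) is to bypass multiprocessing entirely when only one core is requested, which avoids the argument pickling and process-spawn overhead of a single-worker Pool:

```python
from multiprocessing import Pool

from tqdm import tqdm


def run_per_study(transform_method, study_ids, n_cores=1):
    """Apply transform_method to each study ID, serially when n_cores == 1,
    otherwise through a multiprocessing Pool (illustrative helper)."""
    if n_cores == 1:
        # Plain loop: no worker processes, no pickling of the partial's arguments.
        return [transform_method(study_id) for study_id in tqdm(study_ids)]
    with Pool(n_cores) as pool:
        return list(tqdm(pool.imap(transform_method, study_ids), total=len(study_ids)))
```

With `n_cores=1`, the results should match the Pool version; the difference is only the process-management overhead.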
Closes #590.
Changes proposed in this pull request:
To do:
- [ ] Convert into a proper Transformer? E.g., return an updated MetaResult object, with the labeled cluster map and the associated experiment contribution table. I don't want to tackle this in this PR; I'll do it in another one, I guess. (A rough sketch of what that interface could look like follows below.)
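For context, here is a rough, hypothetical sketch of such a Transformer-style interface: a transform method that takes a result object and returns a new one carrying the labeled cluster map and the contribution table. All names and attributes here are illustrative, not the actual NiMARE API.

```python
from dataclasses import dataclass, field, replace

import pandas as pd


@dataclass
class SimpleResult:
    """Stand-in for a MetaResult (illustrative only)."""

    maps: dict = field(default_factory=dict)
    tables: dict = field(default_factory=dict)


class JackknifeTransformer:
    """Hypothetical Transformer-style wrapper for the jackknife diagnostic."""

    def transform(self, result: SimpleResult) -> SimpleResult:
        # Placeholder for the real leave-one-out computation.
        cluster_map, contribution_table = self._jackknife(result)

        # Return an updated copy rather than mutating the input, so the
        # labeled cluster map and contribution table travel with the result.
        return replace(
            result,
            maps={**result.maps, "label_clusters": cluster_map},
            tables={**result.tables, "jackknife": contribution_table},
        )

    def _jackknife(self, result):
        # The real method would refit the estimator with each study left out
        # and record each study's contribution to every significant cluster.
        return None, pd.DataFrame()
```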