
threshold z-map images from Dataset objects to create coordinates #423

Closed
jdkent opened this issue Dec 16, 2020 · 4 comments · Fixed by #446
Labels
cbma Issues/PRs pertaining to coordinate-based meta-analysis · enhancement New feature or request · ibma Issues/PRs pertaining to image-based meta-analysis

Comments

jdkent (Member) commented Dec 16, 2020

It would be useful to create coordinates from an image within a Dataset object in circumstances where:

  • for some of the studies you only have coordinates, but for others you have images
  • you want to compare an IBMA approach with a CBMA approach when you have images.
    This can be accomplished with nilearn's get_clusters_table.
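For illustration, here is a minimal sketch of the kind of peak extraction that nilearn's `get_clusters_table` performs, using only NumPy and SciPy. The function name, the synthetic z-map, the threshold value, and the identity affine are all made-up stand-ins, not part of the proposed NiMARE API:

```python
import numpy as np
from scipy import ndimage

def peaks_from_zmap(z_data, affine, z_thresh=3.1):
    """Return one peak coordinate (in mm) per suprathreshold cluster."""
    mask = z_data > z_thresh
    labels, n_clusters = ndimage.label(mask)
    peaks = []
    for cluster_id in range(1, n_clusters + 1):
        # Voxel index of the cluster's maximum z value
        ijk = ndimage.maximum_position(z_data, labels=labels, index=cluster_id)
        # Voxel indices -> world (mm) coordinates via the image affine
        xyz = affine @ np.append(ijk, 1)
        peaks.append(tuple(xyz[:3]))
    return peaks

# Synthetic z-map with one "activation" blob peaking at voxel (5, 5, 5)
z = np.zeros((10, 10, 10))
z[4:6, 4:6, 4:6] = 4.0
z[5, 5, 5] = 6.0
affine = np.eye(4)  # identity affine: voxel indices equal mm coordinates
print(peaks_from_zmap(z, affine))  # [(5.0, 5.0, 5.0)]
```

In practice one would pass a real NIfTI z-map to `get_clusters_table` instead, which additionally reports cluster sizes and subpeaks.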
@tsalo tsalo added the enhancement New feature or request label Dec 16, 2020
tsalo (Member) commented Dec 17, 2020

I think this is a great idea, but I have a couple of thoughts:

  1. Would applying the same threshold to all image-only studies introduce a bias?
  2. It seems like this would make handling null findings in CBMAs more relevant since there's no guarantee that the images will have any significant clusters (see Handling null findings in coordinate-based meta-analyses #294).
  3. It should be trivial to convert other image types to z-maps before the coordinate extraction too!
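As a sketch of that conversion for t-maps: map each t value to its one-sided p-value and back to the z value with the same p-value. The function name and the one-sided convention here are my own; a production implementation (NiMARE provides transforms for this) also needs to guard against p-value underflow for extreme statistics:

```python
import numpy as np
from scipy import stats

def t_to_z(t_values, dof):
    """Convert t statistics to z statistics via their one-sided p-values."""
    p = stats.t.sf(t_values, dof)  # one-sided p-value under the t distribution
    z = stats.norm.isf(p)          # z value with the same one-sided p-value
    return z

# With 20 degrees of freedom, t values shrink slightly toward zero,
# since the t distribution has heavier tails than the normal.
print(t_to_z(np.array([2.0, 3.0]), dof=20))
```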

tyarkoni (Contributor) commented Dec 17, 2020

> Would applying the same threshold to all image-only studies introduce a bias?

Yes in the sense that the distribution of voxel-wise image statistics will be biased (e.g., if you use a low threshold, you'll get more 1's in the MKDA maps than if you use a high threshold). But that's only a problem in the usual "gives people one more knob with which to p-hack" sense. There's no obvious reason why it should bias meta-analysis statistics, because picking a different threshold shouldn't increase the probability of spatial convergence. (Well, except maybe in the sense that at the limit, the null is obviously false everywhere, so with enough non-zero values, the whole brain becomes significant. But that's a separate and much deeper conceptual issue.)

> It seems like this would make handling null findings in CBMAs more relevant since there's no guarantee that the images will have any significant clusters.

Probably a dumb question, but: why are null findings a problem? Can't we just run empty maps through the existing procedures and then they just add a bit of uncertainty? Or is the issue just in the representation of the data—i.e., that we can't tell whether a study is missing coordinates because of null findings, or because of missing data? If it's the latter, maybe we can just adopt a convention that [] is different from null (or None in Python). [EDIT: I found #294; will follow up there.]

> It should be trivial to convert other image types to z-maps before the coordinate extraction too!

+1

tsalo (Member) commented Dec 17, 2020

> There's no obvious reason why it should bias meta-analysis statistics, because picking a different threshold shouldn't increase the probability of spatial convergence.

That's a relief. Thanks!

tyarkoni (Contributor) commented

As an aside, this is something we could potentially follow up on later. I don't expect changes in threshold to bias the estimated values per se, but there should be a (large) effect on the variance of the estimates and the size of the resulting clusters. There is the standard tradeoff here between voxel-wise sensitivity and spatial specificity, and it might be interesting to characterize that. For example, it seems reasonable to suppose that if one has access to the original images, but nevertheless insists on doing a CBMA for some reason, then one is generally better off using a lower rather than a higher threshold to generate peaks—potentially even without any MCC.

@jdkent is working on a workflow to easily run CBMA analyses on coordinates extracted from thresholded NeuroScout maps, and once that's ready it should be pretty trivial to address the above question.

@tsalo tsalo added cbma Issues/PRs pertaining to coordinate-based meta-analysis ibma Issues/PRs pertaining to image-based meta-analysis labels Mar 14, 2021