
meta-analysis problem #33

Closed
adelavega opened this issue Oct 7, 2021 · 8 comments

@adelavega (Contributor)

No description provided.

@adelavega (Contributor, Author)

summary: AFNI meta-analysis maps with all tasks look very weak, especially for RMS, which should be solid and looks good at the single-dataset level (t-stats)

three possibilities:

  1. nndb is on a different scale from the other datasets, due to preprocessing differences (primarily) but also predictor scale differences (question: should we re-scale?)
  • look at the time series for both
  • possibly rescale RMS for an NNDB task and take a look
  2. nndb tasks have fewer subjects on average; if that's a problem (i.e., higher variance), they may overly dominate the meta-analysis
  3. an AFNI-specific issue may lead to unstable PEs, with some very high values
  • find an example of this
  • exclude NNDB from the meta-analysis to test whether that's the primary cause of the strange results
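A minimal sketch of the first possibility, using entirely hypothetical data (not the actual maps): if one dataset's values are on a much larger numeric scale, a simple mean-based meta-analysis is dominated by that dataset, and z-scoring each map first recovers the shared pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 1000
signal = rng.normal(0, 1, n_voxels)  # shared spatial effect pattern

# Three datasets on a comparable scale, plus one stand-in for nndb
# whose values are ~100x larger with a weaker signal-to-noise ratio.
maps = [signal + rng.normal(0, 0.5, n_voxels) for _ in range(3)]
maps.append(100 * (0.1 * signal + rng.normal(0, 1.0, n_voxels)))

# Naive mean: the large-scale dataset dominates the combined map.
naive = np.mean(maps, axis=0)

# Z-scoring each map first puts the datasets on a common scale.
zscored = np.mean([(m - m.mean()) / m.std() for m in maps], axis=0)

r_naive = np.corrcoef(naive, signal)[0, 1]
r_zscored = np.corrcoef(zscored, signal)[0, 1]
```

With these toy numbers, the naively averaged map correlates only weakly with the true pattern, while the rescaled average tracks it closely.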

@jdkent (Collaborator) commented Oct 8, 2021

Notes:

  • nistats has not been run on nndb, so the nistats results with and without nndb should be identical

  • nistats by itself does not give strong meta-analytic maps, since the estimator has not always been tracked (but I could assume anything without an estimator was run using nistats)

  • afni:

    • nndb:
      • yes:
        afni_nndb-yes (image)
      • no:
        afni_nndb-no (image)
  • nistats:

    • nndb:
      • yes:
        nistats_nndb-yes (image)
      • no:
        nistats_nndb-no (image)
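One way to implement the assumption above (the record layout and names here are hypothetical, not the project's actual schema): treat any collection whose estimator was never recorded as a nistats run.

```python
# Hypothetical collection records; only the "estimator" field matters here.
collections = [
    {"name": "ds-a", "estimator": "afni"},
    {"name": "ds-b", "estimator": "nistats"},
    {"name": "ds-c", "estimator": None},  # estimator was never tracked
]

def effective_estimator(collection, default="nistats"):
    """Fall back to nistats when no estimator was recorded."""
    return collection["estimator"] or default

# Collections to include in the nistats-only meta-analysis.
nistats_like = [c["name"] for c in collections
                if effective_estimator(c) == "nistats"]
```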

@jdkent (Collaborator) commented Oct 8, 2021

nistats including "None" estimators (assuming they are nistats):
nistats_nndb-no_none-yes (image)

@jdkent (Collaborator) commented Oct 8, 2021

using the first available collection for every study (including nndb):
first_collections (image)
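The selection above could be sketched like this (hypothetical study/collection pairs; the real pipeline's data model may differ): keep only the first collection seen for each study.

```python
# (study, collection) pairs in the order they become available.
available = [
    ("study-1", "coll-10"),
    ("study-1", "coll-11"),  # ignored: study-1 already has a collection
    ("study-2", "coll-20"),
    ("nndb", "coll-30"),     # nndb is included in this run
]

first_collections = {}
for study, collection in available:
    # setdefault only records the first collection per study.
    first_collections.setdefault(study, collection)
```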

@jdkent (Collaborator) commented Oct 8, 2021

first available collections without nndb:
first_collections_nndb-no (image)

@jdkent (Collaborator) commented Oct 8, 2021

the first-available-collection results with and without nndb are perfectly correlated but differ in magnitude: excluding nndb increases the magnitude.
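A minimal sketch of that comparison (toy values, not the real maps): two maps can be perfectly correlated while one is simply a scaled-up version of the other.

```python
import numpy as np

with_nndb = np.array([0.5, -1.0, 2.0, 0.25])
without_nndb = 1.8 * with_nndb  # dropping nndb inflates the magnitude

r = np.corrcoef(with_nndb, without_nndb)[0, 1]            # spatial correlation
scale = np.linalg.norm(without_nndb) / np.linalg.norm(with_nndb)
```

Here `r` is 1.0 (perfect correlation) while `scale` is 1.8, matching the "same pattern, different magnitude" observation.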

@adelavega (Contributor, Author) commented Oct 8, 2021

It looks most likely that the issue is with fitlins:
poldracklab/fitlins#318

Running a test model to see if the patch fixes the issue.
Analysis ID: MzQpZ

If it seems to help, I will run on all non-NNDB datasets and compare to the map above.
If that checks out, I will delete the old AFNI maps, since they are incorrect.

@jdkent (Collaborator) commented Oct 13, 2021

The problem is neurostuff/NiMARE#579

jdkent closed this as completed Oct 13, 2021