Commit

fixed inline math symbols
ccraddock committed Nov 11, 2015
1 parent ce7781c commit ef58b02
Showing 10 changed files with 8,942 additions and 7,268 deletions.
1,870 changes: 935 additions & 935 deletions frontiersFPHY.cls

1,872 changes: 936 additions & 936 deletions frontiersHLTH.cls

1,870 changes: 935 additions & 935 deletions frontiersSCNS.cls

1,864 changes: 932 additions & 932 deletions frontiers_suppmat.cls

3,272 changes: 1,636 additions & 1,636 deletions frontiersinHLTH&FPHY.bst

3,134 changes: 1,567 additions & 1,567 deletions frontiersinSCNS_ENG_HUMS.bst

243 changes: 243 additions & 0 deletions main.tex

2 changes: 1 addition & 1 deletion qap_fn.tex
@@ -177,7 +177,7 @@ \section{Introduction}
\label{intro}
It is well accepted that poor quality data interferes with the ability of neuroimaging analyses to uncover biological signal and to distinguish meaningful from artifactual findings, but there is no clear guidance on how to differentiate “good” from “bad” data. A variety of measures have been proposed, but there is no evidence supporting the primacy of one measure over another or establishing the ranges of values that separate “good” from “bad” data. As a result, researchers must rely on painstaking visual inspection to assess data quality. This approach is time- and resource-intensive, subjective, and susceptible to inter-rater and test-retest variability. Additionally, some defects may be too subtle to be fully appreciated by visual inspection, yet strong enough to degrade the accuracy of data processing algorithms or bias analysis results. Further, it is difficult to visually assess the quality of data that has already been processed, such as the data being shared through the Preprocessed Connectomes Project (PCP; http://preprocessed-connectomes-project.github.io/), the Human Connectome Project (HCP), and the Addiction Connectomes Preprocessing Initiative (ACPI). To begin to address this problem, the PCP has assembled several of the quality metrics proposed in the literature into a Quality Assessment Protocol (QAP; http://preprocessed-connectomes-project.github.io/quality-assessment-protocol).

The QAP is an open source software package implemented in Python for the automated calculation of quality measures for functional and structural MRI data. The QAP software makes use of the Nipype () pipelining library to efficiently achieve high-throughput processing on a variety of high-performance computing systems (). The quality of structural MRI data is assessed using contrast-to-noise ratio (CNR; Magnotta and Friedman, 2006), entropy focus criterion (EFC, Atkinson 1997), foreground-to-background energy ratio (FBER, ), voxel smoothness (FWHM, Friedman 2008), percentage of artifact voxels (QI1, Mortamet 2009), and signal-to-noise ratio (SNR, \cite{magnotta2006}). The QAP includes methods to assess both the spatial and temporal quality of fMRI data. Spatial quality is assessed using EFC, FBER, and FWHM, in addition to ghost-to-signal ratio (GSR). Temporal quality of functional data is assessed using the standardized root mean squared change in fMRI signal between volumes (DVARS; Nichols 2013), mean root mean square deviation (MeanFD, Jenkinson 2003), the percentage of volumes with FD > 0.2 (Percent FD; Power 2012), the temporal mean of AFNI’s 3dTqual metric (1 minus the Spearman correlation between each fMRI volume and the median volume; Cox 1995), and the average fraction of outliers found in each volume using AFNI’s 3dTout command.
The QAP is an open source software package implemented in Python for the automated calculation of quality measures for functional and structural MRI data. The QAP software makes use of the Nipype () pipelining library to efficiently achieve high-throughput processing on a variety of high-performance computing systems (). The quality of structural MRI data is assessed using contrast-to-noise ratio (CNR; Magnotta and Friedman, 2006), entropy focus criterion (EFC, Atkinson 1997), foreground-to-background energy ratio (FBER, ), voxel smoothness (FWHM, Friedman 2008), percentage of artifact voxels (QI1, Mortamet 2009), and signal-to-noise ratio (SNR, \cite{magnotta2006}). The QAP includes methods to assess both the spatial and temporal quality of fMRI data. Spatial quality is assessed using EFC, FBER, and FWHM, in addition to ghost-to-signal ratio (GSR). Temporal quality of functional data is assessed using the standardized root mean squared change in fMRI signal between volumes (DVARS; Nichols 2013), mean root mean square deviation (MeanFD, Jenkinson 2003), the percentage of volumes with FD $>$ 0.2 (Percent FD; Power 2012), the temporal mean of AFNI’s 3dTqual metric (1 minus the Spearman correlation between each fMRI volume and the median volume; Cox 1995), and the average fraction of outliers found in each volume using AFNI’s 3dTout command.

Applying the QAP to differentiate good-quality data from poor will require learning which of the measures are most sensitive to problems in the data and the ranges of values that indicate poor data. The answers to these questions are likely to vary with the analysis at hand, and finding them will require the ready availability of large-scale, heterogeneous datasets for which the QAP metrics have been calculated. To help with this goal, the QAP has been used to measure structural and temporal data quality on data from the Autism Brain Imaging Data Exchange (ABIDE; Di Martino 2013) and the Consortium for Reliability and Reproducibility (CoRR, Zuo 2014), and the results are being openly shared through the PCP. An initial analysis of the resulting values has been performed to evaluate their collinearity, correspondence to expert-assigned quality labels, and test-retest reliability.
\section{Methods}
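
The diff above names the individual quality measures only briefly; for concreteness, the sketch below shows roughly how a few of the simpler ones could be computed with NumPy and NiBabel. It is an illustration, not the QAP implementation: the file and mask arguments, the 50 mm head radius, the 0.2 threshold, the non-standardized form of DVARS, and the use of Power-style framewise displacement (rather than the Jenkinson RMS deviation cited in the text) are all assumptions made for the example.

# Rough sketch of three of the measures named above -- not the QAP code.
# Assumes a structural image plus a head mask, a motion-corrected 4D
# functional image plus a brain mask, and a 6-column motion-parameter
# file (rotations in radians, then translations in mm).
import numpy as np
import nibabel as nib


def snr(anat_file, head_mask_file):
    """Signal-to-noise ratio: mean signal inside the head divided by the
    standard deviation of the background (air) voxels."""
    anat = nib.load(anat_file).get_fdata()
    head = nib.load(head_mask_file).get_fdata() > 0
    return anat[head].mean() / anat[~head].std()


def dvars(func_file, brain_mask_file):
    """Non-standardized DVARS: root mean squared change in the fMRI signal
    between successive volumes (one value per volume-to-volume transition)."""
    func = nib.load(func_file).get_fdata()              # x, y, z, t
    mask = nib.load(brain_mask_file).get_fdata() > 0
    ts = func[mask]                                      # voxels x timepoints
    return np.sqrt(np.mean(np.diff(ts, axis=1) ** 2, axis=0))


def fd_power(motion_params_file, head_radius=50.0, threshold=0.2):
    """Power-style framewise displacement: sum of absolute volume-to-volume
    changes in the six realignment parameters, with rotations converted to
    arc length on a sphere of the given radius. Returns the mean FD and the
    percentage of volumes whose FD exceeds the threshold."""
    params = np.loadtxt(motion_params_file)              # t x 6
    delta = np.abs(np.diff(params, axis=0))
    delta[:, :3] *= head_radius                          # radians -> mm
    fd = delta.sum(axis=1)
    return fd.mean(), 100.0 * np.mean(fd > threshold)

In the QAP itself these computations are wrapped in Nipype workflows, which is what allows them to be distributed across the high-performance computing systems described in the text.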
