21 results for "Nichols, Thomas E."
Search Results
2. Statistical Limitations in Functional Neuroimaging I. Non-Inferential Methods and Statistical Models
- Author
- Petersson, Karl Magnus, Nichols, Thomas E., Poline, Jean-Baptiste, and Holmes, Andrew P.
- Published
- 1999
3. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates
- Author
- Eklund, Anders, Nichols, Thomas E., and Knutsson, Hans
- Published
- 2016
4. Alternative-based thresholding with application to presurgical fMRI
- Author
- Durnez, Joke, Moerkerke, Beatrijs, Bartsch, Andreas, and Nichols, Thomas E.
- Published
- 2013
- Full Text
- View/download PDF
5. Effective degrees of freedom of the Pearson's correlation coefficient under autocorrelation
- Author
- Afyouni, Soroosh, Smith, Stephen M., and Nichols, Thomas E.
- Subjects
- Serial correlation, Functional Neuroimaging, fMRI, Brain, Quadratic covariance, Variance, Toeplitz matrix, Magnetic Resonance Imaging, Article, Graph theory, Functional connectivity, Autocorrelation, Data Interpretation, Statistical, Pearson correlation coefficient, Connectome, Image Processing, Computer-Assisted, Humans, Time-series, Cross correlation, Resting state
- Abstract
The dependence between pairs of time series is commonly quantified by Pearson's correlation. However, if the time series are themselves dependent (i.e. exhibit temporal autocorrelation), the effective degrees of freedom (EDF) are reduced, the standard error of the sample correlation coefficient is biased, and Fisher's transformation fails to stabilise the variance. Since fMRI time series are notoriously autocorrelated, the issue of biased standard errors – before or after Fisher's transformation – becomes vital in individual-level analysis of resting-state functional connectivity (rsFC) and must be addressed anytime a standardised Z-score is computed. We find that the severity of autocorrelation is highly dependent on spatial characteristics of brain regions, such as the size of regions of interest and the spatial location of those regions. We further show that the available EDF estimators make restrictive assumptions that are not supported by the data, resulting in biased rsFC inferences that lead to distorted topological descriptions of the connectome on the individual level. We propose a practical “xDF” method that accounts not only for distinct autocorrelation in each time series, but also for instantaneous and lagged cross-correlation. We find the xDF correction varies substantially over node pairs, indicating the limitations of global EDF corrections used previously. In addition to extensive synthetic and real data validations, we investigate the impact of this correction on rsFC measures in data from the Young Adult Human Connectome Project, showing that accounting for autocorrelation dramatically changes fundamental graph theoretical measures relative to no correction.
Highlights
• Autocorrelation is a problem for sample correlation, breaking the variance-stabilising property of Fisher's transformation.
• We show that fMRI autocorrelation varies systematically with region of interest size, and is heterogeneous over subjects.
• Existing adjustment methods are themselves biased when true correlation is non-zero due to a confounding effect.
• Our “xDF” method provides accurate Z-scores based on either Pearson's or Fisher's transformed correlations.
• Resting-state fMRI autocorrelation considerably alters the graph theoretical description of the human connectome.
- Published
- 2019
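A minimal illustrative sketch related to entry 5 above (not the xDF estimator itself): a Monte Carlo check that temporal autocorrelation inflates the variance of the Fisher-transformed sample correlation between two independent AR(1) series, so the nominal 1/(N-3) variance understates the true uncertainty. The AR(1) coefficient, series length, and simulation count are arbitrary choices for the demonstration.

```python
# Illustrative sketch only (not xDF): autocorrelation inflates the variance of
# Fisher-transformed Pearson correlation between two *independent* AR(1) series.
import numpy as np

def ar1(n, phi, rng):
    """Generate an AR(1) series of length n with coefficient phi."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

rng, n, phi, n_sim = np.random.default_rng(0), 200, 0.6, 2000
z = [np.arctanh(np.corrcoef(ar1(n, phi, rng), ar1(n, phi, rng))[0, 1])
     for _ in range(n_sim)]                       # Fisher's transformation of r

print("Empirical variance of z:", round(float(np.var(z)), 4))
print("Nominal 1/(N-3):        ", round(1 / (n - 3), 4))
```

For these settings the empirical variance is noticeably larger than the nominal value, which is the kind of bias the xDF correction described above is designed to address.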
6. Discussion on "distributional independent component analysis for diverse neuroimaging modalities" by Ben Wu, Subhadip Pal, Jian Kang, and Ying Guo.
- Author
- Keeratimahat, Kan and Nichols, Thomas E.
- Subjects
- INDEPENDENT component analysis, BRAIN imaging, MODAL logic
- Abstract
Wu et al. have made an important contribution to the methodology for data‐driven analysis of MRI data. However, we wish to challenge the authors on new potential applications of their approach beyond diffusion tensor imaging data, and to think carefully about the impact of random initialization implicit in their method. We illustrate the variability found from re‐analyzing the supplied demonstration data multiple times, finding that the discovered independent components have a wide range of reliability, from nearly perfect overlap to no overlap at all. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
7. The expected behaviour of random fields in high dimensions: contradictions in the results of Bansal and Peterson.
- Author
- Davenport, Samuel and Nichols, Thomas E.
- Subjects
- RANDOM fields, CONTRADICTION
- Abstract
Bansal and Peterson (2018) found that in simple stationary Gaussian simulations Random Field Theory incorrectly estimates the number of clusters of a Gaussian field that lie above a threshold. Their results contradict the existing literature and appear to have arisen due to errors in their code. Using reproducible code we demonstrate that in their simulations Random Field Theory correctly predicts the expected number of clusters and therefore that many of their results are invalid. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
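As a hedged illustration of the check described in entry 7 (not the authors' reproducible code), one can simulate smooth stationary Gaussian fields and count suprathreshold clusters directly; the mean count is the quantity that Random Field Theory predicts in expectation. Field size, smoothness, and threshold below are arbitrary.

```python
# Illustrative sketch: count clusters above a threshold in simulated smooth
# stationary Gaussian fields; the average count is what RFT predicts.
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(0)
shape, fwhm, u, n_sim = (64, 64, 64), 8.0, 2.5, 100
sigma = fwhm / np.sqrt(8 * np.log(2))      # FWHM -> Gaussian sigma (in voxels)

counts = []
for _ in range(n_sim):
    field = gaussian_filter(rng.standard_normal(shape), sigma)
    field /= field.std()                   # re-standardise to unit variance
    counts.append(label(field > u)[1])     # number of connected suprathreshold components

print("Mean suprathreshold cluster count:", np.mean(counts))
```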
8. Isolating the sources of pipeline‐variability in group‐level task‐fMRI results.
- Author
- Bowring, Alexander, Nichols, Thomas E., and Maumet, Camille
- Subjects
- INTEGRATED software, FUNCTIONAL magnetic resonance imaging, WORKFLOW
- Abstract
Task‐fMRI researchers have great flexibility as to how they analyze their data, with multiple methodological options to choose from at each stage of the analysis workflow. While the development of tools and techniques has broadened our horizons for comprehending the complexities of the human brain, a growing body of research has highlighted the pitfalls of such methodological plurality. In a recent study, we found that the choice of software package used to run the analysis pipeline can have a considerable impact on the final group‐level results of a task‐fMRI investigation (Bowring et al., 2019, BMN). Here we revisit our work, seeking to identify the stages of the pipeline where the greatest variation between analysis software is induced. We carry out further analyses on the three datasets evaluated in BMN, employing a common processing strategy across parts of the analysis workflow and then utilizing procedures from three software packages (AFNI, FSL, and SPM) across the remaining steps of the pipeline. We use quantitative methods to compare the statistical maps and isolate the main stages of the workflow where the three packages diverge. Across all datasets, we find that variation between the packages' results is largely attributable to a handful of individual analysis stages, and that these sources of variability were heterogeneous across the datasets (e.g., choice of first‐level signal model had the most impact for the balloon analog risk task dataset, while first‐level noise model and group‐level model were more influential for the false belief and antisaccade task datasets, respectively). We also observe areas of the analysis workflow where changing the software package causes minimal differences in the final results, finding that the group‐level results were largely unaffected by which software package was used to model the low‐frequency fMRI drifts. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
9. Exploring the impact of analysis software on task fMRI results.
- Author
- Bowring, Alexander, Maumet, Camille, and Nichols, Thomas E.
- Subjects
- TASK analysis, INTEGRATED software, SYSTEMS software, COMPUTER software, CARTOGRAPHY software
- Abstract
A wealth of analysis tools are available to fMRI researchers in order to extract patterns of task variation and, ultimately, understand cognitive function. However, this "methodological plurality" comes with a drawback. While conceptually similar, two different analysis pipelines applied on the same dataset may not produce the same scientific results. Differences in methods, implementations across software, and even operating systems or software versions all contribute to this variability. Consequently, attention in the field has recently been directed to reproducibility and data sharing. In this work, our goal is to understand how choice of software package impacts on analysis results. We use publicly shared data from three published task fMRI neuroimaging studies, reanalyzing each study using the three main neuroimaging software packages, AFNI, FSL, and SPM, using parametric and nonparametric inference. We obtain all information on how to process, analyse, and model each dataset from the publications. We make quantitative and qualitative comparisons between our replications to gauge the scale of variability in our results and assess the fundamental differences between each software package. Qualitatively we find similarities between packages, backed up by Neurosynth association analyses that correlate similar words and phrases to all three software package's unthresholded results for each of the studies we reanalyse. However, we also discover marked differences, such as Dice similarity coefficients ranging from 0.000 to 0.684 in comparisons of thresholded statistic maps between software. We discuss the challenges involved in trying to reanalyse the published studies, and highlight our efforts to make this research reproducible. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
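Entry 9 reports Dice similarity coefficients between thresholded statistic maps produced by different packages. A minimal sketch of that overlap measure follows; the maps below are random placeholders, not the study's results.

```python
# Illustrative sketch: Dice coefficient between two binary (thresholded) maps.
import numpy as np

def dice(mask_a, mask_b):
    """2|A ∩ B| / (|A| + |B|) for two boolean masks; returns 1.0 if both are empty."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng_a, rng_b = np.random.default_rng(1), np.random.default_rng(2)
map_a = rng_a.standard_normal((91, 109, 91)) > 3.1   # stand-in for package A's map
map_b = rng_b.standard_normal((91, 109, 91)) > 3.1   # stand-in for package B's map
print(f"Dice = {dice(map_a, map_b):.3f}")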
10. Insight and inference for DVARS.
- Author
- Afyouni, Soroosh and Nichols, Thomas E.
- Subjects
- BRAIN function localization, FUNCTIONAL magnetic resonance imaging, DIAGNOSTIC imaging, FUNCTIONAL assessment, BRAIN imaging
- Abstract
Estimates of functional connectivity using resting-state functional Magnetic Resonance Imaging (rs-fMRI) are acutely sensitive to artifacts and large-scale nuisance variation. As a result, much effort is dedicated to preprocessing rs-fMRI data and using diagnostic measures to identify bad scans. One such diagnostic measure is DVARS, the spatial root mean square of the data after temporal differencing. A limitation of DVARS, however, is the lack of a concrete interpretation of its absolute values and of a threshold to distinguish bad scans from good. In this work we describe a sum-of-squares decomposition of the entire 4D dataset that shows DVARS to be just one of three sources of variation, which we refer to as D-var (closely linked to DVARS), S-var and E-var. D-var and S-var partition the sum of squares at adjacent time points, while E-var accounts for edge effects; each can be used to make spatial and temporal summary diagnostic measures. Extending the partitioning to global (and non-global) signal leads to a rs-fMRI DSE table, which decomposes the total and global variability into fast (D-var), slow (S-var) and edge (E-var) components. We find expected values for each component under nominal models, showing how D-var (and thus DVARS) scales with overall variability and is diminished by temporal autocorrelation. Finally, we propose a null sampling distribution for DVARS-squared and robust methods to estimate this null model, allowing computation of DVARS p-values. We propose that these diagnostic time series, images, p-values and the DSE table will provide a succinct summary of the quality of a rs-fMRI dataset that will support comparisons of datasets over preprocessing steps and between subjects. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
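A minimal sketch of the DVARS definition quoted in entry 10 (the spatial root mean square of the temporally differenced data). Masking, intensity scaling, and the paper's DSE decomposition and p-values are deliberately omitted.

```python
# Illustrative sketch: DVARS for data shaped (n_voxels, n_timepoints).
import numpy as np

def dvars(data):
    """Spatial RMS of temporal differences; returns a series of length T-1."""
    diffs = np.diff(data, axis=1)                 # difference adjacent time points
    return np.sqrt(np.mean(diffs ** 2, axis=0))   # RMS over voxels at each step

rng = np.random.default_rng(0)
y = rng.standard_normal((5000, 200))              # toy "rs-fMRI" data
print(dvars(y)[:5])
```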
11. A defense of using resting-state fMRI as null data for estimating false positive rates.
- Author
- Nichols, Thomas E., Eklund, Anders, and Knutsson, Hans
- Subjects
- COGNITIVE neuroscience, FALSE positive error, FUNCTIONAL magnetic resonance imaging, TASK analysis, DATA analysis software
- Abstract
A recent Editorial in Cognitive Neuroscience reconsiders the findings of our work on the accuracy of false positive rate control with cluster inference in functional magnetic resonance imaging (fMRI), in particular criticizing our use of resting-state fMRI as a source for null data in the evaluation of task fMRI methods. We defend this use of resting fMRI data: while there is much structure in this data, we argue it is representative of task data noise and that task analysis software should be able to accommodate this noise. We also discuss a potential problem with Slotnick's own method. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
12. Generating and reporting peak and cluster tables for voxel-wise inference in FSL.
- Author
- Maumet, Camille and Nichols, Thomas E.
- Subjects
- MAGNETIC resonance imaging, STATISTICAL reliability, SPM materials, STATISTICS, BRAIN imaging
- Abstract
Mass univariate analysis, in which a statistical test is performed at each voxel in the brain, is the most widespread approach to analyzing task-evoked functional Magnetic Resonance Imaging (fMRI) data. Such analyses identify the brain areas that are significantly activated in response to a given stimulus. In the literature, the significant areas are usually summarised in a table listing, for each significant region, the 3D positions of the local maxima along with the corresponding statistical values. This tabular output is provided by all the major neuroimaging software packages, including SPM, FSL and AFNI. Yet, in the HTML report generated by FSL, peak and cluster tables are only provided for one type of inference (cluster-wise inference) and not when a voxel-wise threshold is specified. In this project, we propose an update for FSL to generate and report peak and cluster tables for voxel-wise inference. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
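Entry 12 concerns reporting peak and cluster tables. The following is an illustrative sketch of the general idea using generic tools, not the FSL update itself; the threshold and data are placeholders.

```python
# Illustrative sketch: a simple peak/cluster table from a thresholded map.
import numpy as np
from scipy.ndimage import label

def cluster_table(stat_map, threshold):
    """Return (size, peak value, peak voxel index) for each suprathreshold cluster."""
    labels, n = label(stat_map > threshold)
    rows = []
    for k in range(1, n + 1):
        in_cluster = np.where(labels == k, stat_map, -np.inf)
        peak = np.unravel_index(np.argmax(in_cluster), stat_map.shape)
        rows.append((int((labels == k).sum()), float(stat_map[peak]), peak))
    return sorted(rows, reverse=True)        # largest clusters first

z = np.random.default_rng(3).standard_normal((64, 64, 40))
for size, peak_z, voxel in cluster_table(z, 3.1)[:5]:
    print(f"size={size:3d}  peak Z={peak_z:.2f}  voxel={voxel}")
```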
13. Reply to Chen et al.: Parametric methods for cluster inference perform worse for two‐sided t‐tests.
- Author
- Eklund, Anders, Knutsson, Hans, and Nichols, Thomas E.
- Abstract
One‐sided t‐tests are commonly used in the neuroimaging field, but two‐sided tests should be the default unless a researcher has a strong reason for using a one‐sided test. Here we extend our previous work on cluster false positive rates, which used one‐sided tests, to two‐sided tests. Briefly, we found that parametric methods perform worse for two‐sided t‐tests, and that nonparametric methods perform equally well for one‐sided and two‐sided tests. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
14. A Bayesian non-parametric Potts model with application to pre-surgical FMRI data.
- Author
- Johnson, Timothy D., Liu, Zhuqing, Bartsch, Andreas J., and Nichols, Thomas E.
- Subjects
- BRAIN tumor diagnosis, BRAIN imaging, FUNCTIONAL magnetic resonance imaging, POTTS model, BAYESIAN analysis, DIRICHLET principle, MONTE Carlo method, ALGORITHMS
- Abstract
The Potts model has enjoyed much success as a prior model for image segmentation. Given the individual classes in the model, the data are typically modeled as Gaussian random variates or as random variates from some other parametric distribution. In this article, we present a non-parametric Potts model and apply it to a functional magnetic resonance imaging study for the pre-surgical assessment of peritumoral brain activation. In our model, we assume that the Z-score image from a patient can be segmented into activated, deactivated, and null classes, or states. Conditional on the class, or state, the Z-scores are assumed to come from some generic distribution which we model non-parametrically using a mixture of Dirichlet process priors within the Bayesian framework. The posterior distribution of the model parameters is estimated with a Markov chain Monte Carlo algorithm, and Bayesian decision theory is used to make the final classifications. Our Potts prior model includes two parameters, the standard spatial regularization parameter and a parameter that can be interpreted as the a priori probability that each voxel belongs to the null, or background, state, conditional on the lack of spatial regularization. We assume that both of these parameters are unknown, and jointly estimate them along with other model parameters. We show through simulation studies that our model performs on par, in terms of posterior expected loss, with parametric Potts models when the parametric model is correctly specified, and outperforms parametric models when the parametric model is misspecified. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
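To illustrate the spatial prior named in entry 14, here is a minimal sketch of a Potts full conditional on a 2D lattice. It is not the paper's non-parametric model or MCMC sampler; the three states and the value of beta are illustrative assumptions.

```python
# Illustrative sketch: full conditional of one site's label under a Potts prior.
import numpy as np

def potts_conditional(labels, idx, n_states, beta):
    """P(label at idx = s | neighbours), s = 0..n_states-1, on a 2D lattice."""
    i, j = idx
    nbrs = []
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < labels.shape[0] and 0 <= nj < labels.shape[1]:
            nbrs.append(labels[ni, nj])
    counts = np.bincount(nbrs, minlength=n_states)   # matching-neighbour counts
    w = np.exp(beta * counts)
    return w / w.sum()

lab = np.random.default_rng(0).integers(0, 3, size=(32, 32))  # deactivated/null/activated
print(potts_conditional(lab, (10, 10), n_states=3, beta=0.8))
```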
15. Optimization of experimental design in fMRI: a general framework using a genetic algorithm
- Author
- Wager, Tor D. and Nichols, Thomas E.
- Subjects
- MAGNETIC resonance imaging, GENETIC algorithms
- Abstract
This article describes a method for selecting design parameters and a particular sequence of events in fMRI so as to maximize statistical power and psychological validity. Our approach uses a genetic algorithm (GA), a class of flexible search algorithms that optimize designs with respect to single or multiple measures of fitness. Two strengths of the GA framework are that (1) it operates with any sort of model, allowing for very specific parameterization of experimental conditions, including nonstandard trial types and experimentally observed scanner autocorrelation, and (2) it is flexible with respect to fitness criteria, allowing optimization over known or novel fitness measures. We describe how genetic algorithms may be applied to experimental design for fMRI, and we use the framework to explore the space of possible fMRI design parameters, with the goal of providing information about optimal design choices for several types of designs. In our simulations, we considered three fitness measures: contrast estimation efficiency, hemodynamic response estimation efficiency, and design counterbalancing. Although there are inherent trade-offs between these three fitness measures, GA optimization can produce designs that outperform random designs on all three criteria simultaneously. [Copyright Elsevier]
- Published
- 2003
- Full Text
- View/download PDF
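Entry 15 describes optimising fMRI designs with a genetic algorithm. Below is a heavily simplified sketch of that idea with a single event type, a made-up haemodynamic response, and contrast-estimation efficiency as the only fitness measure; it is not the authors' toolbox.

```python
# Illustrative sketch: a tiny GA maximising contrast-estimation efficiency
# of a binary fMRI event sequence. HRF and GA settings are simplified guesses.
import numpy as np

rng = np.random.default_rng(0)
n_tr = 120
t = np.arange(12)
hrf = (t / 3.0) ** 2 * np.exp(-t / 3.0)               # toy haemodynamic response

def efficiency(seq):
    x = np.convolve(seq, hrf)[:n_tr]                   # predicted BOLD regressor
    X = np.column_stack([x, np.ones(n_tr)])            # regressor + intercept
    c = np.array([1.0, 0.0])
    return 1.0 / (c @ np.linalg.pinv(X.T @ X) @ c)     # 1 / Var(contrast estimate)

pop = rng.integers(0, 2, size=(50, n_tr))              # 50 random binary designs
for _ in range(200):
    fit = np.array([efficiency(s) for s in pop])
    parents = pop[np.argsort(fit)[-25:]]               # keep the fittest half
    cuts = rng.integers(1, n_tr, size=25)
    children = np.array([np.r_[parents[i][:k], parents[(i + 1) % 25][k:]]
                         for i, k in enumerate(cuts)])  # one-point crossover
    children[rng.random(children.shape) < 0.01] ^= 1    # rare bit-flip mutation
    pop = np.vstack([parents, children])

print("Best efficiency found:", round(max(efficiency(s) for s in pop), 3))
```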
16. Age differences in the brain mechanisms of good taste.
- Author
- Rolls, Edmund T., Kellerhals, Michele B., and Nichols, Thomas E.
- Subjects
- AGE differences, FLAVOR, FOOD chemistry, FUNCTIONAL magnetic resonance imaging, ORANGE juice
- Abstract
There is strong evidence demonstrating age-related differences in the acceptability of foods and beverages. To examine the neural foundations underlying these age-related differences in the acceptability of different flavors and foods, we performed an fMRI study to investigate brain and hedonic responses to orange juice, orange soda, and vegetable juice in three different age groups: Young (22), Middle (40) and Elderly (60 years). Orange juice and orange soda were found to be liked by all age groups, while vegetable juice was disliked by the Young, but liked by the Elderly. In the insular primary taste cortex, the activations to these stimuli were similar in the 3 age groups, indicating that the differences in liking for these stimuli between the 3 groups were not represented in this first stage of cortical taste processing. In the agranular insula (anterior to the insular primary taste cortex) where flavor is represented, the activations to the stimuli were similar in the Elderly, but in the Young the activations were larger to the vegetable juice than to the orange drinks; and the activations here were correlated with the unpleasantness of the stimuli. In the anterior midcingulate cortex, investigated as a site where the activations were correlated with the unpleasantness of the stimuli, there was again a greater activation to the vegetable than to the orange stimuli in the Young but not in the Elderly. In the amygdala (and orbitofrontal cortex), investigated as sites where the activations were correlated with the pleasantness of the stimuli, there was a smaller activation to the vegetable than to the orange stimuli in the Young but not in the Elderly. The Middle group was intermediate with respect to the separation of their activations to the stimuli in the brain areas that represent the pleasantness or unpleasantness of flavors. Thus age differences in the activations to different flavors can in some brain areas be related to, and probably cause, the differences in pleasantness of foods as they differ for people of different ages. This novel work provides a foundation for understanding the underlying neural bases for differences in food acceptability between age groups. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
17. Post-hoc power estimation for topological inference in fMRI.
- Author
- Durnez, Joke, Moerkerke, Beatrijs, and Nichols, Thomas E.
- Subjects
- FUNCTIONAL magnetic resonance imaging, EVOKED potentials (Electrophysiology), BRAIN stimulation, DIAGNOSTIC imaging, BRAIN imaging, ERROR analysis in mathematics, DATA analysis
- Abstract
When analyzing functional MRI data, several thresholding procedures are available to account for the huge number of volume units or features that are tested simultaneously. The main focus of these methods is to prevent an inflation of false positives. However, this comes with a serious decrease in power and leads to a problematic imbalance between type I and type II errors. In this paper, we show how estimating the number of activated peaks or clusters enables one to estimate post-hoc how powerful the selection procedure is. This procedure can be used in real studies as a diagnostic tool, and raises awareness of how much activation is potentially missed. The method is evaluated and illustrated using simulations and a real data example. Our real data example illustrates the lack of power in current fMRI research. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
18. Ten simple rules for neuroimaging meta-analysis.
- Author
- Müller, Veronika I., Cieslik, Edna C., Eickhoff, Simon B., Tench, Christopher R., Yarkoni, Tal, Nichols, Thomas E., Turkeltaub, Peter E., Wager, Tor D., Laird, Angela R., Fox, Peter T., Radua, Joaquim, and Mataix-Cols, David
- Subjects
- BRAIN imaging, META-analysis, FUNCTIONAL magnetic resonance imaging, NEUROANATOMY, POSITRON emission tomography
- Abstract
Neuroimaging has evolved into a widely used method to investigate the functional neuroanatomy, brain-behaviour relationships, and pathophysiology of brain disorders, yielding a literature of more than 30,000 papers. With such an explosion of data, it is increasingly difficult to sift through the literature and distinguish spurious from replicable findings. Furthermore, due to the large number of studies, it is challenging to keep track of the wealth of findings. A variety of meta-analytical methods (coordinate-based and image-based) have been developed to help summarise and integrate the vast amount of data arising from neuroimaging studies. However, the field lacks specific guidelines for the conduct of such meta-analyses. Based on our combined experience, we propose best-practice recommendations that researchers from multiple disciplines may find helpful. In addition, we provide specific guidelines and a checklist that will hopefully improve the transparency, traceability, replicability and reporting of meta-analytical results of neuroimaging data. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. Evaluating the consistency and specificity of neuroimaging data using meta-analysis
- Author
- Wager, Tor D., Lindquist, Martin A., Nichols, Thomas E., Kober, Hedy, and Van Snellenberg, Jared X.
- Subjects
- BRAIN imaging, META-analysis, MAGNETIC resonance imaging, KERNEL functions, BIOLOGICAL neural networks, INTELLECTUAL disabilities
- Abstract
Making sense of a neuroimaging literature that is growing in scope and complexity will require increasingly sophisticated tools for synthesizing findings across studies. Meta-analysis of neuroimaging studies fills a unique niche in this process: It can be used to evaluate the consistency of findings across different laboratories and task variants, and it can be used to evaluate the specificity of findings in brain regions or networks to particular task types. This review discusses examples, implementation, and considerations when choosing meta-analytic techniques. It focuses on the multilevel kernel density analysis (MKDA) framework, which has been used in recent studies to evaluate consistency and specificity of regional activation, identify distributed functional networks from patterns of co-activation, and test hypotheses about functional cortical-subcortical pathways in healthy individuals and patients with mental disorders. Several tests of consistency and specificity are described. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
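Entry 19 reviews coordinate-based meta-analysis with the MKDA framework. The following minimal sketch shows only the basic ingredient, a per-study indicator map of voxels near any reported peak, averaged across studies; the grid, radius, and coordinates are invented, and MKDA's study weighting and null-hypothesis testing are omitted.

```python
# Illustrative sketch: density of studies reporting a peak near each voxel.
import numpy as np

def study_indicator(peaks, shape, radius):
    """Binary map: 1 where a voxel lies within `radius` voxels of any reported peak."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), -1)
    out = np.zeros(shape, dtype=bool)
    for p in peaks:
        out |= np.linalg.norm(grid - np.asarray(p), axis=-1) <= radius
    return out

shape, radius = (40, 48, 40), 4
rng = np.random.default_rng(0)
studies = [rng.integers(5, 35, size=(rng.integers(3, 8), 3)) for _ in range(20)]
density = np.mean([study_indicator(p, shape, radius) for p in studies], axis=0)
print("Max proportion of studies activating a voxel:", density.max())
```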
20. Network modelling methods for FMRI
- Author
- Smith, Stephen M., Miller, Karla L., Salimi-Khorshidi, Gholamreza, Webster, Matthew, Beckmann, Christian F., Nichols, Thomas E., Ramsey, Joseph D., and Woolrich, Mark W.
- Subjects
- MAGNETIC resonance imaging, BRAIN imaging, ANALYSIS of covariance, STATISTICAL correlation, GRAPHIC methods in statistics, DATA analysis
- Abstract
There is great interest in estimating brain “networks” from FMRI data. This is often attempted by identifying a set of functional “nodes” (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
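Entry 20 singles out partial correlation and regularised inverse covariance as strong performers for network estimation. A minimal sketch of the partial-correlation calculation (unregularised, on toy data) follows; the node timeseries are synthetic.

```python
# Illustrative sketch: full vs. partial correlation between node timeseries.
# Partial correlation is derived from the inverse covariance (precision) matrix.
import numpy as np

def partial_correlation(ts):
    """ts: (timepoints, nodes). Returns a node-by-node partial correlation matrix."""
    prec = np.linalg.inv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 5))
ts[:, 1] += 0.7 * ts[:, 0]          # node 1 driven by node 0
ts[:, 2] += 0.7 * ts[:, 1]          # node 2 driven by node 1 (only indirectly by node 0)
print("Full correlation 0-2:   ", np.corrcoef(ts, rowvar=False)[0, 2].round(2))
print("Partial correlation 0-2:", partial_correlation(ts)[0, 2].round(2))
```

The full correlation between nodes 0 and 2 is substantial even though their link is indirect, while the partial correlation is near zero, which is why partial-correlation-based network estimates fared well in the comparison above.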
21. Confidence Sets for Cohen's d effect size images.
- Author
- Bowring, Alexander, Telschow, Fabian J.E., Schwartzman, Armin, and Nichols, Thomas E.
- Subjects
- INFERENTIAL statistics, NULL hypothesis, CONFIDENCE, SIGNAL detection, SHORT-term memory
- Abstract
• Confidence Sets (CSs) extend the idea of confidence intervals to fMRI maps.
• For a Cohen's d threshold c, upper CS asserts where d > c, lower CS where d < c.
• We demonstrate the CSs method on HCP subject-level Cohen's d data.
• We compare the CSs with results from standard statistical voxelwise inference.
• Unlike traditional cluster tests, CSs precisely quantify spatial uncertainty.
Current statistical inference methods for task-fMRI suffer from two fundamental limitations. First, the focus is solely on detection of non-zero signal or signal change, a problem that is exacerbated for large-scale studies (e.g. UK Biobank, N = 40,000+) where the 'null hypothesis fallacy' causes even trivial effects to be determined as significant. Second, for any sample size, widely used cluster inference methods only indicate regions where a null hypothesis can be rejected, without providing any notion of spatial uncertainty about the activation. In this work, we address these issues by developing spatial Confidence Sets (CSs) on clusters found in thresholded Cohen's d effect size images. We produce an upper and lower CS to make confidence statements about brain regions where Cohen's d effect sizes have exceeded and fallen short of a non-zero threshold, respectively. The CSs convey information about the magnitude and reliability of effect sizes that is usually given separately in a t-statistic and effect estimate map. We expand the theory developed in our previous work on CSs for %BOLD change effect maps (Bowring et al., 2019) using recent results from the bootstrapping literature. By assessing the empirical coverage with 2D and 3D Monte Carlo simulations resembling fMRI data, we find our method is accurate in sample sizes as low as N = 60. We compute Cohen's d CSs for the Human Connectome Project working memory task-fMRI data, illustrating the brain regions with a reliable Cohen's d response for a given threshold. By comparing the CSs with results obtained from a traditional statistical voxelwise inference, we highlight the improvement in activation localization that can be gained with the Confidence Sets. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
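Entry 21 builds Confidence Sets around a thresholded Cohen's d map. The sketch below computes only the point-estimate ingredient, a voxelwise Cohen's d map and the set {d > c}; the paper's bootstrap construction of the upper and lower CSs is not reproduced here, and the data are synthetic.

```python
# Illustrative sketch: voxelwise Cohen's d from subject-level effect maps,
# and the point-estimate set {d > c} that the upper/lower CSs would bracket.
import numpy as np

rng = np.random.default_rng(0)
n_sub, shape, c = 60, (32, 32, 32), 0.5              # threshold c on Cohen's d
signal = 0.6 * (np.indices(shape)[0] > 20)           # toy effect present in one slab
effects = signal + 0.3 * rng.standard_normal((n_sub,) + shape)

cohens_d = effects.mean(axis=0) / effects.std(axis=0, ddof=1)
point_set = cohens_d > c                             # voxels whose estimated d exceeds c
print("Voxels with estimated d >", c, ":", int(point_set.sum()))
```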
Discovery Service for Jio Institute Digital Library