Activation Likelihood Estimation (ALE) is an objective, quantitative technique for coordinate-based meta-analysis (CBMA) of neuroimaging results that has been validated for a variety of uses. Stepwise modifications have improved ALE’s theoretical and statistical rigor since its introduction. Here, we evaluate two avenues to further optimize ALE. First, we demonstrate that the maximum contribution an experiment makes to an ALE map is related to the number of foci it reports and their proximity. We present a modified ALE algorithm that eliminates these within-experiment effects. However, we show that these effects account for only 2–3% of cumulative ALE values, and removing them has little impact on thresholded ALE maps. Next, we present an alternate organizational approach to datasets that prevents subject groups with multiple experiments in a dataset from influencing ALE values more than others. This modification decreases cumulative ALE values by 7–9%, changes the relative magnitude of some clusters, and reduces cluster extents. Overall, differences between results of the standard approach and these new methods were small. This finding validates previous ALE reports against concerns that they were driven by within-experiment or within-group effects. We suggest that the modified ALE algorithm is theoretically advantageous compared with the current algorithm, and that the alternate organization of datasets is the most conservative approach for typical ALE analyses and other CBMA methods. Combining the two modifications minimizes both within-experiment and within-group effects, optimizing the degree to which ALE values represent concordance of findings across independent reports.
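The within-experiment effect described above arises in the step that combines a single experiment's foci into its modeled-activation (MA) map, before MA maps are merged across experiments. The following is a minimal Python sketch of that distinction under simplifying assumptions: an isotropic Gaussian kernel with a fixed, illustrative FWHM and voxel volume (the published algorithm uses sample-size-dependent kernels and other details not shown here), and hypothetical function names.

```python
import numpy as np

def focus_prob(grid_mm, focus_xyz, fwhm_mm=10.0, voxel_vol_mm3=8.0):
    """Probability mass that a single reported focus lies in each voxel,
    modeled as an isotropic 3D Gaussian (FWHM and voxel size are illustrative)."""
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    d2 = np.sum((grid_mm - np.asarray(focus_xyz)) ** 2, axis=-1)
    pdf = np.exp(-d2 / (2.0 * sigma ** 2)) / ((2.0 * np.pi) ** 1.5 * sigma ** 3)
    return pdf * voxel_vol_mm3

def modeled_activation(grid_mm, foci, within="union"):
    """Combine one experiment's foci into its modeled-activation (MA) map.
    'union' -> probabilistic union; nearby foci from the same experiment add up,
               so its contribution grows with the number and proximity of foci.
    'max'   -> voxel-wise maximum; the experiment contributes no more than its
               single strongest focus at any voxel (no within-experiment effect)."""
    per_focus = np.stack([focus_prob(grid_mm, f) for f in foci])
    if within == "max":
        return per_focus.max(axis=0)
    return 1.0 - np.prod(1.0 - per_focus, axis=0)

def ale_map(grid_mm, experiments, within="union"):
    """ALE value at each voxel: probabilistic union of MA maps across experiments."""
    mas = np.stack([modeled_activation(grid_mm, foci, within) for foci in experiments])
    return 1.0 - np.prod(1.0 - mas, axis=0)
```

Under the 'union' rule, an experiment reporting many closely spaced foci inflates its own MA values, whereas the 'max' rule caps each experiment's contribution at the value of a single focus, which is the spirit of the within-experiment modification evaluated here.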
This review updates and consolidates evidence on the safety of transcranial Direct Current Stimulation (tDCS). Safety is here operationally defined by, and limited to, the absence of evidence for a Serious Adverse Effect, the criteria for which are rigorously defined. This review adopts an evidence-based approach, based on an aggregation of experience from human trials, taking care not to confuse speculation on potential hazards or lack of data to refute such speculation with evidence for risk. Safety data from animal tests for tissue damage are reviewed with systematic consideration of translation to humans. Arbitrary safety considerations are avoided. Computational models are used to relate dose to brain exposure in humans and animals. We review relevant dose-response curves and dose metrics (e.g. current, duration, current density, charge, charge density) for meaningful safety standards. Special consideration is given to theoretically vulnerable populations including children and the elderly, subjects with mood disorders, epilepsy, stroke, implants, and home users. Evidence from relevant animal models indicates that brain injury by Direct Current Stimulation (DCS) occurs at predicted brain current densities (6.3–13 A/m²) that are over an order of magnitude above those produced by conventional tDCS. To date, the use of conventional tDCS protocols in human trials (≤40 min, ≤4 mA, ≤7.2 Coulombs) has not produced any reports of a Serious Adverse Effect or irreversible injury across over 33,200 sessions and 1,000 subjects with repeated sessions. This includes a wide variety of subjects, including persons from potentially vulnerable populations.
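The electrode-level dose metrics named above follow from simple arithmetic on current, duration, and electrode area; relating them to brain current density is the role of the computational models the review refers to. A minimal sketch, with a hypothetical helper name and purely illustrative session parameters (not recommendations):

```python
def tdcs_dose_metrics(current_mA: float, duration_min: float, electrode_cm2: float) -> dict:
    """Electrode-level dose metrics for a single tDCS session.
    These are scalp/electrode quantities; brain current density must be
    estimated separately with computational head models."""
    current_A = current_mA / 1000.0
    duration_s = duration_min * 60.0
    area_m2 = electrode_cm2 / 1e4
    charge_C = current_A * duration_s              # charge = current x time
    return {
        "current_density_A_per_m2": current_A / area_m2,
        "charge_C": charge_C,
        "charge_density_C_per_m2": charge_C / area_m2,
    }

# Example: 2 mA for 20 min through a 35 cm^2 pad (illustrative values only)
print(tdcs_dose_metrics(2.0, 20.0, 35.0))
# -> current density ~0.57 A/m^2 at the electrode, 2.4 C, ~686 C/m^2
```

For orientation, the ≤7.2 C figure cited above corresponds to, for example, 2 mA sustained for 60 min or 4 mA for 30 min.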
Activation likelihood estimation (ALE) has greatly advanced voxel-based meta-analysis research in the field of functional neuroimaging. We present two improvements to the ALE method. First, we evaluate the feasibility of two techniques for correcting for multiple comparisons: the single threshold test and a procedure that controls the false discovery rate (FDR). To test these techniques, foci from four different topics within the literature were analyzed: overt speech in stuttering subjects, the color-word Stroop task, picture-naming tasks, and painful stimulation. In addition, the performance of each thresholding method was tested on randomly generated foci. We found that the FDR method more effectively controls the rate of false positives in meta-analyses of small or large numbers of foci. Second, we propose a technique for making statistical comparisons of ALE meta-analyses and investigate its efficacy on different groups of foci divided by task or response type and random groups of similarly obtained foci. We then give an example of how comparisons of this sort may lead to advanced designs in future meta-analytic research.
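The FDR procedure referred to here is commonly implemented as a Benjamini–Hochberg step-up test on the voxel-wise p-values of the ALE map; the sketch below shows that generic procedure (the exact variant and implementation used in the published meta-analyses may differ).

```python
import numpy as np

def fdr_threshold(p_values: np.ndarray, q: float = 0.05) -> float:
    """Benjamini-Hochberg step-up procedure: return the largest p-value
    threshold at which the expected false discovery rate is at most q."""
    p = np.sort(np.asarray(p_values).ravel())
    m = p.size
    bh_line = q * np.arange(1, m + 1) / m
    below = np.nonzero(p <= bh_line)[0]
    if below.size == 0:
        return 0.0               # nothing survives correction
    return p[below[-1]]

# Voxels with p <= fdr_threshold(p_map, q=0.05) are kept in the thresholded ALE map.
```

Unlike a single fixed familywise threshold, this cutoff adapts to the observed distribution of p-values, which is consistent with the finding that FDR control remains effective for meta-analyses with either small or large numbers of foci.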