Motion artifacts are a significant source of noise in many functional near-infrared spectroscopy (fNIRS) experiments. Despite this, there is no well-established method for their removal. Instead, functional trials of fNIRS data containing a motion artifact are often rejected completely. However, in most experimental circumstances the number of trials is limited, and multiple motion artifacts are common, particularly in challenging populations. Many methods have recently been proposed to correct for motion artifacts, including principal component analysis, spline interpolation, Kalman filtering, wavelet filtering, and correlation-based signal improvement. The performance of different techniques has often been compared in simulations, but only rarely has it been assessed on real functional data. Here, we compare the performance of these motion correction techniques on real functional data acquired during a cognitive task that required the participant to speak aloud, leading to a low-frequency, low-amplitude motion artifact that is correlated with the hemodynamic response. To compare the efficacy of these methods, objective metrics related to the physiology of the hemodynamic response were derived. Our results show that it is always better to correct for motion artifacts than to reject trials, and that wavelet filtering is the most effective approach for correcting this type of artifact, reducing the area under the curve where the artifact is present in 93% of cases. Our results therefore support previous studies that have shown wavelet filtering to be the most promising and powerful technique for the correction of motion artifacts in fNIRS data. The analyses performed here can serve as a guide for others to objectively test the impact of different motion correction algorithms and thereby select the most appropriate one for the analysis of their own fNIRS experiments.
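Wavelet filtering exploits the fact that abrupt motion artifacts concentrate their energy in a few large wavelet detail coefficients, whereas the slow hemodynamic response does not. A minimal single-level Haar sketch of this idea (the Haar basis, the interquartile-range fence, and the function names are illustrative simplifications, not the algorithm evaluated in the study):

```python
import statistics

def haar_decompose(x):
    """Single-level Haar transform: per-pair averages (approximation)
    and half-differences (detail). Assumes an even-length input."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Exact inverse of haar_decompose."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def wavelet_correct(signal, iqr_factor=1.5):
    """Zero detail coefficients outside an interquartile-range fence,
    then invert the transform: sharp artifacts show up as outlying
    detail coefficients, while the smooth signal passes through."""
    approx, detail = haar_decompose(signal)
    q1, _, q3 = statistics.quantiles(detail, n=4)
    iqr = q3 - q1
    lo, hi = q1 - iqr_factor * iqr, q3 + iqr_factor * iqr
    detail = [0.0 if (d < lo or d > hi) else d for d in detail]
    return haar_reconstruct(approx, detail)
```

Practical implementations use a multilevel decomposition and a statistically motivated threshold on the coefficient distribution, but the outlier-pruning principle is the same.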
In this work, we develop an empirically driven model of visual attention to multiple words using the word-word interference (WWI) task. In this task, two words are simultaneously presented visually: a to-be-ignored distractor word at fixation, and a to-be-read-aloud target word above or below the distractor word. Experiment 1 showed that low-frequency distractor words interfere more than high-frequency distractor words. Experiment 2 showed that distractor frequency (high vs. low) and target frequency (high vs. low) exert additive effects. Experiment 3 showed that the effect of the case status of the target (same vs. AlTeRnAtEd) interacts with the type of distractor (word vs. string of # marks). Experiment 4 showed that targets are responded to faster in the presence of semantically related distractors than in the presence of unrelated distractors. Our model of visual attention to multiple words borrows two principles governing processing dynamics from the dual-route cascaded model of reading: cascaded interactive activation and lateral inhibition. At the core of the model are three mechanisms aimed at dealing with the distinctive feature of the WWI task, which is that two words are presented simultaneously. These mechanisms are identification, tokenization, and deactivation.
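The two borrowed processing principles can be illustrated with a toy update rule in the style of interactive activation networks: activation cascades continuously rather than in discrete stages, and each word unit inhibits its competitors in proportion to their positive activation. A minimal sketch (the parameter values and function name are illustrative assumptions, not the model's actual equations):

```python
def update_activations(acts, inputs, inhibition=0.2, decay=0.1,
                       a_min=-0.2, a_max=1.0):
    """One cascaded update step for a pool of competing word units.
    Each unit receives its bottom-up input minus lateral inhibition
    from the positive activation of every other unit in the pool."""
    new = []
    for i, a in enumerate(acts):
        inhib = inhibition * sum(max(0.0, acts[j])
                                 for j in range(len(acts)) if j != i)
        net = inputs[i] - inhib
        if net > 0:
            delta = net * (a_max - a) - decay * a  # drive toward ceiling
        else:
            delta = net * (a - a_min) - decay * a  # drive toward floor
        new.append(min(a_max, max(a_min, a + delta)))
    return new
```

Iterating this step lets a strongly supported unit (e.g., the target word) suppress a weakly supported competitor (e.g., the distractor) over successive cycles, which is the qualitative behavior lateral inhibition is meant to capture.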