Event-related potentials (ERPs) are tiny signals, and they are embedded in noise that may be an order of magnitude larger. In theory, we can "average out" the noise by combining a large number of single-trial waveforms into an averaged ERP waveform. In practice, however, it is often difficult to obtain enough trials to adequately reduce the noise, and the remaining variability can dramatically reduce our power to detect significant differences. Moreover, the noise level may vary widely across recordings as a result of factors such as skin potentials, movement artifacts, poor electrode connections, and nearby electrical devices. The noise level may also be impacted by the experimental design, the recording procedure, and the signal processing pipeline. As a result, the signal-to-noise ratio may differ considerably across studies, across participants within a study, and across data processing methods.

Desirable properties for a metric of ERP data quality

Although noisy ERP waveforms are a major practical impediment in ERP research, the field has not adopted a universal measure of data quality that can be used to quantify the noise level in individual participants. Some metrics have been proposed, such as the root mean square of the voltage in the prestimulus period (Luck, 2014) or the standard deviation of a
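One of the metrics mentioned above, the root mean square (RMS) of the prestimulus voltage, is straightforward to compute from epoched single-trial data. The following is a minimal sketch on synthetic data; the function name `baseline_rms` and the simulated epochs are illustrative assumptions, not code from the paper.

```python
import numpy as np

def baseline_rms(epochs, sfreq, prestim_ms=200.0):
    """RMS of the prestimulus voltage for each trial, averaged across
    trials. A simple generic noise metric: larger values indicate a
    noisier recording. `epochs` is (n_trials, n_samples), with the
    prestimulus baseline at the start of each epoch."""
    n_base = int(round(prestim_ms / 1000.0 * sfreq))
    baseline = epochs[:, :n_base]                       # prestimulus samples
    per_trial_rms = np.sqrt(np.mean(baseline ** 2, axis=1))
    return per_trial_rms.mean()

# Hypothetical single-trial epochs at 250 Hz, values in microvolts.
rng = np.random.default_rng(0)
sfreq = 250.0
clean_epochs = rng.normal(0.0, 1.0, size=(100, 250))   # ~1 µV noise
noisy_epochs = rng.normal(0.0, 5.0, size=(100, 250))   # ~5 µV noise
clean_rms = baseline_rms(clean_epochs, sfreq)
noisy_rms = baseline_rms(noisy_epochs, sfreq)
```

Because no stimulus-evoked activity is present before stimulus onset, the prestimulus RMS reflects only noise; a recording with larger noise yields a proportionally larger value.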
Presenting different visual object stimuli can elicit detectable changes in EEG recordings, but this is typically observed only after averaging together data from many trials and many participants. We report results from a simple visual object recognition experiment in which independent component analysis (ICA) data processing and machine learning classification were able to correctly distinguish the presence of visual stimuli in single trials at around 87% accuracy (0.70 AUC, p<0.0001), using data from single ICs. Seven subjects observed a series of everyday visual object stimuli while EEG was recorded. The task was to indicate whether or not they recognised each object as familiar to them. EEG or IC data from a subset of initial object presentations was used to train support vector machine (SVM) classifiers, which then generated labels for subsequent data. Task-label classifier accuracy gives a proxy measure of the task-related information present in the training data. This allows comparison of EEG data processing techniques; here, we found selected single ICs that gave higher performance than classification from any single scalp EEG channel (0.70 AUC vs 0.65 AUC, p<0.0001). Most of these selected single ICs were found in occipital regions. Scoring a sliding analysis window moving through the time-points of the trial revealed that accuracy peaks when using data from +75 to +125 ms relative to the object appearing on screen. We discuss the use of such classification and the potential cognitive implications of differential accuracy on IC activations.
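The core analysis described above, training an SVM on single-trial feature vectors and scoring it with AUC, can be sketched as follows using scikit-learn on synthetic data. The feature counts, class-difference magnitude, and variable names are illustrative assumptions standing in for real IC activations.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical single-trial data: each row is one trial's time course
# (e.g., samples from one independent component); labels mark whether
# a stimulus was present.
rng = np.random.default_rng(1)
n_trials, n_features = 400, 50
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :10] += 0.5            # injected class difference in early samples

# Train on a subset of trials, then score held-out trials with AUC.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.decision_function(X_te))
```

A sliding-window variant, as in the abstract, would repeat this fit-and-score step on successive slices of the time axis and plot AUC as a function of window position.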
Event-related potentials (ERPs) can be very noisy, and yet there is no widely accepted metric of ERP data quality. Here we propose a universal measure of data quality for ERP research: the standardized measurement error (SME). Whereas some potential measures of data quality provide a generic quantification of the noise level, the SME quantifies the expected error in the specific amplitude or latency value being measured in a given study (e.g., the peak latency of the P3 wave). It can be applied to virtually any value that is derived from averaged ERP waveforms, making it a universal measure of data quality. In addition, the SME quantifies the data quality for each individual participant, making it possible to identify participants with low-quality data and “bad” channels. When appropriately aggregated across individuals, SME values can be used to quantify the impact of single-trial EEG variability and the number of trials being averaged together on the effect size and statistical power in a given experiment. If SME values were regularly included in published papers, researchers could identify the recording and analysis procedures that produce the highest data quality, which could ultimately lead to increased effect sizes and greater replicability across the field. Thus, the SME is both a universal and useful metric of ERP data quality.
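For simple mean-amplitude measures, the SME reduces to the standard error of the single-trial scores: the standard deviation across trials divided by the square root of the number of trials. The sketch below illustrates this analytic case on simulated data; the function name and the simulated scores are assumptions for illustration (score values such as peak latency instead require a resampling approach, e.g. bootstrapping).

```python
import numpy as np

def sme_mean_amplitude(single_trial_scores):
    """Analytic SME for a mean-amplitude measure: the standard error
    of the mean of the single-trial scores. Smaller values indicate
    higher data quality."""
    x = np.asarray(single_trial_scores, dtype=float)
    return x.std(ddof=1) / np.sqrt(x.size)

# Hypothetical single-trial mean amplitudes (µV) for one participant:
# true amplitude 2 µV, trial-to-trial SD 8 µV, 64 trials.
rng = np.random.default_rng(42)
scores = rng.normal(2.0, 8.0, size=64)
sme = sme_mean_amplitude(scores)
```

Note how the expected SME here is roughly 8 / sqrt(64) = 1 µV: quadrupling the number of trials would halve the expected measurement error, which is how the SME links trial counts to statistical power.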
Event-related potentials (ERPs) are noninvasive measures of human brain activity that index a range of sensory, cognitive, affective, and motor processes. Despite their broad application across basic and clinical research, there is little standardization of ERP paradigms and analysis protocols across studies. To address this, we created ERP CORE (Compendium of Open Resources and Experiments), a set of optimized paradigms, experiment control scripts, data processing pipelines, and sample data (N = 40 neurotypical young adults) for seven widely used ERP components: N170, mismatch negativity (MMN), N2pc, N400, P3, lateralized readiness potential (LRP), and error-related negativity (ERN). This resource makes it possible for researchers to 1) employ standardized ERP paradigms in their research, 2) apply carefully designed analysis pipelines and use a priori selected parameters for data processing, 3) rigorously assess the quality of their data, and 4) test new analytic techniques with standardized data from a wide range of paradigms.