Background: Clinicians have qualitatively described rhythmic delta activity as a prominent EEG abnormality in individuals with Angelman syndrome, but this phenotype has yet to be rigorously quantified in the clinical population or validated in a preclinical model. Here, we sought to quantitatively measure delta rhythmicity and evaluate its fidelity as a biomarker.

Methods: We quantified delta oscillations in mouse and human using parallel spectral analysis methods and measured regional, state-specific, and developmental changes in delta rhythms in a patient population.

Results: Delta power was broadly increased and more dynamic in both the Angelman syndrome mouse model (relative to wild-type littermates) and children with Angelman syndrome (relative to age-matched neurotypical controls). Enhanced delta oscillations in children with Angelman syndrome were present during wakefulness and sleep, were generalized across the neocortex, and were more pronounced at earlier ages.

Conclusions: Delta rhythmicity phenotypes can serve as reliable biomarkers for Angelman syndrome in both preclinical and clinical settings.

Electronic supplementary material: The online version of this article (doi:10.1186/s11689-017-9195-8) contains supplementary material, which is available to authorized users.
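The abstract does not specify the spectral-analysis parameters, but a minimal sketch of the kind of delta-band quantification it describes might look like the following. The 1-4 Hz delta band, 2-second Welch windows, 0.5-50 Hz normalization range, and synthetic signal are illustrative assumptions, not the authors' settings:

```python
# Hedged sketch: delta-band power from a single EEG channel via Welch's method.
# Band edges, window length, and sampling rate are assumptions for illustration.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def delta_band_power(eeg, fs, band=(1.0, 4.0), total_band=(0.5, 50.0)):
    """Return absolute and relative delta power for a 1-D EEG channel."""
    # Welch periodogram with 2-second windows (~0.5 Hz frequency resolution)
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    in_delta = (freqs >= band[0]) & (freqs <= band[1])
    in_total = (freqs >= total_band[0]) & (freqs <= total_band[1])
    delta = trapezoid(psd[in_delta], freqs[in_delta])    # absolute delta power
    total = trapezoid(psd[in_total], freqs[in_total])    # broadband power for normalization
    return delta, delta / total

# Synthetic example: 60 s of noise plus a 3 Hz rhythm, sampled at 256 Hz
fs = 256
t = np.arange(0, 60, 1 / fs)
eeg = np.sin(2 * np.pi * 3 * t) + 0.5 * np.random.randn(t.size)
print(delta_band_power(eeg, fs))
```

Relative (normalized) delta power is often reported alongside absolute power so that differences in overall signal amplitude between recordings do not masquerade as band-specific effects.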
Interictal epileptiform discharges (IEDs) in electroencephalograms (EEGs) are a biomarker of epilepsy, seizure risk, and clinical decline. However, there is a scarcity of experts qualified to interpret EEG results. Prior attempts to automate IED detection have been limited by small samples and have not demonstrated expert-level performance. There is a need for a validated automated method to detect IEDs with expert-level reliability.

OBJECTIVE: To develop and validate a computer algorithm able to identify IEDs as reliably as experts and to classify an EEG recording as containing IEDs vs no IEDs.

DESIGN, SETTING, AND PARTICIPANTS: A total of 9571 scalp EEG records with and without IEDs were used to train a deep neural network (SpikeNet) to perform IED detection. Independent training and testing data sets were generated from 13 262 IED candidates, independently annotated by 8 fellowship-trained clinical neurophysiologists, and 8520 EEG records containing no IEDs based on clinical EEG reports. Using the estimated spike probability, a classifier designating the whole EEG recording as positive or negative was also built.

MAIN OUTCOMES AND MEASURES: SpikeNet accuracy, sensitivity, and specificity compared with fellowship-trained neurophysiology experts for identifying IEDs and for classifying EEGs as positive or negative for IEDs. Statistical performance was assessed via calibration error and area under the receiver operating characteristic curve (AUC). All performance statistics were estimated using 10-fold cross-validation.

RESULTS: SpikeNet surpassed both expert interpretation and an industry-standard commercial IED detector on calibration error (SpikeNet, 0.041; 95% CI, 0.033-0.049; vs industry standard, 0.066; 95% CI, 0.060-0.078; vs experts, mean, 0.183; range, 0.081-0.364) and on binary classification performance based on AUC (SpikeNet, 0.980; 95% CI, 0.977-0.984; vs industry standard, 0.882; 95% CI, 0.872-0.893). Whole-EEG classification had a mean calibration error of 0.126 (range, 0.109-0.1444) vs experts (mean, 0.197; range, 0.099-0.372) and an AUC of 0.847 (95% CI, 0.830-0.865).

CONCLUSIONS AND RELEVANCE: In this study, SpikeNet automatically detected IEDs and classified whole EEGs as IED-positive or IED-negative. This may be the first time an algorithm has been shown to exceed expert performance for IED detection in a representative sample of EEGs; SpikeNet may thus be a valuable tool for expedited review of EEGs.
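As a rough illustration of the two performance metrics reported for SpikeNet, the sketch below computes AUC and a binned expected calibration error from per-candidate spike probabilities. The binning scheme, synthetic labels, and probabilities are assumptions; the paper's exact calibration-error definition is not given in the abstract and may differ:

```python
# Hedged sketch: AUC and binned expected calibration error for IED probabilities.
# The 10-bin ECE used here is one common definition, assumed for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Weighted mean gap between predicted probability and observed IED frequency."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(y_prob, bins) - 1, 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            gap = abs(y_true[mask].mean() - y_prob[mask].mean())
            ece += mask.mean() * gap          # weight each bin by its share of samples
    return ece

# Synthetic example: 1000 IED candidates with roughly calibrated probabilities
rng = np.random.default_rng(0)
y_prob = rng.uniform(0, 1, 1000)
y_true = (rng.uniform(0, 1, 1000) < y_prob).astype(int)
print("AUC:", roc_auc_score(y_true, y_prob))
print("Calibration error:", expected_calibration_error(y_true, y_prob))
```

A low calibration error means the detector's output probabilities can be read as trustworthy estimates of how likely a candidate waveform is a true IED, which matters when the same probabilities are reused to classify a whole EEG as positive or negative.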
The validity of using electroencephalograms (EEGs) to diagnose epilepsy requires reliable detection of interictal epileptiform discharges (IEDs). Prior interrater reliability (IRR) studies are limited by small samples and selection bias.

OBJECTIVE: To assess the reliability of experts in detecting IEDs in routine EEGs.

DESIGN, SETTING, AND PARTICIPANTS: This prospective analysis, conducted in 2 phases, included as participants physicians with at least 1 year of subspecialty training in clinical neurophysiology. In phase 1, 9 experts independently identified candidate IEDs in 991 EEGs (1 expert per EEG) reported in the medical record to contain at least 1 IED, yielding 87 636 candidate IEDs. In phase 2, the candidate IEDs were clustered into groups with distinct morphological features, yielding 12 602 clusters, and a representative candidate IED was selected from each cluster. We added 660 waveforms (11 random samples each from 60 randomly selected EEGs reported as being free of IEDs) as negative controls. Eight experts independently scored all 13 262 candidates as IEDs or non-IEDs. The 1051 EEGs in the study were recorded at the Massachusetts General Hospital between 2012 and 2016.

MAIN OUTCOMES AND MEASURES: Primary outcome measures were percentage of agreement (PA) and beyond-chance agreement (Gwet κ) for individual IEDs (IED-wise IRR) and for whether an EEG contained any IEDs (EEG-wise IRR). Secondary outcomes were the correlations between numbers of IEDs marked by experts across cases, calibration of expert scoring to group consensus, and receiver operating characteristic analysis of how well multivariate logistic regression models may account for differences in IED scoring behavior between experts.

RESULTS: Among the 1051 EEGs assessed in the study, 540 (51.4%) were those of females and 511 (48.6%) were those of males. In phase 1, the 9 experts each marked potential IEDs in a median of 65 (interquartile range [IQR], 28-332) EEGs, for a total of 87 636 IED candidates. Expert IRR for the 13 262 individually annotated IED candidates was fair, with a mean PA of 72.4% (95% CI, 67.0%-77.8%) and a mean κ of 48.7% (95% CI, 37.3%-60.1%). The EEG-wise IRR was substantial, with a mean PA of 80.9% (95% CI, 76.2%-85.7%) and a mean κ of 69.4% (95% CI, 60.3%-78.5%). A statistical model based on waveform morphological features, when provided with individualized thresholds, explained the median binary scores of all experts with high accuracy (80%; range, 73%-88%).

CONCLUSIONS AND RELEVANCE: This study's findings suggest that experts can identify whether EEGs contain IEDs with substantial reliability. Lower reliability regarding individual IEDs may be largely explained by different experts applying different thresholds to a common underlying statistical model.
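For readers unfamiliar with the agreement statistics named in the abstract, the sketch below computes percentage of agreement and Gwet's chance-corrected coefficient (AC1, the binary-outcome form commonly reported as Gwet κ) from a complete candidates-by-raters matrix of binary scores. The toy data and the assumption of a complete rating matrix are illustrative; the study's exact estimator and confidence-interval procedure may differ:

```python
# Hedged sketch: percentage of agreement (PA) and Gwet's AC1 for binary IED scores.
# Assumes every rater scored every candidate (complete items x raters matrix).
import numpy as np

def percent_agreement(scores):
    """Mean proportion of agreeing rater pairs per candidate IED."""
    r = scores.shape[1]                      # number of raters
    k = scores.sum(axis=1)                   # positive (IED) votes per candidate
    pairs_agree = k * (k - 1) + (r - k) * (r - k - 1)
    return np.mean(pairs_agree / (r * (r - 1)))

def gwet_ac1(scores):
    """Gwet's AC1 for binary ratings: (Pa - Pe) / (1 - Pe), with Pe = 2*pi*(1-pi)."""
    pa = percent_agreement(scores)
    pi = scores.mean()                       # overall proportion scored as IED
    pe = 2.0 * pi * (1.0 - pi)               # chance agreement under Gwet's model
    return (pa - pe) / (1.0 - pe)

# Toy example: 10 candidates scored by 8 raters
rng = np.random.default_rng(1)
scores = (rng.uniform(size=(10, 8)) < 0.7).astype(int)
print("PA:", percent_agreement(scores), "AC1:", gwet_ac1(scores))
```

Unlike Cohen's or Fleiss' κ, Gwet's coefficient is less sensitive to skewed class prevalence, which is relevant here because negative (non-IED) candidates can dominate a routine EEG sample.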