2014
DOI: 10.1016/j.yebeh.2014.01.011

EEG interpretation reliability and interpreter confidence: A large single-center study

Abstract: The intrarater and interrater reliability (I&IR) of EEG interpretation has significant implications for the value of EEG as a diagnostic tool. We measured both I&IR of EEG interpretation based on interpretation of complete EEGs into standard diagnostic categories and rater confidence in their interpretations, and investigated sources of variance in EEG interpretations. During two distinct time intervals six board-certified clinical neurophysiologists classified 300 EEGs into one or more of seven diagnostic cat…


Cited by 104 publications (89 citation statements); references 21 publications.
“…However it is well known that EEG interpretation contains a substantial intuitive component and the accuracy of EEG interpretation is demonstrably low (Grant et al, 2014). These may well be due to our incomplete knowledge of its underlying mechanisms.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Aware of this limitation, we performed a separate study of EEG intra- and inter-rater reliability [3]. Briefly, a pool of six epileptologists interpreted 300 EEGs in such a way as to generate both intra- and inter-rater reliability data, as well other variables of interest.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Third, estimation of the new device's true accuracy must account for the accuracy of both the reference device and the EEG interpreters. These challenges led to two appurtenant studies, the results of which are utilized in the present analyses [3, 4]. …”
Section: Introduction (mentioning)
Confidence: 99%
“…Cohen’s kappa (κ), an index that measures inter-rater agreement for categorical items and takes into account agreement occurring by chance, was used to assess variability in an individual subject’s olfactory performance on successive UPSITs or BSITs. This measure, which is frequently used to evaluate agreement between different individuals, has also been used to evaluate inter-rater reliability (e.g., [43]) and is useful here since replicate tests of olfactory function made at six month or longer intervals should not be influenced by the memory of prior testing and can therefore be treated as independent assessments. Therefore, κ was used to evaluate how reproducibly a subject identified or misidentified individual odors in successive UPSIT and BSIT assessments.…”
Section: Methods (mentioning)
Confidence: 99%
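The excerpt above describes Cohen's kappa as agreement between two raters corrected for agreement expected by chance: κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e the chance agreement derived from each rater's marginal label frequencies. A minimal sketch of that computation (the rater labels below are hypothetical, not data from any of the cited studies):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters labelling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical two-category readings of six records by two raters.
r1 = ["normal", "normal", "abnormal", "abnormal", "normal", "abnormal"]
r2 = ["normal", "abnormal", "abnormal", "abnormal", "normal", "normal"]
print(round(cohens_kappa(r1, r2), 3))  # → 0.333
```

Here the raters agree on 4 of 6 items (p_o ≈ 0.667) but both use each label half the time, so p_e = 0.5 and κ ≈ 0.333, well below the raw agreement rate.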