2000
DOI: 10.1093/sleep/23.7.1e

Interobserver Agreement Among Sleep Scorers From Different Centers in a Large Dataset

Cited by 203 publications (195 citation statements)
References 0 publications
“…6 The improvement in scoring agreement is more impressive when compared to historical values for R&K scoring, such as the 73% agreement among 5 experts from different centers reported by Norman and colleagues. 7 Our data, like those of other researchers, indicate that the best agreement is achieved with stages W, N2, and R. Disagreement over the scoring of stage N1 involves epochs scored as stage W or as stage N2 sleep, while disagreement over the scoring of stage N3 sleep is almost entirely due to confusion with stage N2 sleep.…”
Section: Discussion (supporting)
confidence: 85%
“…However, the Cohen κ coefficients for the various stages ranged from 0.46 to 0.59, which is considered moderate agreement according to the Cohen κ criteria. 28 Moreover, these results are essentially within the variability range reported in some studies comparing registered PSG scorers, 29,30 and similar to the agreement reported in comparisons between automated PSG scoring and manual scoring. 31 Multiple scoring sites, various recording techniques, and a mixture of patients and normal subjects were used in our study, which may have increased biological and methodological variability compared to prior single-center studies.…”
Section: Light/Deep Sleep Detection (supporting)
confidence: 88%
“…However, we do not have a suitable measure of how well the different micro-sleep events are captured. In the case of the control subjects, the manual scores might be a suitable gold standard, although the mean inter-rater agreement rate has been reported to be as low as 76% (Norman et al., 2000). In the patients' case, the inter-rater agreement rates are lower, particularly in patients with PD, where the agreement rate has been reported to be only “fair” in more than 25% of sleep recordings (Danker-Hopfe et al., 2004).…”
Section: Discussion (mentioning)
confidence: 99%