2022
DOI: 10.1093/sleep/zsac134

Certainty about uncertainty in sleep staging: a theoretical framework

Abstract: Sleep stage classification is an important tool for the diagnosis of sleep disorders. Because sleep staging has such a high impact on clinical outcome, it is important that it is done reliably. However, it is known that uncertainty exists in both expert scorers and automated models. On average, agreement between human scorers is only 82.6%. In this manuscript, we provide a theoretical framework to facilitate discussion and further analyses of uncertainty in sleep staging. To this end, we introduce two variants…

Cited by 25 publications (19 citation statements). References 28 publications.
“…Nevertheless, one should take into account that a supervised classifier only mimics the label distribution under the assumptions that the model is evaluated on a test dataset with the same statistics as the training set, and that the model exhibits the 'right amount' of capacity. In other words, models that are evaluated on an out-of-distribution dataset, or models that are too small, exhibit a large amount of epistemic uncertainty [7], which increases the average entropy of the hypnodensity graph. On the other hand, a model that has too much capacity tends to over-fit on the training set and becomes over-confident, creating a lower-entropy hypnodensity graph.…”
Section: Discussion of Results (mentioning, confidence: 99%)
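
The link between over- or under-confidence and the entropy of the hypnodensity graph can be made concrete with a short sketch. The following is a minimal illustration, not taken from the cited papers; the function name and the toy probability arrays are assumptions. It computes the average per-epoch Shannon entropy of a hypnodensity (a [n_epochs, n_stages] array of stage probabilities) and shows that an over-confident model produces a lower average entropy than an uncertain one.

# Minimal sketch (assumption, not from the cited papers): average entropy of a hypnodensity.
# Each row is a probability distribution over sleep stages (Wake, N1, N2, N3, REM) for one epoch;
# flatter rows indicate higher scoring uncertainty.
import numpy as np

def average_entropy(hypnodensity: np.ndarray, eps: float = 1e-12) -> float:
    """Mean Shannon entropy (in bits) over all epochs of a [n_epochs, n_stages] array."""
    p = np.clip(hypnodensity, eps, 1.0)
    p = p / p.sum(axis=1, keepdims=True)            # re-normalize each epoch
    entropy_per_epoch = -(p * np.log2(p)).sum(axis=1)
    return float(entropy_per_epoch.mean())

# Over-confident model: probability mass concentrated on a single stage per epoch.
confident = np.array([[0.97, 0.01, 0.01, 0.005, 0.005],
                      [0.01, 0.01, 0.96, 0.01, 0.01]])
# Uncertain model (e.g. out-of-distribution input): mass spread over several stages.
uncertain = np.array([[0.30, 0.25, 0.20, 0.15, 0.10],
                      [0.20, 0.20, 0.20, 0.20, 0.20]])

print(average_entropy(confident))   # low average entropy
print(average_entropy(uncertain))   # high average entropy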
“…This scoring ambiguity can be caused by inherently ambiguous epochs (see fig. 1) and by the stochastic nature of human decision making [7]. To model this stochastic decision process, we model each expert annotation as a sample from a label distribution: a probability distribution over sleep stages conditioned on the mixture coefficients of the characteristics belonging to these stages.…”
Section: Annotation Model (mentioning, confidence: 99%)
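
As a rough illustration of this annotation model, the sketch below draws each simulated expert label for one epoch from a categorical distribution whose probabilities are that epoch's mixture coefficients. This is a sketch under assumptions: the stage names, the coefficient values, and the helper function are hypothetical and do not reproduce the authors' implementation.

# Minimal sketch (assumption): each expert annotation of one epoch is a draw from a
# categorical label distribution parameterized by the epoch's mixture coefficients.
import numpy as np

rng = np.random.default_rng(0)
stages = ["Wake", "N1", "N2", "N3", "REM"]

# Hypothetical mixture coefficients for one ambiguous epoch: mostly N1 characteristics
# with a substantial admixture of N2 and Wake.
mixture_coefficients = np.array([0.15, 0.50, 0.30, 0.03, 0.02])

def sample_annotations(coeffs: np.ndarray, n_scorers: int) -> list[str]:
    """Draw one label per (simulated) expert scorer from the categorical label distribution."""
    idx = rng.choice(len(stages), size=n_scorers, p=coeffs)
    return [stages[i] for i in idx]

# Ten simulated scorers disagree in roughly the proportions of the mixture coefficients,
# mimicking inter-rater disagreement on an ambiguous epoch.
print(sample_annotations(mixture_coefficients, n_scorers=10))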