Challenges of Applying Automated Polysomnography Scoring at Scale (2023)
DOI: 10.1016/j.jsmc.2023.05.002

Cited by 3 publications (2 citation statements)
References: 123 publications

“…Automated analysis of PSG aims to improve scoring accuracy, reduce variability, and increase practice efficiency. However, this process is limited by the (1) variability of sensors and resolution of recorded signals used by different sleep laboratories, (2) differences in PSG analysis software and data formats, (3) challenges in assessing performance of automated analysis due to heterogeneity of training and testing datasets, and (4) real-world implementation problems related to integration into current systems, billing and coding requirements, and acceptance by practitioners [1].…”
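
As a concrete illustration of points (1) and (2) in the statement above, the following minimal Python sketch compares the channel montage and sampling rate of recordings from two different laboratories. It assumes the open-source MNE-Python library and two hypothetical EDF file paths (lab_a_psg.edf and lab_b_psg.edf), which are illustrative only; these are exactly the properties an automated scorer must reconcile before it can be applied across sites.

import mne  # MNE-Python, a common open-source reader for EDF polysomnography files

def summarize_recording(edf_path):
    # Read header information only; preload=False avoids loading the signal data.
    raw = mne.io.read_raw_edf(edf_path, preload=False, verbose="error")
    return {
        "path": edf_path,
        "sampling_rate_hz": raw.info["sfreq"],
        "channels": raw.ch_names,
    }

# Hypothetical recordings exported by two different sleep laboratories.
for path in ["lab_a_psg.edf", "lab_b_psg.edf"]:
    info = summarize_recording(path)
    print(f"{info['path']}: {info['sampling_rate_hz']:.0f} Hz, "
          f"{len(info['channels'])} channels: {', '.join(info['channels'])}")

Differences surfaced this way (missing EOG or EMG channels, mismatched sampling rates, vendor-specific channel names) are what make training and testing datasets heterogeneous in practice.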
“…The reduction in performance seen in the SHHS dataset may have been due to the missing EEG signals or could be a sign of reduced inter-database generalization. However, there remains a larger question of using human-based, 30-second epoch scoring as the gold standard when training new models [1]. A recent meta-analysis estimated the interrater reliability between expert PSG scorers at a Cohen’s kappa of 0.76 for overall sleep staging agreement between individuals, when looking for exact agreement in staging between scorers [5].…”
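
The Cohen’s kappa figure quoted above refers to epoch-by-epoch agreement between human scorers. Below is a minimal sketch of how that statistic is computed, using scikit-learn's cohen_kappa_score on two invented hypnograms; the stage labels are illustrative assumptions, not data from the cited meta-analysis.

from sklearn.metrics import cohen_kappa_score

# Hypothetical stage labels for ten consecutive 30-second epochs from two
# expert scorers (W = wake, N1/N2/N3 = NREM stages, R = REM).
scorer_a = ["W", "W", "N1", "N2", "N2", "N3", "N3", "N2", "R", "R"]
scorer_b = ["W", "N1", "N1", "N2", "N2", "N3", "N2", "N2", "R", "R"]

# Cohen's kappa corrects raw percent agreement for agreement expected by chance.
kappa = cohen_kappa_score(scorer_a, scorer_b)
print(f"Epoch-by-epoch Cohen's kappa: {kappa:.2f}")

Because kappa discounts chance agreement, a value of 0.76 indicates substantial but imperfect consensus among experts, which underlies the concern about treating single-scorer, 30-second epoch labels as the gold standard for training automated models.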