2022
DOI: 10.1038/s41598-022-17203-1
Automatic vocalisation-based detection of fragile X syndrome and Rett syndrome

Abstract: Fragile X syndrome (FXS) and Rett syndrome (RTT) are developmental disorders currently not diagnosed before toddlerhood. Even though speech-language deficits are among the key symptoms of both conditions, little is known about infant vocalisation acoustics for an automatic earlier identification of affected individuals. To bridge this gap, we applied intelligent audio analysis methodology to a compact dataset of 4454 home-recorded vocalisations of 3 individuals with FXS and 3 individuals with RTT aged 6 to 11 …

Cited by 12 publications
References 74 publications
“…Opposed to that, in the babbling phase, a number of studies analyse verbal capacities utilizing computational approaches (e.g. Pokorny et al, 2018 , 2020 , 2022 ). In general, manual analysis of LLDs such as fundamental frequency (F 0 ) is not very common for babbling vocalisations.…”
Section: Results
confidence: 99%
“…In our group, we have utilised a machine learning approach (i.e. support vector machines), that focused on automatic preverbal vocalisation-based differentiation between typically developing infants and infants later diagnosed with RTT, FXS or ASD ( Pokorny et al, 2016a , 2017 , 2022 ). Studies evaluating acoustic features of early vocalisations or applying machine learning models or neural networks will be referred to as “computational studies” hereafter.…”
confidence: 99%
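The support vector machine approach mentioned in the citation above can be illustrated with a minimal sketch. This is not the authors' code: the feature values, group labels, and scikit-learn pipeline below are hypothetical stand-ins for the acoustic low-level descriptors (e.g. fundamental frequency, F0) and group differentiation described in the cited work.

```python
# Minimal sketch: a linear SVM separating two groups of vocalisations
# based on a few summary acoustic features. Assumes scikit-learn is
# available; all feature values and labels below are invented for
# illustration only.
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical per-vocalisation features: [mean F0 (Hz), jitter (%), duration (s)]
X = [
    [310.0, 1.1, 0.42], [295.0, 0.9, 0.51], [305.0, 1.0, 0.47],  # group 0
    [420.0, 2.3, 0.30], [445.0, 2.6, 0.25], [430.0, 2.1, 0.33],  # group 1
]
y = [0, 0, 0, 1, 1, 1]

# Standardise features, then fit a linear-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)

# Classify an unseen vocalisation whose features resemble group 0.
print(clf.predict([[300.0, 1.0, 0.45]])[0])  # → 0
```

In practice, studies of this kind extract far larger feature sets from each vocalisation and validate with held-out speakers rather than training-set predictions; the sketch only shows the basic classify-from-acoustic-features idea.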
“…It is important to bear in mind that typically developing children also produce vocalisations that sound conspicuous from time to time. For example, transient high-pitched crying in pain, pressed voice, or articulations with inspiratory airstream during pleasure bursts are observable (Marschik et al, 2022;Nathani & Oller, 2001;Oller, 2000). Could humans and computer algorithms also in those cases detect the "origin-revealing" features and correctly identify the producers of the conspicuously-sounding vocalisations?…”
Section: Discussion
confidence: 99%
“…Qualitative anomalies have also been reported in early speech-language development, such as production of vocalisations on ingressive airstream or with breathy voice characteristics (Marschik et al, 2010; Marschik, Pini, et al, 2012). These atypical vocalisations are detectable by human listeners (Marschik, Einspieler, et al, 2012) as well as by acoustic analyses using computer-based approaches (Pokorny et al, 2018; Pokorny et al, 2022). However, because the atypical vocalisations are often interspersed with more typical vocalisations in infants with RTT (Marschik et al, 2009; Marschik, Pini, et al, 2012), it can make their accurate detection challenging.…”
Section: Introduction
confidence: 99%
“…This then may allow training systems to self-learn early markers. Success of this principle has been shown, amongst others, for Fragile-X and Rett syndrome [15]. However, the approach comes with considerable challenges, as earlier personal data material is often only available in discontinuous time intervals and often stems from mixed recording hardware or infrastructure and different quality levels.…”
Section: B. Computationally Supporting Prevention
confidence: 99%