2018
DOI: 10.1016/j.ridd.2018.02.019

Typical vs. atypical: Combining auditory Gestalt perception and acoustic analysis of early vocalisations in Rett syndrome

Abstract: Knowledge gained in our study shall contribute to the generation of an objective model of early vocalisation atypicality. Such a model might be used for increasing caregivers' and healthcare professionals' sensitivity to identify atypical vocalisation patterns, or even for a probabilistic approach to automatically detect RTT based on early vocalisations.

Cited by 18 publications (18 citation statements)
References 44 publications
“…Later, they evaluated more than 6000 features to differentiate typical and atypical early speech language of one infant with Rett syndrome. Main differences were observed in auditory attributes such as timbre and pitch (Pokorny et al, 2018).…”
Section: Automatic Cry Segmentation
confidence: 97%
“…3.2.6. Other sound assessment. Several recent audio processing methods have been proposed regarding non-cry signals, concerning pre-linguistic vocalizations (including cooing) (Fuller and Horii, 1986, 1988; Pokorny et al., 2016, 2018). Non-voice analyses were also proposed in different contexts such as external noise detection (Raboshchuk et al., 2018a, b), EEG sonification (Gomez et al., 2018) or lung sound assessment (Emmanouilidou et al., 2017).…”
Section: Automatic Cry Segmentation
confidence: 99%
“…Therefore, in future studies it would be interesting to examine regression both in a dimensional and a categorical way and compare the results (Ozonoff, Heung, et al., 2008; Thurm et al., 2014). Up till now, only one recent prospective longitudinal study has translated what we see into a more signal-based and machine learning approach to analyzing audio-video data (e.g., Marschik et al., 2017; Pokorny et al., 2017, 2018). These analyses on signal level can be applied in both future retrospective and prospective studies on onset patterns in ASD.…”
Section: Future Studies Combining Categorical and Dimensional Concept
confidence: 99%
“…Building on our experience in collecting and analyzing preverbal data of typical and atypical development (e.g., Bartl-Pokorny et al., 2013; Marschik et al., 2012a, b, 2013, 2014a, b, 2017; Pokorny et al., 2016a, 2017, 2018), we provide a methodological overview of current strategies for the collection and representation of preverbal data for intelligent audio analysis purposes. Exemplified on the basis of empirical data, we will especially focus on application-oriented challenges and constraints that have to be considered when dealing with preverbal data of individuals with late detected developmental disorders.…”
Section: Introduction
confidence: 99%