2020
DOI: 10.1109/taslp.2019.2957887
Vowel Onset Point Based Screening of Misarticulated Stops in Cleft Lip and Palate Speech

Cited by 5 publications (3 citation statements)
References 36 publications
“…Studies also focused on phoneme detection (42) and their analysis (47) to assess the quality of speech. Evaluation of misarticulated stops (39), intelligibility (48) and resonance (45) was done using different models. Two studies detected consonant misarticulation (45, 46) in participants with cleft lip and palate.…”
Section: Results (mentioning)
confidence: 99%
“…These provide encouraging experimental results, but they do not reflect the practical performance of these systems. They characterize speech based on hypernasality (40, 41, 43, 44, 46, 48), identifying phonemes (42, 47) and misarticulations (39, 45). These studies use features of speech data at a single point in time.…”
Section: Discussion (mentioning)
confidence: 99%
“…As an alternative to traditional features based on signal processing, joint spectro-temporal features have been proposed to model CV transition regions in machine learning models. The two-dimensional discrete cosine transform (2D-DCT) is one of the most commonly used approaches to modeling the spectro-temporal dynamics of CV transition regions [19], [22]-[24]. Supervised learning methods using 2D-DCT features as input have been used for classification of place of articulation in stop consonants [23], to evaluate the goodness of /t/ and /k/ productions in children with speech sound disorders [25], to detect stop consonant production errors [24], and to model perceptual intelligibility ratings in children with CP [19].…”
Section: A. Related Work (mentioning)
confidence: 99%
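
The 2D-DCT feature extraction mentioned in the statement above can be illustrated with a short sketch. The code below is a minimal, illustrative example rather than the pipeline of the cited papers: the function name cv_transition_2d_dct, the 100 ms analysis window around an assumed vowel onset point, and the 5x5 low-order coefficient block are all arbitrary choices made here for demonstration. It computes a log spectrogram of the consonant-vowel transition region and keeps the low-order 2D-DCT coefficients as a joint spectro-temporal feature vector that could feed a classifier.

# Minimal sketch (assumptions noted above): 2D-DCT features of a CV transition region.
import numpy as np
from scipy.fft import dctn
from scipy.signal import spectrogram

def cv_transition_2d_dct(signal, fs, vop_sample, window_ms=100, n_coeffs=(5, 5)):
    """Return low-order 2D-DCT coefficients of the spectrogram patch
    centered on the assumed vowel onset point (vop_sample)."""
    half = int(fs * window_ms / 2000)               # half-window in samples
    start = max(vop_sample - half, 0)
    segment = signal[start:vop_sample + half]

    # Short-time log-magnitude spectrogram of the transition region.
    f, t, Sxx = spectrogram(segment, fs=fs,
                            nperseg=int(0.020 * fs),
                            noverlap=int(0.010 * fs))
    log_spec = np.log(Sxx + 1e-10)

    # The 2D-DCT compacts the joint spectro-temporal dynamics into a few
    # coefficients; the low-order block is used as the feature vector.
    coeffs = dctn(log_spec, norm="ortho")
    kf, kt = n_coeffs
    return coeffs[:kf, :kt].ravel()

# Usage example with placeholder audio and an assumed VOP at 0.5 s.
if __name__ == "__main__":
    fs = 16000
    x = np.random.randn(fs).astype(np.float64)      # 1 s of placeholder audio
    feats = cv_transition_2d_dct(x, fs, vop_sample=fs // 2)
    print(feats.shape)                              # (25,) feature vector

In this sketch the vowel onset point is simply given; in the screened-stop setting described by the paper it would come from a VOP detection stage, and the resulting feature vectors would be passed to a supervised classifier.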