2015
DOI: 10.1007/s10772-015-9299-z
Automatic prominent syllable detection with machine learning classifiers

Cited by 11 publications (5 citation statements)
References 42 publications
“…This set of features was used in a discriminant function to classify between accented and unaccented syllables, achieving 91.9% correct classification of accented syllables and 92.2% correct classification of unaccented syllables for the new population of English speakers affected with ataxic dysarthria. The classification accuracy results are comparable with the results of our previous study for native Dutch speakers (healthy and dysarthric speech) and with other studies of accent detection in healthy speech [30–35]. The results suggest that the combination of the ten acoustic parameters developed by Mendoza et al. [41] has a good capacity to discriminate between accented and unaccented syllables in healthy and speech-impaired speakers of Germanic languages with comparable accentuation patterns, such as English and Dutch.…”
Section: Cross-population Validation of Acoustic Features (supporting)
confidence: 88%
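The statement above describes a discriminant function over acoustic features that separates accented from unaccented syllables. As a minimal sketch of that kind of classifier, the following fits a two-class Fisher linear discriminant on synthetic data; the three features (duration, F0 range, energy) and all numeric values are illustrative assumptions, not the ten parameters of the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical acoustic features per syllable: [duration_s, f0_range_hz, energy_db].
# Accented syllables are assumed longer, with larger F0 excursions and more energy.
accented   = rng.normal([0.25, 40.0, 70.0], [0.05, 8.0, 3.0], size=(200, 3))
unaccented = rng.normal([0.15, 15.0, 62.0], [0.05, 8.0, 3.0], size=(200, 3))

X = np.vstack([accented, unaccented])
y = np.array([1] * 200 + [0] * 200)

# Fisher's linear discriminant: w = Sw^{-1} (mu1 - mu0),
# with the decision threshold at the midpoint of the projected class means.
mu1, mu0 = accented.mean(axis=0), unaccented.mean(axis=0)
Sw = np.cov(accented, rowvar=False) + np.cov(unaccented, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = w @ (mu1 + mu0) / 2.0

pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(f"classification accuracy: {accuracy:.3f}")
```

On well-separated synthetic classes like these, a linear discriminant reaches accuracies in the same high range the citing paper reports, which is why it remains a common baseline for accent detection.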
“…Studies of acoustic correlates of sentence accent have provided valuable insight into this domain [12, 25–29]. Acoustic accent production descriptors have been studied in healthy speech [30–37] and in dysarthria [9, 10, 16, 24, 38, 39]. Currently, there is general agreement in the literature that syllable duration, pitch pattern, and intensity (or sub-band energy) correlate with accentuation [26, 40].…”
Section: Introduction (mentioning)
confidence: 99%
“…Although the nature of prominence is already fairly well understood and has been investigated from many different angles [7], a few questions remain open. In view of the growing attention paid to prominence in the rapidly growing fields of speech technology and human-machine interaction [8, 9], one important question concerns the quantification of prominence-cue hierarchies. For example, for German, which is the subject of the present study, [10] concluded that F0 is the primary cue to prominence, as it varied more between prominent and non-prominent syllables than duration did.…”
Section: Introduction (mentioning)
confidence: 99%
“…Numerous studies have investigated automated lexical stress classification since the early 2000s. Most systems utilize F0, intensity, and duration measures along with various machine learning algorithms to predict the stress patterns of words (Chen and Wang 2010; Chen and Jang 2012; Deshmukh and Verma 2009; Johnson and Kang 2015; Li et al. 2018; Tepperman and Narayanan 2005). A number of systems also incorporate segmental information, such as cepstral coefficients (Ferrer et al. 2015; Li et al. 2007).…”
Section: Related Work (mentioning)
confidence: 99%
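The statement above notes that most stress-classification systems combine F0, intensity, and duration. A minimal sketch of that idea, under the assumption of invented per-syllable measurements for a single three-syllable word, is to z-score each cue within the word and pick the syllable with the highest combined prominence score; this rule-based combination stands in for the machine learning models the cited systems actually use.

```python
import numpy as np

# Hypothetical per-syllable measurements for one word (3 syllables):
# [mean F0 (Hz), RMS intensity (dB), duration (s)]. Values are invented.
syllables = np.array([
    [180.0, 62.0, 0.12],  # syllable 1
    [230.0, 70.0, 0.22],  # syllable 2 (stressed: higher F0, louder, longer)
    [170.0, 60.0, 0.10],  # syllable 3
])

# z-score each cue within the word so the three cues are on comparable scales,
# then combine them with equal weights into a single prominence score.
z = (syllables - syllables.mean(axis=0)) / syllables.std(axis=0)
score = z.sum(axis=1)

predicted_stress = int(np.argmax(score))  # 0-based index of the stressed syllable
print("predicted stressed syllable:", predicted_stress + 1)
```

Within-word normalization is the key step: absolute F0 or intensity varies across speakers, but the relative standing of a syllable inside its word is what signals stress.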