2010 IEEE Fourth International Conference on Semantic Computing
DOI: 10.1109/icsc.2010.41
Automatically Assessing Personality from Speech

Abstract: In this paper, we present first results on applying a personality assessment paradigm to speech input and comparing human and automatic performance on this task. We cue a professional speaker to produce speech using different personality profiles and encode the resulting vocal personality impressions in terms of the Big Five NEO-FFI personality traits. We then have human raters, who do not know the speaker, estimate the five factors. We analyze the recordings using signal-based acoustic and prosodic …

Cited by 86 publications (46 citation statements)
References 5 publications
“…Therefore, speech data should allow one to perform APP reasonably well. While still being limited, the results proposed so far in the speech literature seem to confirm the indications above for both APR (Mairesse et al., 2007; Ivanov et al., 2011) and APP (Mairesse et al., 2007; Mohammadi and Vinciarelli, 2012; Polzehl et al., 2010; Valente et al., 2012; Nass and Min Lee, 2001; Schmitz et al., 2007; Trouvain et al., 2006).…”
Section: Speaker Personality (supporting)
Confidence: 70%
“…The task was performed with logistic regression, and the accuracy was between 60% and 75% depending on the trait (best results for Extroversion and Conscientiousness). For their APP experiments, Polzehl et al. (2010) used 220 samples of one professional speaker acting 10 personality types. The features (1,450 in total, including Mel-Frequency Cepstral Coefficients (MFCCs), Harmonic-to-Noise Ratio (HNR), zero-crossing rate, etc.)…”
Section: Speaker Personality (mentioning)
Confidence: 99%
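The pipeline described in this citation statement (acoustic/prosodic feature vectors fed into a logistic regression trait classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature values here are synthetic stand-ins for the MFCC/HNR/zero-crossing-rate statistics named above, the low/high split on a single trait is assumed, and scikit-learn's `LogisticRegression` is used as the classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for per-utterance acoustic features (e.g. MFCC
# statistics, HNR, zero-crossing rate). 220 samples mirrors the sample
# count mentioned in the excerpt; 16 dimensions is arbitrary.
rng = np.random.default_rng(0)
n, d = 220, 16
X_low = rng.normal(loc=0.0, scale=1.0, size=(n // 2, d))   # "low" trait class
X_high = rng.normal(loc=1.5, scale=1.0, size=(n // 2, d))  # "high" trait class
X = np.vstack([X_low, X_high])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # binary low/high trait label

# Hold out a stratified test set, then fit the logistic regression.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Classification accuracy on the held-out utterances.
accuracy = clf.score(X_test, y_test)
```

On real data, the reported 60-75% accuracy range suggests considerable class overlap per trait; the synthetic classes here are deliberately well separated so the sketch yields a high score.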
“…[54]. The computing literature seems to follow this core idea, and the number of APP works based on paralanguage is large compared to those based on other modalities (possibly in combination with paralanguage) [43], [80], [81], [82], [83], [84] (see Table 7 for a synopsis of data, approaches and results). Furthermore, speech-based APP was the focus of a recent benchmarking campaign [85], [86], the "Interspeech 2012 Speaker Trait Challenge" [87], which has led to the first rigorous comparison of different approaches over the same data and using the same experimental protocol [88], [89], [90], [91], [92], [93], [94], [95], [96].…”
Section: APP from Paralanguage (mentioning)
Confidence: 99%
“…It is clear in the literature that some traits can be more easily recognized by means of automatic procedures than others, but this may vary according to the data and the methodologies applied (see [21] for a survey). Moreover, it has been tentatively pointed out that different personality dimensions/traits are revealed in spontaneous speech by means of different sets of representative acoustic/prosodic features [11], [14-16], [21], but exhaustive categorizations of such features and studies of their impact across ages, cultures, etc. are still missing.…”
Section: Introduction (mentioning)
Confidence: 99%