2014
DOI: 10.1093/llc/fqu041
Towards modeling expressed emotions in oral history interviews: Using verbal and nonverbal signals to track personal narratives

Abstract: The article aims to model the verbal and prosodic features of emotional expression in interviews to investigate the potential for synergy between scholarly fields that have the narrative as object of study. Using a digital collection of oral history interviews that contains narrative aspects addressing war and violence in Croatia, we analyzed emotional expression through the words spoken, and through the pitch, vocal effort, and pause duration in the speech signal. The findings were correlated with the linear …
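As a minimal sketch of what such an analysis can involve (not the authors' actual pipeline), the snippet below extracts the three prosodic cues named in the abstract, namely pitch, vocal effort, and pause duration, from a single recording. It assumes librosa as the toolkit; the RMS-energy proxy for vocal effort and the 30 dB silence threshold are illustrative choices, not values taken from the article.

```python
# Minimal sketch (not the authors' pipeline): extract the three prosodic cues
# named in the abstract (pitch, vocal effort, pause duration) from a recording.
# librosa is an assumed toolkit; thresholds and the RMS proxy are illustrative.
import numpy as np
import librosa

def prosodic_features(wav_path: str, silence_db: float = 30.0) -> dict:
    y, sr = librosa.load(wav_path, sr=16000)

    # Pitch: frame-wise F0 via probabilistic YIN (unvoiced frames come back as NaN).
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C6"), sr=sr)

    # Vocal effort: approximated here by frame-wise RMS energy.
    rms = librosa.feature.rms(y=y)[0]

    # Pause duration: gaps between non-silent intervals (silence_db below peak).
    intervals = librosa.effects.split(y, top_db=silence_db)
    pauses = [(nxt[0] - prev[1]) / sr
              for prev, nxt in zip(intervals[:-1], intervals[1:])]

    return {
        "mean_f0_hz": float(np.nanmean(f0)),
        "mean_rms": float(np.mean(rms)),
        "mean_pause_s": float(np.mean(pauses)) if pauses else 0.0,
    }
```

In practice these cues would be computed per utterance or per interview segment rather than over the whole file, so that changes in emotional expression can be located within the narrative.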

Cited by 4 publications (4 citation statements)
References 41 publications
“…Considering the strong focus on speech data within sociolinguistics, there is much potential for computational approaches to be applied to spoken language as well. Moreover, the increased availability of recordings of spontaneous speech and transcribed speech has inspired a revival in the study of the social dimensions of spoken language (Jain et al. 2012), as well as in the analysis of the relation between the verbal and the nonverbal layers in spoken dialogues (Truong et al. 2014). As online data increasingly becomes multimodal, for example with the popularity of vlogs (video blogs), we expect the use of spoken word data for computational sociolinguistics to increase.…”
Section: Scope of Discussion
confidence: 99%
“…It provides word-level search capability and a time-correlated transcript or indexed interview connecting the search term to the corresponding utterance and moment in the recorded interview online. Though aimed primarily at improving access, interaction, and information retrieval, it additionally deploys alternative entry points for information seeking like emotion tagging and emotion detection (Warren et al., 2013; Truong et al., 2014; Turner, 2017). Other transcription and text analysis tools, both free and commercial, cited by oral historians include WebASR, FromTo (see below), Express Scribe, NVivo, ATLAS.ti, GATE, Lexalytics, and Apache Open Natural Language Processing (NLP), to name a few.…”
Section: DOH: The State of the Art
confidence: 99%
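The statement above describes word-level search over a time-correlated transcript. Below is a minimal sketch (a hypothetical data model, not the implementation of any tool cited here) of the kind of index this implies: each transcribed word carries the timestamps of its utterance, so a search for a term returns the exact moments in the recording to jump to.

```python
# Minimal sketch (hypothetical data model, not any tool cited above): a
# time-correlated transcript in which each word carries its audio timestamps,
# so a word-level search returns the moments in the recording to jump to.
from dataclasses import dataclass

@dataclass
class Word:
    text: str       # transcribed token
    start_s: float  # onset in the audio, seconds
    end_s: float    # offset in the audio, seconds

def find_term(transcript: list[Word], term: str) -> list[float]:
    """Return the start time of every occurrence of `term` in the transcript."""
    term = term.lower()
    return [w.start_s for w in transcript if w.text.lower() == term]

# Example with a made-up three-word fragment of an indexed interview.
fragment = [Word("the", 12.10, 12.22),
            Word("village", 12.22, 12.71),
            Word("burned", 12.71, 13.20)]
print(find_term(fragment, "village"))  # -> [12.22]
```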
“…Significant ongoing research is reworking OH recordings and re-centring aurality with techniques like machine learning to enhance discovery (Clement et al., 2014), emotion detection, and analysis. Crucially, much of this research is moving away from an overemphasis on the textual transcript of the interview, by looking at pitch, vocal effort, and pause duration in the speech signal with a view to navigating collections by emotion markers (Truong et al., 2014). Four initiatives that have discerned the advanced possibilities for DOH, directly and indirectly, are AudioVisual Material in DH (AVinDH); CLARIN Media Suite; the Sussex Humanities Lab (SHL); and HiPSTAS.…”
Section: 'Tooling Up' for MDOH
confidence: 99%
“…This leads to the third challenge, as large individual differences exist in the intensities and durations of different modalities of emotional expression [18], a problem that was encountered in previous studies that tried to bring affective computing to real-world situations [19]. To address these challenges, this PhD research project aims to take these intricacies of multimodal expression of emotions into account to advance the automatic analysis of emotional expression in dementia.…”
Section: Research Problem
confidence: 99%