Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (HLT-NAACL 2003)
DOI: 10.3115/1073483.1073495
Detection of agreement vs. disagreement in meetings

Abstract: To support summarization of automatically transcribed meetings, we introduce a classifier to recognize agreement or disagreement utterances, utilizing both word-based and prosodic cues. We show that hand-labeling efforts can be minimized by using unsupervised training on a large unlabeled data set combined with supervised training on a small amount of data. For ASR transcripts with over 45% WER, the system recovers nearly 80% of agree/disagree utterances with a confusion rate of only 3%.
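
As a rough, hedged illustration of the kind of word-based plus prosodic classifier the abstract describes (not the authors' actual system; the keyword lists, feature set, and logistic-regression model below are assumptions for illustration only):

```python
# Minimal sketch of an agreement/disagreement classifier combining word-based
# cues (keyword counts) with prosodic cues assumed to be precomputed per
# utterance. Keyword lists, feature set, and model choice are illustrative
# assumptions, not the system described in the paper.
from sklearn.linear_model import LogisticRegression

AGREE_WORDS = {"yeah", "yes", "right", "exactly", "okay"}   # assumed lexicon
DISAGREE_WORDS = {"no", "but", "actually", "disagree"}      # assumed lexicon

def utterance_features(words, mean_pitch, mean_energy, duration):
    """Word-based counts plus simple prosodic statistics for one utterance."""
    return [
        sum(w in AGREE_WORDS for w in words),
        sum(w in DISAGREE_WORDS for w in words),
        len(words),
        mean_pitch,
        mean_energy,
        duration,
    ]

def train(labeled_utterances):
    """labeled_utterances: (words, mean_pitch, mean_energy, duration, label)
    tuples, with labels such as 'agree', 'disagree', or 'other'."""
    X = [utterance_features(*u[:-1]) for u in labeled_utterances]
    y = [u[-1] for u in labeled_utterances]
    return LogisticRegression(max_iter=1000).fit(X, y)
```

In this sketch, `train` would be run on a small hand-labeled set, with additional unlabeled data handled by whatever semi-supervised procedure one prefers, in the spirit of the combined unsupervised and supervised training the abstract mentions.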

Cited by 112 publications (89 citation statements). References 8 publications.
“…Several studies incorporate features such as dialogue acts (DAs) and adjacency pairs [16] [17] to capture a level of agreement. Other works use word-based features (e.g., the number of positive and negative keywords spoken during a conversation) and prosodic cues to perform prediction tasks [18]. Although various sets of dialogue features have been used, these studies analyze the decision-making process only from the perspective of a single participant; consequently, they do not capture the level of joint agreement among team members as a group.…”
Section: Related Work (mentioning)
confidence: 99%
“…Neiberg et al. [124] used spectral features (MFCCs), pitch features, and lexical n-grams for recognizing emotions in the ISL Meeting Corpus (Burger et al. [24]). Agreement and disagreement recognition (using both lexical and prosodic cues) was investigated by, e.g., Hillard et al. [80], Galley et al. [65], and Hahn et al. [72], and hotspot detection in meetings by Wrede and Shriberg [211]. Hotspots are events in meetings where the participants are highly involved in a discussion.…”
Section: Related Work (mentioning)
confidence: 99%
“…Hillard et al (2003) and Hahn et al (2006) used the ICSI Meeting Corpus (Janin et al, 2003) to develop systems that would classify utterances into agreements, disagreements, backchannels, and 'other'. While these authors only leveraged lexical and prosodic features of the utterance to be classified (i.e., local features), Galley et al (2004) showed that accuracy could be improved by taking into account contextual dependencies, in particular previous (dis)agreements between the dialogue participants, achieving an overall accuracy of 86.9%.…”
Section: Related Computational Work (mentioning)
confidence: 99%
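
As a hedged sketch of the contextual dependencies mentioned in the last excerpt (conditioning on previous (dis)agreements between participants), one might extend a local feature vector as follows; the names, label encoding, and data structures are assumptions for illustration, not taken from Galley et al. (2004):

```python
# Illustrative sketch: augment local (lexical/prosodic) features with the most
# recent (dis)agreement observed between the same speaker/addressee pair.
# Label names, the numeric encoding, and the history structure are assumptions.
LABEL_CODE = {"agree": 1, "disagree": -1, "backchannel": 0, "other": 0}

def contextual_features(local_features, speaker, addressee, history):
    """history maps (speaker, addressee) pairs to their last predicted label."""
    prev = history.get((speaker, addressee), "other")
    return local_features + [LABEL_CODE[prev]]

def update_history(history, speaker, addressee, predicted_label):
    """Record the latest prediction so later utterances can condition on it."""
    history[(speaker, addressee)] = predicted_label
```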