2008 IEEE Workshop on Machine Learning for Signal Processing
DOI: 10.1109/mlsp.2008.4685529
Evaluation of dialogue act recognition approaches

Abstract: This paper deals with automatic dialogue act recognition. Dialogue acts (DAs) are utterance-level labels that represent different states of a dialogue, such as questions, statements, hesitations, etc. Information about the actual DA can be seen as the first level of dialogue understanding. The main goal of this paper is to compare our dialogue act recognition approaches, which model the utterance structure and are particularly useful when the DA corpus is small, with n-gram based approaches. Our best approach is al…
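The abstract contrasts structure-modeling approaches with n-gram baselines. As a rough illustration of what such an n-gram baseline looks like (a hypothetical sketch, not the authors' implementation: the classifier, feature choice, and toy labels below are all assumptions), here is a naive Bayes classifier over unigram and bigram features:

```python
from collections import Counter, defaultdict
import math

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

class NgramDAClassifier:
    """Toy naive Bayes dialogue act classifier over unigram+bigram features."""
    def __init__(self):
        self.class_counts = Counter()              # DA label -> #utterances
        self.feat_counts = defaultdict(Counter)    # DA label -> feature counts
        self.vocab = set()

    def features(self, utterance):
        toks = utterance.lower().split()
        return ngrams(toks, 1) + ngrams(toks, 2)

    def fit(self, labeled_utterances):
        for utterance, da in labeled_utterances:
            self.class_counts[da] += 1
            for f in self.features(utterance):
                self.feat_counts[da][f] += 1
                self.vocab.add(f)

    def predict(self, utterance):
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for da, count in self.class_counts.items():
            # Log prior + Laplace-smoothed log likelihood of each feature.
            lp = math.log(count / total)
            denom = sum(self.feat_counts[da].values()) + len(self.vocab)
            for f in self.features(utterance):
                lp += math.log((self.feat_counts[da][f] + 1) / denom)
            if lp > best_lp:
                best, best_lp = da, lp
        return best

# Tiny invented training set with two DA labels.
train = [
    ("what time is it", "question"),
    ("where are you going", "question"),
    ("i am going home", "statement"),
    ("the meeting starts at noon", "statement"),
]
clf = NgramDAClassifier()
clf.fit(train)
print(clf.predict("what are you doing"))  # -> question
```

Such lexical baselines tend to need sizable annotated corpora, which is the gap the paper's structure-modeling approaches aim to close for small DA corpora.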

Cited by 16 publications (20 citation statements)
References 17 publications
“…Note that in this work, only texts are extracted, and the audio recordings are discarded for the time being. We have indeed shown in previous works (Král et al, 2006), (Král et al, 2007), (Král et al, 2008) that textual transcriptions are the most informative features for dialog act recognition, which justifies this choice as a first approximation.…”
Section: Introduction (supporting)
confidence: 71%
“…Other dialogue acts include: question, thank, introduce, suggest, feedback, confirm and motivate [30]. Based on detailed analysis of extensive annotated datasets, some dialogue act tag-sets have emerged as pseudo-standards in this area [31]. These large annotated datasets and tag sets are used to train classifiers that can distinguish between different dialogue acts.…”
Section: Automatic Detection Of Exploratory Dialogue (mentioning)
confidence: 99%
“…For example, adding a feature to indicate whether the speaker of an utterance is the same as the previous speaker or not, while coding at the whole transcript level. Methods to apply dialogue acts (Austin, 1975) as labels, i.e. labels indicating the type of "move" being made (see, for example, Erkens & Janssen, 2008; Král & Cerisara, 2012; Stolcke et al, 2000), would also fall into this category. This is arguably not dissimilar to taking individual quotations from a transcript and providing some contextual information (although it is unusual for this to be formalized in the way feature selection is).…”
Section: Operationalizing Our Feature and Segmentation Level Represen… (mentioning)
confidence: 99%
“…In those cases, though, where analysis is automated (in "Epistemic Games"), the dialogue is structured to facilitate identification of topically related talk in order to simplify the segmentation process. Other means through which segments may be identified include analysis of dialogue acts (see Erkens & Janssen, 2008; Král & Cerisara, 2012; Stolcke et al, 2000), grammatical features indicating an exchange and a shift to a new exchange, and, more broadly, analysis of "break points" in an exchange or topic indicative of new sections or types of dialogue (see, for example, Chiu, 2008). In each case, the use of tokenizing for grammatical features (for example, through the use of part-of-speech taggers) and lexical tools for identification of topics are important: both are feature selection tools, and again we see the interplay of features and segmentation; feature selection can be used to provide the means through which to segment, in order to identify regularities in features across segments.…”
Section: Operationalizing Our Feature and Segmentation Level Represen… (mentioning)
confidence: 99%