2006 IEEE Spoken Language Technology Workshop
DOI: 10.1109/slt.2006.326819
Dialogue-Act Tagging Using Smart Feature Selection; Results on Multiple Corpora

Abstract: This paper presents an overview of our ongoing work on dialogue-act classification. Results are presented on the ICSI corpus, the Switchboard corpus, and a selection of the AMI corpus, setting a baseline for forthcoming research. For these corpora, the best accuracy scores obtained are 89.27%, 65.68% and 59.76%, respectively. We introduce a smart compression technique for feature selection and compare the performance of a subset of the AMI transcriptions with the AMI-ASR output for the same subset.

Cited by 32 publications (39 citation statements). References 13 publications.
“…Large numbers of n-grams are generated by any real-world corpus so a predictive subset is used (in [3] sets of 10 or 300). N-grams have been used to code features for Hidden Markov Models [4].…”
Section: Features for DA Classification
confidence: 99%
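The predictive-subset idea in the statement above can be sketched minimally. The snippet below ranks n-grams by raw corpus frequency; the cited work may instead rank by a class-association score, and the function names and toy corpus here are purely illustrative:

```python
from collections import Counter

def ngrams(tokens, n):
    """Yield successive n-grams from a token list."""
    return zip(*(tokens[i:] for i in range(n)))

def top_ngrams(utterances, n=2, k=10):
    """Return the k most frequent n-grams across a corpus.

    A minimal frequency-based stand-in for selecting a predictive
    subset of n-grams; real systems would score n-grams by how well
    they predict the dialogue-act class, not by raw count.
    """
    counts = Counter()
    for utt in utterances:
        counts.update(ngrams(utt.split(), n))
    return [ng for ng, _ in counts.most_common(k)]

corpus = [
    "do you agree with that",
    "yeah i agree with that",
    "do you think so",
]
print(top_ngrams(corpus, n=2, k=3))
```

Selecting only the top-k subset keeps the feature space tractable, since any real-world corpus generates far more distinct n-grams than a classifier can usefully consume.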
“…Dialogue Act (DA) classification is an established element of research in the field of Dialogue Management [1][2][3][4][5][6]. This work is motivated by the application of DA classification to natural language interaction with Dialogue Systems (DSs) [7] and Robots [8].…”
Section: Introduction
confidence: 99%
“…Some syntactic relations are captured by HMM word models, such as the widely-used n-grams [11], but these approaches only capture local syntactic relations, while we consider global syntactic trees next. Most other works thus focus on morphosyntactic tags, as demonstrated for instance in [48], where a smart compression technique for feature selection is introduced. The authors use a rich feature set that includes POS tags and, with a decision tree classifier, obtain accuracies of 89.27%, 65.68% and 59.76% on the ICSI corpus, the Switchboard corpus, and a selection of the AMI corpus, respectively.…”
Section: Related Work
confidence: 99%
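To make the decision-tree setup described above concrete, here is a toy hand-written tree over a few binary cue features. The real trees are induced from labelled corpora with a far richer feature set (word n-grams, POS tags, and more), and every feature name and rule below is a hypothetical illustration, not the paper's learned model:

```python
def extract_features(utt):
    """Binary cue features — a toy stand-in for the rich feature set
    (word n-grams, POS tags, utterance length) used in the cited work."""
    toks = utt.lower().split()
    return {
        "starts_wh": toks[0] in {"what", "who", "where", "when", "why", "how"},
        "has_yeah": "yeah" in toks or "yes" in toks,
        "ends_q": utt.rstrip().endswith("?"),
    }

def classify_da(utt):
    """A hand-written two-level decision tree over the features above,
    illustrating the kind of tree a learner might induce; the actual
    trees are learned from labelled dialogue-act corpora."""
    f = extract_features(utt)
    if f["ends_q"] or f["starts_wh"]:
        return "question"
    if f["has_yeah"]:
        return "backchannel"
    return "statement"

print(classify_da("yeah right"))  # backchannel
```

Each internal node tests one feature, so the learned tree doubles as a readable record of which features the classifier found most discriminative.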
“…The literature indicates that only ranking approaches have been investigated (Samuel et al., 1999; Webb et al., 2005; Lesch, 2005; Kats, 2006; Verbree et al., 2006), owing to their computational efficiency, despite their inefficiency with respect to the relevance and redundancy of the selection. In addition to examining the proposed VLGA on lexical cue selection, a number of ranking approaches commonly applied to cue phrase selection have also been tested.…”
Section: χ²(f, c) = [P(f, c)P(f̄, c̄) − P(f, c̄)P(f̄, c)]² / (P(f)P(f̄)P(c)P(c̄))
confidence: 99%
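A minimal sketch of the kind of χ²-style ranking selector the passage criticises: each candidate cue is scored against a class independently, so redundancy among the selected cues goes unpenalised — the weakness the proposed VLGA targets. The function names and toy counts are illustrative assumptions, not the cited method:

```python
def chi2_score(n_fc, n_f, n_c, n_total):
    """χ² association between cue f and class c from raw counts:
    n_fc = utterances containing f in class c, n_f = containing f
    overall, n_c = in class c, n_total = corpus size."""
    a = n_fc                         # f present, class c
    b = n_f - n_fc                   # f present, other classes
    c = n_c - n_fc                   # f absent, class c
    d = n_total - n_f - n_c + n_fc   # f absent, other classes
    num = n_total * (a * d - b * c) ** 2
    den = (a + c) * (b + d) * (a + b) * (c + d)
    return num / den if den else 0.0

def rank_cues(counts, n_c, n_total, k=2):
    """Rank candidate cues by χ² and keep the top k.

    Ranking-style selection: each cue is scored in isolation, so two
    highly redundant cues can both make the cut.
    """
    scored = {f: chi2_score(n_fc, n_f, n_c, n_total)
              for f, (n_fc, n_f) in counts.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

# toy counts: cue -> (count in class, count overall)
candidates = {"okay": (40, 50), "the": (30, 300), "right": (35, 60)}
print(rank_cues(candidates, n_c=100, n_total=1000, k=2))
```

A cue distributed proportionally across classes (like "the" above) scores zero, while class-skewed cues rank highly — which is exactly why these rankers are cheap but blind to redundancy between the cues they keep.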