2004
DOI: 10.1142/s0129065704002157
A Neural Network Model With Feature Selection for Korean Speech Act Classification

Abstract: A speech act is a linguistic action intended by a speaker. Speech act classification is an essential part of a dialogue understanding system because the speech act of an utterance is closely tied to the user's intention in the utterance. We propose a neural network model for Korean speech act classification. In addition, we propose a method that extracts morphological features from surface utterances and selects the effective ones among them. Using the feature selection method, the proposed…

Cited by 13 publications (22 citation statements)
References 4 publications
“…As shown in Table 3, the SADM showed better results than Kim et al [8] at all of the cut-off points. Moreover, the SADM, which used 100 dialogues as a training corpus, had similar precisions to Kim et al [8] by using 500-700 dialogues as a training corpus. The p-value against Kim et al [8] is measured as 0.000002.…”
Section: Results (mentioning)
Confidence: 74%
“…In Table 3, the results of Kim et al [8] are similar to those of the SADM for users' utterances, except that they do not use the concept sequence features as input features. As shown in Table 3, the SADM showed better results than Kim et al [8] at all of the cut-off points.…”
Section: Results (mentioning)
Confidence: 78%