2020
DOI: 10.1609/aaai.v34i05.6498

Multi-Point Semantic Representation for Intent Classification

Abstract: Detecting user intents from utterances is the basis of the natural language understanding (NLU) task. To understand the meaning of utterances, some work focuses on fully representing utterances via semantic parsing, in which annotation is labor-intensive. While some researchers simply view this as intent classification or frequently asked questions (FAQ) retrieval, they do not leverage the shared utterances among different intents. We propose a simple and novel multi-point semantic representation framework w…

Cited by 21 publications (34 citation statements)
References 9 publications
“…Several works tried to exploit an external knowledge base to accomplish dataless classification (Chang et al, 2008;Song and Roth, 2014). However, external knowledge base like Wikipedia is not always available for many languages or domains.…”
Section: Related Work (mentioning)
confidence: 99%
“…Recent studies on dataless text classification show promising results on reducing labeling effort (Liu et al, 2004;Druck et al, 2008;Chang et al, 2008;Hingmire et al, 2013;Hingmire and Chakraborti, 2014;Song and Roth, 2014;Chen et al, 2015;Li et al, 2016b;Li et al, 2018;Li et al, 2019a;Shalaby and Zadrozny, 2019). Without any labeled documents, a dataless classifier performs text classification by using a small set of relevant words for each category (called "seed words") or resorting to hidden topic labeling.…”
Section: Introduction (mentioning)
confidence: 99%
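The seed-word style of dataless classification mentioned in the excerpt above can be illustrated with a minimal sketch: each category is described only by a handful of relevant words, and a document is assigned to the category whose seed words it overlaps most. The category names and seed words below are hypothetical, chosen purely for illustration.

```python
# Minimal sketch of seed-word dataless classification: no labeled
# training documents, only a small set of "seed words" per category.
# Categories and seed words here are hypothetical examples.
SEED_WORDS = {
    "sports": {"game", "team", "score", "player"},
    "finance": {"stock", "market", "bank", "profit"},
}

def classify_dataless(text: str) -> str:
    """Assign the category whose seed words overlap most with the text."""
    tokens = set(text.lower().split())
    scores = {cat: len(tokens & seeds) for cat, seeds in SEED_WORDS.items()}
    # Break ties deterministically by sorting category names first.
    return max(sorted(scores), key=lambda c: scores[c])

print(classify_dataless("the team won the game with a late score"))  # sports
```

Real dataless systems replace the raw overlap count with a richer semantic representation (e.g., ESA concept vectors) and may bootstrap from the initial assignments, but the core idea of matching documents against category descriptions rather than labeled examples is the same.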
“…Entity-based text representation may be utilized in many other tasks, e.g., computing document similarity [44], text classification [10], or question answering [6,45]. Medical search is another prominent example for the use of controlled vocabulary representations, with a lot of work conducted in the context of the TREC Genomics track [31,36,46].…”
Section: Further Reading (mentioning)
confidence: 99%
“…One of the possible applications of ESA (and ESA-G) is text classification [49]. In the domain under study, a text may relate simultaneously to several different labels, characterizing the so-called "multi-label classification" presented next.…”
Section: ESA-G (unclassified)
“…The original ESA proposal does not include a classifier, since its original purpose is not document categorization but measuring semantic similarity between documents. [49] presents one of the first proposals for classification with ESA without an explicit training set, relying only on the knowledge embedded in its concept system to perform categorization. Since the problem presented in Section 1.1 involves analyzing text fragments rather than simply comparing two or more document excerpts, one of the first changes introduced to ESA was to return, instead of the cosine between two texts, the concept semantically closest to the text submitted to the A2E.…”
Section: Adaptation in the ESA Classifier (unclassified)
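The adaptation described in the excerpt above, returning the nearest concept rather than a text-to-text cosine score, can be sketched as follows. The concept "pseudo-documents" here are hypothetical stand-ins for ESA's Wikipedia-derived concept vectors; a real ESA index would use TF-IDF-weighted vectors over the full concept space.

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words term counts (a simplification of ESA's TF-IDF vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical concept pseudo-documents standing in for ESA concepts.
CONCEPTS = {
    "astronomy": bow("planet star orbit telescope galaxy"),
    "cooking": bow("recipe oven ingredient bake flavor"),
}

def nearest_concept(text: str) -> str:
    """Return the concept closest to the text (classification),
    instead of the cosine between two texts (similarity)."""
    v = bow(text)
    return max(CONCEPTS, key=lambda c: cosine(v, CONCEPTS[c]))

print(nearest_concept("a telescope shows the orbit of a planet"))  # astronomy
```

The change is small but shifts ESA's output from a pairwise similarity score to a category decision, which is what a training-free classifier needs.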