Proceedings - Natural Language Processing in a Deep Learning World 2019
DOI: 10.26615/978-954-452-056-4_108

Unsupervised dialogue intent detection via hierarchical topic model

Abstract: One of the challenges during task-oriented chatbot development is the scarce availability of labeled training data. The best way to obtain such data is to ask assessors to tag each dialogue according to its intent. Unfortunately, performing this labeling without any provisional collection structure is difficult, since the very notion of intent is ill-defined. In this paper, we propose a hierarchical multimodal regularized topic model to obtain a first approximation of the intent set. Our rationale for hierarc…
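The truncated abstract only names the model family, so the following is a hedged, illustrative sketch rather than the authors' method: it approximates a two-level topic hierarchy over unlabeled utterances with scikit-learn's LDA, fitting coarse topics first and then sub-topics within each coarse group as candidate intents. The example utterances, topic counts, and variable names are all assumptions.

# Illustrative sketch only (assumed data and parameters): a crude two-level topic
# hierarchy over unlabeled utterances, standing in for the paper's hierarchical
# multimodal regularized topic model.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

utterances = [
    "i want to book a flight to berlin",
    "please change my flight to next monday",
    "cancel my hotel reservation",
    "find me a cheap hotel near the airport",
    "what is my account balance",
    "transfer money to my savings account",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(utterances)
vocab = np.array(vectorizer.get_feature_names_out())

# Level 1: coarse topics, i.e. rough intent groups.
top = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
assignment = top.transform(X).argmax(axis=1)

# Level 2: split each coarse topic into finer sub-topics, i.e. candidate intents.
for t in range(top.n_components):
    rows = np.where(assignment == t)[0]
    if len(rows) < 2:
        continue
    sub = LatentDirichletAllocation(n_components=2, random_state=0).fit(X[rows])
    for k, weights in enumerate(sub.components_):
        print(f"group {t} / candidate intent {k}:",
              vocab[weights.argsort()[-4:][::-1]])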

Cited by 9 publications (4 citation statements)
References 17 publications
“…The usefulness of these agents peaked during the pandemic, when life was restricted and human assistance was unavailable for many problems. Designing these agents requires two main components, namely slot filling (Firdaus et al., 2021; Manchanda et al., 2021) and intent detection (Popov et al., 2019; Reza et al., 2020), and their performance has a direct effect on understanding human language and efficiently performing various natural-language-related tasks (Yang et al., 2021; Abro et al., 2022). Most of the research considers existing datasets and applies slot filling and intent detection models to them either separately or in an integrated way.…”
Section: Introduction
confidence: 99%
“…These approaches model the user intent and then propose novel items to the user by assigning weights to the intent-related information resources [71, 205-210]. They exploit user profiles, cross-platform learning models, textual intent mapping, and contextual weight models to learn user intent and generate personalized recommendations.…”
Section: Intent-aware Modeling
confidence: 99%
“…Popov et al. [210] consider a language model to characterize user-preferred query terms and attribute values in order to evaluate preferred entity features for the intent. Therefore, the intent consists of captured vocabularies of both the query and entity spaces.…”
Section: Taxonomy of IARs Applications and Techniques
confidence: 99%
“…Baseline methods: We employ two state-of-the-art baseline methods for performance comparison against our proposed artefact detection approaches: (i) K-Means clustering (Popov et al., 2019) - an unsupervised approach in which conversation utterances are clustered over a TF-IDF vector space and utterance labels are assigned by majority voting within each cluster. We use it as a baseline against our unsupervised artefact extraction approach (DKG); and (ii) the BiLSTM-CRF model (Kumar et al., 2018) - a supervised approach in which utterance representations are learned with a BiLSTM and then used to predict utterance labels, modeled as a sequence labelling task.…”
Section: Artefact Prediction
confidence: 99%
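For concreteness, below is a minimal, hedged sketch of the K-Means-over-TF-IDF baseline quoted above: utterances are clustered in TF-IDF space and each cluster takes the majority label of the annotations it contains. The example utterances, toy labels, and cluster count are assumptions; this is not the cited authors' implementation.

# Hypothetical sketch of the K-Means baseline described above: cluster utterances
# over a TF-IDF vector space, then assign each cluster the majority label of the
# (toy) annotations falling into it.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

utterances = [
    "i need to reset my password",
    "how do i change my password",
    "please cancel my subscription",
    "i want to close my account",
]
# Toy labels standing in for whatever annotation is used for majority voting.
labels = ["password", "password", "account", "account"]

X = TfidfVectorizer().fit_transform(utterances)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Majority vote within each cluster.
cluster_label = {}
for c in set(clusters):
    votes = [labels[i] for i in range(len(utterances)) if clusters[i] == c]
    cluster_label[c] = Counter(votes).most_common(1)[0][0]

predicted = [cluster_label[c] for c in clusters]
print(list(zip(utterances, predicted)))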