Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)
DOI: 10.18653/v1/d19-6116

Metric Learning for Dynamic Text Classification

Abstract: Traditional text classifiers are limited to predicting over a fixed set of labels. However, in many real-world applications the label set is frequently changing. For example, in intent classification, new intents may be added over time while others are removed. We propose to address the problem of dynamic text classification by replacing the traditional, fixed-size output layer with a learned, semantically meaningful metric space. Here the distances between textual inputs are optimized to perform nearest-neighb…
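
To make the abstract's idea concrete, below is a minimal sketch (not the authors' implementation) of how classification over a learned metric space can stand in for a fixed-size softmax output layer: each label is represented by a prototype embedding, for example the mean of its support examples, and a new input is assigned to the label of the nearest prototype. The encoder, the prototype construction, and all names here are illustrative assumptions.

```python
# Illustrative sketch only: nearest-prototype classification in an embedding
# space, replacing a fixed-size softmax output layer. The `encoder` callable
# (texts -> tensor of shape (n, d)) is hypothetical, not the paper's code.
import torch
import torch.nn.functional as F

def class_prototypes(encoder, support_texts_by_label):
    """Embed each label's support examples and average them into a prototype."""
    protos, labels = [], []
    for label, texts in support_texts_by_label.items():
        emb = F.normalize(encoder(texts), dim=-1)   # (n_support, d)
        protos.append(emb.mean(dim=0))
        labels.append(label)
    return torch.stack(protos), labels              # (n_labels, d), [labels]

def predict(encoder, texts, prototypes, labels):
    """Assign each input to the label of its nearest prototype (cosine similarity)."""
    emb = F.normalize(encoder(texts), dim=-1)       # (n_inputs, d)
    sims = emb @ F.normalize(prototypes, dim=-1).T  # cosine similarity matrix
    return [labels[i] for i in sims.argmax(dim=-1).tolist()]
```

Because labels exist only as prototypes computed from examples, adding or removing a label (e.g., a new intent) only requires recomputing prototypes; no output layer needs to be resized or retrained.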

Cited by 8 publications (3 citation statements) · References 27 publications (32 reference statements)
“…We choose cosine-similarity function, which is equivalent to inner product for normalized vectors. Training Stage - The goal is to learn a latent space following metric learning [27], where matching query and FAQ pairs shall have smaller distance compared to the non-matching pairs. Let $D_{\text{train}} = \{(x_i, y_i^+, y_{i,1}^-, \ldots$…”
Section: Proposed Approach
confidence: 99%
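
The training objective described in this excerpt can be sketched as a triplet-style loss over L2-normalized embeddings, where cosine similarity reduces to an inner product. The encoder, the margin value, and the data layout below are assumptions made for illustration, not the cited paper's code.

```python
# Hypothetical sketch of the triplet-style objective described above:
# pull a query toward its matching FAQ, push it away from a non-matching one.
import torch
import torch.nn.functional as F

def triplet_cosine_loss(encoder, queries, pos_faqs, neg_faqs, margin=0.2):
    # L2-normalize so that the inner product equals cosine similarity.
    q = F.normalize(encoder(queries), dim=-1)
    p = F.normalize(encoder(pos_faqs), dim=-1)
    n = F.normalize(encoder(neg_faqs), dim=-1)
    sim_pos = (q * p).sum(dim=-1)   # similarity to the matching FAQ
    sim_neg = (q * n).sum(dim=-1)   # similarity to a non-matching FAQ
    # Hinge loss: the positive pair should beat the negative pair by `margin`.
    return F.relu(margin - sim_pos + sim_neg).mean()
```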
“…Conversely, the distance between the vectors of two samples belonging to different classes should be larger. In recent years, metric learning has been shown to be effective in a number of computer vision tasks, such as image retrieval (Zhong et al., 2021), object recognition (Sohn, 2016), and face recognition (Cao et al., 2013), and also for natural language processing tasks, such as text classification (Wohlwend et al., 2019) and entity linking (Liu et al., 2020).…”
Section: Background and Related Work
confidence: 99%
“…These methods typically learn generally useful text representations from a large corpus of unlabeled text and use them for a specific target task with limited supervision (Howard and Ruder, 2018; Devlin et al., 2019; Lan et al., 2020; Peters et al., 2018). Metric learning (Wohlwend et al., 2019) is related to our work, but they focus on few-shot learning while we work on improving unsupervised text classifiers. Related contemporaneous work has proposed methods to generate more relevant label names from a given set (Meng et al., 2020; Schick et al., 2020).…”
Section: Related Work
confidence: 99%