2018
DOI: 10.1007/s11042-018-5772-4
Few-shot learning for short text classification

Cited by 81 publications (41 citation statements)
References 34 publications
“…Another category of metric-learning based approaches, such as siamese networks [14], matching networks [15], prototypical networks [16] and relation networks [17], aims to learn a set of projection functions such that, when images are represented in this embedding, they are easy to recognize using simple linear classifiers. Only a few works focus on few-shot learning for NLP tasks; for example, a text classification framework based on a siamese CNN and few-shot learning is proposed in [18]. However, taking the widely used experimental setting in few-shot learning, miniImagenet, as an instance, the benchmark dataset is split into 64, 16, and 20 classes for training, validation and testing, respectively.…”
Section: Related Work: Few-shot Transfer Learning
confidence: 99%
“…Several works (Nishida et al, 2011; Romero et al, 2013; Ramírez de la Rosa et al, 2013; Yang et al, 2013; Zhang & Zhong, 2016; Wang et al, 2016a; Dai et al, 2017; Li et al, 2018; Ravi & Kozareva, 2018; Yan et al, 2018) focused on designing new classification techniques specifically for short-texts. The reviewed techniques are organised under three categories, according to how they tackle the classification task: word/character sequence-based techniques (including techniques based on data compression and similarity), domain knowledge-based techniques (including ontologies and diverse corpora) and neural network-based techniques (including techniques leveraging word embeddings and deep learning).…”
Section: New Classification Techniques
confidence: 99%
“…Traditional BOW models might have difficulty capturing the semantic meaning of short-texts. To overcome this problem, several works (Wang et al, 2016a; Dai et al, 2017; Ravi & Kozareva, 2018; Yan et al, 2018) have leveraged word embedding models and deep learning techniques for short-text classification. Word embeddings aim at quantifying the semantic similarity of linguistic items based on the distributional properties of words in large textual samples.…”
Section: Neural Network-based Techniques
confidence: 99%
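A common minimal realization of the embedding-based idea in the excerpt above is to represent a short text as the average of its word vectors and compare texts by cosine similarity. The sketch below uses a tiny hand-made embedding table purely for illustration; real systems would substitute pretrained vectors (e.g. word2vec or GloVe), and every name here is our own assumption, not the cited works' code:

```python
import numpy as np

# Toy embedding table (hypothetical); real systems load pretrained vectors.
EMB = {
    "cheap": np.array([1.0, 0.0]),
    "price": np.array([0.9, 0.1]),
    "goal":  np.array([0.0, 1.0]),
    "match": np.array([0.1, 0.9]),
}

def embed(text):
    """Mean of the word embeddings, skipping out-of-vocabulary words."""
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def nearest_label(text, labeled):
    """Label a short text with the label of its most similar labeled
    example, under cosine similarity of averaged embeddings."""
    v = embed(text)

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    return max(labeled, key=lambda item: cos(v, embed(item[0])))[1]
```

Averaging discards word order, which is why the cited works move beyond this baseline to CNN- and siamese-style encoders; but even this sketch captures why embeddings help where sparse BOW features fail on very short inputs.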
“…), such as patents, news, and papers. The most typical form of unstructured text is short text [8][9][10]. Text with an obvious structure, because its organisation is clear, often appears in formal written expressions and has a wide range of applications.…”
Section: Introduction
confidence: 99%