2019 ACM/IEEE Joint Conference on Digital Libraries (JCDL)
DOI: 10.1109/jcdl.2019.00025
Learning from Few Samples: Lexical Substitution with Word Embeddings for Short Text Classification

Cited by 6 publications (3 citation statements)
References 21 publications
“…We use the F1-score (micro-averaged and macro-averaged) as it has been the most popular metric in text categorization research [14]. The micro F1-score is the harmonic mean of micro precision and micro recall, which are defined in (23) and (24), respectively.…”
Section: B Experimental Setup 1) Policies
confidence: 99%
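The micro/macro F1 definitions quoted above can be sketched in plain Python. This is an illustrative implementation, not code from the cited paper; the function name and toy label lists are made up:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Return (micro_f1, macro_f1) for single-label multiclass predictions."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted class p, but true class was t
            fn[t] += 1  # true class t was missed
    # Micro: pool counts over all classes, then take the harmonic mean
    # of the pooled precision and recall.
    TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro_p = TP / (TP + FP) if TP + FP else 0.0
    micro_r = TP / (TP + FN) if TP + FN else 0.0
    micro_f1 = 2 * micro_p * micro_r / (micro_p + micro_r) if micro_p + micro_r else 0.0
    # Macro: compute F1 per class, then average with equal weight per class.
    per_class = []
    for c in labels:
        p = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        r = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        per_class.append(2 * p * r / (p + r) if p + r else 0.0)
    macro_f1 = sum(per_class) / len(per_class)
    return micro_f1, macro_f1

micro, macro = f1_scores([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0])
```

Micro-averaging weights every prediction equally (for single-label multiclass data it equals accuracy), while macro-averaging weights every class equally, so rare classes influence the macro score as much as frequent ones.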
“…The topic modelling techniques Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were used in [18] and [19], respectively. Recently, deep-learning-based short text classification has become popular (see [23], [24]).…”
Section: Introduction
confidence: 99%
“…The author cites Random Forest and a deep-learning-based convolutional network as the best-performing classifiers. A general preprocessing method was then proposed for scenarios in which training data are scarce. It groups semantically similar terms through word embeddings, simulating how humans preprocess text by replacing unknown words with known terms and also grouping semantically similar words (Elekes, Di Stefano, Schaler, Bohm, & Keller, 2019). In this other work, experiments were carried out adding word-embedding techniques, testing not only Word2vec but also Doc2vec, on a clinical text classification dataset. The results were compared with those of the traditional Bag-of-Words method.…”
unclassified
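The preprocessing idea described in that citation, replacing out-of-vocabulary words with semantically similar known terms via word embeddings, can be sketched minimally as follows. The vectors, vocabulary, and function names below are invented for illustration; in practice the embeddings would come from a pre-trained Word2vec or GloVe model:

```python
import math

# Toy 3-dimensional embeddings; real systems would load pre-trained vectors.
embeddings = {
    "excellent": [0.9, 0.1, 0.0],
    "good":      [0.8, 0.2, 0.1],
    "terrible":  [-0.9, 0.1, 0.0],
    "bad":       [-0.8, 0.2, 0.1],
}
# Words the (hypothetical) classifier actually saw during training.
training_vocab = {"good", "bad"}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def substitute(word):
    """Replace an out-of-vocabulary word with its nearest known neighbour
    in embedding space; leave it unchanged if no embedding is available."""
    if word in training_vocab or word not in embeddings:
        return word
    return max(training_vocab,
               key=lambda w: cosine(embeddings[w], embeddings[word]))

tokens = ["excellent", "movie", "terrible", "good"]
print([substitute(t) for t in tokens])  # ['good', 'movie', 'bad', 'good']
```

With this substitution step, a classifier trained on a small vocabulary still receives familiar tokens at prediction time, which is the intuition behind using lexical substitution for short-text classification with few training samples.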