2021
DOI: 10.11591/ijeecs.v22.i2.pp1032-1040

A comparative study of deep learning based language representation learning models

Abstract: Deep learning (DL) approaches use multiple processing layers to learn hierarchical representations of data. Recently, natural language processing (NLP) methods and model designs have advanced significantly, especially in text mining and analysis. Well-known models for learning vector-space representations of text include Word2vec, GloVe, and fastText. NLP took a major step forward with the release of BERT and, more recently, GPT-3. In this paper, we highlight the most important language representa…
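As a minimal sketch of the vector-space representation learning the abstract mentions, the following trains a Word2vec model with the gensim library; the toy corpus and parameter choices are illustrative assumptions, not taken from the paper.

# Minimal sketch of learning vector-space word representations with Word2vec,
# using the gensim library; the corpus below is illustrative only.
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences (hypothetical example data).
sentences = [
    ["deep", "learning", "learns", "hierarchical", "representations"],
    ["bert", "and", "gpt", "are", "language", "models"],
    ["word2vec", "learns", "vector", "space", "representations", "of", "text"],
]

# Train skip-gram Word2vec (sg=1); vector_size and window are typical defaults.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)

vec = model.wv["learning"]                        # 100-dimensional word embedding
print(model.wv.most_similar("learning", topn=3))  # nearest neighbors in vector space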

Cited by 16 publications (10 citation statements)
References 17 publications
“…BERT is one of the best DL algorithms in sentiment analysis (SA), as shown in [12]. Yadav et al [13] used the BERT algorithm to identify cyberbullying on social media platforms, utilizing the BERT model as a classifier with a single linear neural network layer, trained and evaluated on two social media datasets, one small and one fairly large.…”
Section: Related Work
confidence: 99%
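The setup described in that statement, BERT as an encoder with a single linear classification layer on top, can be sketched as follows. This assumes PyTorch and the Hugging Face transformers library; the class name and binary labels are illustrative, not Yadav et al.'s actual code.

# Minimal sketch: BERT encoder + one linear layer for binary classification
# (e.g., cyberbullying vs. not). Assumes PyTorch and Hugging Face transformers.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertLinearClassifier(nn.Module):
    def __init__(self, num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Single linear layer mapping the pooled [CLS] representation to labels.
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.classifier(outputs.pooler_output)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["example post"], padding=True, truncation=True, return_tensors="pt")
logits = BertLinearClassifier()(batch["input_ids"], batch["attention_mask"])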
“…TL is a machine and deep learning research area that aims to transfer knowledge from one or more source tasks to one or more target tasks [15]. Supposing a source domain D_S with a learning task T_S, and a target domain D_T with a learning task T_T, TL serves in improving the learning of the target predictive function f_T(·) in D_T using the knowledge in D_S and T_S, where D_S ≠ D_T or T_S ≠ T_T.…”
Section: Transfer Learning
confidence: 99%
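A minimal sketch of this definition in code, assuming PyTorch: a feature extractor trained on the source domain D_S is frozen and reused, and only a new head for the target predictive function f_T(·) is trained on D_T. The tiny networks and random data are illustrative only.

# Transfer learning sketch: reuse source knowledge, learn the target function f_T.
import torch
import torch.nn as nn

# Source model: a shared feature extractor plus a source-task head.
features = nn.Sequential(nn.Linear(20, 64), nn.ReLU())  # learned on D_S, T_S
source_head = nn.Linear(64, 5)
# ... assume (features, source_head) were already trained on the source domain ...

# Transfer: freeze the source features, attach a fresh head for the target task.
for p in features.parameters():
    p.requires_grad = False                              # keep source knowledge fixed
target_head = nn.Linear(64, 2)                           # f_T's task-specific part
f_T = nn.Sequential(features, target_head)

# Only the target head is updated when fine-tuning on (typically scarce) D_T data.
optimizer = torch.optim.Adam(target_head.parameters(), lr=1e-3)
x_T, y_T = torch.randn(8, 20), torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(f_T(x_T), y_T)
loss.backward()
optimizer.step()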
“…First, we define the architecture and strategy of every method, then critically examine their main strengths and limitations. Second, we train a text encoder, character-level convolutional-recurrent neural network (char-CNN-RNN) text embeddings [8], on a dataset of text captions to read the sentences and extract the relevant attributes. Then, we give a full implementation and adaptation on the birds [9] and common objects in context (COCO) [10] datasets, and we compare their efficiencies on multiple levels like generated …”
Section: Introduction
confidence: 99%
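A sketch of a char-CNN-RNN text encoder of the kind cited in [8], in PyTorch: character embeddings pass through 1D convolutions and a GRU to produce a fixed-size caption embedding. All dimensions and names here are illustrative assumptions, not the original architecture.

# Char-CNN-RNN text encoder sketch:
# character embeddings -> 1D convolutions -> GRU -> fixed-size caption embedding.
import torch
import torch.nn as nn

class CharCnnRnnEncoder(nn.Module):
    def __init__(self, vocab_size=70, embed_dim=128, out_dim=256):
        super().__init__()
        self.char_embed = nn.Embedding(vocab_size, embed_dim)
        # Convolutions over the character axis extract local n-gram features.
        self.conv = nn.Sequential(
            nn.Conv1d(embed_dim, 256, kernel_size=4), nn.ReLU(), nn.MaxPool1d(3),
            nn.Conv1d(256, 256, kernel_size=4), nn.ReLU(), nn.MaxPool1d(3),
        )
        # A recurrent layer aggregates the conv features into one caption vector.
        self.rnn = nn.GRU(256, out_dim, batch_first=True)

    def forward(self, char_ids):                        # char_ids: (batch, seq_len)
        x = self.char_embed(char_ids).transpose(1, 2)   # (batch, embed, seq)
        x = self.conv(x).transpose(1, 2)                # (batch, seq', 256)
        _, h = self.rnn(x)                              # h: (1, batch, out_dim)
        return h.squeeze(0)                             # caption embedding

emb = CharCnnRnnEncoder()(torch.randint(0, 70, (2, 201)))  # e.g., 201-char captions
print(emb.shape)  # torch.Size([2, 256])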