2022
DOI: 10.3390/app12062848

An Enhanced Neural Word Embedding Model for Transfer Learning

Abstract: Due to the expansion of data generation, an increasing number of natural language processing (NLP) tasks need to be solved. For this, word representation plays a vital role. Computation-based word embedding is very useful in various high-resource languages. However, until now, low-resource languages such as Bangla have had very limited resources available in terms of models, toolkits, and datasets. Considering this fact, in this paper, an enhanced BanglaFastText word embedding model is developed using Python and two large…


Cited by 16 publications (7 citation statements) | References 36 publications
“…The proposed system utilized DCGAN + Bangla Fasttext to generate face images from the corresponding Bangla descriptions. First, the Bangla text description is fed to Bangla Fasttext [2], which returns a [300 × 1] text embedding. A random noise vector with a shape of [100 × 1], along with the resulting text embedding, is passed to the generator.…”
Section: Methods
confidence: 99%
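To make the data flow in that statement concrete, here is a minimal Python sketch of concatenating a [300 × 1] sentence embedding with a [100 × 1] noise vector to form the generator input. The model file name is a placeholder, and loading via the fasttext package is an assumption; the cited system may use a different toolkit.

```python
# Minimal sketch of the generator-input construction described above.
import numpy as np
import fasttext  # Facebook's fastText Python bindings

# Hypothetical path to a BanglaFastText model in .bin format.
model = fasttext.load_model("bangla_fasttext.bin")

description = "একটি বাংলা মুখের বর্ণনা"  # a Bangla face description
text_embedding = model.get_sentence_vector(description)  # shape: (300,)

# DCGAN-style random noise vector, as in the cited description.
noise = np.random.normal(size=100).astype(np.float32)    # shape: (100,)

# The generator receives the noise concatenated with the text embedding.
generator_input = np.concatenate([noise, text_embedding])  # shape: (400,)
```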
“…A work [8] from 2021 suggested that using a pre-trained text encoder enhances the performance of text-to-image synthesis. The proposed system employs Bangla FastText [2] as a pre-trained text encoder. Bangla FastText is trained on 20 million Bangla text samples.…”
Section: B. Text Encoder
confidence: 99%
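As a rough illustration of the frozen pre-trained encoder pattern described above, the following sketch wraps a fastText model as a batch sentence encoder for a text-to-image pipeline. The class name, model path, and use of PyTorch tensors are assumptions, not the cited system's actual code.

```python
# Hypothetical sketch: a frozen fastText-based text encoder for a GAN pipeline.
import numpy as np
import fasttext
import torch

class FastTextEncoder:
    """Wraps a fastText .bin model as a fixed (non-trainable) sentence encoder."""

    def __init__(self, model_path: str):
        self.model = fasttext.load_model(model_path)

    def encode(self, descriptions: list[str]) -> torch.Tensor:
        # One 300-d sentence vector per description; the encoder has no
        # trainable parameters and is never updated during GAN training.
        vectors = [self.model.get_sentence_vector(d) for d in descriptions]
        return torch.from_numpy(np.stack(vectors))

encoder = FastTextEncoder("bangla_fasttext.bin")  # hypothetical path
embeddings = encoder.encode(["..."])              # shape: (batch, 300)
```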
“…The importance of word embedding cannot be overstated in deep-learning-based approaches, because the input data must be in numerical form. Many word embeddings have been proposed to date, and we evaluated the proposed frameworks using the two most prevalent word embeddings, treated as separate dimensions [30].…”
Section: Word Embedding
confidence: 99%
“…The Word2Vec method constructs a high-dimensional vector for each word using either the continuous bag-of-words (CBOW) or the Skip-gram model, both shallow two-layer neural networks, which we take as the next step in our research. The Skip-gram model [30] traverses the corpus, considering each word w and its context. The objective is to maximize the likelihood:…”
Section: Word2Vec
confidence: 99%
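The quoted sentence ends where its equation was cut off in extraction. The standard Skip-gram objective it refers to — the average log-probability of context words within a window of size c around each word in a corpus of T words (Mikolov et al.) — is:

$$\frac{1}{T}\sum_{t=1}^{T}\;\sum_{\substack{-c \le j \le c,\; j \ne 0}} \log p\left(w_{t+j} \mid w_t\right)$$

For a runnable illustration, here is a minimal Skip-gram training sketch using gensim; the toy corpus and parameters are assumptions, not the cited paper's setup.

```python
# Minimal Skip-gram (sg=1) Word2Vec training sketch with gensim 4.x.
from gensim.models import Word2Vec

# Toy tokenized corpus (placeholder; real work would use a large Bangla corpus).
sentences = [["word", "embeddings", "map", "words", "to", "vectors"],
             ["skip", "gram", "predicts", "context", "words"]]

model = Word2Vec(sentences, vector_size=300, window=5, sg=1, min_count=1)
vector = model.wv["words"]  # a 300-dimensional word vector
```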