2018 International Conference on Bangla Speech and Language Processing (ICBSLP)
DOI: 10.1109/icbslp.2018.8554443

Exploring Word Embedding for Bangla Sentiment Analysis

Cited by 20 publications (10 citation statements: 0 supporting, 10 mentioning, 0 contrasting) | References 4 publications
“…As a result of the research, the authors concluded that "term presence" is more important than "term frequency" in SA; adjectives, adverbs, and verbs can be considered as features, and irrelevant words can be removed from the corpus to reduce the vocabulary size (Mejova & Srinivasan, 2011). In addition, the authors indicated that most researchers perform SA on English text, but some researchers have addressed SA problems in non-English languages with comparable results (Che et al., 2015; Sharma et al., 2015; Sumit et al., 2018, as cited in Sharma et al., 2020).…”
Section: Results | Citation type: mentioning | Confidence: 99%
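The excerpt above contrasts "term presence" with "term frequency" as sentiment features and suggests pruning irrelevant words to shrink the vocabulary. The snippet below is not from the cited papers; it is a minimal sketch using scikit-learn's CountVectorizer, with toy documents, the built-in English stop-word list, and the binary flag all chosen here purely for illustration.

```python
# Minimal sketch (not from the cited work): term-frequency vs. term-presence
# features for sentiment analysis, with stop-word removal to reduce vocab size.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the movie was wonderful and the acting was wonderful too",
    "the movie was boring and the plot was predictable",
]

# Term frequency: raw counts of each remaining token.
tf_vec = CountVectorizer(stop_words="english")
tf = tf_vec.fit_transform(docs)

# Term presence: same vocabulary, but each feature is 0/1 (binary=True).
tp_vec = CountVectorizer(stop_words="english", binary=True)
tp = tp_vec.fit_transform(docs)

print(tf_vec.get_feature_names_out())
print(tf.toarray())  # "wonderful" is counted twice in the first document
print(tp.toarray())  # "wonderful" is only marked as present (1)
```

With binary features a classifier sees only whether a sentiment-bearing word occurs, which is the "term presence" notion the excerpt reports as more important than raw counts.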
“…The Facebook AI Research (FAIR) team developed a library and toolkit for effective text classification and word representation. It was created to handle text data effectively and efficiently, especially in situations where there is a lot of text and there aren't many computational resources available [24]. Word embeddings, which are vector representations of words in a continuous space, can be created using FastText [16].…”
Section: Data Collection and Processing | Citation type: mentioning | Confidence: 99%
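As a rough illustration of the word-embedding use of FastText mentioned above, the sketch below uses gensim's FastText implementation rather than Facebook's own binding; the toy sentences and hyperparameters are placeholder assumptions, not settings from the cited work.

```python
# Hedged sketch: training FastText word embeddings (vectors in a continuous
# space) on a tiny toy corpus using gensim's implementation.
from gensim.models import FastText

sentences = [
    ["the", "food", "was", "great"],
    ["the", "service", "was", "terrible"],
    ["great", "food", "and", "friendly", "staff"],
]

# Subword-based skip-gram model; min_count=1 only because the corpus is tiny.
model = FastText(sentences=sentences, vector_size=50, window=3,
                 min_count=1, sg=1, epochs=20)

vec = model.wv["great"]     # embedding of an in-vocabulary word
oov = model.wv["greatest"]  # still works: built from character n-grams
print(vec.shape, oov.shape)
```

The character n-gram construction is what lets FastText stay lightweight while still producing vectors for rare or unseen words, which fits the excerpt's point about handling large text volumes with limited compute.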
“…A text classification model that links textual descriptions to backdrop image names is trained using FastText. The resulting model can anticipate background names from fresh descriptions, making it easier for comic strips and graphic storytelling to seamlessly combine text and graphics [24].…”
Section: Model Training | Citation type: mentioning | Confidence: 99%

Generate Comic Strips Using AI
Pramoda P Gunasekara, Pawani Muthusala Perera, Chathum D Adhihetty et al., 2024
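The "Model Training" excerpt from this citing paper describes a FastText classifier that maps textual descriptions to background-image names. Below is a hedged sketch using Facebook's fasttext Python package; the labels, example texts, file name, and hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Hedged sketch: supervised FastText classification from a free-text
# description to a background-image label. Requires the `fasttext` package.
import fasttext

# FastText expects one training example per line, prefixed with __label__<name>.
train_lines = [
    "__label__forest tall dark trees under a full moon",
    "__label__beach golden sand and gentle waves at sunset",
    "__label__city neon signs above crowded rainy streets",
]
with open("backgrounds.train", "w", encoding="utf-8") as f:
    f.write("\n".join(train_lines) + "\n")

model = fasttext.train_supervised(input="backgrounds.train",
                                  epoch=25, lr=1.0, wordNgrams=2)

# Predict a background name for a fresh description.
labels, probs = model.predict("a quiet shoreline at dusk")
print(labels, probs)
```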
“…However, sentiment analysis of Bengali digital data does not yield good results due to the unavailability of a good sentiment analyzer. Sumit et al. [17] used the word2vec model to obtain vector representations of Bengali text, where skip-gram is used to find the closest words and thus understand the sentiment of a Bengali sentence. The skip-gram word2vec model provides 83.79% accuracy in sentiment analysis of Bengali data [17].…”
Section: Related Work | Citation type: mentioning | Confidence: 99%
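As a rough, hypothetical sketch of the skip-gram word2vec setup this excerpt attributes to Sumit et al. [17], the snippet below trains gensim's Word2Vec with sg=1 and queries nearby words. The toy Bengali sentences and hyperparameters are illustrative assumptions, and the 83.79% accuracy figure comes from the cited paper, not from running this code.

```python
# Hedged sketch: skip-gram word2vec on a toy tokenized Bengali corpus,
# then querying the nearest neighbours of a sentiment word.
from gensim.models import Word2Vec

sentences = [
    ["খাবারটা", "খুব", "ভালো", "ছিল"],    # "the food was very good"
    ["সার্ভিসটা", "খুব", "খারাপ", "ছিল"],  # "the service was very bad"
    ["সিনেমাটা", "ভালো", "লেগেছে"],        # "liked the movie"
]

# sg=1 selects the skip-gram architecture; min_count=1 because the corpus is tiny.
model = Word2Vec(sentences, vector_size=100, window=5,
                 min_count=1, sg=1, epochs=50)

# Words whose embeddings are closest to "ভালো" ("good"); on a large corpus
# these neighbours tend to share sentiment polarity.
print(model.wv.most_similar("ভালো", topn=5))
```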