2023
DOI: 10.1016/j.engappai.2023.106586
Leveraging the meta-embedding for text classification in a resource-constrained language

Cited by 16 publications (2 citation statements)
References 22 publications
“…Text classification, especially sentiment analysis, has become one of the most important NLP applications. It enables the processing of millions of entries to deduce a sense of mass satisfaction or dissatisfaction from user-produced text, which helps companies, organizations, and scientists form perspectives and make decisions based on the big picture, and to develop applications that serve real-life solutions, as discussed in [16,20,21].…”
Section: Introduction
Mentioning confidence: 99%
“…That approach might sound intuitive, but at the same time it creates a new set of greater challenges. The first is the need to provide and maintain a specific model and lexicons for every single language and its dialects, which is already a huge challenge in the NLP domain given the varying complexity of many languages and dialects in terms of grammar and structure, as these studies show [11,21–23,25]. Then comes the challenge of the doubled resources needed, due to having at least two ML models running instead of one.…”
Section: Introduction
Mentioning confidence: 99%