2021 Sixth International Conference on Image Information Processing (ICIIP)
DOI: 10.1109/iciip53038.2021.9702548

A Code-Diverse Kannada-English Dataset For NLP Based Sentiment Analysis Applications

Cited by 7 publications (3 citation statements)
References 38 publications

“…This is due to the influence of English, which is widely recognized as the language of education and higher learning. The practice of code-mixing is not limited to those with less education or training but is also observed among individuals with higher levels of education and training [9]. The use of Roman script for writing Tamil-English combinations, or "Tanglish," is also common on the internet, reflecting the widespread use of English online [10].…”
Section: A
mentioning
confidence: 99%
“…During this stage, unnecessary characters, punctuation, and numbers are eliminated, and specific challenges such as spelling variations and abbreviations unique to the regional language are addressed. Additionally, the text is divided into individual words and sub-words for further analysis and processing [2]. In certain regional languages, words are not always separated by spaces or punctuation marks.…”
Section: Introduction
mentioning
confidence: 99%
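As a rough illustration of the preprocessing described in that excerpt, the Python sketch below cleans a code-mixed comment and falls back to character n-gram sub-words. The abbreviation table, sample sentence, and function names are invented for the example; a real pipeline would use a learned sub-word tokenizer (e.g. BPE).

import re

# Minimal sketch of the cleaning/tokenisation step described above,
# assuming romanised Kannada-English input.
def clean_code_mixed_text(text: str) -> str:
    abbreviations = {"u": "you", "pls": "please"}   # hypothetical lookup
    text = re.sub(r"[^\w\s]", " ", text)            # drop punctuation
    text = re.sub(r"\d+", " ", text)                # drop numbers
    tokens = [abbreviations.get(t.lower(), t.lower()) for t in text.split()]
    return " ".join(tokens)

def char_ngrams(word: str, n: int = 3) -> list:
    # Naive character n-gram sub-words, standing in for a learned
    # sub-word tokenizer when words are not reliably space-separated.
    return [word[i:i + n] for i in range(max(1, len(word) - n + 1))]

sample = "Movie thumba chennagide!! 10/10, u must watch"
print(clean_code_mixed_text(sample))   # movie thumba chennagide you must watch
print(char_ngrams("chennagide"))       # ['che', 'hen', 'enn', 'nna', ...]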
“…In other words, it is the percentage of times that the model predicts a positive class label when the actual class label is positive. Sensitivity = TP / (TP + FN) (2)…”
mentioning
confidence: 99%
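For reference, the quoted sensitivity is ordinary recall; a minimal Python sketch with made-up confusion-matrix counts:

# Sensitivity (recall): share of actual positives the model labels positive.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn) if (tp + fn) else 0.0

print(sensitivity(tp=45, fn=5))   # 0.9 -> 90% of actual positives recovered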