Proceedings of the Fourteenth Workshop on Semantic Evaluation 2020
DOI: 10.18653/v1/2020.semeval-1.147

Gundapusunil at SemEval-2020 Task 8: Multimodal Memotion Analysis

Abstract: Recent technological advances in the Internet and social media have given rise to faster and more efficient communication platforms. These platforms span visual, textual, and speech media and have produced a unique social phenomenon: Internet memes. Internet memes typically take the form of images paired with witty, catchy, or sarcastic text. In this paper, we present a multi-modal sentiment analysis system using deep neural networks combining Computer Vision and Natural Language …
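The vision-plus-language pipeline the abstract describes can be pictured with a minimal sketch like the one below. This is not the authors' exact architecture; the module sizes, the LSTM text branch, and the three-class output are illustrative assumptions about how a CNN image encoder and a text encoder might be fused by concatenation before a sentiment classifier.

```python
# Minimal sketch of a multimodal (image + caption) sentiment classifier.
# All layer sizes and the 3-class head are illustrative assumptions.
import torch
import torch.nn as nn


class MultimodalSentimentNet(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128,
                 text_hidden: int = 128, num_classes: int = 3):
        super().__init__()
        # Image branch: a small CNN producing a fixed-size feature vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Text branch: embedding + LSTM over the meme's caption tokens.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, text_hidden, batch_first=True)
        # Fusion: concatenate image and text features, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(64 + text_hidden, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, image: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        img_feat = self.cnn(image).flatten(1)          # (B, 64)
        _, (h_n, _) = self.lstm(self.embed(tokens))    # h_n: (1, B, text_hidden)
        txt_feat = h_n.squeeze(0)                      # (B, text_hidden)
        fused = torch.cat([img_feat, txt_feat], dim=1)
        return self.classifier(fused)                  # sentiment logits
```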

Cited by 5 publications (3 citation statements)
References 13 publications
“…Some quality work was done on multimodal content before this propaganda problem, such as the spread of false information [Dupuis and Williams, 2019], hateful meme identification [Kiela et al, 2020, Lippe et al, 2020, Das et al, 2020, Gundapu and Mamidi, 2020], and antisemitism [Chandra et al, 2021].…”
Section: Related Work
confidence: 99%
“…Recently, SemEval-2020 Task 8 on Memotion Analysis (Sharma et al, 2020a) introduced a dataset of 10k memes, annotated with sentiment, emotions, and emotion intensity. Most participating systems in this challenge used fusion of visual and textual features computed using models such as Inception, ResNet, CNN, VGG-16 and DenseNet for image representation (Morishita et al, 2020; Sharma et al, 2020b; Yuan et al, 2020), and BERT, XLNet, LSTM, GRU and DistilBERT for text representation (Liu et al, 2020; Gundapu and Mamidi, 2020). Due to class imbalance in the dataset, approaches such as GMM and Training Signal Annealing (TSA) were also found useful.…”
Section: Related Work
confidence: 99%
“…Due to class imbalance in the dataset, approaches such as GMM and Training Signal Annealing (TSA) were also found useful. Morishita et al (2020); Bonheme and Grzes (2020); Guo et al (2020); Sharma et al (2020b) proposed ensemble learning, whereas Gundapu and Mamidi (2020); De la Peña Sarracén et al (2020) and several others used multimodal approaches. A few others leveraged transfer learning using pre-trained models such as BERT (Devlin et al, 2019), VGG-16 (Simonyan and Zisserman, 2015), and ResNet (He et al, 2016).…”
Section: Related Work
confidence: 99%
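Training Signal Annealing, cited above as one remedy for class imbalance, can be sketched roughly as follows. This follows the formulation in Xie et al.'s UDA work (a rising confidence threshold masks already well-predicted examples out of the cross-entropy loss); the linear schedule and the function signature are illustrative assumptions, not the participants' actual code.

```python
# Rough sketch of Training Signal Annealing (TSA): examples the model already
# predicts correctly with probability above a rising threshold are masked out
# of the loss, so easy (often majority-class) examples stop dominating training.
import torch
import torch.nn.functional as F


def tsa_cross_entropy(logits: torch.Tensor, labels: torch.Tensor,
                      step: int, total_steps: int, num_classes: int) -> torch.Tensor:
    # Linear schedule: threshold grows from 1/K to 1 over training (assumed).
    alpha = step / max(total_steps, 1)
    threshold = alpha * (1.0 - 1.0 / num_classes) + 1.0 / num_classes

    probs = F.softmax(logits, dim=-1)
    correct_prob = probs.gather(1, labels.unsqueeze(1)).squeeze(1)

    # Keep only examples the model is not yet confident about.
    keep = (correct_prob < threshold).float()
    loss = F.cross_entropy(logits, labels, reduction="none")
    return (loss * keep).sum() / keep.sum().clamp(min=1.0)
```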