2022
DOI: 10.48550/arxiv.2211.03524
Preprint

Adaptive Contrastive Learning on Multimodal Transformer for Review Helpfulness Predictions

Abstract: Modern Review Helpfulness Prediction systems depend upon multiple modalities, typically texts and images. Unfortunately, contemporary approaches pay scarce attention to polishing representations of cross-modal relations and tend to suffer from inferior optimization, which might harm the model's predictions in numerous cases. To overcome the aforementioned issues, we propose Multi-modal Contrastive Learning for the Multimodal Review Helpfulness Prediction (MRHP) problem, concentrating on mutual informa…

Cited by 2 publications (2 citation statements)
References 28 publications

“…Advanced machine learning algorithms such as the Contrastive Learning framework have been applied to natural language processing and computer vision [31,33,34,36,37,45]. Optimal Transport has also been used extensively in many natural language processing tasks, as well as in the integration of vision and language, for example, Cross-Lingual Abstractive Summarization [32], machine translation [6], vision-and-language pretraining [7,17], Visual Question Answering [5], etc.…”
Section: Related Work
confidence: 99%
“…In practice, mutual information maximization is approximated with a tractable lower bound, such as InfoNCE (Van den Oord, Li, and Vinyals 2018) and InfoMax (Hjelm et al. 2019). These bounds are also known as contrastive learning (Arora et al. 2019; Wang and Isola 2020; Nguyen et al. 2022), which learns the representation similarity of positive and negative samples. Some recent studies (Xu et al. 2022) apply mutual information to monolingual topic modeling and focus on the representations of documents.…”
Section: Related Work
confidence: 99%
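
As context for the statement above, the InfoNCE objective it cites is usually written as a tractable lower bound on mutual information. The LaTeX below is a generic sketch of the standard formulation from Van den Oord, Li, and Vinyals (2018); the symbols (a batch of N positive pairs and a learned critic f) are illustrative notation, not taken from the cited papers.

% InfoNCE: a tractable lower bound on the mutual information I(x; y),
% estimated over a batch of N positive pairs (x_i, y_i);
% f(x, y) is a learned critic / similarity score (e.g., a scaled dot product).
\mathcal{L}_{\mathrm{InfoNCE}}
  = -\,\mathbb{E}\left[
      \log \frac{\exp\big(f(x_i, y_i)\big)}
                {\sum_{j=1}^{N} \exp\big(f(x_i, y_j)\big)}
    \right],
\qquad
I(x; y) \;\ge\; \log N - \mathcal{L}_{\mathrm{InfoNCE}}.

Minimizing this loss pushes each positive pair's score above the N−1 negatives in the denominator, which is why the bound (and, empirically, the learned representations) tends to improve with more negatives per batch.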