2021
DOI: 10.48550/arxiv.2108.09190
Preprint

Supervised Contrastive Learning for Interpretable Long-Form Document Matching

Abstract: Recent advancements in deep learning techniques have transformed the area of semantic text matching. However, most of the state-of-the-art models are designed to operate with short documents such as tweets, user reviews, and comments, and have fundamental limitations when applied to long-form documents such as scientific papers, legal documents, and patents. When handling such long documents, there are three primary challenges: (i) the presence of different contexts for the same word throughout the document, …
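As a rough illustration of the technique named in the title, the sketch below implements a generic supervised contrastive loss in the style of Khosla et al. (2020) over a batch of document embeddings. It is not the paper's exact objective: the function name `supervised_contrastive_loss`, the temperature of 0.07, and the assumption that documents sharing a match label form the positive set are all illustrative choices, not details taken from the paper.

```python
# Minimal sketch of a supervised contrastive loss (Khosla et al. 2020 style),
# assuming pooled document embeddings and integer match labels per document.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """embeddings: (N, D) document representations; labels: (N,) match-class ids."""
    # Cosine similarities between all pairs, scaled by the temperature.
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature

    # Exclude self-similarities so an anchor never contrasts with itself.
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))

    # Positives are the other documents in the batch that share the anchor's label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Row-wise log-softmax gives log p(j | anchor i) over all non-self pairs.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability over each anchor's positives, then over anchors
    # that have at least one positive in the batch.
    sum_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    return -(sum_log_prob_pos[valid] / pos_counts[valid]).mean()


if __name__ == "__main__":
    # Toy usage: four document embeddings, two groups of matching documents.
    embs = torch.randn(4, 8)
    labels = torch.tensor([0, 0, 1, 1])
    print(supervised_contrastive_loss(embs, labels))
```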

Cited by 1 publication (1 citation statement) | References 20 publications
“…[40] use causal masking to remove salient regions of the input image and generate positive and negative contrast images to improve model interpretability. [16,17] propose contrastive learning to improve interpretability for NLP models. [11] introduced the idea of imposing a perceptual consistency prior on the attention heatmaps while training the network for multi-label image classification.…”
Section: Related Work
confidence: 99%