2021
DOI: 10.1016/j.patcog.2020.107811

Training deep retrieval models with noisy datasets: Bag exponential loss


Cited by 5 publications (4 citation statements)
References 121 publications (197 reference statements)

“…In [10], vision transformers are proposed to generate image descriptors for retrieval, training the models with a metric learning objective that combines a contrastive loss with a differential entropy regularizer. In [26], the problem of retrieval with noisy datasets is addressed by proposing a novel noise-robust loss based on Multiple Instance Learning (MIL). The proposed method allows noisy, automatically generated training sets, which are easy to create, to be used to adapt CNNs for image retrieval on new objects.…”
Section: End-to-end Approaches
confidence: 99%
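
To make the MIL idea in that statement concrete, here is a minimal, hypothetical sketch of a bag-level exponential loss in PyTorch. The exact formulation of the bag exponential loss in the cited paper may differ; scoring each bag by its closest instance and the `margin` parameter are illustrative assumptions.

```python
import torch

def bag_exponential_loss(dist, bag_labels, margin=0.5):
    # dist: (B, N) distances between a query embedding and the N
    #       instances in each of B bags.
    # bag_labels: (B,) with 1 for positive bags, 0 for negative bags.
    # MIL assumption: a positive bag contains at least one matching
    # instance, so each bag is scored by its closest instance.
    closest = dist.min(dim=1).values
    sign = torch.where(bag_labels == 1,
                       torch.ones_like(closest),
                       -torch.ones_like(closest))
    # exp(+(d - margin)) pulls positive bags inside the margin;
    # exp(-(d - margin)) pushes negative bags beyond it.
    return torch.exp(sign * (closest - margin)).mean()
```
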
“…In practice, different loss functions drive down the gradient of the prediction error at different rates, some quickly and some slowly, so choosing an appropriate loss function is critical. Commonly used loss functions for training classification neural networks include the 0-1 loss, logistic loss, hinge loss [29], exponential loss [30], and cross-entropy loss [31]. In this paper, the loss function we use is the cross-entropy loss function.…”
Section: Loss Function
confidence: 99%
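
As a minimal sketch of the cross-entropy loss that statement settles on, using PyTorch (the toy logits and target class indices below are illustrative):

```python
import torch
import torch.nn.functional as F

# Toy batch: 4 samples, 3 classes; logits are raw network outputs.
logits = torch.randn(4, 3, requires_grad=True)
targets = torch.tensor([0, 2, 1, 2])  # ground-truth class indices

# cross_entropy fuses log-softmax and negative log-likelihood.
loss = F.cross_entropy(logits, targets)
loss.backward()  # gradients flow back to the logits
print(loss.item())
```
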
“…This differs from traditional triplet mining methods, which usually focus on the negative samples (i.e., heterogeneous pairs). It should be noted that the idea of emphasizing the importance of similar homogeneous pairs was introduced by BagLoss [33] to improve the performance of image retrieval models. The technical difference between BagLoss and GraVIS lies in how negative samples are generated and used.…”
Section: Related Work
confidence: 99%
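
To make the homogeneous/heterogeneous pair terminology concrete, here is a standard triplet loss sketch in PyTorch. It shows only the two roles a pair can play, not how BagLoss or GraVIS actually construct or weight their pairs; the margin value is illustrative.

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # (anchor, positive): homogeneous pair, pulled together.
    # (anchor, negative): heterogeneous pair, pushed at least
    # `margin` further away than the positive.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```
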
“…For contrastive approaches, we include SimCLR-Derm [3], MoCo v2 [43], BYOL [21], and C2L [2] for comparison. As for traditional self-supervised learning methods, we compare GraVIS against Model Genesis (MG) [1], the N-pairs triplet loss with the triplet mining strategy (NPTL) [44], and BagLoss [33]; the last two methods are based on metric learning.…”
Section: Baselines
confidence: 99%