2022
DOI: 10.3390/s22062188
AutoRet: A Self-Supervised Spatial Recurrent Network for Content-Based Image Retrieval

Abstract: Image retrieval techniques are becoming popular due to the vast availability of multimedia data. Present image retrieval systems perform excellently on labeled data. However, data labeling is often costly and sometimes impossible. Therefore, self-supervised and unsupervised learning strategies are gaining attention. Most self/unsupervised strategies are sensitive to the number of classes and cannot incorporate labeled data when it is available. In this paper, we introduce AutoRet, a deep convo…
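The abstract (and the citation statements below) describe a DCNN that extracts embeddings from image patches for retrieval. The following is a minimal sketch of that general idea, assuming a generic ResNet-18 backbone, 64×64 patches, and mean pooling; none of these choices are taken from the AutoRet paper itself.

```python
# Illustrative sketch only: pooling per-patch CNN embeddings into one
# image descriptor. Backbone, patch size, and pooling are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class PatchEmbedder(nn.Module):
    def __init__(self, embed_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)                         # any DCNN backbone
        self.features = nn.Sequential(*list(backbone.children())[:-1])   # drop the classifier head
        self.proj = nn.Linear(512, embed_dim)

    def forward(self, patches):
        # patches: (num_patches, 3, H, W) crops taken from one image
        feats = self.features(patches).flatten(1)                        # (num_patches, 512)
        emb = self.proj(feats)                                           # per-patch embeddings
        return nn.functional.normalize(emb.mean(dim=0), dim=0)           # pooled image descriptor

patches = torch.randn(9, 3, 64, 64)    # e.g. a 3x3 grid of 64x64 crops
descriptor = PatchEmbedder()(patches)  # single vector used for retrieval
```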

Cited by 23 publications (8 citation statements)
References: 58 publications
“…In 2022, Monowar, M.M., et al. [20] presented AutoRet, a DCNN image retrieval system. A DCNN is used as the standard technique for extracting embeddings from various picture patches.…”
Section: Literature Review
confidence: 99%
“…From Tables 1 and 2, we figured out that ResNet50 performs better than the other ResNet models for the CBTIR system with HD as an index for similarity measure calculation. https://www.indjst.org/
(12): ED, 85
Monowar et al (15): ED, 83
Abdullah (16): HD, 98
Proposed ResNet50: ED, 90
Proposed ResNet50 …”
Section: Precision
confidence: 99%
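The comparison above ranks retrieval methods by precision under different similarity indices. Below is a minimal sketch of distance-based retrieval and precision at k, reading ED as Euclidean distance; the descriptor size, database, and labels are placeholder assumptions, not data from the cited comparison.

```python
# Sketch: rank database descriptors by Euclidean distance (ED) to a query
# and compute precision over the top-k retrieved items.
import numpy as np

def retrieve(query, database, k=10):
    dists = np.linalg.norm(database - query, axis=1)   # Euclidean distance to every stored descriptor
    return np.argsort(dists)[:k]                       # indices of the k nearest images

def precision_at_k(retrieved_labels, query_label):
    return np.mean(retrieved_labels == query_label)    # fraction of retrieved items that are relevant

db = np.random.rand(1000, 512)                         # stored descriptors (placeholder)
labels = np.random.randint(0, 20, size=1000)           # hypothetical class labels
top = retrieve(db[0], db, k=10)
print(precision_at_k(labels[top], labels[0]))
```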
“…Monowar et al. [21] introduced a self-supervised image retrieval system using neural networks (NN), addressing the challenges of expensive or unfeasible data labeling. Trained on pairwise constraints, the model can function in self-supervised environments and with partially labeled datasets.…”
Section: Related Work
confidence: 99%
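The statement above notes that the model is trained on pairwise constraints. The sketch below shows one standard way such pairwise training can look, using a contrastive loss; the margin, pair construction, and embedding size are illustrative assumptions rather than details from the cited work.

```python
# Sketch of pairwise-constraint training with a contrastive loss: pull
# embeddings of positive pairs together, push negative pairs apart.
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(emb_a, emb_b, same, margin=1.0):
    # emb_a, emb_b: (batch, dim) embeddings; same: (batch,) 1.0 for positive pairs, 0.0 otherwise
    dist = F.pairwise_distance(emb_a, emb_b)
    pos = same * dist.pow(2)                          # positive pairs: minimize distance
    neg = (1 - same) * F.relu(margin - dist).pow(2)   # negative pairs: enforce a margin
    return (pos + neg).mean()

emb_a = torch.randn(8, 512, requires_grad=True)       # placeholder embeddings
emb_b = torch.randn(8, 512, requires_grad=True)
same = torch.randint(0, 2, (8,)).float()              # hypothetical pair labels
loss = pairwise_contrastive_loss(emb_a, emb_b, same)
loss.backward()                                       # gradients flow back to the embeddings
```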