2021
DOI: 10.48550/arxiv.2110.14455
Preprint

CBIR using Pre-Trained Neural Networks

Cited by 2 publications (2 citation statements)
References 0 publications
“…DALG, presented by Y. Song et al., uses a cross-attention module to hierarchically (instead of heuristically) fuse the features [28]. Furthermore, Alappat et al. present a model that uses an Inception V3 backbone network and extracts the MS-RMAC feature matrix to retrieve images [29]. The global-local attention module (GLAM), proposed by C. Song, combines local and global attention as well as spatial and channel attention, and then computes a new feature tensor [30].…”
Section: Related Work (mentioning, confidence: 99%)
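The Alappat et al. model [29] builds a global image descriptor by applying regional maximum-activation (R-MAC) pooling to the convolutional feature maps of a pre-trained backbone. Below is a minimal sketch of that pooling step, assuming a torchvision Inception V3 backbone and a simplified non-overlapping grid in place of the overlapping region sampler of the original R-MAC; the layer choice (Mixed_7c) and the pooling details are illustrative assumptions, not the cited paper's exact MS-RMAC configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import inception_v3, Inception_V3_Weights

def rmac(feat: torch.Tensor, levels: int = 3) -> torch.Tensor:
    """R-MAC pooling: feat (C, H, W) -> L2-normalized descriptor (C,)."""
    C, H, W = feat.shape
    desc = torch.zeros(C)
    for l in range(1, levels + 1):
        # Split the map into an l x l grid and max-pool each cell
        # (simplified region sampling; the original R-MAC uses
        # overlapping square regions).
        for i in range(l):
            for j in range(l):
                hs, he = i * H // l, (i + 1) * H // l
                ws, we = j * W // l, (j + 1) * W // l
                region = feat[:, hs:he, ws:we].amax(dim=(1, 2))
                desc += F.normalize(region, dim=0)  # per-region L2 norm, then sum
    return F.normalize(desc, dim=0)

# Hook an intermediate layer of a pre-trained Inception V3 to capture its feature map.
model = inception_v3(weights=Inception_V3_Weights.DEFAULT).eval()
feats = {}
model.Mixed_7c.register_forward_hook(lambda m, i, o: feats.update(out=o))

img = torch.randn(1, 3, 299, 299)       # stand-in for a preprocessed query image
with torch.no_grad():
    model(img)
descriptor = rmac(feats["out"][0])      # 2048-dim global image descriptor
print(descriptor.shape)                 # torch.Size([2048])
```

At retrieval time, such descriptors are typically compared by cosine similarity, which the final L2 normalization reduces to a dot product.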
“…According to the authors in [32], their MS-RMAC method gives better results than the RMAC proposed in [27]. The authors in [33] propose another variant of MS-RMAC with the Inception V3 architecture and post-processing of the descriptor vector before calculating the ranking loss.…”
Section: MS-RMAC (mentioning, confidence: 99%)
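The excerpt does not spell out the post-processing used in [33], so the sketch below uses PCA-whitening followed by L2 re-normalization, a common choice for R-MAC-style descriptors, before a standard triplet ranking loss; pca_whiten, mean, and proj are hypothetical stand-ins for parameters that would be fitted offline.

```python
import torch
import torch.nn.functional as F

def pca_whiten(x: torch.Tensor, mean: torch.Tensor, proj: torch.Tensor) -> torch.Tensor:
    """Project descriptors with a pre-fitted whitening matrix, then L2-normalize."""
    return F.normalize((x - mean) @ proj, dim=-1)

# Hypothetical pre-fitted whitening parameters; in practice these are
# learned offline (e.g. by PCA) on a held-out set of descriptors.
dim_in, dim_out = 2048, 512
mean = torch.zeros(dim_in)
proj = torch.randn(dim_in, dim_out)

# Post-process anchor/positive/negative descriptors (random stand-ins here
# for MS-RMAC vectors extracted from the backbone).
anchor   = pca_whiten(torch.randn(8, dim_in), mean, proj)
positive = pca_whiten(torch.randn(8, dim_in), mean, proj)
negative = pca_whiten(torch.randn(8, dim_in), mean, proj)

# Triplet ranking loss on the post-processed descriptors.
loss = F.triplet_margin_loss(anchor, positive, negative, margin=0.1)
print(loss.item())
```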