2020
DOI: 10.1109/lsp.2020.3039755

Correlation Filtering-Based Hashing for Fine-Grained Image Retrieval

Cited by 36 publications (21 citation statements)
References 28 publications

Citation statements:
“…The loss function most commonly used in crowd counting is the MSE loss [41], which mainly evaluates the pixel-wise deviation between the estimated and ground-truth data. Subsequently, considering the local similarity between different regions caused by scale variation, several studies [42, 43] began to design similarity-preserving metrics [44, 45] and similarity losses [28, 39] to reduce the difference between the ground truth and the estimated density map. DSSINet [29] designs a Dilated Multiscale Structural Similarity (DMS-SSIM) loss to learn local correlations within regions of different sizes.…”
Section: Related Work
confidence: 99%
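As context for the excerpt above, the similarity losses it cites compare local statistics of the estimated and ground-truth density maps rather than only their pixel-wise error, as MSE does. Below is a minimal single-scale SSIM-style loss sketched in PyTorch; the function names, window size, and stability constants are illustrative assumptions, and it does not reproduce DSSINet's dilated multi-scale DMS-SSIM formulation.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=11, sigma=1.5):
    # 1-D Gaussian, turned into a 2-D window via an outer product, normalised to sum to 1.
    coords = torch.arange(size).float() - size // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)                    # shape (1, size)
    return (g.t() @ g).unsqueeze(0).unsqueeze(0)      # shape (1, 1, size, size)

def ssim_loss(est, gt, window=None, c1=1e-4, c2=9e-4):
    """Single-scale local-SSIM loss between density maps (illustrative sketch).

    est, gt: tensors of shape (N, 1, H, W). Returns 1 - mean local SSIM, so
    minimising the loss pushes the local statistics of the two maps together.
    """
    if window is None:
        window = gaussian_kernel().to(est.device)
    pad = window.shape[-1] // 2
    # Local means, variances, and covariance computed with the Gaussian window.
    mu_x = F.conv2d(est, window, padding=pad)
    mu_y = F.conv2d(gt, window, padding=pad)
    sigma_x = F.conv2d(est * est, window, padding=pad) - mu_x ** 2
    sigma_y = F.conv2d(gt * gt, window, padding=pad) - mu_y ** 2
    sigma_xy = F.conv2d(est * gt, window, padding=pad) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    return 1.0 - ssim.mean()
```

A multi-scale variant would typically repeat the same local-statistics comparison with several window sizes (or dilation rates) and average the resulting losses.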
“…One can see the detailed proofs of these results in Theorem 2 and Theorem 3, respectively. According to these results, problem (12) can be efficiently solved by Nesterov's optimal gradient method (OGM) [45]. Theorem 2.…”
Section: Algorithms Design
confidence: 99%
“…For the convenience of notation, we use C to represent the associated constraints in Eq. (12). At iteration t, the two sequences are…”
Section: Algorithms Design
confidence: 99%
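The two excerpts above describe solving problem (12) with Nesterov's optimal gradient method, which maintains an iterate sequence and an extrapolated sequence that are updated jointly. The sketch below shows that generic two-sequence scheme in NumPy for a smooth objective with an L-Lipschitz gradient over a constraint set C; the function names, the projection operator, and the 1/L step size are assumptions for illustration and do not reproduce the specific objective or constraints of the cited work.

```python
import numpy as np

def nesterov_ogm(grad, project, x0, L, n_iter=200):
    """Minimal sketch of Nesterov's optimal gradient method with two sequences.

    grad(x)    : gradient of the smooth objective (Lipschitz constant L)
    project(x) : projection onto the constraint set C (identity if unconstrained)
    x0         : starting point (NumPy array)
    """
    x_prev = x0.copy()
    y = x0.copy()
    t_prev = 1.0
    for _ in range(n_iter):
        # Projected gradient step taken from the extrapolated point y.
        x = project(y - grad(y) / L)
        # Momentum parameter update.
        t = (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2)) / 2.0
        # Extrapolation combining the last two iterates of the x-sequence.
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)
        x_prev, t_prev = x, t
    return x_prev
```

For an unconstrained problem, `project` can simply return its argument unchanged; the extrapolation weight (t_prev - 1)/t is what yields the accelerated O(1/t^2) convergence rate for smooth convex objectives.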
“…More recent works on deep supervised hashing employ objectives based on a class-wise loss [35], a semantic cluster-based unary loss [36], a multi-task loss [37], a list-wise loss [38], or anchor graphs for defining the loss function and further improving hashing performance [39]. Furthermore, an end-to-end supervised product quantization approach for information retrieval was proposed in [40], while deep discrete hashing approaches [18, 41], incremental hashing methods [42], and correlation filtering-based fine-grained hashing approaches [43] have also been utilized to the same end.…”
Section: Related Work
confidence: 99%
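To make the family of objectives listed in the excerpt concrete, the sketch below gives a generic pairwise supervised hashing loss in PyTorch: pairs with the same label are pulled together in code space, pairs with different labels are pushed beyond a margin, and a quantization term keeps the relaxed codes near ±1. The function name, margin, and weighting are illustrative assumptions and do not correspond to any specific cited method, including the correlation filtering-based approach of the surveyed paper.

```python
import torch
import torch.nn.functional as F

def pairwise_hashing_loss(codes, labels, margin=2.0, quant_weight=0.1):
    """Generic pairwise loss for deep supervised hashing (illustrative only).

    codes  : (N, K) real-valued network outputs, treated as relaxed binary codes
    labels : (N,) integer class labels used to derive the pairwise similarity
    """
    # Pairwise similarity matrix: 1 if two samples share a label, else 0.
    sim = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    # Squared Euclidean distance between code vectors (proxy for Hamming distance).
    dist = torch.cdist(codes, codes, p=2) ** 2
    # Pull similar pairs together, push dissimilar pairs beyond the margin.
    contrastive = sim * dist + (1.0 - sim) * F.relu(margin - dist)
    # Quantization term: encourage each code entry to be close to +1 or -1.
    quantization = (codes.abs() - 1.0).pow(2).mean()
    return contrastive.mean() + quant_weight * quantization
```

Class-wise, list-wise, and anchor-graph objectives replace the pairwise term with supervision defined over whole classes, ranked lists, or a small set of anchor points, respectively.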