2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.247
Deep Sketch Hashing: Fast Free-Hand Sketch-Based Image Retrieval

Abstract: Free-hand sketch-based image retrieval (SBIR) is a specific cross-view retrieval task, in which queries are abstract and ambiguous sketches while the retrieval database is formed of natural images. Work in this area mainly focuses on extracting representative and shared features for sketches and natural images. However, these can neither cope well with the geometric distortion between sketches and images nor remain feasible for large-scale SBIR due to the heavy continuous-valued distance computation. In this pap…
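The abstract contrasts heavy continuous-valued distance computation with hashing-based retrieval. As an illustrative sketch only (not the paper's Deep Sketch Hashing method — the codes below are random stand-ins for what a trained network would produce), ranking a database by Hamming distance over binary codes reduces each comparison to an XOR and a bit count:

```python
# Generic binary-hash retrieval sketch. Assumption: codes are 0/1 vectors;
# in the actual paper they come from learned hash functions.
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_images = 64, 1000

# Hypothetical binary codes: a database of natural images and one sketch query.
db_codes = rng.integers(0, 2, size=(n_images, n_bits), dtype=np.uint8)
query_code = rng.integers(0, 2, size=n_bits, dtype=np.uint8)

# Hamming distance = number of differing bits (XOR, then count) — far cheaper
# than continuous-valued (e.g. Euclidean) distances over real feature vectors.
hamming = np.count_nonzero(db_codes ^ query_code, axis=1)
ranking = np.argsort(hamming)  # nearest database images first

print(ranking[:5])
```

In practice the XOR-and-popcount step maps to single machine instructions on packed bit words, which is what makes hashing attractive for large-scale SBIR.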

Cited by 252 publications (244 citation statements)
References 66 publications
“…Based on the problem at hand, two separate tasks have been identified: (1) fine-grained SBIR (FG-SBIR) aims to capture fine-grained similarities between sketch and photo [15,27,37], and (2) coarse-grained SBIR (CG-SBIR) performs an instance-level search across multiple object categories [38,10,11,31,38], which has received a lot of attention due to its importance. Realising the need for large-scale SBIR, some researchers have proposed variants of cross-modal hashing frameworks for this purpose [17,39], which also showed promising results in the SBIR scenario. In contrast, our proposed model overcomes this domain gap by mining modality-agnostic features using a domain loss along with a GRL.…”
Section: Related Work
confidence: 99%
“…grained matching [37,30,24], large-scale hashing [17,16], and cross-modal attention [5,30], to name a few. However, a common bottleneck identified by almost all sketch research is data scarcity.…”
Section: Introduction
confidence: 99%
“…Fortunately, owing to recent advances in the area of deep learning [31]-[34], hashing methods have turned to extracting deep features automatically using Convolutional Neural Network (CNN) descriptors [35]-[38], which inherit high discriminative power and often achieve state-of-the-art performance. However, most state-of-the-art unsupervised deep hashing algorithms [39]-[41] suffer from severe performance degradation due to the lack of label information, e.g.…”
Section: Introduction
confidence: 99%
“…The whole process is shown in Algorithm 2. The only difference is in line (5): when comparing the Hamming distance between the code being generated and the already-generated codes, we need to add the corresponding value for these two codes defined in the semantic relation matrix.…”
Section: Semantically Uneven Situation
confidence: 99%
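The quoted step — offsetting the Hamming distance between a new code and each existing code by an entry of a semantic relation matrix — can be sketched as follows. All names here (`adjusted_distance`, `S`, `new_idx`) are assumptions for illustration; the citing paper's exact formulation is not shown in this excerpt.

```python
# Hypothetical sketch: Hamming distance plus a semantic-relation offset.
import numpy as np

def adjusted_distance(new_code, existing_codes, S, new_idx):
    """Hamming distance from new_code to each existing code, offset by the
    corresponding entry S[new_idx, j] of a semantic relation matrix."""
    hamming = np.count_nonzero(existing_codes ^ new_code, axis=1)
    return hamming + S[new_idx, :existing_codes.shape[0]]

rng = np.random.default_rng(1)
codes = rng.integers(0, 2, size=(3, 16), dtype=np.uint8)   # already generated
candidate = rng.integers(0, 2, size=16, dtype=np.uint8)    # code being generated
S = rng.uniform(0, 4, size=(4, 4))                         # assumed relation matrix

print(adjusted_distance(candidate, codes, S, new_idx=3))
```

The offset biases code assignment so that semantically related classes end up with codes whose effective distance reflects their relation, not just raw bit disagreement.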
“…This kind of task is content-based image retrieval (CBIR) [1][2][3][4], a technique for retrieving images by automatically derived features such as colour, texture, and shape. There are also some applications of CBIR, such as free-hand sketch-based image retrieval [5], whose query images are abstract and ambiguous sketches. In CBIR, derived features are not easy to store.…”
Section: Introduction
confidence: 99%