2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00242
Sketch-QNet: A Quadruplet ConvNet for Color Sketch-based Image Retrieval

Cited by 12 publications (3 citation statements) | References 15 publications
“…A natural extension of SBIR is the case where the input sketch includes color information. Here, Fuentes and Saavedra [7] recently presented an interesting approach extending the notion of triplets to quadruplet-based training.…”
Section: Sketch-based Image Retrieval (confidence: 99%)
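For context, the quadruplet idea referenced above extends the usual (anchor, positive, negative) triplet with a second negative, so that color mismatches and shape mismatches can be penalized separately. The following is a minimal PyTorch sketch of such a quadruplet margin loss; the roles assigned to the two negatives (same shape but wrong color vs. unrelated photo) and the margins m1 and m2 are assumptions made here for illustration, not the exact formulation used in Sketch-QNet.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(anchor, positive, neg_shape, neg_other, m1=0.5, m2=0.3):
    """Illustrative quadruplet margin loss for color sketch-based retrieval.

    anchor    : embeddings of color sketches                 (B, D)
    positive  : embeddings of the matching photos            (B, D)
    neg_shape : photos with the right shape but wrong color  (B, D)  -- assumed role
    neg_other : unrelated photos                             (B, D)  -- assumed role
    m1, m2    : margins chosen here only for illustration
    """
    d_pos = F.pairwise_distance(anchor, positive)
    d_hard = F.pairwise_distance(anchor, neg_shape)
    d_easy = F.pairwise_distance(anchor, neg_other)
    # Pull the true match closer than the same-shape/wrong-color photo,
    # and push the unrelated photo even further away.
    loss = F.relu(d_pos - d_hard + m1) + F.relu(d_pos - d_easy + m1 + m2)
    return loss.mean()

# Toy usage with random embeddings.
B, D = 8, 128
a, p, n1, n2 = (torch.randn(B, D) for _ in range(4))
print(quadruplet_loss(a, p, n1, n2))
```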
“…For instance, the main computer vision conferences already include workshops to promote research and applications on this topic. In this vein, we have seen advances in a diversity of tasks like sketch classification [5,28,33], sketch-guided object localization [24], sketch-based image and video retrieval [2,4,7,19,20,23,32], sketch-to-photo translation [3,21], among others.…”
Section: Introduction (confidence: 99%)
“…To bridge the domain gap, extensive efforts have been devoted to learning common representations from different domains (Sangkloy et al. 2016; Yu et al. 2016; Sain et al. 2021; Fuentes and Saavedra 2021). Although the existing cross-domain image retrieval methods have achieved promising performance, they implicitly assume that the multi-domain training data are annotated and aligned well.…”
Section: Introduction (confidence: 99%)
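To make the "common representation" idea in this statement concrete, the hypothetical two-branch model below maps sketches and photos through separate encoders into a shared, L2-normalized embedding space. The toy backbones and the 128-D output dimension are placeholders chosen for this sketch; note that training such a model with a triplet or quadruplet loss still presupposes the well-aligned, annotated sketch-photo pairs that the quoted passage calls out.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchEmbedder(nn.Module):
    """Hypothetical two-branch model: one encoder per domain, one shared metric space."""

    def __init__(self, dim=128):
        super().__init__()
        # Toy convolutional backbones; real systems would use much deeper encoders.
        self.sketch_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.photo_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))

    def forward(self, sketch, photo):
        # L2-normalize so both domains live on the same unit hypersphere.
        s = F.normalize(self.sketch_enc(sketch), dim=-1)
        p = F.normalize(self.photo_enc(photo), dim=-1)
        return s, p

# Toy usage: aligned sketch-photo pairs are still required to train such a model.
model = TwoBranchEmbedder()
s, p = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(s.shape, p.shape)  # torch.Size([2, 128]) torch.Size([2, 128])
```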