2014
DOI: 10.1007/978-3-319-10599-4_22

Interactively Guiding Semi-Supervised Clustering via Attribute-Based Explanations

Abstract: Unsupervised image clustering is a challenging and often ill-posed problem. Existing image descriptors fail to capture the clustering criterion well, and more importantly, the criterion itself may depend on (unknown) user preferences. Semi-supervised approaches such as distance metric learning and constrained clustering thus leverage user-provided annotations indicating which pairs of images belong to the same cluster (must-link) and which ones do not (cannot-link). These approaches require many such…
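The must-link/cannot-link constraints the abstract refers to can be illustrated with a standard pairwise-constrained k-means (COP-KMeans-style) sketch. This is not the paper's interactive, attribute-guided method; the function names, toy data, and the fallback behaviour for infeasible points below are assumptions made purely for illustration.

```python
# Minimal sketch of clustering with must-link / cannot-link constraints.
# NOT the paper's method; names and fallback behaviour are assumptions.
import numpy as np

def _symmetric(pairs):
    """Index pairwise constraints by point so they apply in both directions."""
    index = {}
    for a, b in pairs:
        index.setdefault(a, set()).add(b)
        index.setdefault(b, set()).add(a)
    return index

def constrained_kmeans(X, k, must_link, cannot_link, n_iter=50, seed=0):
    ml, cl = _symmetric(must_link), _symmetric(cannot_link)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(n_iter):
        new_labels = np.full(len(X), -1)
        for i, x in enumerate(X):
            # Try clusters from nearest to farthest, skipping any that would
            # violate a constraint against an already-assigned partner.
            order = np.argsort(np.linalg.norm(centroids - x, axis=1))
            for c in order:
                ml_ok = all(new_labels[j] in (-1, c) for j in ml.get(i, ()))
                cl_ok = all(new_labels[j] != c for j in cl.get(i, ()))
                if ml_ok and cl_ok:
                    new_labels[i] = c
                    break
            if new_labels[i] == -1:      # no feasible cluster: fall back to nearest
                new_labels[i] = order[0]
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        for c in range(k):               # recompute centroids from current assignment
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels

# Toy usage: two Gaussian blobs with one must-link and one cannot-link pair.
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((20, 2)), rng.standard_normal((20, 2)) + 5])
labels = constrained_kmeans(X, k=2,
                            must_link=[(0, 1)],     # images 0 and 1 share a cluster
                            cannot_link=[(0, 39)])  # images 0 and 39 must differ
```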

Cited by 25 publications (22 citation statements) | References 28 publications
“…Recognition with humans in the loop. Among the most similar works to ours are approaches that combine computer vision with human-in-the-loop collaboration for tasks such as fine-grained image classification [6,59,12,60], image segmentation [26], attribute-based classification [32,40,3], image clustering [34], image annotation [54,55,47], human interaction [31], and object annotation in videos [58]. Methods such as [6,59,12,60] jointly model human and computer uncertainty and characterize human time versus annotation accuracy, but only incorporate a single type of human response.…”
Section: Related Work
confidence: 99%
“…The field of crowd engineering has provided lots of insight into human-machine collaboration for solving difficult problems in computing such as protein folding [41,9], disaster relief distribution [18] and galaxy discovery [38]. In computer vision with human-in-the-loop approaches, human intervention has ranged from binary question-and-answer [6,59,60] to attribute-based feedback [40,39,34] to free-form object annotation [58]. For understanding all objects in an image, one important decision is which questions to pose to humans.…”
Section: Introduction
confidence: 99%
“…Attribute-based feedback has also been used for interactive clustering, where the goal is not to name the object present in the image but rather to cluster a large collection of images in a meaningful way [Lad and Parikh, 2014].…”
Section: Interactively Improving Annotation Accuracy
confidence: 99%
“…Attributes have been used extensively for a variety of applications in computer vision such as object recognition [20,21], scene understanding [22,23], image description [4,24], image search [5], clustering [25] and fine-grained recognition [26]. Many existing methods learn the attributes independently [23, 2, 1], often using linear models.…”
Section: Related Work
confidence: 99%
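The statement above notes that many existing methods learn attributes independently, often with linear models. As a rough illustration of that baseline (not code from any cited work; the feature dimensions, attribute names, and labels below are invented), one could train one linear classifier per attribute:

```python
# Hypothetical per-attribute linear classifiers: each attribute is learned
# independently of the others, as the quoted statement describes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 128))                 # image features (invented for the example)
attribute_names = ["furry", "striped", "metallic"]  # hypothetical attribute vocabulary
Y = (X[:, :3] > 0).astype(int)                      # stand-in binary attribute labels

# One independent linear model per attribute.
classifiers = {name: LogisticRegression(max_iter=1000).fit(X, Y[:, i])
               for i, name in enumerate(attribute_names)}

# Attribute presence scores for the first five images.
scores = {name: clf.predict_proba(X[:5])[:, 1] for name, clf in classifiers.items()}
```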