2015 IEEE International Conference on Computer Vision (ICCV) 2015
DOI: 10.1109/iccv.2015.288
A Unified Multiplicative Framework for Attribute Learning

Abstract: Attributes are mid-level semantic properties of objects. Recent research has shown that visual attributes can benefit many traditional learning problems in the computer vision community. However, attribute learning is still a challenging problem, as the attributes may not always be predictable directly from input images, and the variation of visual attributes is sometimes large across categories. In this paper, we propose a unified multiplicative framework for attribute learning, which tackles the key problems. Spec…
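The abstract's core idea — combining a shared attribute predictor with a category-dependent factor multiplicatively, so the same attribute can behave differently across categories — can be illustrated with a minimal NumPy sketch. This is not the paper's actual model; all dimensions, the gate matrix `G`, and the sigmoid readout are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not from the paper).
n_images, d_feat, n_attr, n_cat = 4, 8, 5, 3

X = rng.normal(size=(n_images, d_feat))   # image features
W = rng.normal(size=(d_feat, n_attr))     # shared attribute predictors
G = rng.uniform(size=(n_cat, n_attr))     # hypothetical per-category gates in (0, 1)
cats = np.array([0, 1, 2, 0])             # category label of each image

# Multiplicative combination: the raw attribute score is modulated by a
# category-specific factor, letting attribute responses vary across
# categories while the feature-to-attribute mapping W stays shared.
raw = X @ W                               # (n_images, n_attr)
scores = raw * G[cats]                    # elementwise multiplicative gating
probs = 1.0 / (1.0 + np.exp(-scores))    # attribute probabilities

print(probs.shape)  # (4, 5)
```

The multiplicative gate is what distinguishes this from a plain linear attribute classifier: an additive bias would shift all categories uniformly, whereas the gate rescales each attribute per category.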

Cited by 29 publications (18 citation statements)
References 16 publications
“…This demonstrates the importance of learning the classification codewords, rather than fixing them. Note that, for codeword regularization, best results were obtained for intermediate values of β, which encourage consistency between the se- [37]. ‡ As reported by Al-Halah et al. [6].…”
Section: Gains of Regularization (supporting)
confidence: 65%
“…One reason is that attributes often give prominent classification performance [21], [22], [40], [41], [42]. For another, attribute representation is a compact way to further describe an image with concrete words that are human-understandable [16], [43], [44], [45]. Various types of attributes have been proposed to enrich applicable tasks and improve performance, such as relative attributes [15], class-similarity attributes [21], and augmented attributes [17].…”
Section: Related Work (mentioning)
confidence: 99%
“…For foreground segmentation we use DeepLabv3+ with an Xception-65 backbone [13], initially trained on PASCAL VOC 2012 [22] and fine-tuned on the HumanParsing dataset [40,41], to predict initial human body segmentation masks. We additionally employ GrabCut [54], with the background/foreground model initialized by the masks, to refine object boundaries on the high-resolution images.…”
Section: Methods (mentioning)
confidence: 99%