2016 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2016.7477611

Fashion apparel detection: The role of deep convolutional neural network and pose-dependent priors

Abstract: In this work, we propose and address a new computer vision task, which we call fashion item detection, where the aim is to detect the various fashion items a person in an image is wearing or carrying. The types of fashion items we consider in this work include hats, glasses, bags, pants, shoes, and so on. The detection of fashion items can be an important first step for various e-commerce applications in the fashion industry. Our method is based on a state-of-the-art object detection pipeline which combines object …

Cited by 44 publications (21 citation statements)
References 30 publications

“…Specifically, features (usually called deep features) can be extracted from any layer of a pre-trained network and then used in a given task. Deep features trained on ImageNet (a dataset of everyday objects) have already shown remarkable results in applications such as flower categorization [42], human attribute detection [43], bird sub-categorization [44], scene retrieval [45], and many others [36,37], including remote sensing [14,38]. Furthermore, Razavian et al. [46] suggest that features obtained from deep learning should be the primary candidate in most visual recognition tasks.…”
Section: ConvNet as a Feature Extractor (mentioning)
confidence: 99%
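The "ConvNet as a feature extractor" pattern this statement describes is straightforward to reproduce. Below is a minimal, hypothetical Python sketch; PyTorch/torchvision and ResNet-18 are illustrative choices and are not necessarily the tools used in the cited works. The ImageNet-pretrained network's classification head is dropped, and the penultimate-layer activations serve as a deep feature vector.

```python
# Minimal sketch of deep-feature extraction from a pre-trained ConvNet.
# Assumptions (not from the source): PyTorch/torchvision, ResNet-18.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load an ImageNet-pretrained network; no further training is performed.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()  # drop the 1000-way classifier head
model.eval()                    # inference mode: freeze batch-norm stats

# Standard ImageNet preprocessing.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(image_path: str) -> torch.Tensor:
    """Return a 512-d deep feature vector for one image."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)   # shape (1, 3, 224, 224)
    with torch.no_grad():                  # extraction only, no gradients
        return model(batch).squeeze(0)     # shape (512,)
```

The same pattern works with activations from any intermediate layer; the penultimate layer is simply the most common choice in the applications the statement lists.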
“…Deep features can be transferred from natural images to remote sensing images, which has been verified in many works [26]. With the support of the abundant training data of ImageNet (1.2 million images and 1000 distinct classes), pre-trained DCNNs have acquired a strong capacity for deep image feature extraction and have shown remarkable results in many applications, such as human attribute detection [40], scene retrieval [41], robotics [42], and remote sensing [43,44,45,46,47]. Furthermore, it is very simple to adopt pre-trained DCNNs as feature extractors, since no training or tuning is needed.…”
Section: Proposed Methods (mentioning)
confidence: 99%
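To illustrate the "no training or tuning needed" point in this statement, the frozen network's features can feed an off-the-shelf classifier directly. The sketch below is hypothetical: it reuses the deep_features helper from the earlier sketch, the image paths and labels are placeholders, and scikit-learn's LinearSVC is just one convenient choice.

```python
# Hypothetical follow-up: train a lightweight classifier on frozen deep
# features; the DCNN's weights are never updated. Assumes the
# deep_features() helper defined in the previous sketch.
import numpy as np
from sklearn.svm import LinearSVC

train_paths = ["img_001.jpg", "img_002.jpg"]  # placeholder image paths
train_labels = [0, 1]                         # placeholder class labels

# Stack one feature vector per training image.
X_train = np.stack([deep_features(p).numpy() for p in train_paths])

clf = LinearSVC()                 # linear model on top of frozen features
clf.fit(X_train, train_labels)

# Classify a new image without any tuning of the network itself.
print(clf.predict(deep_features("query.jpg").numpy().reshape(1, -1)))
```

Only the small linear model is fit here, which is exactly why adopting a pre-trained DCNN as a feature extractor is described as simple.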
“…Such applications are particularly useful for understanding and segmenting customer feedback from product reviews on platforms like Amazon and eBay to augment advertising targeting decisions (Kannan & Li, 2017; Xu, Wang, Li, & Haghighi, 2017). Applications of DL to image data in fashion firms include apparel segmentation (Hu, Yan, & Lin, 2008), apparel recognition (Bossard et al., 2011), apparel classification and retrieval (Hara, Jagadeesh, & Piramuthu, 2016), and apparel classification for tagging (Eshwar et al., 2016). Through these DL outcomes, fashion firms aim to support their strategic and marketing decisions: more efficiently identifying personalized products that meet customer needs, monitoring fashion trends to inform product-design decisions, segmenting the market, and targeting marketing campaigns.…”
Section: Targeting (mentioning)
confidence: 99%