2023
DOI: 10.1108/ijcst-10-2022-0145

OMNet: Outfit Memory Net for clothing parsing

Abstract: Purpose: Existing clothing parsing methods make little use of dataset-level information. This paper aims to propose a novel clothing parsing method that utilizes higher-level outfit combinatorial consistency knowledge from the whole clothing dataset to improve the accuracy of segmenting clothing images. Design/methodology/approach: In this paper, the authors propose an Outfit Memory Net (OMNet) that augments the original features by aggregating dataset-level prior clothing combination information. Specifically, the aut…
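The abstract is truncated, but the stated core idea (augmenting backbone features with dataset-level outfit-combination priors held in a memory) can be illustrated. The following PyTorch sketch is a hypothetical reconstruction, not the authors' OMNet: the module name OutfitMemorySketch, the number of memory slots, and the attention-based memory read are all assumptions introduced for illustration.

```python
import torch
import torch.nn as nn

class OutfitMemorySketch(nn.Module):
    """Hypothetical memory module: stores M dataset-level outfit-combination
    prototypes and uses attention to augment per-pixel backbone features
    with the retrieved prior (assumed mechanism, not the paper's exact one)."""

    def __init__(self, feat_dim: int = 256, num_slots: int = 32):
        super().__init__()
        # Learnable memory bank of outfit-combination prototypes (M x C).
        self.memory = nn.Parameter(torch.randn(num_slots, feat_dim))
        # 1x1 conv to fuse the original features with the retrieved prior.
        self.fuse = nn.Conv2d(feat_dim * 2, feat_dim, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) features from a segmentation backbone.
        b, c, h, w = feats.shape
        queries = feats.flatten(2).transpose(1, 2)               # (B, HW, C)
        # Attention weights between each pixel and each memory slot.
        attn = torch.softmax(queries @ self.memory.t(), dim=-1)  # (B, HW, M)
        retrieved = attn @ self.memory                           # (B, HW, C)
        retrieved = retrieved.transpose(1, 2).reshape(b, c, h, w)
        # Augment the original features with the dataset-level prior.
        return self.fuse(torch.cat([feats, retrieved], dim=1))

# Usage: augment backbone features before the segmentation head.
feats = torch.randn(2, 256, 64, 64)
augmented = OutfitMemorySketch()(feats)
print(augmented.shape)  # torch.Size([2, 256, 64, 64])
```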

Cited by 1 publication (1 citation statement)
References 20 publications
“…Moreover, our utilization of this dataset is in line with our aim of setting benchmarks and advancing clothing recognition algorithms, thereby contributing to the broader domains of computer vision and fashion-related applications. For our study, attire images were obtained from Github's clothing dataset [22], which encompasses a wide range of clothing products. This publicly accessible dataset served as the cornerstone for constructing our proposed hybrid learning approach.…”
Section: Dataset Details (mentioning, confidence: 99%)