2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv45572.2020.9093454
Scalable Detection of Offensive and Non-compliant Content / Logo in Product Images

Abstract: In e-commerce, product content, especially product images, has a significant influence on a customer's journey from product discovery to evaluation and, finally, the purchase decision. Since many e-commerce retailers sell items from other third-party marketplace sellers besides their own, the content published by both internal and external content creators needs to be monitored and enriched wherever possible. Despite guidelines and warnings, product listings that contain offensive and non-compliant images continue …

Cited by 26 publications (8 citation statements)
References 25 publications
“…In the present paper, we focus on images following the definition (b). This definition aligns with definitions of previous work detecting hate speech [20] and offensive product images [16]. Note that inappropriateness, especially offensiveness, is a concept that is based on social norms, and people have diverse sentiments.…”
Section: Inappropriate Image Content (supporting)
Confidence: 69%
“…Furthermore, they conducted a hand-surveyed image selection to identify misogynistic images in the ImageNet-ILSVRC-2012 (ImageNet1k) dataset. Gandhi et al [16] aimed to detect offensive product content using machine learning; however, they have described the lack of adequate training data. Recently, Nichol et al [33] applied CLIP to filter images of violent objects but also images portraying people and faces.…”
Section: Issues Arising From Large Datasets (mentioning)
Confidence: 99%
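The excerpt above mentions applying CLIP to filter images of violent objects and of people and faces. As a rough illustration only (not code from Gandhi et al. [16] or Nichol et al. [33]), a zero-shot CLIP filter might be sketched as follows; the model checkpoint, prompt texts, and threshold are assumptions chosen for the example.

```python
# Illustrative sketch of zero-shot image filtering with CLIP via the
# Hugging Face transformers API. Checkpoint, prompts, and threshold are assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate text prompts: the first two describe content to filter out.
PROMPTS = [
    "a photo of a weapon or violent object",
    "a photo showing a person's face",
    "a benign product photo",
]

def flag_image(path: str, threshold: float = 0.5) -> bool:
    """Return True if the image matches the 'filter' prompts more than the benign one."""
    image = Image.open(path).convert("RGB")
    inputs = processor(text=PROMPTS, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, len(PROMPTS))
    probs = logits.softmax(dim=-1)[0]
    return probs[:2].sum().item() > threshold
```

In practice, the prompt wording and decision threshold would need tuning against labeled examples, which is consistent with the training-data limitation the excerpt attributes to [16].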
“…However, other types of Web content, such as images and memes, require automatic moderation, as they can also be harmful. Only few works have addressed this problem [107,230].…”
Section: Adjacent Research (mentioning)
Confidence: 99%
“…Also, these themes are usually relevant throughout the year. Hence, we address this type of themes by building supervised models [5,8]. The second type of themes is characterized by ill-defined requirements.…”
Section: Non-compliant Product Detection (mentioning)
Confidence: 99%