Proceedings of the 10th International Conference on Computer Vision Theory and Applications 2015
DOI: 10.5220/0005344904920497
Challenges and Limitations Concerning Automatic Child Pornography Classification

Abstract: The huge volume of data to be analyzed in the course of child pornography investigations places special demands on the tools and methods for automated classification often used by law enforcement and prosecution. A particular problem is the need for a clear distinction between pornographic material and inoffensive pictures showing a large amount of skin, such as people wearing bikinis or underwear. Manual evaluation tends to be impossible due to the sheer number of assets to be sighted. The main contributi…

Cited by 2 publications (2 citation statements); references 20 publications.
“…However, the reliance on the face as the main biometric routinely used in CSAM investigations has several limitations: for example, the lack of distinct facial features in children, the inability to reliably estimate age, instances where the background and the skin tone of the child are similar, the degree of nudity present, and the substantial proportion of CSAM that purposefully shields faces from view (Moser, Rybnicek & Haslinger 2015; Phippen & Bond 2020; Srinivas et al. 2019; Yiallourou, Demetriou & Lanitis 2017). These problems lead to higher than desired rates of false positive and false negative matches, thus reducing task automation and requiring manual intervention (and exposure) by investigators.…” (mentioning; confidence: 99%)
“…While videos were required to contain a face and voice to be included in our testing dataset, a proportion of CSA videos being distributed online contain neither a face nor a voice. This suggests a need to extend the software's extraction and matching capabilities to include additional soft and primary biometric attributes, such as vascular patterns, age, gait, gender, hair colour and ethnicity (e.g. Macedo, Costa & dos Santos 2018; Moser, Rybnicek & Haslinger 2015; Sae-Bae et al. 2014; Yiallourou, Demetriou & Lanitis 2017). Such algorithms can be integrated into future iterations of BANE and may further enhance matching performance (for individual attributes and combinations of attributes).…” (mentioning; confidence: 99%)