Abstract: Computer vision is one of many areas that seek to understand human functionality and to replicate it, with the intention of complementing human life with intelligent machines. For better human-computer interaction it is necessary for the machine to see people. This can be achieved by employing face detection algorithms, like the one used in the installation "15 Seconds of Fame". The installation unites the areas of modern art and technology. Its algorithm is based on skin colour detection. One of the problems this and similar algorithms have to deal with is sensitivity to the illumination conditions under which the input image is captured; this illumination sensitivity in turn influences face detection results. One of the aspects from which we can observe the influence of illumination is the choice of a proper colour space. Since some colour spaces are designed to eliminate the influence of illumination (brightness) when describing the colour of an object, the idea of using such a colour space for skin-colour detection has been taken under consideration, and several such methods have been researched and tested.

Keywords: computer vision, automatic detection, human face, face candidate search, skin-colour determination, 2D colour space, 3D colour space, illumination independence.

INTRODUCTION

A. Installation "15 Seconds of Fame"

The installation "15 Seconds of Fame" [7] is an interactive art installation, which intends to make instant celebrities out of common people by putting their portraits on the museum wall. The idea was inspired by the quotation of the famous artist Andy Warhol, "In the future everybody will be famous for fifteen minutes", and by the pop-art style of his work. The installation looks like a valuable framed picture (Fig. 1). An LCD monitor and a digital camera are built into the picture. The camera is connected to a computer, which controls the camera and processes the captured images. Special software contains a face detection algorithm, which looks for faces in the captured images. Among them it chooses one for further processing. In the next step a randomly chosen ...

Fig. 1. LCD computer monitor dressed up like a precious painting. The round opening above the picture is for the digital camera lens.

The face detection algorithm used by the installation "15 Seconds of Fame" [7] works in a 3D colour space (RGB) for detecting skin-colour pixels. Heuristic rules determine whether a certain pixel of the input image corresponds to skin colour. The skin cluster in the RGB colour space is described by the following rules [6], [7]:

% The skin colour at uniform daylight illumination
R > 95 AND G > 40 AND B > 20 AND
max(R, G, B) - min(R, G, B) > 15 AND  % RGB components must not be close together (greyness elimination)
|R - G| > 15 AND                      % also R and G components must not be close together, otherwise we are not dealing with the fair complexion
R > G AND R > B                       % R component must be the greatest component
OR
% The skin colour at flashlight or (light) daylight lateral illumination
...

Note that t...
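To make the uniform-daylight rule above concrete, the following is a minimal sketch of how it could be applied to a whole image; this is an illustration only, not the installation's actual code, and the function name and the use of NumPy are assumptions.

```python
import numpy as np

def skin_mask_daylight(image_rgb: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels matching the uniform-daylight skin rule.

    image_rgb: H x W x 3 uint8 array with channels in R, G, B order.
    """
    r = image_rgb[..., 0].astype(np.int16)
    g = image_rgb[..., 1].astype(np.int16)
    b = image_rgb[..., 2].astype(np.int16)

    # Spread between the largest and smallest channel per pixel.
    spread = image_rgb.max(axis=-1).astype(np.int16) - image_rgb.min(axis=-1).astype(np.int16)

    return (
        (r > 95) & (g > 40) & (b > 20)
        & (spread > 15)            # components must not be close together (greyness elimination)
        & (np.abs(r - g) > 15)     # R and G must differ for the fair complexion
        & (r > g) & (r > b)        # R must be the greatest component
    )
```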
Automatic identity recognition from ear images represents an active field of research within the biometric community. The ability to capture ear images from a distance and in a covert manner makes the technology an appealing choice for surveillance and security applications as well as other application domains. Significant contributions have been made in the field over recent years, but open research problems still remain and hinder a wider (commercial) deployment of the technology. This paper presents an overview of the field of automatic ear recognition (from 2D images) and focuses specifically on the most recent, descriptor-based methods proposed in this area. Open challenges are discussed and potential research directions are outlined with the goal of providing the reader with a point of reference for issues worth examining in the future. In addition to a comprehensive review of ear recognition technology, the paper also introduces a new, fully unconstrained dataset of ear images gathered from the web and a toolbox implementing several state-of-the-art techniques for ear recognition. The dataset and toolbox are meant to address some of the open issues in the field and are made publicly available to the research community.

Index Terms: biometry, dataset, in-the-wild, unconstrained image, descriptor-based method, open-source toolbox, ear recognition.

• Survey: We present a comprehensive survey on ear recognition, which is meant to provide researchers in this field with a recent and up-to-date overview of the state of the technology. We introduce a taxonomy of the existing 2D ear recognition approaches, discuss the characteristics of the technology and review the existing state-of-the-art. Most importantly, we elaborate on the open problems and challenges faced by the technology.
• Dataset: We make a new dataset of ear images available to the research community. The dataset, named Annotated Web Ears (AWE), contains images collected from the web and is, to the best of our knowledge, the first dataset for ear recognition gathered "in the wild". The images of the AWE dataset contain a
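For readers unfamiliar with what a descriptor-based method looks like in practice, the following is a minimal sketch of one common pattern (a local-binary-pattern histogram plus a cosine-similarity comparison). It is not the paper's toolbox; the use of scikit-image and the specific parameter values are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def ear_descriptor(gray_ear: np.ndarray, p: int = 8, r: float = 1.0) -> np.ndarray:
    """Describe a grayscale ear image with a normalised histogram of uniform LBP codes."""
    codes = local_binary_pattern(gray_ear, p, r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def match_score(desc_a: np.ndarray, desc_b: np.ndarray) -> float:
    """Cosine similarity between two descriptors; higher means more similar."""
    denom = np.linalg.norm(desc_a) * np.linalg.norm(desc_b) + 1e-12
    return float(np.dot(desc_a, desc_b) / denom)
```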
In this paper we present the results of the Unconstrained Ear Recognition Challenge (UERC), a group benchmarking effort centered around the problem of person recognition from ear images captured in uncontrolled conditions. The goal of the challenge was to assess the performance of existing ear recognition techniques on a challenging large-scale dataset and identify open problems that need to be addressed in the future. Five groups from three continents participated in the challenge and contributed six ear recognition techniques for the evaluation, while multiple baselines were made available for the challenge by the UERC organizers. A comprehensive analysis was conducted with all participating approaches addressing essential research questions pertaining to the sensitivity of the technology to head rotation, flipping, gallery size, large-scale recognition and others. The top performer of the UERC was found to ensure robust performance on a smaller part of the dataset (with 180 subjects) regardless of image characteristics, but still exhibited a significant performance drop when the entire dataset comprising 3,704 subjects was used for testing.
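Identification performance of the kind summarised above is commonly reported as a rank-1 recognition rate. The sketch below shows how such a rate can be computed from a probe-versus-gallery similarity matrix; it is a generic illustration under assumed inputs, not the UERC evaluation protocol itself.

```python
import numpy as np

def rank_one_rate(similarity: np.ndarray,
                  probe_ids: np.ndarray,
                  gallery_ids: np.ndarray) -> float:
    """Rank-1 recognition rate from a probes-by-gallery similarity matrix.

    similarity[i, j] is the score between probe i and gallery template j;
    a probe counts as correct if its highest-scoring gallery entry shares its identity.
    """
    best = np.argmax(similarity, axis=1)   # index of the top-ranked gallery match per probe
    return float(np.mean(gallery_ids[best] == probe_ids))
```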
Identity recognition from ear images is an active field of research within the biometric community. The ability to capture ear images from a distance and in a covert manner makes ear recognition technology an appealing choice for surveillance and security applications as well as related application domains. In contrast to other biometric modalities, where large datasets captured in uncontrolled settings are readily available, datasets of ear images are still limited in size and mostly of laboratory-like quality. As a consequence, ear recognition technology has not yet benefited from advances in deep learning and convolutional neural networks (CNNs) and is still lagging behind other modalities that experienced significant performance gains owing to deep recognition technology. In this paper we address this problem and aim at building a CNN-based ear recognition model. We explore different strategies towards model training with limited amounts of training data and show that by selecting an appropriate model architecture, using aggressive data augmentation and selective learning on existing (pre-trained) models, we are able to learn an effective CNN-based model using a little more than 1300 training images. The result of our work is the first CNN-based approach to ear recognition that is also made publicly available to the research community. With our model we are able to improve on the rank one recognition rate of the previous state-of-the-art by more than 25% on a challenging dataset of ear images captured from the web (a.k.a. in the wild).
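The training strategy described above (a pre-trained backbone, aggressive augmentation, and selective training of only part of the network) can be illustrated with the short PyTorch sketch below. This is not the authors' released model; the choice of ResNet-18, the augmentation parameters, the number of identities and the frozen layers are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Aggressive augmentation to stretch a small training set (on the order of 1300 images) further.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

num_classes = 180  # hypothetical number of training identities

# Start from an ImageNet-pre-trained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# "Selective learning": freeze the early layers, train only the last block and the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
```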