Occlusion due to eyeglasses is one of the main challenges in face and general ocular recognition, including eyebrow matching. In this study, the authors propose a convolutional neural network (CNN)-based method for (a) eyeglasses detection and segmentation, to mitigate their impact on personal recognition on mobile devices, and (b) using the shape of the glasses as a soft token of identity (something that one has). They evaluated the efficacy of the proposed eyeglasses segmentation on eyebrow matching and eyeglasses-based user authentication. To this end, various texture and deep features were evaluated. Using a publicly available large-scale visible ocular biometric dataset, they show that the proposed methods provide (a) eyeglasses detection and segmentation accuracies of 100% and 97%, respectively, using CNNs, (b) a 2.51% reduction in eyebrow matching error by removing eyeglass occlusions, and (c) eyeglasses matching with 96.6% accuracy.
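The abstract above does not give implementation details, but as a rough illustration of how a segmentation mask could be used to suppress eyeglass occlusion before eyebrow matching, the following minimal Python sketch discards pixels a (hypothetical) CNN has flagged as eyeglasses and compares simple intensity histograms of the remaining eyebrow region. The function names, the random stand-in data, and the histogram-intersection matcher are illustrative assumptions only, not the authors' actual pipeline, which relies on texture and deep features.

```python
import numpy as np

def masked_histogram(gray_roi, glasses_mask, bins=32):
    """Intensity histogram of an eyebrow region, ignoring pixels flagged
    as eyeglasses by a (hypothetical) CNN segmentation mask.

    gray_roi     : 2-D uint8 array, cropped eyebrow region
    glasses_mask : 2-D bool array, True where eyeglass pixels were detected
    """
    valid = gray_roi[~glasses_mask]              # keep only non-occluded pixels
    hist, _ = np.histogram(valid, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)             # normalize to a distribution

def match_score(hist_a, hist_b):
    """Simple histogram-intersection similarity in [0, 1]."""
    return float(np.minimum(hist_a, hist_b).sum())

# Toy usage with random data standing in for real crops and CNN output.
rng = np.random.default_rng(0)
roi_probe   = rng.integers(0, 256, size=(64, 128), dtype=np.uint8)
roi_gallery = rng.integers(0, 256, size=(64, 128), dtype=np.uint8)
mask_probe  = np.zeros((64, 128), dtype=bool)
mask_probe[20:40, :] = True                      # pretend a glasses rim crosses the brow

score = match_score(masked_histogram(roi_probe, mask_probe),
                    masked_histogram(roi_gallery, np.zeros_like(mask_probe)))
print(f"similarity: {score:.3f}")
```

The point of the sketch is only the masking step: occluded pixels never enter the descriptor, so the comparison is driven by visible eyebrow texture rather than by the eyeglass frame.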
A novel scheme is presented for image compression using a compatible form called Chimera, which defines a new transformation of the image pixels. Compression methods generally divide the image into small parts called blocks. These blocks contain a limited set of predictable patterns, such as flat areas, simple slopes, and single edges. The block content therefore represents a special form of data that can be re-formed using simple masks to obtain a compressed representation. The compressed representation differs according to the transform function that serves as the preprocessing operation prior to the coding step. The cost of any image transformation is characterized by two main parameters: the size of the compressed block and the error in the reconstructed block. The proposed Chimera Transform (CT) shows robustness compared with other transforms such as the Discrete Cosine Transform (DCT), Wavelet Transform (WT) and Karhunen-Loeve Transform (KLT). The suggested approach is designed to compress a specific data type, namely images, which is its first powerful characteristic. Additionally, the image reconstructed with the Chimera Transform has a small compressed size with low error, which can be considered the second characteristic of the approach. The results show a Peak Signal to Noise Ratio (PSNR) improvement of 2.0272 dB over DCT, 1.179 dB over WT and 4.301 dB over KLT, and a Structural Similarity Index Measure (SSIM) improvement of 0.1108 over DCT, 0.051 over WT and 0.175 over KLT.
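The trade-off between compressed block size and reconstruction error described above can be illustrated with a standard block-wise DCT baseline, one of the transforms the Chimera Transform is compared against. The Python sketch below is a toy example under stated assumptions (8x8 blocks, keeping only the largest-magnitude coefficients, PSNR in dB); it is not the Chimera Transform itself, whose construction is not detailed in the abstract.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling."""
    return dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)

def idct2(coeffs):
    """Inverse 2-D DCT."""
    return idct(idct(coeffs, norm='ortho', axis=0), norm='ortho', axis=1)

def compress_block(block, keep=3):
    """Keep only the `keep` largest-magnitude DCT coefficients of a block.
    Returns (sparse coefficient array, number of stored coefficients)."""
    c = dct2(block.astype(np.float64))
    thresh = np.sort(np.abs(c).ravel())[-keep]
    c[np.abs(c) < thresh] = 0.0
    return c, int(np.count_nonzero(c))

def psnr(orig, recon):
    """Peak Signal to Noise Ratio in dB for 8-bit images."""
    mse = np.mean((orig.astype(np.float64) - recon) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# Toy 8x8 "simple slope" block, the kind of predictable pattern the abstract mentions.
block = np.tile(np.linspace(0, 255, 8), (8, 1))
coeffs, stored = compress_block(block, keep=3)
recon = idct2(coeffs)
print(f"stored coefficients: {stored}/64, PSNR: {psnr(block, recon):.2f} dB")
```

Lowering `keep` shrinks the stored representation but raises the reconstruction error, which is exactly the two-parameter cost (compressed block size versus reconstructed-block error) that the abstract uses to compare transforms.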