There is a close relationship between facial attractiveness and facial features. The shape of the facial features largely determines the level of attractiveness, with the eyes and eyebrows being particularly important. In this article, we propose a method for studying facial attractiveness that combines global face shape with local geometric features of the eyes and eyebrows, assisted by large-scale computational analysis. First, we collected 300 images of East Asian women and used machine learning methods to estimate attractiveness scores for the face images. Second, geometric models were constructed separately for the eyebrows and the eyes to obtain their geometric and shape features, and correlation analysis was performed on the resulting data to study how the two shapes match across different attractiveness rating levels. Finally, the relationship between face shape and eyebrow shape was analyzed by combining facial proportions with the geometric features of the eyebrows. This research can serve as a reference for medical and cosmetic institutions and for makeup practice, and it supports further study of facial aesthetic analysis based on geometric features.
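As a minimal sketch of the kind of analysis described above, the snippet below derives a few illustrative eyebrow/eye geometry features from 2D facial landmarks and correlates them with attractiveness ratings. The landmark indices (68-point convention) and the specific feature definitions are assumptions for illustration, not the exact geometric models used in the article.

```python
# Hedged sketch: correlating simple eyebrow/eye geometry with attractiveness scores.
import numpy as np
from scipy.stats import pearsonr

def eyebrow_eye_features(landmarks):
    """landmarks: (68, 2) array of facial landmark coordinates for one face.
    Returns a small vector of illustrative geometric features (assumed indices)."""
    brow = landmarks[17:22]        # assumed left-eyebrow points
    eye = landmarks[36:42]         # assumed left-eye points
    brow_width = np.linalg.norm(brow[-1] - brow[0])
    brow_arch = brow[:, 1].max() - brow[:, 1].min()      # vertical curvature proxy
    eye_width = np.linalg.norm(eye[3] - eye[0])
    eye_height = np.linalg.norm(eye[5] - eye[1])
    brow_eye_gap = eye[:, 1].mean() - brow[:, 1].mean()  # eyebrow-to-eye distance
    return np.array([brow_width, brow_arch, eye_width / eye_height, brow_eye_gap])

def correlate_with_scores(all_landmarks, scores):
    """all_landmarks: list of (68, 2) arrays; scores: attractiveness ratings."""
    feats = np.array([eyebrow_eye_features(lm) for lm in all_landmarks])
    return [pearsonr(feats[:, j], scores) for j in range(feats.shape[1])]
```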
Combining two types of image dehazing strategies, one based on image enhancement and the other on an atmospheric physical model, this paper proposes a novel method for gray-scale image dehazing. From the image-enhancement-based strategy, the method retains simplicity, effectiveness, and freedom from color distortion, and the standard guided image filter is modified to suit this enhancement setting. Through wavelet decomposition, the high-frequency edges of the original image are preserved in advance. Moreover, the dehazing process can be guided by a scene-depth-proportion image estimated directly from the original gray-scale image. Our method offers brightness consistency and freedom from distortion compared with state-of-the-art methods based on the atmospheric physical model. In particular, it overcomes the essential shortcoming of those methods, which are designed mainly for color images. Meanwhile, an image of the scene depth proportion is obtained as a byproduct of dehazing.
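The two building blocks named above can be sketched as follows: a plain guided filter (He et al. style) and wavelet-based preservation of the original image's high-frequency detail bands. The guide/source roles, filter parameters, and the idea of re-attaching the original detail coefficients are assumptions for illustration; the paper's specific filter modification and depth-proportion estimator are not reproduced here.

```python
# Hedged sketch of a guided filter and wavelet-based edge preservation.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classic guided filter; 'guide' could be the estimated scene-depth-proportion
    image and 'src' the hazy gray-scale image (an assumption)."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_i, mean_p = box(guide), box(src)
    var_i = box(guide * guide) - mean_i ** 2
    cov_ip = box(guide * src) - mean_i * mean_p
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return box(a) * guide + box(b)

def restore_high_frequency(original, dehazed, wavelet="haar"):
    """Keep the dehazed approximation band but re-attach the original image's
    detail bands, so high-frequency edges preserved in advance survive dehazing."""
    _, details = pywt.dwt2(original, wavelet)
    approx, _ = pywt.dwt2(dehazed, wavelet)
    return pywt.idwt2((approx, details), wavelet)
```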
Generating a semantically meaningful and photorealistic face image from a sketch or a text description has long been an important problem in computer vision. Sketch images generally contain only simple profile information rather than facial detail, so it is difficult to generate facial attributes accurately. In this paper, we treat sketch-to-face translation as a face hallucination reconstruction problem. To solve it, we propose an image translation network that exploits attributes within a generative adversarial network; supplementing the sketch image with additional facial attribute features significantly improves the authenticity of the generated face. The generator consists of a feature-extraction network and a downsampling-upsampling network, both of which use skip connections to reduce the number of layers without degrading performance. The discriminator is designed to examine whether the generated faces contain the desired attributes. In the low-level feature extraction phase, unlike most attribute-embedded networks, we fuse the sketch images and attributes perceptually: sub-branches A and B receive the sketch image and the attribute vector, respectively, to extract low-level profile information and high-level semantic features. Compared with state-of-the-art image translation methods, the proposed network performs excellently.
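A minimal PyTorch sketch of the two-branch idea follows: one branch encodes the sketch, the other broadcasts the attribute vector spatially, and the fused features are decoded into a face image. Layer sizes, channel counts, and the fusion operator are illustrative assumptions, and the skip connections and discriminator of the full model are omitted for brevity.

```python
# Hedged sketch of a two-branch sketch + attribute generator.
import torch
import torch.nn as nn

class TwoBranchGenerator(nn.Module):
    def __init__(self, n_attrs=18):
        super().__init__()
        # Branch A: low-level profile information from the sketch
        self.sketch_enc = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Branch B: high-level semantic features from the attribute vector
        self.attr_enc = nn.Linear(n_attrs, 128)
        # Decoder: upsample the fused features back to image resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, sketch, attrs):
        f_sketch = self.sketch_enc(sketch)                        # (B, 128, H/4, W/4)
        f_attr = self.attr_enc(attrs)[:, :, None, None]           # (B, 128, 1, 1)
        f_attr = f_attr.expand(-1, -1, *f_sketch.shape[2:])       # broadcast spatially
        fused = torch.cat([f_sketch, f_attr], dim=1)              # assumed fusion by concat
        return self.decoder(fused)

# Example: TwoBranchGenerator()(torch.randn(2, 1, 64, 64), torch.randn(2, 18))
```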
Facial attractiveness is an important research direction in genetic psychology and cognitive psychology, and its results are significant for the study of face evolution and human evolution. However, previous studies have not put forward a comprehensive evaluation system for facial attractiveness: traditionally, such systems were built on facial geometric features alone, without facial skin features. In this paper, combining big data analysis, evaluations of faces in real society, and a literature review, we find that skin also has a significant impact on facial attractiveness, because skin reflects age, wrinkles, and health, thereby affecting human perception of attractiveness. We therefore propose a comprehensive and novel facial attractiveness evaluation system based on face shape features, facial structure features, and skin texture features. To apply face shape features to attractiveness evaluation, the first step is face shape classification. The face image dataset is divided according to face shape, and facial structure features and skin texture features that represent attractiveness are then extracted and fused. Within each face shape subset, the machine learning algorithm with the best prediction performance is selected to predict facial attractiveness. Experimental results show that classification by face shape and multi-feature fusion improves evaluation performance, and the attractiveness scores produced by the proposed system correlate better with human ratings. Our evaluation system can help people project their perception of facial attractiveness onto the artificial agents they interact with.
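The pipeline described above can be sketched with scikit-learn: split the data by face shape, fuse structure and skin-texture features, and fit one attractiveness regressor per face-shape subset. The concatenation fusion, the SVR model, and the function names are assumptions for illustration; the paper selects the best-performing algorithm per subset rather than fixing one.

```python
# Hedged sketch: per-face-shape regression on fused structure + skin features.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_per_shape_regressors(structure_feats, skin_feats, shape_labels, scores):
    """structure_feats: (N, d1), skin_feats: (N, d2), shape_labels: (N,) face-shape
    classes, scores: (N,) human attractiveness ratings."""
    fused = np.hstack([structure_feats, skin_feats])   # simple concatenation fusion (assumed)
    models = {}
    for shape in np.unique(shape_labels):
        idx = shape_labels == shape
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
        model.fit(fused[idx], scores[idx])             # one regressor per face-shape subset
        models[shape] = model
    return models

def predict_score(models, shape, structure_feat, skin_feat):
    x = np.hstack([structure_feat, skin_feat]).reshape(1, -1)
    return float(models[shape].predict(x)[0])
```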
Digital watermarking is a technique used to protect an author's copyright and has become widespread with the rapid development of multimedia technologies. In this paper, a novel watermarking algorithm using the nonsubsampled shearlet transform is proposed, which exploits the directional edge features of an image. A shearlet provides an optimal multiresolution and multidirectional representation of an image with distributed discontinuities such as edges, which ensures that the embedded watermark does not blur the image. In the proposed algorithm, the nonsubsampled shearlet transform decomposes the cover image into directional subbands, each representing different directional and textural features. The subband with the strongest texture directionality is selected to carry the watermark, making the embedding well suited to the human visual system. Next, singular value decomposition is performed on the selected subband. Finally, the watermark is embedded in the singular value matrix, which benefits both robustness and invisibility. In comparison with related watermarking algorithms based on the discrete wavelet transform and nonsubsampled contourlet transform domains, experimental results demonstrate that the proposed scheme is highly robust against scaling, cropping, and compression.
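The SVD embedding step can be sketched as below, following the classic scheme in which the watermark is added to the singular-value matrix of the selected subband and the modified singular values are substituted back. This is a generic SVD-watermarking sketch under stated assumptions, not necessarily the paper's exact procedure, and the nonsubsampled shearlet decomposition and subband selection are assumed to happen outside this snippet.

```python
# Hedged numpy sketch of SVD-based embedding into a selected directional subband.
import numpy as np

def embed_watermark_svd(subband, watermark, alpha=0.05):
    """subband: 2D array (selected subband); watermark: square array with side
    min(subband.shape); alpha: embedding strength."""
    U, S, Vt = np.linalg.svd(subband, full_matrices=False)
    # Embed into the singular-value matrix, then re-decompose it
    Uw, Sw, Vtw = np.linalg.svd(np.diag(S) + alpha * watermark)
    watermarked = U @ np.diag(Sw) @ Vt          # substitute the modified singular values
    side_info = (Uw, Vtw, S)                    # kept for extraction
    return watermarked, side_info

def extract_watermark_svd(watermarked, side_info, alpha=0.05):
    Uw, Vtw, S = side_info
    _, Sw_attacked, _ = np.linalg.svd(watermarked, full_matrices=False)
    D = Uw @ np.diag(Sw_attacked) @ Vtw         # rebuild the embedded matrix
    return (D - np.diag(S)) / alpha             # estimate of the watermark
```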