Appearance is known to influence social interactions, which in turn may shape personality development. In this study we examine the relationships between self-reported personality traits, first impressions and facial characteristics. The results show that several personality traits can be read from a face above chance level, and that facial features influence first impressions. Nevertheless, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, can be inferred more reliably from facial features. We have generated artificial, exaggerated faces visualising the characteristics that affect first impressions for several traits. In conclusion, we find a relationship between first impressions, some personality traits and facial features, and confirm that people on average assess a given face in a highly similar manner.
Background: Manual annotation of landmarks is a known source of variance in all fields of medical imaging, influencing the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, in contrast to fields such as orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize variance.

Method: Facial scans from 36 voluntary, unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with minimum point variance.

Results: The anatomical landmarks of the eye were associated with the lowest variance, particularly the centers of the pupils, whereas points on the jaw and eyebrows showed the highest variation. Intra-operator differences and portrait-related variables contributed only marginally to the variability. Using a sparse set of landmarks (n=14) that captures the whole face, the mean dense point variance was reduced from 1.92 mm to 0.54 mm.

Conclusion: The inter-operator variability was primarily associated with particular landmarks, with the more leniently defined landmarks showing the highest variability. The variables embedded in the portrait and the experience of a trained operator had only a marginal influence on the variability. Furthermore, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh capturing all facial features.
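As a rough illustration of how per-landmark inter-operator variability can be quantified, the sketch below computes each landmark's mean Euclidean deviation from its per-subject consensus position. The array layout, dimensions and function name are assumptions made for illustration only; the study itself analyzed the variance components with mixed-model ANOVA.

```python
# Minimal sketch: per-landmark variability from repeated annotations.
# Assumes a NumPy array of shape (n_operators, n_repetitions, n_subjects, n_landmarks, 3)
# holding annotated 3D coordinates; names and layout are illustrative, not the study's pipeline.
import numpy as np

def landmark_variability(coords):
    """Mean Euclidean deviation of each landmark from its per-subject consensus
    position, pooled over operators and repetitions."""
    # Consensus position of each landmark on each subject: average over
    # operators (axis 0) and repetitions (axis 1).
    consensus = coords.mean(axis=(0, 1), keepdims=True)
    # Distance of every single annotation to the consensus position.
    deviations = np.linalg.norm(coords - consensus, axis=-1)
    # Pool over operators, repetitions and subjects -> one value per landmark.
    return deviations.mean(axis=(0, 1, 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 6 operators, 2 repetitions, 36 subjects, 73 landmarks (as in the study),
    # filled with synthetic coordinates standing in for real annotations.
    coords = rng.normal(size=(6, 2, 36, 73, 3))
    per_landmark = landmark_variability(coords)
    print("most variable landmark index:", int(per_landmark.argmax()))
```

Ranking landmarks by this kind of pooled deviation is one simple way to identify which points (e.g. pupil centers versus jaw and eyebrow points) are most reliable for seeding a dense correspondence map.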
The paper presents the process of building and using the SDM of the cochlea. Compared to current best practice, we demonstrate competitive performance and some useful properties of our method.
Recent developments in computational modeling of cochlear implantation make it promising to study the performance of the implant in silico before surgery. However, creating a complete computational model of the patient's anatomy that also includes the geometry of the external device remains challenging. To address this challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical µCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlear anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns constitutive parameters to all components of the finite element model. This model can then be used to study in silico the effects of the electrical stimulation of the cochlear implant. Results are shown on a total of 25 patient models. In all cases, a final mesh suitable for finite element simulations was obtained, in an average time of 94 seconds. The framework has proven to be fast and robust, and is promising for a detailed prognosis of the cochlear implantation surgery. (This is a pre-print version; the final document is available at http://www.springerlink.com.)
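The parallel transport frame mentioned above propagates a local coordinate frame along a curve with minimal twist, which is useful for sweeping an electrode-array cross-section along the cochlear centerline. The sketch below is one possible implementation on a sampled 3D polyline, with the Rodrigues rotation step written out; the centerline input, the seeding of the first normal and the function name are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a parallel transport (rotation-minimizing) frame along a 3D centerline.
import numpy as np

def parallel_transport_frames(points):
    """Return unit tangents, normals and binormals at each point of a polyline,
    propagating the initial normal with minimal rotation between segments."""
    pts = np.asarray(points, dtype=float)
    tangents = np.gradient(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    # Seed the first normal with any direction not parallel to the first tangent.
    seed = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(seed, tangents[0])) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    normal = np.cross(tangents[0], seed)
    normal /= np.linalg.norm(normal)

    normals = [normal]
    for t_prev, t_next in zip(tangents[:-1], tangents[1:]):
        axis = np.cross(t_prev, t_next)
        s = np.linalg.norm(axis)
        c = np.clip(np.dot(t_prev, t_next), -1.0, 1.0)
        if s < 1e-12:                       # tangents (nearly) parallel: carry the normal over
            normals.append(normals[-1])
            continue
        axis /= s
        angle = np.arctan2(s, c)
        n = normals[-1]
        # Rodrigues' rotation: rotate the previous normal by the same rotation
        # that maps the previous tangent onto the next one.
        n_rot = (n * np.cos(angle)
                 + np.cross(axis, n) * np.sin(angle)
                 + axis * np.dot(axis, n) * (1.0 - np.cos(angle)))
        normals.append(n_rot)

    normals = np.array(normals)
    binormals = np.cross(tangents, normals)
    return tangents, normals, binormals
```

In a virtual-insertion setting, the resulting frames define, at each centerline sample, a plane in which the implant cross-section can be placed without introducing spurious twist along the spiral of the cochlea.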