BACKGROUND: The association between maxillary development and vector relationships is used in the field of plastic surgery, but the validity of this principle has not yet been tested.
AIM: The aim of this study is to determine whether visual classification of anterior malar projection using vector relationships is supported by cephalometric analysis.
MATERIALS AND METHODS: Forty normal, healthy subjects aged 10–15 years with no history of orthodontic treatment, craniofacial syndromes, or trauma formed the study group. Based on visual assessment of the vector relationship (positive or negative), the subjects were divided into two groups (Group A and Group B) of 20 subjects each. Vectors were drawn on the profile photographs, and the Sella–Nasion–Orbitale (SNO) angle was traced using the Nemoceph software. The classification of anterior malar projection obtained from the profile photographs was compared with that obtained from the lateral cephalograms, and the data were subjected to statistical analysis.
RESULTS: The skeletal difference between the positive and negative vector groups, based on SNO angles, was statistically significant (P < 0.001). SNO angulations in the negative vector group were smaller than those in the positive vector group by an average of 5.9°.
CONCLUSIONS: Visual assessment of the vector relationship can be used effectively to classify anterior malar projection. It also helps in diagnosing maxillary hypoplasia and in planning appropriate treatment modalities.
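As an illustration only (the abstract does not name the exact statistical test used), the sketch below shows how a between-group comparison of SNO angles could be run as an independent-samples t-test in Python; the angle values, group means, and the choice of t-test are assumptions for demonstration, not the study's data or method.

```python
# Hedged sketch (not the study's actual analysis): comparing mean SNO angles
# between the positive- and negative-vector groups with an independent t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical SNO angle measurements in degrees; 20 subjects per group,
# with the negative-vector group centred roughly 5.9 degrees lower, purely
# to mirror the magnitude of difference reported in the abstract.
sno_positive = rng.normal(loc=57.0, scale=2.0, size=20)
sno_negative = rng.normal(loc=57.0 - 5.9, scale=2.0, size=20)

t_stat, p_value = stats.ttest_ind(sno_positive, sno_negative)
print(f"mean difference = {sno_positive.mean() - sno_negative.mean():.1f} deg")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```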
Facial expression recognition is a critical component of human emotion detection in the field of image processing. Depending on whether expressions are detected from still images or from video, the application may be a soft or hard real-time system. Social communication with any subject can take a verbal or non-verbal form, and expression and emotion detection rely entirely on non-verbal cues and facial expression. Machine learning plays an important role in recognizing facial expressions. In this paper, an attempt is made to analyse the performance of convolutional neural network (CNN) models with diverse activation layers. First, the facial expression datasets are obtained from http://www.consortium.ri.cmu.edu/ckagree/ and http://app.visgraf.impa.br/database/faces/ and subjected to data preprocessing. Second, data analysis is performed to examine the distribution of expression images across the training and testing datasets. Third, faces are detected with a Haar cascade and the images are cropped to 350 × 350 pixels. Fourth, the facial expression images are normalized and bottleneck features are created for the training and testing data. Fifth, the training dataset is fitted with sequential CNN models using various activation layers: Sigmoid, ELU, ReLU, SELU, Tanh, Softsign and Softplus. Sixth, performance is analysed in terms of loss and accuracy at every epoch for all CNN models and activation layers. Experimental results show that the sequential CNN model with the ReLU activation layer achieves an accuracy of 100%.
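A minimal sketch of the pipeline described above, assuming OpenCV's bundled Haar frontal-face cascade, 350 × 350 crops, simple [0, 1] normalization, and a small Keras sequential CNN whose activation can be swapped among the listed functions; the layer sizes, class count, and training call are illustrative assumptions, not the authors' exact model.

```python
import cv2
import numpy as np
from tensorflow.keras import layers, models

# Face detection with OpenCV's bundled Haar frontal-face cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(image_path, size=(350, 350)):
    """Detect the largest face, crop it, resize to 350x350, scale to [0, 1]."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep largest detection
    face = cv2.resize(gray[y:y + h, x:x + w], size)
    return face.astype("float32") / 255.0  # normalization step

def build_cnn(activation="relu", num_classes=7, input_shape=(350, 350, 1)):
    """Small sequential CNN; the activation argument allows sigmoid, elu, relu,
    selu, tanh, softsign and softplus to be compared, as in the study."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation=activation),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation=activation),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation=activation),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example: build one model per activation and record loss/accuracy per epoch.
# X_train, y_train would come from the preprocessed CK+ / IMPA face images.
for act in ["sigmoid", "elu", "relu", "selu", "tanh", "softsign", "softplus"]:
    model = build_cnn(activation=act)
    # history = model.fit(X_train, y_train, epochs=20, validation_split=0.2)
```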