Hand gestures are one of the methods used in sign language for non-verbal communication. Sign language is most commonly used by hearing- and speech-impaired people to communicate among themselves or with others. Developing sign language applications is therefore valuable, as it allows hearing- and speech-impaired people to communicate easily even with those who do not understand sign language. This project takes a basic step toward bridging the communication gap between hearing people and hearing- and speech-impaired people who use sign language. The main focus of this work is a vision-based system that identifies sign language gestures from video sequences. A vision-based approach was chosen because it provides a simpler and more intuitive way for a human to communicate with a computer. Video sequences contain both spatial and temporal features, so two different models are trained. A deep Convolutional Neural Network (CNN) is trained on the spatial features, using the individual frames extracted from the training videos, while a Recurrent Neural Network (RNN) is trained on the temporal features. The trained CNN makes a prediction for each frame, producing a sequence of predictions, and this sequence is then fed to the RNN to learn the temporal structure. Together, the trained CNN and RNN produce the text output for the corresponding gesture.
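A minimal sketch of this two-stage pipeline is shown below, assuming fixed-size RGB frames and a small illustrative set of gesture classes; the layer sizes, helper names (build_frame_cnn, build_sequence_rnn), and sequence length are assumptions for illustration, not the authors' exact architecture.

```python
# Sketch: CNN classifies each frame (spatial features); an LSTM consumes the
# sequence of per-frame prediction vectors (temporal features).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10              # assumed number of gesture classes
FRAME_SHAPE = (64, 64, 3)     # assumed frame size
SEQ_LEN = 30                  # assumed number of frames per clip

def build_frame_cnn():
    """CNN that classifies a single frame."""
    return models.Sequential([
        layers.Input(shape=FRAME_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def build_sequence_rnn():
    """RNN that maps the sequence of per-frame predictions to a gesture label."""
    return models.Sequential([
        layers.Input(shape=(SEQ_LEN, NUM_CLASSES)),
        layers.LSTM(64),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

cnn = build_frame_cnn()
rnn = build_sequence_rnn()
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
rnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Inference on one clip: per-frame predictions -> sequence -> gesture label.
clip = np.random.rand(SEQ_LEN, *FRAME_SHAPE).astype("float32")  # stand-in for real frames
frame_preds = cnn.predict(clip, verbose=0)                      # (SEQ_LEN, NUM_CLASSES)
gesture = rnn.predict(frame_preds[np.newaxis, ...], verbose=0)  # (1, NUM_CLASSES)
print("predicted gesture class:", int(gesture.argmax()))
```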
H.265, also called High Efficiency Video Coding (HEVC), is the international video coding standard developed by the Joint Collaborative Team on Video Coding (JCT-VC) and released in 2013 in response to the constantly increasing demand for video applications. The new standard roughly halves the bitrate compared with its predecessor H.264, at the expense of a large computational burden on the encoder. The proposed work focuses on the intra-prediction phase of encoding, where 33 new angular modes are introduced in addition to the DC and Planar modes in order to achieve high-quality video at higher resolutions. We propose applying machine learning to HEVC intra prediction to accelerate the angular mode decision process. The features used are low-complexity features requiring minimal computation, so they place no additional burden on the encoder. The decision tree model built is simple yet efficient, which is exactly what a complexity-reduction scenario requires. The proposed method achieves a substantial average encoding time saving of 86.59% for QP values 4, 22, 27, and 32, with a minimal PSNR loss of 0.033 dB and an SSIM loss of 0.0023, which makes it suitable for the adoption of High Efficiency Video Coding in real-time applications.
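The sketch below illustrates the general idea of a decision-tree-based angular mode decision under stated assumptions: the features (block variance and mean horizontal/vertical gradients), block size, labels, and candidate-pruning rule are illustrative, not the paper's exact design.

```python
# Sketch: a shallow decision tree predicts a likely intra mode from cheap
# block statistics, so the encoder runs full RDO on only a few candidates
# instead of all 35 modes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def block_features(block: np.ndarray) -> np.ndarray:
    """Low-complexity features for one intra prediction block."""
    gx = np.abs(np.diff(block.astype(np.int32), axis=1)).mean()  # horizontal gradient
    gy = np.abs(np.diff(block.astype(np.int32), axis=0)).mean()  # vertical gradient
    return np.array([block.var(), gx, gy, gx / (gy + 1e-6)])

# Offline training: blocks labelled with the best mode found by the full RDO
# search (0 = Planar, 1 = DC, 2..34 = angular). Labels here are synthetic.
rng = np.random.default_rng(0)
train_blocks = rng.integers(0, 256, size=(1000, 8, 8))
train_best_mode = rng.integers(0, 35, size=1000)           # stand-in labels
X = np.stack([block_features(b) for b in train_blocks])
tree = DecisionTreeClassifier(max_depth=5)                 # shallow tree = cheap to evaluate
tree.fit(X, train_best_mode)

# In the encoder: test only the predicted mode's neighbourhood with full RDO,
# which is where the encoding time saving comes from.
block = rng.integers(0, 256, size=(8, 8))
predicted_mode = int(tree.predict(block_features(block)[np.newaxis])[0])
candidates = sorted({0, 1, predicted_mode - 1, predicted_mode, predicted_mode + 1} & set(range(35)))
print("modes to test with full RDO:", candidates)
```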
A genetic disorder is a health condition usually caused by mutations in DNA or by changes in the number or overall structure of chromosomes. Several commonly known diseases are related to hereditary gene mutations. Genetic testing aids patients in making important decisions about the prevention, treatment, or early detection of hereditary disorders. With a growing population, studies have shown an exponential increase in the number of genetic disorders. Genetic disorders affect not only the physical health but also the psychological and social well-being of patients and their families. Like many chronic conditions, they may require continual attention and often lack cures or effective treatments. Low awareness of the importance of genetic testing contributes to the rising incidence of hereditary disorders; many children succumb to them, which makes genetic testing during pregnancy extremely important. In that direction, this project aims to predict the genetic disorder and its subclass using a machine learning model trained on a medical dataset. The model, composed of a predictor and two classifiers, predicts the presence of a genetic disorder and, if one is present, further specifies the disorder and its subclass.
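A minimal sketch of the predictor-plus-two-classifiers layout is shown below, using scikit-learn on placeholder tabular data; the classifier choice (random forests), feature count, and label sets are assumptions for illustration, not the actual medical dataset or model.

```python
# Sketch: stage 1 predicts whether a disorder is present; stages 2 and 3
# classify the disorder and its subclass for positive cases only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n, n_features = 500, 12                               # stand-in for patient records
X = rng.normal(size=(n, n_features))
has_disorder = rng.integers(0, 2, size=n)             # stage 1 target (synthetic)
disorder = rng.integers(0, 3, size=n)                 # stage 2 target (synthetic)
subclass = rng.integers(0, 9, size=n)                 # stage 3 target (synthetic)

presence_model = RandomForestClassifier().fit(X, has_disorder)
disorder_model = RandomForestClassifier().fit(X[has_disorder == 1], disorder[has_disorder == 1])
subclass_model = RandomForestClassifier().fit(X[has_disorder == 1], subclass[has_disorder == 1])

def predict_patient(x: np.ndarray) -> dict:
    """Run the three-stage prediction for one patient record."""
    x = x.reshape(1, -1)
    if presence_model.predict(x)[0] == 0:
        return {"disorder_present": False}
    return {
        "disorder_present": True,
        "disorder": int(disorder_model.predict(x)[0]),
        "subclass": int(subclass_model.predict(x)[0]),
    }

print(predict_patient(X[0]))
```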
In the field of medical image processing, brain tumor detection and segmentation from MRI scans has become one of the most important and challenging research areas. Manual detection and segmentation of brain tumors from brain MRI scans requires substantial human effort per patient, is tedious, and suffers from large intra- and inter-observer variability. Hence, there is high demand for automatic brain tumor detection and segmentation from brain MR images to overcome the limitations of manual segmentation. Researchers have proposed a number of methods, but no completely automated system has been developed yet, owing to accuracy and robustness issues. This paper therefore reviews the methods and techniques used to detect and segment brain tumors through MRI segmentation. Finally, the paper concludes with an efficient hybrid method based on the proposed Gaussian Mixture Model (GMM), which shows high accuracy in brain tumor detection.
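The sketch below shows the core GMM idea on a single MR slice, assuming intensity-only features, a fixed number of tissue classes, and a hyperintense lesion; the hybrid pipeline discussed in the review would add preprocessing and post-processing steps not shown here.

```python
# Sketch: cluster voxel intensities with a Gaussian Mixture Model and take
# the brightest component as the tumor-candidate region.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_slice(slice_2d: np.ndarray, n_classes: int = 4) -> np.ndarray:
    """Return a boolean mask of the tumor-candidate pixels in one MR slice."""
    intensities = slice_2d.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(intensities)
    labels = gmm.predict(intensities).reshape(slice_2d.shape)
    # Heuristic assumption: the brightest component corresponds to the lesion,
    # valid only for modalities where the tumor appears hyperintense.
    tumor_component = int(np.argmax(gmm.means_.ravel()))
    return labels == tumor_component

# Synthetic stand-in for a 128x128 MR slice with a bright circular "lesion".
slice_2d = np.random.normal(100, 10, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
slice_2d[(yy - 64) ** 2 + (xx - 64) ** 2 < 15 ** 2] += 120
mask = segment_slice(slice_2d)
print("tumor-candidate pixels:", int(mask.sum()))
```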