Physical disability is an aspect of human life that cannot be disregarded. Deaf people, who are unable to hear, communicate through sign language, and American Sign Language (ASL) is one of the most widely used sign languages among the deaf community. ASL represents letters and words through a set of distinct hand gestures and hand shapes captured as images. In this study, we introduce a feature-based algorithmic analysis to build an effective model for ASL hand gesture recognition, which can be learned efficiently to make a machine intelligent. We derive a set of informative features from digital images of hand gestures for efficient machine learning. For preprocessing, histogram equalization and an anisotropic diffusion filter are applied. A robust histogram of oriented gradients (HOG) feature extraction method is then proposed to extract image features, and three different machine learning classifiers are evaluated for the classification step. To test our model, experiments are performed on the American Sign Language MNIST dataset. Using HOG features with a Support Vector Machine classifier, the system achieves high sensitivity, specificity, and accuracy (99.8%, 98.9%, and 99.6%, respectively). We conclude that the proposed model is an efficient sign language recognition system.

Keywords: computer vision, machine learning, histogram of oriented gradients, hand-gesture recognition.
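The following is a minimal Python sketch of the pipeline described above, assuming the Sign Language MNIST images are available as 28x28 grayscale NumPy arrays (the names X_train, y_train, X_test, y_test are placeholders). The anisotropic diffusion step is a simplified Perona-Malik scheme, and the HOG and SVM parameters shown are illustrative defaults, not necessarily those used in the reported experiments.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score


def histogram_equalization(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    return cdf[img].astype(np.uint8)


def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.1):
    """Simplified Perona-Malik anisotropic diffusion (4-neighbour scheme)."""
    img = img.astype(np.float64)
    for _ in range(n_iter):
        # Differences toward the four neighbours
        d_n = np.roll(img, -1, axis=0) - img
        d_s = np.roll(img, 1, axis=0) - img
        d_e = np.roll(img, -1, axis=1) - img
        d_w = np.roll(img, 1, axis=1) - img
        # Exponential edge-stopping conduction coefficients
        c_n = np.exp(-(d_n / kappa) ** 2)
        c_s = np.exp(-(d_s / kappa) ** 2)
        c_e = np.exp(-(d_e / kappa) ** 2)
        c_w = np.exp(-(d_w / kappa) ** 2)
        img += gamma * (c_n * d_n + c_s * d_s + c_e * d_e + c_w * d_w)
    return img


def extract_features(images):
    """Preprocess each image and compute its HOG descriptor."""
    feats = []
    for img in images:
        eq = histogram_equalization(img.astype(np.uint8))
        smooth = anisotropic_diffusion(eq)
        feats.append(hog(smooth, orientations=9,
                         pixels_per_cell=(4, 4),
                         cells_per_block=(2, 2)))
    return np.array(feats)


# X_train, y_train, X_test, y_test are assumed to hold the Sign Language
# MNIST images (28x28 grayscale) and their class labels.
# X_train_feat = extract_features(X_train)
# X_test_feat = extract_features(X_test)
# clf = SVC(kernel="rbf").fit(X_train_feat, y_train)
# print("Accuracy:", accuracy_score(y_test, clf.predict(X_test_feat)))
```

In this sketch the Support Vector Machine stands in for the best-performing of the three classifiers mentioned above; the other two classifiers could be substituted by swapping the estimator while keeping the same preprocessing and HOG feature pipeline.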