• An automatic system translates Arabic Sign Language to improve deaf-hearing communication.
• The KNN algorithm achieved 86.4% accuracy in recognizing 32 Arabic Sign Language letters.
• The system bridges the communication gap between sign language users and non-users.

People need a channel through which to communicate with one another, and deaf people communicate with others through sign language. The rapid development of image and video recognition systems has led researchers to apply these techniques to many problems, including sign language recognition for the deaf, reducing the difficulty deaf people face when communicating with hearing people. This work applies machine learning to build an automatic recognition system for Arabic Sign Language (ArSL). Images of ArSL characters were recognized using four classification techniques (Naïve Bayes (NB), Decision Trees (DTs), Adaptive Boosting (AdaBoost), and K-Nearest Neighbor (KNN)), implemented with Python libraries, together with two feature extraction algorithms (PCA and LDA). Data pre-processing steps, including grayscale conversion, Gaussian blur, histogram equalization, and resizing, are applied to make the data suitable for training and testing. The work was evaluated in five experiments with different training/test splits: 90%, 80%, 75%, 70%, and 60% of the data used for training. The results show that the system interprets ArSL effectively and that the KNN algorithm gives the most accurate predictions.
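The pipeline described above (pre-processing, feature extraction, classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the image size (32×32), the number of PCA components, k = 3 for KNN, and the synthetic data standing in for the ArSL letter images are all assumptions made for the sake of a runnable example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def preprocess(img):
    """Grayscale image -> blurred, histogram-equalized, resized feature vector."""
    img = gaussian_filter(img.astype(float), sigma=1.0)      # Gaussian blur
    # Histogram equalization via the cumulative distribution function
    hist, bins = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = 255 * cdf / cdf[-1]
    img = np.interp(img.ravel(), bins[:-1], cdf).reshape(img.shape)
    # Resize by 2x2 block averaging (a stand-in for a library resize call)
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return img.ravel()

# Synthetic stand-in dataset: 32 classes (one per ArSL letter), 20 images each,
# generated as a fixed random template per class plus pixel noise.
templates = rng.integers(0, 200, size=(32, 32, 32))
X = np.stack([
    preprocess(np.clip(templates[label] + rng.normal(0, 20, (32, 32)), 0, 255))
    for label in range(32) for _ in range(20)
])
y = np.repeat(np.arange(32), 20)

# One of the paper's splits: 90% training / 10% testing
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.1, random_state=0, stratify=y)

pca = PCA(n_components=30).fit(X_tr)                      # feature extraction
knn = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(X_tr), y_tr)
acc = accuracy_score(y_te, knn.predict(pca.transform(X_te)))
print(f"KNN accuracy on held-out split: {acc:.3f}")
```

On real ArSL images the loading step, image size, and hyperparameters would differ; the point is only the order of operations the abstract names: blur, equalize, resize, project with PCA, then classify with KNN.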