Classification complexity is the main challenge in recognizing sign language automatically through computer vision, in this case classifying images of Indonesian Sign Language (SIBI). Automatic recognition aims to facilitate communication between deaf or mute individuals and hearing individuals, with the potential to increase social inclusion and accessibility for the disabled community. This research compares the performance of two classification algorithms, a neural network and a multi-layer perceptron, in letter recognition, measuring accuracy and precision in letter pattern recognition and providing a foundation for the development of better sign language recognition technology in the future. The dataset consists of 32,850 digital images of SIBI letters converted into alphabetic sign language parameters that represent active signs. The developed system outputs alphabet class labels and their probabilities, which can serve as a reference for the development of more sophisticated sign language recognition models. In testing, the neural network method achieved good discrimination with precision, recall, and accuracy of approximately 81%, while the multi-layer perceptron method achieved approximately 86%, indicating the applicative potential of both methods in the context of sign language recognition. The two normalization methods were each tested four times and compared on the normalized data, providing further insight into the effectiveness and reliability of normalization techniques in improving the performance of sign language recognition systems.
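
The following is a minimal sketch, not the authors' implementation, of the multi-layer perceptron classification and evaluation pipeline the abstract describes, written with scikit-learn. The image resolution (64x64 grayscale), the min-max normalization choice, the train/test split, the hidden-layer sizes, and the placeholder `load_sibi_dataset` loader are all illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score


def load_sibi_dataset():
    """Hypothetical loader: returns flattened grayscale SIBI letter images X
    of shape (n_samples, 64*64) and alphabet class labels y.
    Replace with the real dataset loading code."""
    rng = np.random.default_rng(0)
    X = rng.random((1000, 64 * 64))     # dummy pixel intensities in [0, 1]
    y = rng.integers(0, 26, size=1000)  # dummy alphabet class indices
    return X, y


X, y = load_sibi_dataset()

# Min-max normalization of pixel intensities (one possible normalization
# scheme; the study compares two normalization methods, unspecified here).
X = (X - X.min()) / (X.max() - X.min() + 1e-8)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Multi-layer perceptron classifier; hidden-layer sizes are assumed.
mlp = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=200, random_state=42)
mlp.fit(X_train, y_train)

y_pred = mlp.predict(X_test)
proba = mlp.predict_proba(X_test)  # per-class probabilities, as the abstract mentions

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_test, y_pred, average="macro", zero_division=0))
```

The same pipeline can be repeated with a different classifier or normalization method and the macro-averaged precision, recall, and accuracy compared, mirroring the comparison of the two methods reported above.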