Promoting the social inclusion of people who are deaf and/or have communication disabilities is a priority for many countries. Considerable attention is therefore being given to machine learning (ML) and deep learning (DL) techniques for sign language recognition and translation, as they can contribute significantly to the social inclusion of the deaf and hard-of-hearing community. Accordingly, this research proposes a translator for Peruvian Sign Language (PSL) that recognizes and translates static PSL signs using a convolutional neural network (CNN). These signs are the digits 0 to 9 and the letters of the alphabet, excluding J, Ñ and Z, which are represented with moving signs, and O and W, which closely resemble the digits "0" and "6". To develop the translator, a balanced PSL database was built from scratch, consisting of 700 images per static sign, for a total of 22,400 images. These 80x80-pixel images pass through a preprocessing stage and then through three convolutional layers with their filters and kernels, ReLU activation functions, and MaxPooling layers. Experimental results show that the translator recognizes static PSL signs with accuracies of 90%, 86% and 81% for training, validation and testing, respectively.
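The abstract describes 80x80-pixel inputs flowing through three convolutional layers with MaxPooling. As a minimal sketch of what that pipeline implies for feature-map sizes, the snippet below traces the spatial dimensions through three conv + pooling stages. The 3x3 kernels, "same" padding, and 2x2 pooling windows are assumptions for illustration; the paper does not specify these hyperparameters here.

```python
# Hedged sketch: trace the spatial size of an 80x80 input through three
# conv + max-pooling blocks, as the abstract describes.
# Assumptions (not stated in the abstract): 3x3 kernels, stride 1,
# "same" padding (p=1), and 2x2 max pooling with stride 2.

def conv_out(size, kernel=3, stride=1, padding=1):
    """Output size of a convolution: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, window=2, stride=2):
    """Output size of a max-pooling layer."""
    return (size - window) // stride + 1

size = 80  # input images are 80x80 pixels
for block in range(1, 4):  # three convolutional layers per the abstract
    size = conv_out(size)  # "same" padding preserves the size
    size = pool_out(size)  # 2x2 pooling halves it
    print(f"after conv+pool block {block}: {size}x{size}")
# → 40x40, then 20x20, then 10x10
```

Under these assumptions the final 10x10 feature maps would be flattened and fed to dense layers for 32-way classification (10 digits + 22 letters).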