Several painful (or potentially painful) interventions are performed during the hospitalization of a newborn in an intensive care unit. In these situations, identifying pain is challenging because direct, objective verbal communication, as usually occurs among adults, is impossible. In recent decades, several pain scales have been proposed to identify pain through the analysis of human facial expressions, enabling the investigation and creation of non-invasive methods that support not only the early recognition of pain but also a better understanding of this experience. In this context, this dissertation aims to propose and implement a sequence of computational procedures for detecting, interpreting, and classifying patterns in frontal two-dimensional face images for the automatic recognition of pain in newborns. By applying data transformations and extracting statistical features from a database of real images of healthy term newborns created by the UNIFESP research group, together with evaluations of these same images by trained health professionals for pain recognition, it was possible to automatically estimate pain levels in these images on a continuous numerical scale, abstracting the subjectivity of the trained professionals and thus quantifying human knowledge in the task of pain recognition. These results were also compared with classifications of the same images, by the same professionals, using the Neonatal Facial Coding System (NFCS), a clinically validated bedside method. In addition, as further original contributions of this dissertation, reference images named Atlas were generated for the classes "No Pain" and "With Pain", each presenting the average characteristics of its group, and synthetic images of newborn faces were generated with the same characteristics as the original images in the database, expanding the information base with data of high relevance for future studies.