Detecting emotions is useful in many fields, from healthcare to human-computer interaction. In this paper, we propose an iterative, user-centered methodology to support the development of an emotion detection system based on low-cost sensors. Artificial Intelligence techniques are adopted for emotion classification: different kinds of Machine Learning classifiers are experimentally trained on users' biometric data, such as heart rate, movement, and audio. The system was developed in two iterations and, at the end of each, the performance of the classifiers (MLP, CNN, LSTM, Bidirectional LSTM, and Decision Tree) was compared. After each experiment, the SAM questionnaire was administered to evaluate the user's affective state while using the system. In the first experiment we gathered data from 47 participants; in the second, an improved version of the system was trained and validated with 107 people. The emotional analysis conducted at the end of each iteration suggests that reducing the invasiveness of the device may affect user perceptions and also improve classification performance.