Several studies demonstrate that emotions are spontaneously manifested through different measurable quantities (e.g. vocal and facial expressions), which makes possible a form of automatic estimation of emotion from objective measurements. However, the reliability of such estimations is strongly influenced by the availability of the different sensor modalities used to monitor the affective state of a subject, and the extraction of objective parameters is sometimes thwarted in a noisy and disturbed environment. This paper introduces a personalized emotion estimation approach based on a heterogeneous array of physical sensors for the measurement of vocal, facial, and physiological (electro-cardiogram and electro-dermal) activities. As a proof of concept, changes in the levels of both emotion reactiveness and pleasantness are estimated under critical operative conditions. The estimator model takes advantage of the time-varying selection of the most relevant non-spurious sensor features and of an adaptation of the k-nearest-neighbour paradigm to the continuous identification of the most affine model templates. Once trained, the model proved able to autonomously embed new sensory input and to adapt to unwanted/unpredicted sensor noise or emotion alteration. The proposed approach has been successfully tested on the RECOLA database, a multi-sensorial corpus of spontaneous emotional interactions in French.