Correct recognition of nonverbal expressions is currently one of the most important challenges in human-computer interaction research. The ability to recognize human actions could change the way we interact with machines in many environments and contexts, or even the way we live. In this paper, we describe the advances over a previous study aimed at designing, implementing, and validating an innovative recognition system already developed by some of the authors. The system recognizes two opposite emotional conditions (resonance and dissonance) of a candidate for a job position interacting with the recruiter during a job interview. Results in terms of accuracy, resonance rate, and dissonance rate of the three new optimized neural network-based (NN) classifiers are discussed. A comparison with the previous results of three NN classifiers, each based on a single domain (facial, vocal, and gestural), is also presented.
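For readers unfamiliar with the reported metrics, the minimal sketch below shows one plausible way to compute accuracy, resonance rate, and dissonance rate from binary predictions, assuming the two rates are per-class recall over resonance and dissonance samples; the label convention and the function `interview_metrics` are illustrative assumptions, not the authors' actual evaluation code.

```python
import numpy as np

def interview_metrics(y_true, y_pred):
    """Accuracy, resonance rate, and dissonance rate for a binary
    resonance/dissonance classifier.

    Assumed convention: 1 = resonance, 0 = dissonance; the two rates
    are taken here as per-class recall (fraction of resonance samples
    classified as resonance, and likewise for dissonance).
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    accuracy = float(np.mean(y_true == y_pred))
    resonance_rate = float(np.mean(y_pred[y_true == 1] == 1))
    dissonance_rate = float(np.mean(y_pred[y_true == 0] == 0))
    return accuracy, resonance_rate, dissonance_rate

if __name__ == "__main__":
    # Dummy ground truth and predictions for illustration only
    truth = [1, 1, 0, 0, 1, 0, 1, 0]
    preds = [1, 0, 0, 0, 1, 1, 1, 0]
    acc, res, dis = interview_metrics(truth, preds)
    print(f"accuracy={acc:.2f} resonance_rate={res:.2f} dissonance_rate={dis:.2f}")
```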