Future smart agents, such as robots, should produce personalized behaviours based on users' emotions and moods in order to fit better into ordinary users' activities. Moreover, emotions are linked to the human cognitive system, so their monitoring could be extremely useful in the case of neurodegenerative diseases such as dementia and Alzheimer's disease. Works in the literature propose the use of music tracks and videos to stimulate emotions, and cameras to record the evoked reactions in human beings. However, these approaches may not be effective in everyday life, due to camera obstructions and to different types of stimulation, including those arising from interaction with other human beings. In this work, we investigate the electrocardiogram (ECG), electrodermal activity (EDA), and brain activity signals as the main informative channels, acquired through a wireless wearable sensor network. An experimental methodology was designed to induce three different emotional states through social interaction. The collected data were classified with three supervised machine learning approaches (Support Vector Machine with different kernels, Decision Tree, and k-nearest neighbour), considering the valence dimension and a combination of the valence and arousal dimensions evoked during the interaction. Thirty-four healthy young participants were involved in the study, and a total of 239 instances were analyzed. The supervised algorithms achieved an accuracy of 0.877 in the best case.
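To illustrate the kind of classifier comparison described above, the following is a minimal sketch using scikit-learn (an assumption; the actual toolchain, extracted features, and hyperparameters are not specified here). Synthetic data stands in for the 239 labelled instances.

```python
# Minimal sketch of the three-classifier comparison, assuming scikit-learn.
# The feature extraction from ECG/EDA/brain-activity signals is not shown;
# synthetic data is used as a placeholder for the study's 239 instances.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Placeholder feature matrix and 3-class emotion labels; shapes mirror the
# study's 239 instances, but the feature count (20) is purely illustrative.
X, y = make_classification(n_samples=239, n_features=20,
                           n_informative=10, n_classes=3, random_state=0)

models = {
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "SVM (linear kernel)": SVC(kernel="linear"),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    # Standardize features (important for kernel- and distance-based
    # classifiers), then estimate accuracy with 5-fold cross-validation.
    pipeline = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

The same loop could be run twice, once with valence-only labels and once with combined valence-arousal labels, to reproduce the two classification settings described above.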