Emotion is one of the main reasons why people engage and interact with music [1]: songs can express our inner feelings, produce goosebumps, bring us to tears, let us share an emotional state with a composer or performer, or trigger specific memories. Interest in a deeper understanding of the relation between music and emotion has motivated researchers from various areas of knowledge for decades [2], including computational researchers. Imagine an algorithm that could "predict" the emotions that a listener perceives in a musical piece, or one that could dynamically generate music that adapts to the mood of a conversation in a film — a particularly fascinating and provocative idea. Such algorithms typify Music Emotion Recognition (MER), a computational task that attempts to automatically recognize either the emotional content expressed in music or the emotions induced in the listener by music [3]. To do so, emotionally relevant features are extracted from music, processed, evaluated, and then associated with certain emotions. MER is one of the most challenging high-level music description problems in Music Information Retrieval (MIR), an interdisciplinary research field focused on developing computational systems to help humans better understand music collections. MIR integrates concepts and methodologies from several disciplines, including music theory, music psychology, neuroscience, signal processing, and machine learning.