Music therapy is an effective tool to slow the progression of dementia, since interaction with music may evoke emotions that stimulate brain areas responsible for memory. This therapy is most successful when therapists provide adequate, personalized stimuli for each patient. Such personalization is often difficult, and Artificial Intelligence (AI) methods may help in this task. This paper presents a systematic review of the literature on affective computing in the context of music therapy. In particular, we aim to assess AI methods for automatic emotion recognition applied to Human-Machine Musical Interfaces (HMMI). To perform the review, we conducted an automatic search in five of the main scientific databases in the fields of intelligent computing, engineering, and medicine. We searched for all papers published between 2016 and 2020 whose metadata, title, or abstract contained the terms defined in the search string. The systematic review protocol resulted in the inclusion of 144 works from the 290 publications returned by the search. Through this review of the state of the art, we identify the current challenges in automatic emotion recognition. We also highlight the potential of automatic emotion recognition for building non-invasive assistive solutions based on human-machine musical interfaces, as well as the AI techniques currently used for emotion recognition from multimodal data. Thus, machine learning for emotion recognition from different data sources can be an important approach to optimizing the clinical goals to be achieved through music therapy.