Related approaches | Focus | Mobile
Deng et al [30] | DIM, UH, ML | -
Ferwerda and Schedl [36] | DISC, SN, MD | -
Le et al [75] | DIM, SEN, FEED | -
Jazi et al [59] | UB | -
Nair et al [91] | SI, FEED | -
Polignano et al [99] | SI, SN, UB | -
Kittimathaveenan et al [70] | SIM, DISC | -
Jin et al [62] | SI, MP, CCI | X

Context and Emotion:
Kasinathan et al [68] | UA, MP | X
Çano et al [17] | DISC, DIM, SEN, MP, CCI | X
Hu et al [51] | SEN, MD, ML | X
Sen and Larson [109] | DIM, MDI, CCI, MC | X
Yang and Teng [133] | DIM, SI, MDI, UA, MP | X
Schedl [105] | FEED, CCI, MD | X
Shen et al [111] | UB, SN, CCI, MD | -
Wohlfahrt-Laymann and Heimburger [128] | DIM, MD, SIM | -
Giri and Harjoko [40] | DISC, CCI, ML | -
Yang et al [130] | CCI, UH, MP | -
Braunhofer et al [14] | CCI, MD, SIM | -
Rho et al [100] | DISC, MD, ON | -
Kaminskas et al [65] | CCI, MD, SIM | -
Chen et al [23] | DIM, MD, CF, ML | -
Chen et al [22] | SIM, ML | -
Yoon et al [134] | DIM, SI, UH | -
Kaminskas and Ricci [64] | DISC, MC, CCI | -
Han et al [42] | DISC, MD, ON | -
Wang et al [123] | DIM, FEED, MD, CF | -

On the other hand, when analyzing the music recommendation approaches that consider emotion, we observed in the Sankey diagram that many of these studies rely heavily on facial expressions to obtain the user's emotion, as well as on subjective information from users, social networks, sensors, similarity, musical information, and machine learning. In addition, most of these studies adopt models that describe emotions in both continuous and discrete ways.
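The distinction between discrete and dimensional (continuous) emotion models, and the use of similarity to match an inferred emotion to tracks, can be made concrete with a short sketch. The snippet below is a minimal, hypothetical illustration and is not taken from any of the surveyed papers: it assumes a hand-made mapping from discrete emotion labels to valence-arousal coordinates and a toy catalog of tracks annotated in the same space, then ranks tracks by Euclidean distance to the user's inferred emotion. All names, labels, and coordinate values are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not from the surveyed papers): a discrete
# emotion label (DISC) is mapped onto a dimensional valence-arousal model
# (DIM), and tracks annotated in that space are ranked by similarity (SIM).

from dataclasses import dataclass
from math import dist

# Hypothetical mapping of discrete emotion labels to (valence, arousal) points
# in [-1, 1]; a real system would calibrate these values empirically.
DISCRETE_TO_VA = {
    "happy": (0.8, 0.6),
    "calm": (0.6, -0.5),
    "sad": (-0.7, -0.4),
    "angry": (-0.6, 0.7),
}

@dataclass
class Track:
    title: str
    valence: float  # pleasantness of the track, in [-1, 1]
    arousal: float  # energy/activation of the track, in [-1, 1]

def recommend(user_emotion: str, catalog: list[Track], k: int = 3) -> list[Track]:
    """Rank tracks by Euclidean distance to the user's emotion in VA space."""
    target = DISCRETE_TO_VA[user_emotion]
    return sorted(catalog, key=lambda t: dist(target, (t.valence, t.arousal)))[:k]

if __name__ == "__main__":
    catalog = [
        Track("Upbeat Pop Song", 0.9, 0.7),
        Track("Mellow Acoustic Piece", 0.5, -0.6),
        Track("Melancholic Ballad", -0.6, -0.3),
        Track("Aggressive Rock Track", -0.4, 0.8),
    ]
    # The emotion label could come from facial expression analysis, sensors,
    # or self-report, as in the surveyed approaches; here it is hard-coded.
    for track in recommend("calm", catalog):
        print(track.title)
```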