Affective computing technology can recognize emotional expression in multimodal information. This paper proposes a method for optimizing emotion in music performance: MFCCG-PCA is used to extract and refine music emotion information, and a music emotion expression optimization model is then constructed from the KTH performance rules and a genetic algorithm. Experiments show that the average emotion recognition accuracy in the open MFCCG-PCA test is 92.73%, and that the emotion calculation accuracy across five cultural types of music performance is 82.93%. The emotion optimization results closely matched the performance requirements, with an emotion optimization accuracy of 86.9%, and the overall subjective score of the optimized musical performances was 4.12, outperforming the comparison methods. These results can be applied to emotion optimization in multicultural music performance.
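The MFCCG-PCA step named above pairs MFCC-style spectral features with PCA dimensionality reduction. The sketch below illustrates only that general idea, not the paper's exact pipeline: the MFCC matrix is synthetic random data standing in for features computed from audio, and the component count (5) is an arbitrary assumption for illustration.

```python
import numpy as np

# Hedged sketch of MFCC-feature reduction with PCA. The "mfcc" matrix
# here is synthetic; a real pipeline would compute it from audio frames.
rng = np.random.default_rng(0)
mfcc = rng.normal(size=(200, 13))  # 200 frames x 13 MFCC coefficients

# PCA via eigendecomposition of the feature covariance matrix.
centered = mfcc - mfcc.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]       # components sorted by variance
components = eigvecs[:, order[:5]]      # keep the top 5 components
reduced = centered @ components         # (200, 5) compact feature vectors

print(reduced.shape)
```

Each frame's 13 coefficients are projected onto the 5 directions of highest variance, giving compact, decorrelated feature vectors of the kind an emotion classifier could consume.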