This study designs a multi-task emotion recognition model that combines Valence-Arousal-Dominance (VAD) three-dimensional continuous emotion analysis with discrete emotion classification, providing a more comprehensive and fine-grained measurement tool for intelligent emotional interaction. The model exploits the correlation between the two recognition tasks (each category label corresponds to a point in the VAD three-dimensional emotion space) to improve recognition accuracy. First, it provides a method and a dataset for multi-dimensional continuous emotion recognition in the VAD space, which describes emotional states more comprehensively and finely than traditional fixed emotion category labels, especially along the Dominance (D) dimension, which remains under-researched. Integrating the D dimension enables a more complete representation of emotional expression, capturing nuanced variations in dominance-related behavior, particularly in contexts where understanding dominance cues is crucial. Second, because each fixed emotion category label corresponds to a point in the VAD three-dimensional space, the model performs multi-task joint learning that establishes constraints between the emotion categories and the VAD multi-dimensional emotion space. In the experiments, the emotion category labels of the existing FER2013 dataset were retained, VAD annotations were manually added, and the resulting dataset was used for VAD emotion measurement. The prediction results show that the average losses for predicting V, A, and D decrease by 0.7%, 6%, and 0.4%, respectively, verifying the effectiveness of the proposed multi-task strategy. The annotated VAD dataset and the multi-task emotion recognition code are available on GitHub: https://github.com/YeeHoran/Multi-task-Emotion-Recognition.
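The cross-task constraint described above (each discrete category label mapping to a point in VAD space) can be illustrated with a minimal NumPy sketch of a combined loss. This is not the paper's actual implementation: the function name `multitask_loss`, the anchor values in `CATEGORY_VAD`, and the weights `alpha` and `beta` are all hypothetical, chosen only to show how a classification loss, a VAD regression loss, and a consistency term tying the predicted VAD to the category's anchor point could be combined.

```python
import numpy as np

# Hypothetical VAD anchor points for a few discrete categories
# (illustrative values, not the paper's actual annotations).
CATEGORY_VAD = {
    "happy": np.array([0.8, 0.6, 0.5]),
    "angry": np.array([-0.5, 0.7, 0.4]),
    "sad":   np.array([-0.6, -0.4, -0.3]),
}
CATEGORIES = list(CATEGORY_VAD)

def multitask_loss(class_probs, vad_pred, true_label, true_vad,
                   alpha=1.0, beta=0.5):
    """Combined loss: cross-entropy for the discrete task, MSE for the
    VAD regression, plus a consistency term pulling the predicted VAD
    toward the anchor point of the ground-truth category."""
    idx = CATEGORIES.index(true_label)
    ce = -np.log(class_probs[idx])                   # classification loss
    mse = np.mean((vad_pred - true_vad) ** 2)        # VAD regression loss
    anchor = CATEGORY_VAD[true_label]
    consistency = np.mean((vad_pred - anchor) ** 2)  # cross-task constraint
    return ce + alpha * mse + beta * consistency

loss = multitask_loss(
    class_probs=np.array([0.7, 0.2, 0.1]),
    vad_pred=np.array([0.7, 0.5, 0.4]),
    true_label="happy",
    true_vad=np.array([0.8, 0.6, 0.5]),
)
```

In this sketch the consistency term is what couples the two tasks: even when the regression target is noisy, the predicted VAD vector is penalized for drifting away from the point in VAD space associated with the sample's category label.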