As a significant form of physical expression, dance demands continuous training to improve skills, particularly expressiveness. Such training, however, is often constrained by location and time, and the evaluation of dance performance tends to be subjective; these factors necessitate effective training methods and objective evaluation techniques. In this research, we introduce a self-training system for dance that employs VR technology to create an immersive training environment, facilitating a comprehensive understanding of three-dimensional dance movements. The system incorporates markerless motion capture technology to record a dancer’s movements accurately in real time and map them onto a VR avatar. In addition, deep learning enables multi-perspective assessment of dance performance, providing feedback that supports repeated practice. To enable deep learning-based evaluation, we constructed a dataset of beginner-level dances paired with expert evaluations of those dances. The dataset was recorded with practitioners in a dance studio setting using a total of four cameras, and expert annotations were obtained from multiple evaluation perspectives to provide a comprehensive assessment. This study also proposes three distinct automatic evaluation models. A comparative analysis of these models, in particular contrastive learning (and autoencoder)-based representation learning and a reference-guided model in which a model dancer’s performance serves as the reference, revealed that the reference-guided model achieved superior accuracy. The proposed method predicted dance performance ratings to within approximately ±1 point on a 10-point scale relative to ratings by professional coaches. Our findings open up novel possibilities for future dance training and evaluation systems.
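
The abstract names a reference-guided model in which a model dancer’s performance serves as the reference, but does not detail its architecture. The sketch below is one plausible reading, not the paper’s implementation: a shared sequence encoder embeds both pose sequences, and a small regression head predicts a 10-point score from the learner embedding, the reference embedding, and their difference. Every class name, dimension, and design choice here (GRU encoder, 17 joints, sigmoid-scaled output) is an assumption for illustration only.

```python
# Hypothetical sketch of a reference-guided score regressor. All names and
# dimensions are assumptions; the paper's actual model is not specified here.
import torch
import torch.nn as nn

class ReferenceGuidedScorer(nn.Module):
    """Regresses a 10-point rating from a learner's pose sequence and a
    model dancer's reference sequence of the same choreography."""
    def __init__(self, num_joints: int = 17, hidden: int = 128):
        super().__init__()
        in_dim = num_joints * 3  # (x, y, z) per joint, flattened per frame
        # Shared GRU encoder applied to both the learner and reference clips
        self.encoder = nn.GRU(in_dim, hidden, batch_first=True)
        # Head consumes learner embedding, reference embedding, and their gap
        self.head = nn.Sequential(
            nn.Linear(hidden * 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def encode(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, frames, num_joints * 3)
        _, h = self.encoder(seq)
        return h[-1]  # final hidden state: (batch, hidden)

    def forward(self, learner: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        z_l, z_r = self.encode(learner), self.encode(reference)
        feats = torch.cat([z_l, z_r, z_l - z_r], dim=-1)
        # Squash to the 1..10 rating range used by the professional coaches
        return 1.0 + 9.0 * torch.sigmoid(self.head(feats)).squeeze(-1)

# Usage: a batch of 2 clips, 120 frames each, 17 joints in 3-D
model = ReferenceGuidedScorer()
learner = torch.randn(2, 120, 17 * 3)
reference = torch.randn(2, 120, 17 * 3)
print(model(learner, reference).shape)  # torch.Size([2])
```

Training such a regressor against coach-assigned scores (e.g., with an L1 or MSE loss) would directly target the ±1-point agreement reported in the abstract; the contrastive and autoencoder baselines would instead pretrain the encoder before score regression.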