Deep learning is an emerging approach in the Internet of Things (IoT) due to its ability to provide automatic feature extraction and predictive modeling for analysis and decision-making. This paper introduces an IoT-based dance movement recognition model built on a deep learning framework. The framework consists of a convolutional neural network (CNN) with a data-centric architecture that identifies dance movements from data gathered by an IoT device. The IoT device collects 3D motion data captured by three accelerometers. Feature extraction is then performed by the CNN architecture, producing a flattened feature matrix that represents the movement. Subsequently, a Multi-Layer Perceptron (MLP) classifies the movements. The proposed system is experimentally evaluated on a standardized dataset of 16 dance steps performed at three speed levels. The results show that our model outperforms state-of-the-art approaches in classification accuracy and evaluation time, reaching 90.74% accuracy, 87.12% precision, 83.78% recall, and an 84.39% F1-score. The proposed model can serve as the basis for a reliable and intuitive system for accurately monitoring patients' dance movements.
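
As a rough illustration of the pipeline described above (CNN feature extraction over accelerometer windows, flattening, then an MLP classifier), the following is a minimal PyTorch sketch. It is not the authors' exact architecture: the window length of 128 samples, the 9 input channels (three tri-axial accelerometers), and all layer sizes are illustrative assumptions.

```python
# Sketch of the described pipeline: 1D CNN feature extractor over accelerometer
# windows, flattened, then an MLP head for the 16 dance-step classes.
# Window length, channel count, and layer sizes are assumptions, not the paper's design.
import torch
import torch.nn as nn

class DanceMoveNet(nn.Module):
    def __init__(self, n_channels: int = 9, window_len: int = 128, n_classes: int = 16):
        super().__init__()
        # CNN feature extractor: two convolutional blocks with pooling
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        feat_dim = 64 * (window_len // 4)  # time dimension halved twice by pooling
        # MLP classifier operating on the flattened feature matrix
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) window of accelerometer samples
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = DanceMoveNet()
    dummy = torch.randn(4, 9, 128)   # batch of 4 synthetic windows
    print(model(dummy).shape)        # torch.Size([4, 16]) class logits
```

In this sketch the convolutional layers play the role of the automatic feature extractor, and the flattened output feeds the MLP head, mirroring the two-stage design described in the abstract.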