The growing global adoption of yoga as a holistic approach to well-being has increased the demand for accurate and efficient yoga posture recognition. Traditional manual assessment, although valuable, is time-consuming and prone to error. In response, we introduce a deep learning (DL) based solution that improves both the precision and the accuracy of posture detection. Our approach is built around the Thunder variant of the MoveNet model, which is well suited to distinguishing a wide range of yoga postures, combined with the MediaPipe framework for keypoint detection and skeletonization. In the proposed pipeline, input images are first preprocessed and then skeletonized through keypoint extraction, capturing the distinctive landmark points of each yoga pose. Central to our methodology is the large and diverse yoga (LDY) dataset, which comprises five yoga pose categories: Downdog, Goddess, Plank, Tree, and Warrior. A thorough evaluation shows that our approach achieves an accuracy of 99.50% on the LDY dataset. Because maintaining correct posture is essential for avoiding immediate discomfort and preventing long-term health problems, this advance points toward more accurate and accessible mechanisms for detecting yoga poses, with direct benefits for the physical and mental well-being of practitioners.
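To illustrate the keypoint-extraction step described above, the following is a minimal sketch, not the authors' implementation, assuming the MoveNet Thunder single-pose model published on TensorFlow Hub; the hub URL and version, the 256x256 input size, and the helper name `extract_keypoints` are illustrative assumptions.

```python
# Minimal sketch of MoveNet Thunder keypoint extraction (assumed TF Hub model;
# not the paper's exact pipeline).
import tensorflow as tf
import tensorflow_hub as hub

# Load the single-pose Thunder model from TensorFlow Hub (URL/version assumed).
movenet = hub.load("https://tfhub.dev/google/movenet/singlepose/thunder/4")
infer = movenet.signatures["serving_default"]

def extract_keypoints(image_path):
    """Return 17 (y, x, confidence) keypoints for a single person."""
    image = tf.io.decode_jpeg(tf.io.read_file(image_path), channels=3)
    # MoveNet Thunder expects a 256x256 int32 batch of one image.
    inp = tf.expand_dims(tf.image.resize_with_pad(image, 256, 256), axis=0)
    outputs = infer(tf.cast(inp, tf.int32))
    # Output shape [1, 1, 17, 3]: one person, 17 keypoints, (y, x, score).
    return outputs["output_0"][0, 0].numpy()
```

In a pipeline of this kind, the extracted keypoints (or a skeleton rendered from them) would then be fed to a classifier trained over the five LDY pose categories.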