Identifying key frames is a necessary first step toward solving a variety of other Bharatanatyam analysis problems. This paper aims to separate the momentarily stationary frames (key frames) of a dance video from its motion frames. The proposed key frame (KF) localization is novel, simple, and effective compared with existing dance video analysis methods, and it is distinct from the standard KF detection algorithms used on other human motion videos. In the basic structure of the dance, the KFs occurring during a performance are often not completely stationary, and the residual motion varies with the dance form and the performer. Hence it is difficult to fix a global threshold (on the quantum of motion) that works across dancers and performances. Earlier approaches compute such a threshold iteratively. The novelty of this paper lies in: (a) formulating an adaptive threshold, (b) adopting a Machine Learning (ML) approach, and (c) generating an effective feature for KF detection by combining three-frame differencing with a bit-plane technique. For ML, we use a Support Vector Machine (SVM) and a Convolutional Neural Network (CNN) as classifiers. The proposed approaches are compared and analyzed against the earlier ones; the proposed ML techniques emerge as the winner with around 90% accuracy.
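To make the feature construction concrete, the following is a minimal sketch of the general idea: three-frame differencing combined with bit-plane slicing to score per-frame motion, and an adaptive (per-video) threshold to label low-motion frames as key frames. All function names, the choice of bit planes, and the use of the mean score as the adaptive threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def motion_feature(prev, curr, nxt, planes=(7, 6, 5)):
    """Illustrative motion score combining three-frame differencing
    with bit-plane slicing (plane choice is an assumption)."""
    # Three-frame differencing: a pixel counts as "moving" only if it
    # differs from BOTH the previous and the next frame.
    d1 = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)
    d2 = np.abs(nxt.astype(np.int16) - curr.astype(np.int16)).astype(np.uint8)
    diff = np.minimum(d1, d2)            # AND-like combination of the two diffs
    # Keep only the most significant bit planes to suppress low-level noise.
    mask = sum(1 << p for p in planes)
    return (diff & mask).mean()          # scalar motion score for this frame

def detect_key_frames(frames):
    """Label interior frames whose motion score falls below an adaptive
    threshold (here simply the mean score; the paper's formulation differs)."""
    scores = np.array([motion_feature(frames[i - 1], frames[i], frames[i + 1])
                       for i in range(1, len(frames) - 1)])
    thr = scores.mean()                  # adaptive, per-video threshold
    return [i + 1 for i, s in enumerate(scores) if s <= thr]

# Toy example: 10 mostly static frames with a burst of motion in the middle.
rng = np.random.default_rng(0)
frames = [np.full((32, 32), 100, np.uint8) for _ in range(10)]
for i in (4, 5):
    frames[i] = rng.integers(0, 256, (32, 32), dtype=np.uint8)
key = detect_key_frames(frames)          # stationary frames flagged as KFs
```

In the toy run, the stationary frames score zero while the two noisy frames score well above the mean, so only the stationary ones are returned as key frames; in the paper this hand-set threshold is replaced by the adaptive threshold or by SVM/CNN classifiers trained on the combined feature.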