Pose estimation is a computer vision task that detects and estimates the pose of a person or an object in images or videos. Some of its challenges can leverage recent advances in computer vision research, while others still require efficient solutions. In this paper, we provide a preliminary review of the state of the art in pose estimation, covering both traditional and deep learning approaches. We also implement and compare the performance of Hand Pose Estimation (HandPE), which applies the PoseNet architecture to hand sign recognition on an ASL dataset, using different optimizers and 10 common evaluation metrics across several datasets. In addition, we discuss related future research directions in pose estimation and explore new architectures for its different types. The experimental results show that the PoseNet model achieves accuracies of 99.9%, 89%, 97%, 79%, and 99% on the ASL alphabet, HARPET, Yoga, Animal, and Head datasets, respectively, compared under common optimizers and evaluation metrics.