Hand gestures are becoming increasingly common in human-computer interaction (HCI). Regardless of the application, most gesture-based tasks involve interaction with other parts of the body and with the surrounding environment. In this work, we demonstrate how to locate a user's hand, track its movements, and apply this information to interaction with a home system. Sign language communicates visually through hand motions, so hand-gesture recognition can help hearing-impaired people communicate: words can be formed by combining signs and finger configurations on the hands. Current hand-gesture recognition approaches are either vision-based or glove-based. In this paper, we introduce deep convolutional neural networks (CNNs) for image classification and demonstrate their utility in hand-gesture classification; in particular, we examine how these networks recognise hand motions. A deep CNN can distinguish many different hand motions, which in turn can express many meanings. Interpreting hand motions manually is time-consuming, whereas using a convolutional neural network to classify objects or images saves time and effort; a deep CNN further speeds up the procedure. The deep CNN also improves the system's precision, sensitivity, and accuracy: by processing the dataset in batches, the network can be utilised to achieve high accuracy, specificity, and sensitivity. Processing stops once the camera has transmitted the hand-gesture image and it has been classified.
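To make the classification pipeline concrete, the following is a minimal sketch of a single convolutional stage (convolution, ReLU, max-pooling, then a softmax classifier) applied to a gesture image. The layer sizes, number of gesture classes, and random weights are illustrative assumptions, not the architecture used in this paper.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: one input channel, one filter."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # Non-linearity applied element-wise after the convolution
    return np.maximum(0, x)

def max_pool(x, size=2):
    """Downsample by taking the max over non-overlapping size x size windows."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(image, kernel, weights):
    """Conv -> ReLU -> pool -> flatten -> linear layer -> softmax."""
    features = max_pool(relu(conv2d(image, kernel))).ravel()
    return softmax(weights @ features)

rng = np.random.default_rng(0)
image = rng.random((16, 16))           # stand-in for a hand-gesture frame
kernel = rng.standard_normal((3, 3))   # one 3x3 filter (random, untrained)
n_classes = 5                          # assumed number of gesture classes
feat_dim = ((16 - 3 + 1) // 2) ** 2    # 14x14 conv map pooled to 7x7 = 49
weights = rng.standard_normal((n_classes, feat_dim))

probs = classify(image, kernel, weights)
print("predicted class:", probs.argmax())
```

In a trained network, the kernel and the classifier weights would be learned from labelled gesture images and several such conv/pool stages would be stacked; this sketch only shows the data flow of one stage.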