Hand position recognition is significant for human-computer interaction. Various devices and technologies can be used for data acquisition, each with its own specifications and accuracy; one such device is the Kinect V2 sensor. The three-dimensional locations of the skeleton joints are taken from the Kinect device to create three types of data: raw joint positions, angles between joints, and a combination of both. These three types of data are used to train four classifiers: support vector machines, random forest, k-nearest neighbors, and multilayer perceptron. The experiments are conducted on a dataset of 30,480 frames from 127 volunteers, and the saved trained models are used to predict and classify the eight hand positions in a real-time system. The results show that our proposed approach performs well, with accuracy reaching up to 99.07% in some cases and a very short average per-frame classification time, as low as 0.59×10⁻³ seconds. This system can be used in many applications, such as controlling robots or devices, comparing physical exercises, or monitoring the elderly and patients.
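
As a minimal sketch of the pipeline described above (assuming Python with scikit-learn; the joint layout, feature dimensions, and randomly generated data here are illustrative placeholders, not the paper's actual dataset or model settings), the angle feature can be computed from three joint positions, and the four classifier types can be trained on the resulting feature vectors:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the segments b->a and b->c.
    Each argument is a 3-D joint position as reported by the Kinect skeleton."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Placeholder data standing in for the Kinect frames: each row is one frame's
# feature vector (raw joint coordinates, joint angles, or both combined), and
# y holds one of the eight hand-position labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 75))   # e.g. 25 joints x 3 coordinates per frame
y = rng.integers(0, 8, size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# The four classifier families named in the abstract, with default settings.
classifiers = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(),
    "k-NN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=500),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))
```

In the real-time system, the trained models would be saved and reloaded, and each incoming frame's feature vector would be passed to `predict` to classify the hand position frame by frame.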