The application of wearable devices to fall detection has been the focus of much research in recent years. One of the most common problems in existing fall-detection systems is the large number of false positives produced by their recognition schemes. In this paper, to make full use of the dependence between human joints and to improve the accuracy and reliability of fall detection, a fall-recognition method based on the human skeleton and spatial-temporal graph convolutional networks (ST-GCN) is proposed, using body-joint motion data acquired by inertial measurement units (IMUs). First, the motion data of five inertial sensors were extracted from the UP-Fall dataset, and a human skeleton model for fall detection was established from the natural connections between body joints. An ST-GCN-based fall-detection model was then built to extract the motion features of falls and of activities of daily living (ADLs) at both spatial and temporal scales. Next, the influence of two hyperparameters and of the window size on algorithm performance was examined. Finally, the recognition results of ST-GCN were compared with those of MLP, CNN, RNN, LSTM, TCN, TST, and MiniRocket. The experimental results showed that the ST-GCN fall-detection model outperformed the other seven algorithms in terms of accuracy, precision, recall, and F1-score. This study provides a new method for IMU-based fall detection and a useful reference for improving the accuracy and robustness of fall detection.
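To illustrate the core idea, the sketch below shows a minimal spatial-temporal graph convolution over a 5-node skeleton graph, one node per IMU. The node layout, edge list, channel counts, and weights are all illustrative assumptions, not the paper's actual configuration: a spatial step aggregates each joint's features over the body's natural joint connections via a normalized adjacency matrix, and a temporal step convolves each joint's features along the time axis of the sliding window.

```python
import numpy as np

# Hypothetical 5-node skeleton for the five IMUs (placement assumed):
# 0: waist, 1: left wrist, 2: right wrist, 3: left ankle, 4: neck.
# Edges follow the body's natural joint connections (illustrative).
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]

V = 5                      # number of joints / sensors
A = np.eye(V)              # adjacency with self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1

# Symmetric normalization: A_hat = D^{-1/2} A D^{-1/2}
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))

def st_gcn_block(x, w_spatial, w_temporal):
    """One spatial-temporal graph convolution block (simplified).

    x: (T, V, C_in) window of per-joint features over time
    w_spatial: (C_in, C_out) spatial channel-mixing weights
    w_temporal: (K,) temporal kernel shared across nodes/channels
    """
    # Spatial step: aggregate neighbor features via A_hat,
    # then mix channels -> (T, V, C_out)
    h = np.einsum('uv,tvc,cd->tud', A_hat, x, w_spatial)
    # Temporal step: 1-D convolution along the time axis (valid mode)
    K = len(w_temporal)
    T = h.shape[0]
    out = np.stack([
        np.tensordot(w_temporal, h[t:t + K], axes=(0, 0))
        for t in range(T - K + 1)
    ])
    return np.maximum(out, 0)  # ReLU

# Toy window: 50 time steps, 5 joints, 6 channels
# (e.g. 3-axis accelerometer + 3-axis gyroscope per IMU)
rng = np.random.default_rng(0)
x = rng.standard_normal((50, V, 6))
y = st_gcn_block(x, rng.standard_normal((6, 8)) * 0.1, np.ones(9) / 9)
print(y.shape)  # (42, 5, 8): time shrinks by K-1, channels expand to 8
```

In a full model, several such blocks would be stacked and followed by global pooling and a classifier that separates falls from ADLs; the window size and kernel length here stand in for the hyperparameters whose influence the paper examines.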