Traditional human-computer interaction relies heavily on input devices such as the mouse and keyboard, which limit the speed and naturalness of interaction and can no longer meet users' more advanced interaction needs. With the development of computer vision (CV) technology, contactless gesture recognition has become a new research hotspot. However, current CV-based gesture recognition can distinguish only a limited number of gestures and cannot support fast, accurate text input. To address this problem, this paper proposes Air-GR, an over-the-air handwritten character recognition system based on a coordinate-corrected YOLOv5 algorithm and a lightweight convolutional neural network (LGR-CNN). Instead of directly recognizing the captured gesture images, the system generates images from the trajectory points of gesture actions and performs recognition on those images. First, by combining YOLOv5 with the gesture coordinate correction algorithm proposed in this paper, the system effectively improves gesture detection accuracy. Second, because the captured coordinate sequence may contain multiple gestures, this paper proposes a time-window-based algorithm for segmenting the gesture coordinates. Finally, the system recognizes user gestures by plotting the segmented gesture coordinates in a two-dimensional coordinate system and feeding the resulting images into the constructed lightweight convolutional neural network, LGR-CNN. On the gesture trajectory image classification task, LGR-CNN achieves accuracy 13.2%, 12.2%, and 4.5% higher than the mainstream networks VGG16, ResNet, and GoogLeNet, respectively. The experimental results show that Air-GR can quickly and effectively recognize any combination of the 26 English letters and numbers, with a recognition accuracy of 95.24%.
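The two preprocessing steps described above — splitting a stream of gesture coordinates into individual gestures by time gaps, and plotting each segment as a small image for the classifier — can be sketched as follows. This is a minimal illustration only: the function names, the 0.5 s gap threshold, the 28×28 canvas, and the linear-interpolation rasterization are assumptions for demonstration, not the paper's actual implementation.

```python
from typing import List, Tuple

Point = Tuple[float, float, float]  # (timestamp_s, x, y) — assumed format

def segment_by_time_window(points: List[Point],
                           max_gap_s: float = 0.5) -> List[List[Point]]:
    """Split a coordinate stream into gestures wherever the time gap
    between consecutive points exceeds max_gap_s (illustrative threshold)."""
    segments: List[List[Point]] = []
    current: List[Point] = []
    for p in points:
        if current and p[0] - current[-1][0] > max_gap_s:
            segments.append(current)
            current = []
        current.append(p)
    if current:
        segments.append(current)
    return segments

def rasterize(segment: List[Point], size: int = 28) -> List[List[int]]:
    """Plot one gesture trajectory into a size x size binary grid:
    normalize coordinates to the canvas, then linearly interpolate
    between consecutive points to draw a connected stroke."""
    xs = [p[1] for p in segment]
    ys = [p[2] for p in segment]
    w = max(max(xs) - min(xs), 1e-9)  # avoid division by zero
    h = max(max(ys) - min(ys), 1e-9)
    def to_px(p: Point) -> Tuple[int, int]:
        col = int((p[1] - min(xs)) / w * (size - 1))
        row = int((p[2] - min(ys)) / h * (size - 1))
        return row, col
    grid = [[0] * size for _ in range(size)]
    pix = [to_px(p) for p in segment]
    for (r0, c0), (r1, c1) in zip(pix, pix[1:]):
        steps = max(abs(r1 - r0), abs(c1 - c0), 1)
        for i in range(steps + 1):
            grid[r0 + round((r1 - r0) * i / steps)][c0 + round((c1 - c0) * i / steps)] = 1
    for r, c in pix:  # ensure isolated points are also drawn
        grid[r][c] = 1
    return grid
```

In a full pipeline, each grid produced by `rasterize` would be fed to the trajectory-image classifier (LGR-CNN in the paper) in place of the raw camera frames.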