Research on human motion recognition using various sensors has been conducted steadily. However, despite the growing adoption of smartphones and the increasing use of smartphone sensors in research, methods for recognizing characters from variable-length signals have made little progress. The difficulty is that the signal length differs every time a motion is performed, even for the same motion, which complicates processing. To address this problem, this study proposes recognizing characters with an object detection neural network used as a classifier. Five subjects collected data by drawing three letters, I, S, and Z, in the air 100 times each, and the collected data were used to train the object detection network and to evaluate its performance. The data are converted into images through denoising and normalization: third-order spline interpolation and the Fourier transform remove noise from the raw signal, and the accelerometer's x-, y-, and z-axis values are mapped to the R, G, and B channels of an image, respectively. When the resulting image data were applied to YOLOv5, character recognition achieved an average accuracy of 99% across the three letters.
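The signal-to-image conversion can be illustrated with a short sketch. The Python code below is a minimal, illustrative implementation only: the function name, the fixed output image size, and the low-pass cutoff used for the Fourier-transform denoising are assumptions, not parameters taken from the paper.

```python
# Minimal sketch of the described signal-to-image pipeline.
# Assumed (not from the paper): output size 64x64, cutoff of 16
# low-frequency FFT coefficients, min-max scaling to 0-255.

import numpy as np
from scipy.interpolate import CubicSpline

def signal_to_rgb_image(accel, size=64, keep_freqs=16):
    """Convert a variable-length 3-axis acceleration signal of shape
    (N, 3) into a (size, size, 3) uint8 RGB image."""
    accel = np.asarray(accel, dtype=float)
    n = accel.shape[0]
    t = np.linspace(0.0, 1.0, n)
    t_fixed = np.linspace(0.0, 1.0, size * size)

    channels = []
    for axis in range(3):  # x -> R, y -> G, z -> B
        # Third-order (cubic) spline interpolation resamples the
        # variable-length signal to a fixed number of points.
        resampled = CubicSpline(t, accel[:, axis])(t_fixed)

        # Fourier-transform denoising: drop high-frequency
        # coefficients, then invert the transform.
        spectrum = np.fft.rfft(resampled)
        spectrum[keep_freqs:] = 0.0
        denoised = np.fft.irfft(spectrum, n=t_fixed.size)

        # Min-max normalization to the 0-255 pixel range.
        lo, hi = denoised.min(), denoised.max()
        scaled = (denoised - lo) / (hi - lo + 1e-12) * 255.0
        channels.append(scaled.reshape(size, size))

    return np.stack(channels, axis=-1).astype(np.uint8)

# Example: a synthetic 180-sample gesture becomes a 64x64 RGB image.
gesture = np.random.randn(180, 3)
img = signal_to_rgb_image(gesture)  # shape (64, 64, 3), dtype uint8
```

Images produced this way would then be annotated with one class per letter and used to train and evaluate a YOLOv5 detector in the standard manner.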