Air-writing is a growing research topic in the field of gesture-based writing systems. This research proposes a unified, lightweight, and general-purpose deep learning algorithm for a trajectory-based air-writing recognition network (TARNet). We combine a convolutional neural network (CNN) with a long short-term memory (LSTM) network. The architectures and applications of CNN and LSTM networks differ: an LSTM is well suited to time-series prediction but time-consuming, whereas a CNN is superior at feature generation and comparatively fast. In this network, the CNN serves as a feature generator and the LSTM as a recognizer, optimizing time and accuracy, respectively. The first part of the TARNet uses 1-dimensional separable convolution to extract local contextual features from the low-level data (trajectories). The second part employs the recurrent network to capture dependencies in the high-level output. Four publicly available air-writing datasets, RealSense trajectory digit (RTD), RealSense trajectory character (RTC), smart-band, and Abas, were used to verify the accuracy. Both normalized and nonnormalized conditions were considered. Normalized data required longer training but provided better accuracy, while the test time was the same as that for nonnormalized data. The accuracies on the RTD, RTC, smart-band, and Abas datasets were 99.63%, 98.74%, 95.62%, and 99.92%, respectively.
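To make the two-part design concrete, the following is a minimal Keras sketch of a TARNet-style model: 1-D separable convolutions act as the feature generator over raw trajectory coordinates, and an LSTM acts as the recognizer over the resulting feature sequence. The layer sizes, kernel widths, sequence length, and number of coordinates are illustrative assumptions, not the authors' published configuration.

```python
# Sketch of a CNN (separable conv) + LSTM model, assuming TensorFlow 2.x.
# All hyperparameters below are placeholders, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10   # e.g., digit classes as in the RTD dataset (assumed)
SEQ_LEN = 100      # trajectory length after padding/resampling (assumed)
NUM_COORDS = 3     # x, y, z per trajectory point (assumed)

def build_tarnet_like_model():
    inputs = layers.Input(shape=(SEQ_LEN, NUM_COORDS))

    # Part 1: 1-D separable convolutions extract local contextual
    # features from the low-level trajectory data.
    x = layers.SeparableConv1D(64, kernel_size=5, padding="same",
                               activation="relu")(inputs)
    x = layers.SeparableConv1D(128, kernel_size=5, padding="same",
                               activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)

    # Part 2: an LSTM captures dependencies across the high-level
    # feature sequence produced by the convolutional front end.
    x = layers.LSTM(128)(x)

    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_tarnet_like_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The split reflects the trade-off described above: the convolutional front end keeps inference fast, while the recurrent layer contributes the sequential modeling needed for accurate recognition.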