The problem of airwriting recognition is focused on identifying letters written through finger movement in free space. It is a form of gesture recognition in which the dictionary corresponds to the letters of a specific language. In particular, airwriting recognition using sensor data from wrist-worn devices can serve as a means of user input for applications in human-computer interaction (HCI). Recognition of in-air trajectories using such wrist-worn devices is limited in the literature and forms the basis of the current work. In this letter, we propose an airwriting recognition framework that first encodes the time-series data obtained from a wearable inertial measurement unit (IMU) on the wrist as images and then utilizes deep learning-based models to identify the written letters. The signals recorded from the 3-axis accelerometer and gyroscope of the IMU are encoded as images using techniques such as the self-similarity matrix (SSM), Gramian angular field (GAF), and Markov transition field (MTF) to form two sets of 3-channel images. These are then fed to two separate classification models, and the letter prediction is made based on the average of the class-conditional probabilities obtained from the two models. Several standard image classification architectures, such as variants of ResNet, DenseNet, VGGNet, AlexNet, and GoogLeNet, have been utilized. Experiments performed on two publicly available datasets demonstrate the efficacy of the proposed strategy. The code for our implementation will be made available at https://github.com/ayushayt/ImAiR.
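To make the image-encoding step concrete, the following is a minimal NumPy sketch of the three transforms named above (SSM, summation GAF, and MTF) applied to a single IMU channel and stacked into one 3-channel image. It is an illustrative assumption, not the authors' implementation: the distance metric for the SSM, the number of MTF quantile bins, and the grouping of channels into the two image sets are placeholders.

```python
import numpy as np

def self_similarity_matrix(x):
    """Pairwise distance between all time steps of a 1-D signal (absolute difference here)."""
    return np.abs(x[:, None] - x[None, :])

def gramian_angular_field(x):
    """Summation GAF: rescale to [-1, 1], map to angles, then cos(phi_i + phi_j)."""
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

def markov_transition_field(x, n_bins=8):
    """MTF: quantile-bin the signal, estimate bin-to-bin transition probabilities,
    then spread W[q_i, q_j] over every pair of time steps (i, j)."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)                      # bin index for each time step
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):
        W[a, b] += 1
    W /= W.sum(axis=1, keepdims=True) + 1e-12      # row-normalised transition matrix
    return W[q[:, None], q[None, :]]

# Example: one 6-channel IMU recording (3-axis accelerometer + 3-axis gyroscope).
imu = np.random.randn(6, 256)                      # placeholder signal of 256 samples
acc_x = imu[0]
image = np.stack([self_similarity_matrix(acc_x),
                  gramian_angular_field(acc_x),
                  markov_transition_field(acc_x)])  # one 3-channel image
print(image.shape)                                  # (3, 256, 256)
```

In the full pipeline described above, images of this kind would be passed to two image classification networks, and the predicted letter would be taken from the average of their class-conditional probabilities.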