Precisely detecting bare hands and recognizing the characters are the two major stages in gesticulated character recognition systems, and both are very challenging to implement in an uncontrolled environment. Additional variations, particularly (i) the background feature domination (BFD) effect and motion blur in detection, and (ii) gesturing style, pattern, and case sensitivity in recognition, make the system more complex. To address these challenges, a gesticulated character recognition model (GCR-Net) is designed. To detect the bare hand precisely, a pixel-wise segmentation approach, HandSNet, is presented, which overcomes the BFD effect. To handle motion blur in the frames, a tracking module comprising a point tracker and a Kalman filter is applied. To reduce computational time, a mini-SqueezeNet network with only 0.39 million parameters is designed and used as the backbone of both HandSNet and the recognition models. At the recognition end, four separate deep convolutional neural networks (DCNNs) are connected to a network selection module, which activates one DCNN at a time to recognize the gesticulated characters accurately. The proposed GCR-Net reduces the confusion between similar characters and provides a higher precision rate than existing approaches.
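The tracking idea above (a point tracker paired with a Kalman filter to ride out motion-blurred frames) can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the class name `CentroidKalman`, the constant-velocity state model, and the noise parameters `q` and `r` are all assumptions chosen for clarity.

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter over a 2-D hand centroid.

    Hypothetical sketch: when a blurred frame yields no reliable
    detection, predict() alone can carry the track forward.
    """

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        # State vector: [x, y, vx, vy]; measurement: [x, y].
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)
        self.P = np.eye(4)                       # state covariance
        self.F = np.array([[1, 0, dt, 0],        # constant-velocity motion
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],         # observe position only
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)                   # process noise
        self.R = r * np.eye(2)                   # measurement noise

    def predict(self):
        # Propagate the state; used on its own for blurred frames.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        # Fuse a fresh point-tracker measurement into the estimate.
        z = np.array([zx, zy], dtype=float)
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In a frame loop, a detected centroid would feed `update()`, while frames where blur defeats the detector would fall back on `predict()` alone, keeping the trajectory of the gesticulated character continuous.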