Handwritten text recognition, i.e., the conversion of scanned handwritten documents into machine-readable text, is a challenging task due to the variability and complexity of handwriting. A common approach to handwritten text recognition consists of a feature extraction step followed by a recognizer. In this paper, we propose a novel DNN architecture for handwritten text recognition that extracts discrete representations from the input text-line image. The proposed model consists of an encoder–decoder network with an added quantization layer, which applies a dictionary of representative vectors to discretize the latent variables. The dictionary and the network parameters are trained jointly, through the k-means algorithm and backpropagation, respectively. The performance of the proposed model is evaluated through extensive experiments on five datasets, analyzing the effect of discrete representation on handwriting recognition. The results demonstrate that feature discretization improves the performance of deep handwritten text recognition models compared to conventional DNN models with continuous representations. Specifically, the character error rate is decreased by $$22\%$$ and $$21.1\%$$ on the IAM and ICFHR18 datasets, respectively.
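The core mechanism described above, mapping each continuous latent vector to its nearest entry in a trained dictionary and refitting that dictionary with k-means updates, can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's implementation: the function names `quantize` and `update_codebook` and the use of batch-wise centroid refits are assumptions for exposition.

```python
import numpy as np

def quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry (illustrative sketch).

    latents:  (N, D) continuous encoder outputs
    codebook: (K, D) dictionary of representative vectors
    Returns the discretized latents (N, D) and the chosen indices (N,).
    """
    # Squared Euclidean distance from every latent to every codebook vector.
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)  # nearest-neighbour assignment
    return codebook[indices], indices

def update_codebook(latents, indices, codebook):
    """One k-means-style refit: move each dictionary entry to the mean
    of the latent vectors currently assigned to it."""
    new_codebook = codebook.copy()
    for k in range(len(codebook)):
        members = latents[indices == k]
        if len(members):  # leave unused entries untouched
            new_codebook[k] = members.mean(axis=0)
    return new_codebook
```

In this sketch the network weights would still be trained by backpropagation (with the usual straight-through treatment of the non-differentiable argmin), while the dictionary itself is updated by the k-means refit, mirroring the joint training scheme the abstract describes.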