U-Net-like architectures are widely used for document image binarization. However, despite their high binarization quality, they also have high computational complexity, which severely limits their use on mobile and embedded devices. The performance bottleneck of U-Net architectures lies in the first encoder layers and the last decoder layers, which operate on high-resolution inputs and account for the largest share of operations. Motivated by this, in this paper we propose a new Threshold U-Net model: instead of predicting the final binarized image, Threshold U-Net predicts a low-resolution adaptive threshold map with which the input image is binarized. The proposed architecture naturally combines the idea behind classical algorithms, which compute a binarization threshold for each local image region, with a deep learning model that has a large receptive field and an understanding of context. On historical documents from the DIBCO-2017 dataset, Threshold U-Net demonstrates binarization quality comparable to U-Net. At the same time, depending on the resolution of the threshold map, Threshold U-Net is up to 2 times faster, requires up to 26% less RAM, and contains up to 10% fewer parameters.
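
As an illustrative formalization of the thresholding step (a sketch based on the description above; the downscaling factor $s$ and the nearest-neighbor upsampling of the threshold map are our assumptions, not necessarily the exact formulation used in the model), the binarized output $B$ can be obtained from the input image $I$ and the low-resolution threshold map $T$ by a per-pixel comparison:
\[
B(x, y) =
\begin{cases}
1, & I(x, y) > T\left(\lfloor x / s \rfloor,\; \lfloor y / s \rfloor\right), \\
0, & \text{otherwise,}
\end{cases}
\]
where $s$ denotes the ratio of the input resolution to the threshold-map resolution. Under this reading, increasing $s$ shrinks the threshold map and reduces the work done by the heaviest layers, at the cost of coarser local adaptation of the threshold.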