The performance of Handwritten Text Recognition (HTR) models is largely determined by the availability of labeled and representative training samples. However, in many application scenarios, labeled samples are scarce or costly to obtain. In this work, we propose a self-training approach to train an HTR model solely on synthetic samples and unlabeled data. The proposed training scheme uses an initial model trained on synthetic data to make predictions for the unlabeled target dataset. Starting from this initial model with rather poor performance, we show that considerable adaptation is possible by training on the iteratively predicted pseudo-labels. As a result, the investigated self-training method does not require any manually annotated training samples. We evaluate the proposed method on four benchmark datasets and show its effectiveness in reducing the gap to a model trained in a fully supervised manner.
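For illustration only, the following is a minimal, framework-agnostic sketch of the self-training loop described above. All names (`self_training`, `train`, `num_rounds`) are hypothetical placeholders; the abstract does not specify the model architecture, the number of pseudo-labeling rounds, or whether synthetic samples are reused during re-training.

```python
# Minimal sketch of self-training on pseudo-labels (hypothetical names;
# model, data handling, and hyperparameters are not taken from the paper).

from typing import Callable, List, Tuple

Sample = Tuple[object, str]           # (line image, transcription) pair
Model = Callable[[object], str]       # maps a line image to a transcription


def self_training(
    train: Callable[[List[Sample]], Model],  # any HTR training routine
    synthetic_data: List[Sample],            # synthetically generated samples
    unlabeled_images: List[object],          # unlabeled target-domain images
    num_rounds: int = 5,                     # assumed number of iterations
) -> Model:
    """Train an initial model on synthetic data, then iteratively re-train
    it on pseudo-labels predicted for the unlabeled target dataset."""
    # 1) Initial model trained purely on synthetic samples.
    model = train(synthetic_data)

    for _ in range(num_rounds):
        # 2) Predict pseudo-labels for the unlabeled target images
        #    with the current model.
        pseudo_labeled = [(img, model(img)) for img in unlabeled_images]

        # 3) Re-train against the predicted pseudo-labels.
        #    (Mixing the synthetic samples back in would be a possible
        #    variant; the abstract does not state which is used.)
        model = train(pseudo_labeled)

    return model
```

Each round replaces the pseudo-labels with the current model's predictions, so label quality can improve as the model adapts to the target domain; no manually annotated samples enter the loop at any point.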