Differences in word order between source and target languages are a serious hurdle for machine translation. Preordering methods, which reorder the words of a source sentence before translation so that the word order resembles that of the target language, significantly improve translation quality in statistical machine translation. While preordering position information has improved translation quality in recurrent neural network-based models, how to use preordering information in the Transformer model, and whether it is helpful there at all, remains unaddressed. In this paper, we successfully employed preordering techniques in Transformer-based neural machine translation. Specifically, we proposed a novel preordering encoding that exploits the reordering information of the source and target sentences as positional encoding in the Transformer model. Experimental results on the ASPEC Japanese-English and WMT 2015 English-German, English-Czech, and English-Russian translation tasks confirmed that the proposed method significantly improved the translation quality of the Transformer model, yielding BLEU score gains of 1.34 points on the Japanese-to-English task, 2.19 points on the English-to-German task, 0.15 points on the Czech-to-English task, and 1.48 points on the English-to-Russian task.
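The abstract does not detail the encoding itself, but one plausible reading is that each source token receives a positional encoding computed from the position it would occupy after preordering, rather than (or in addition to) its surface position. The sketch below illustrates that idea only; the example sentence, the alignment, and the way the two encodings might be combined are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def sinusoidal_encoding(positions, d_model):
    """Standard Transformer sinusoidal encoding, evaluated at arbitrary positions."""
    pe = np.zeros((len(positions), d_model))
    for i, pos in enumerate(positions):
        for k in range(0, d_model, 2):
            angle = pos / (10000 ** (k / d_model))
            pe[i, k] = np.sin(angle)
            if k + 1 < d_model:
                pe[i, k + 1] = np.cos(angle)
    return pe

# Hypothetical example: a Japanese source sentence whose tokens, after
# preordering, would follow an English-like order.  reordered_positions[i]
# is the position token i would take after preordering (illustrative only).
source_tokens = ["kare", "wa", "ringo", "o", "tabeta"]   # "he ate an apple"
reordered_positions = [0, 1, 4, 3, 2]

d_model = 8
surface_pe  = sinusoidal_encoding(range(len(source_tokens)), d_model)
preorder_pe = sinusoidal_encoding(reordered_positions, d_model)

# One possible way to inject the reordering signal into the encoder input:
# add the preordering-based encoding to the token embeddings, either instead
# of or alongside the surface-order encoding.
```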