This paper investigates how to effectively incorporate a pre-trained masked language model (MLM), such as BERT, into an encoder-decoder (EncDec) model for grammatical error correction (GEC). The answer to this question is not as straightforward as one might expect, because the common methods previously used to incorporate an MLM into an EncDec model have potential drawbacks when applied to GEC. For example, the distribution of the inputs to a GEC model can differ considerably (erroneous, clumsy, etc.) from that of the corpora used for pre-training MLMs; however, this issue is not addressed by the previous methods. Our experiments show that our proposed method, in which we first fine-tune an MLM with a given GEC corpus and then use the output of the fine-tuned MLM as additional features in the GEC model, maximizes the benefit of the MLM. The best-performing model achieves state-of-the-art performance on the BEA-2019 and CoNLL-2014 benchmarks. Our code is publicly available at: https://github.com/kanekomasahiro/bert-gec.
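
To make the described setup concrete, the sketch below illustrates one way the output of a (fine-tuned) MLM can be consumed as additional features by an EncDec model. It is only a minimal illustration under stated assumptions, not the authors' implementation (see the linked repository for that): the model name, dimensions, frozen-MLM choice, shared tokenizer, and additive fusion of MLM states with source embeddings are all assumptions made here for brevity.

```python
# Minimal, hypothetical sketch of feeding MLM hidden states to an EncDec GEC model.
# Assumes the GEC model shares the MLM's tokenizer so source tokens align 1:1
# with the MLM's input tokens. Not the paper's implementation.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class MLMFeatureGEC(nn.Module):
    """Toy EncDec model that adds projected MLM hidden states to source embeddings."""

    def __init__(self, mlm_name="bert-base-cased", d_model=512):
        super().__init__()
        # Ideally this checkpoint is an MLM already fine-tuned on the GEC corpus.
        self.mlm = AutoModel.from_pretrained(mlm_name)
        for p in self.mlm.parameters():  # keep the MLM frozen in this sketch
            p.requires_grad = False
        vocab_size = self.mlm.config.vocab_size
        self.embed = nn.Embedding(vocab_size, d_model)
        self.mlm_proj = nn.Linear(self.mlm.config.hidden_size, d_model)
        self.encdec = nn.Transformer(d_model=d_model, batch_first=True)
        self.out_proj = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, src_mask, tgt_ids):
        # MLM representations of the (possibly erroneous) source sentence.
        mlm_states = self.mlm(
            input_ids=src_ids, attention_mask=src_mask
        ).last_hidden_state
        # Fuse MLM features with the encoder's own token embeddings (by addition;
        # concatenation followed by a projection is an equally simple alternative).
        enc_in = self.embed(src_ids) + self.mlm_proj(mlm_states)
        dec_in = self.embed(tgt_ids)
        hidden = self.encdec(enc_in, dec_in)
        return self.out_proj(hidden)  # per-position vocabulary logits


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("bert-base-cased")
    model = MLMFeatureGEC()
    src = tok(["She go to school yesterday ."], return_tensors="pt")
    tgt = tok(["She went to school yesterday ."], return_tensors="pt")
    logits = model(src["input_ids"], src["attention_mask"], tgt["input_ids"])
    print(logits.shape)  # (batch, target length, vocab size)
```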