Zero pronoun resolution remains a challenging NLP task in Thai, yet only a few previous studies have addressed it. We therefore apply transformer-based pre-trained language models, an approach that has outperformed prior state-of-the-art methods across many datasets and downstream tasks, to Thai zero pronoun resolution. We conduct two experiments on a small corpus: (1) using a pre-trained masked language model to predict zero pronominal expressions, and (2) fine-tuning WangchanBERTa on a token classification task to identify the grammatical person of pronouns. The results demonstrate the effectiveness of the masked language model approach (1), which successfully encodes not only grammatical features but also the system of Thai pronoun usage at the discourse level.
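The idea behind experiment (1) can be sketched as follows: the omitted pronoun slot is replaced with a mask token, and candidate pronouns are ranked by how well they fit that slot. The sentence, candidate list, and scoring function below are illustrative stand-ins, not the paper's actual data or model; in a real run the scoring function would be the masked-LM probability from a model such as WangchanBERTa.

```python
# Candidate Thai pronouns grouped by grammatical person (illustrative subset).
CANDIDATES = {
    "1st": ["ผม", "ฉัน"],  # "I" (male speaker / female or informal speaker)
    "2nd": ["คุณ"],         # "you"
    "3rd": ["เขา"],         # "he/she"
}

def build_masked_input(tokens, zero_index, mask_token="<mask>"):
    """Insert a mask token at the position of the omitted (zero) pronoun."""
    return tokens[:zero_index] + [mask_token] + tokens[zero_index:]

def resolve_zero_pronoun(tokens, zero_index, score_fn):
    """Rank candidate pronouns for the zero slot with a scoring function.

    `score_fn(masked_tokens, candidate)` stands in for the masked-LM
    probability of `candidate` at the mask position.
    """
    masked = build_masked_input(tokens, zero_index)
    scored = {
        person: max((score_fn(masked, c), c) for c in forms)
        for person, forms in CANDIDATES.items()
    }
    best_person = max(scored, key=lambda p: scored[p][0])
    return best_person, scored[best_person][1]

# Toy scoring function that favours the first-person pronoun, purely for
# demonstration; it does not reflect any model's actual predictions.
toy_score = lambda masked, cand: 0.9 if cand == "ผม" else 0.1
# The zero pronoun precedes the verb phrase "กิน ข้าว แล้ว" ("[I] ate already").
person, pronoun = resolve_zero_pronoun(["กิน", "ข้าว", "แล้ว"], 0, toy_score)
```

Experiment (2) instead frames the problem as token classification, assigning a person label to each token rather than filling a masked slot.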