Domain-adaptive pre-training (DAPT) is a technique in natural language processing (NLP) that tailors pre-trained language models to specific domains, improving their performance in real-world applications. In this paper, we evaluate the effectiveness of DAPT for governmental text classification tasks, examining how factors such as the choice of target-domain dataset, the language composition of the pre-trained model, and dataset size affect model performance. We systematically vary these factors, producing distinct domain-adapted models derived from BERTimbau and LaBSE. Our experimental results show that selecting appropriate target-domain datasets and pre-training strategies can notably improve the performance of language models on governmental tasks.
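
To make the DAPT setup concrete, the following is a minimal sketch of continued masked-language-model pre-training using the Hugging Face Transformers library. It assumes BERTimbau as the base checkpoint; the corpus file (`gov_corpus.txt`) and all hyperparameters are illustrative placeholders, not the paper's exact configuration.

```python
# Minimal DAPT sketch: continue masked-language-model pre-training of
# BERTimbau on an unlabeled target-domain corpus. Corpus path and
# hyperparameters below are hypothetical, for illustration only.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Load the base checkpoint (BERTimbau).
model_name = "neuralmind/bert-base-portuguese-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Unlabeled target-domain corpus: one document per line (hypothetical file).
corpus = load_dataset("text", data_files={"train": "gov_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard masked language modeling objective (15% token masking).
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bertimbau-dapt-gov",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=tokenized,
).train()
# The adapted checkpoint in output_dir can then be fine-tuned on the
# downstream governmental classification task.
```

The same recipe applies to LaBSE by swapping in its checkpoint; the key design choice in DAPT is that only the unlabeled pre-training corpus changes, while the self-supervised objective stays the same as in the original pre-training.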