The volume of financial documents grows year by year, and natural language processing (NLP) techniques are widely applied to process them. In particular, Transformer-based pre-trained models such as BERT have been highly successful in NLP in recent years. These models have been adapted to the financial domain by pre-training on financial corpora; however, most financial pre-trained models are BERT-based and do not benefit from the mechanisms introduced in more recent state-of-the-art models. In addition, many Japanese models require word segmentation based on morphological analysis before subword tokenization, which can reduce portability and processing efficiency. In this study, we show that models without pre-word segmentation suffer no performance disadvantage compared with models that use pre-word segmentation, in both the financial and general domains, while benefiting from lower computational cost and higher processing efficiency owing to a smaller number of tokens. Token length had little effect on performance in downstream classification tasks. We build both general-domain and financial-domain models without pre-word segmentation at scale. Our financial pre-trained models outperform conventional models on classification tasks in the financial domain, and the general-domain models serve as good baselines for adaptation to specialized domains. Our evaluation experiments also show that additional pre-training is effective because it leverages the performance of models built from large general corpora. The pre-trained models are available at https://huggingface.co/izumi-lab.
*1 Current affiliation: Preferred Networks, Inc.
*2 Current affiliation: Faculty of Information Science and Technology, Hokkaido University
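As a usage note, the released models can be loaded with the Hugging Face Transformers library and applied to raw Japanese text without a separate morphological-analysis step. The sketch below is illustrative only: the model id under the izumi-lab namespace and the sample sentence are assumptions, and the actual model names should be taken from the hub page above.

```python
from transformers import AutoTokenizer

# Assumed model id for illustration; the released models are listed at
# https://huggingface.co/izumi-lab and the exact name may differ.
MODEL_ID = "izumi-lab/deberta-v2-base-japanese"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Raw Japanese text is tokenized directly into subwords;
# no MeCab-style pre-word segmentation is required beforehand.
text = "当社の売上高は前年同期比で増加した。"
print(tokenizer.tokenize(text))
```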