Electroencephalography (EEG) signals are brain signals acquired non-invasively. Owing to their high portability and practicality, EEG signals have found extensive application in monitoring human physiological states across various domains. In recent years, deep learning methodologies have been explored to decode the intricate information embedded in EEG signals. However, because EEG signals are acquired from humans, collecting the large amounts of data required to train deep learning models is difficult. Therefore, previous research has attempted to develop pre-trained models that can yield significant performance improvements through fine-tuning when data are scarce. Nonetheless, existing pre-trained models often suffer from constraints, such as the necessity to operate on datasets with identical configurations or the need to distort the original data before the pre-trained model can be applied. In this paper, we propose the domain-free transformer, called DFformer, for generalizing the EEG pre-trained model. In addition, we present a pre-trained model based on DFformer that is capable of seamless integration across diverse datasets without requiring architectural modification or data distortion. The proposed model achieved competitive performance across motor imagery and sleep stage classification datasets. Notably, even when fine-tuned on datasets distinct from those used in the pre-training phase, DFformer demonstrated marked performance enhancements. Hence, we demonstrate the potential of DFformer to overcome the conventional limitations of pre-trained model development, offering robust applicability across a spectrum of domains.