Automatic classification of remote sensing images with machine learning is challenging because of the images' complex characteristics, such as multiple resolutions, heterogeneous appearance, and multi-spectral channels. Deep learning methods have recently achieved promising results in the analysis of remote sensing satellite images. However, methods based on convolutional neural networks (CNNs) struggle to analyze the intrinsic objects in satellite images and have not achieved optimal performance on such imagery, owing to complications such as coarse resolution, cloud masking, and the varied sizes and appearance of embedded objects. The receptive fields of convolutional operations cannot establish long-range dependencies and lack the global contextual connectivity needed for effective feature extraction. To address this problem, we propose an improved deep learning-based vision transformer model for the efficient analysis of remote sensing images. The proposed model incorporates a multi-head local self-attention mechanism with a patch-shifting procedure to provide both local and global context for the effective extraction of multi-scale, multi-resolution spatial features. The model is further enhanced by fine-tuning its hyper-parameters, introducing dropout modules and a linearly decaying learning rate scheduler. This approach leverages local self-attention to learn and extract the complex features of satellite images. Experiments were conducted on four distinct remote sensing image datasets: RSSCN, EuroSAT, UC Merced (UCM), and SIRI-WHU. The results show that the proposed vision transformer improves on CNN-based methods.
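To make the described mechanism concrete, the sketch below shows, in PyTorch, one plausible reading of the ingredients named above: patch shifting before tokenization (so each token carries local neighbourhood context), multi-head self-attention with dropout over the resulting tokens, and a linearly decaying learning rate schedule. This is a minimal illustrative sketch, not the authors' implementation; all module names, dimensions, shift offsets, and hyper-parameter values (e.g. a half-patch shift, patch size 8, embedding dimension 192) are assumptions.

```python
import torch
import torch.nn as nn

class ShiftedPatchEmbed(nn.Module):
    """Patch shifting: concatenate the input with four diagonally shifted
    copies before patchifying, so each patch token also encodes its
    spatial neighbourhood (local context for the attention layers)."""
    def __init__(self, patch_size=8, in_ch=3, dim=192):
        super().__init__()
        self.shift = patch_size // 2  # half-patch shift (assumed offset)
        # Patchify + linear projection in one strided convolution.
        self.proj = nn.Conv2d(in_ch * 5, dim, kernel_size=patch_size,
                              stride=patch_size)

    def forward(self, x):                              # x: (B, C, H, W)
        s = self.shift
        shifts = [(s, s), (s, -s), (-s, s), (-s, -s)]  # four diagonal shifts
        shifted = [torch.roll(x, sh, dims=(2, 3)) for sh in shifts]
        x = torch.cat([x] + shifted, dim=1)            # (B, 5C, H, W)
        x = self.proj(x)                               # (B, dim, H/p, W/p)
        return x.flatten(2).transpose(1, 2)            # (B, N, dim) tokens

class AttentionBlock(nn.Module):
    """Pre-norm multi-head self-attention over the patch tokens, with the
    dropout regularization mentioned in the abstract."""
    def __init__(self, dim=192, heads=6, dropout=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, dropout=dropout,
                                          batch_first=True)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):                              # x: (B, N, dim)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)                    # self-attention: Q=K=V
        return x + self.drop(out)                      # residual connection

# Linearly decaying learning-rate schedule (illustrative hyper-parameters).
model = nn.Sequential(ShiftedPatchEmbed(), AttentionBlock())
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0, end_factor=0.01, total_iters=100)

tokens = model(torch.randn(2, 3, 64, 64))              # -> (2, 64, 192)
```

Under these assumptions, shifting by half a patch in the four diagonal directions lets every 8x8 patch token see pixels that would otherwise fall in neighbouring patches, which is one simple way to inject local context into the tokens before global self-attention is applied.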