Accurate segmentation of brain tumors from MRI sequences is essential across diverse clinical scenarios, enabling precise delineation of anatomical structures and disease-affected regions. This study presents a novel deep-learning method for segmenting glioma brain tumors based on a hybrid architecture that combines a ResNet U-Net with Transformer blocks. The proposed model captures both the local and global contextual information present in MRI scans. The architecture comprises a ResNet-based encoder that extracts hierarchical features, followed by residual blocks that enhance feature representation while preserving spatial information. A central Transformer block with multi-head attention models long-range dependencies and contextual relationships, progressively refining feature interactions. To handle variations in structural scale within MRI images, skip connections are used during decoding, and transposed convolutional layers in the decoder upsample feature maps while retaining detail and incorporating contextual information from earlier layers. The model was rigorously evaluated on the BraTS2019 dataset using a comprehensive set of metrics, including accuracy, IoU, specificity, sensitivity, Dice score, and precision, reported separately for the whole tumor, tumor core, and enhancing tumor regions. On the validation set, the proposed model achieved Dice scores of 0.91, 0.89, and 0.84 for the whole tumor, tumor core, and enhancing tumor, respectively, with an overall accuracy of 98%.
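To make the described pipeline concrete, the following is a minimal PyTorch sketch of the hybrid design outlined above: residual blocks standing in for the ResNet-based encoder, a Transformer bottleneck with multi-head attention over the flattened feature grid, and a decoder with transposed convolutions and skip connections. It is not the authors' implementation; the channel widths, number of attention heads, transformer depth, 4-channel MRI input, and 4-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity/projection shortcut (ResNet-style)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.norm1 = nn.BatchNorm2d(out_ch)
        self.norm2 = nn.BatchNorm2d(out_ch)
        self.proj = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.act(self.norm1(self.conv1(x)))
        h = self.norm2(self.conv2(h))
        return self.act(h + self.proj(x))


class HybridResUNetTransformer(nn.Module):
    # Assumed hyperparameters: 4 MRI modalities in, 4 segmentation classes out.
    def __init__(self, in_ch=4, n_classes=4, widths=(32, 64, 128, 256), heads=8):
        super().__init__()
        # Encoder: residual blocks with max-pool downsampling; intermediate
        # features are kept for the skip connections.
        self.enc = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.enc.append(ResidualBlock(ch, w))
            ch = w
        self.pool = nn.MaxPool2d(2)

        # Bottleneck: Transformer encoder layers (multi-head attention) over
        # the flattened spatial grid to model long-range dependencies.
        layer = nn.TransformerEncoderLayer(d_model=widths[-1], nhead=heads,
                                           dim_feedforward=4 * widths[-1],
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

        # Decoder: transposed convolutions upsample; skip connections
        # reintroduce high-resolution encoder features.
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        rev = widths[::-1]
        for i in range(len(rev) - 1):
            self.up.append(nn.ConvTranspose2d(rev[i], rev[i + 1], 2, stride=2))
            self.dec.append(ResidualBlock(2 * rev[i + 1], rev[i + 1]))
        self.head = nn.Conv2d(widths[0], n_classes, 1)

    def forward(self, x):
        skips = []
        for block in self.enc[:-1]:
            x = block(x)
            skips.append(x)
            x = self.pool(x)
        x = self.enc[-1](x)

        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)            # (B, H*W, C) token sequence
        x = self.transformer(seq).transpose(1, 2).reshape(b, c, h, w)

        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))      # fuse skip features
        return self.head(x)                           # per-pixel class logits


if __name__ == "__main__":
    model = HybridResUNetTransformer()
    logits = model(torch.randn(1, 4, 128, 128))       # 4 MRI modalities, 128x128 slice
    print(logits.shape)                               # torch.Size([1, 4, 128, 128])
```

In this sketch the Transformer operates only at the lowest-resolution stage, where the token count is small enough for full self-attention; per-class Dice, IoU, sensitivity, specificity, and precision would then be computed from the argmax of the output logits against the ground-truth labels.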