Brain tumors, even when benign, are often fatal and associated with low life expectancy because of the critical structures they affect. Diagnosing and treating these tumors is challenging even for experienced physicians, owing to the heterogeneity of tumor cells. In recent years, deep learning (DL) methods have been adopted to aid in the diagnosis, detection, and segmentation of brain neoplasms. Segmentation, however, is computationally expensive and is typically performed with convolutional neural networks (CNNs) built on the UNet framework. While UNet has shown promising results, the conventional architecture can be extended with newer models and components to improve performance. In this work, we propose three computationally inexpensive segmentation networks inspired by Transformers. The networks follow a four-stage encoder-decoder structure and incorporate our new cross-attention model together with separable convolution layers, preserving the dimensionality of the activation maps and reducing computational cost while maintaining high segmentation performance. The attention model is integrated in different configurations by modifying the transition layers and the encoder and decoder blocks. Evaluated against the classical UNet, the proposed networks require up to an order of magnitude fewer trainable parameters. Moreover, one of the models outperforms UNet, training in significantly less time and reaching a Dice Similarity Coefficient (DSC) of up to 94%, demonstrating high effectiveness in brain tumor segmentation.
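Two quantities mentioned above can be made concrete with a short sketch: why separable convolutions reduce parameter counts relative to standard convolutions, and how the reported DSC metric is computed for binary masks. The helper names and channel sizes below are illustrative assumptions, not the paper's actual layers.

```python
import numpy as np

# Illustrative parameter counts (channel sizes are assumptions, not the
# paper's layers): a standard k x k convolution vs. a depthwise separable one.
def conv_params(c_in, c_out, k=3):
    # standard convolution: one k x k kernel per (input, output) channel pair
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k=3):
    # depthwise (one k x k kernel per input channel) + pointwise 1x1 convolution
    return c_in * k * k + c_in * c_out

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

print(conv_params(64, 128))            # 73728
print(separable_conv_params(64, 128))  # 8768 -- roughly 8x fewer parameters
mask = np.ones((4, 4), dtype=np.uint8)
print(round(dice_coefficient(mask, mask), 2))  # 1.0 for a perfect overlap
```

For this example the separable variant needs roughly an eighth of the parameters, which is the kind of saving that lets the proposed networks stay small while keeping the DSC high.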