Automated segmentation of breast tumors in breast ultrasound images remains a challenging problem. The morphological diversity, ambiguous boundaries, and heterogeneity of malignant breast lesions limit segmentation accuracy. To address these challenges, we propose an innovative deep learning-based method, the Dual-Channel Deep Residual Attention UPerNet (DDRA-net), for efficient and accurate segmentation of breast tumor regions. The core of DDRA-net is the Dual-Channel Deep Residual Attention (DDRA) module, which integrates depth-wise separable convolution with the Convolutional Block Attention Module (CBAM). This design enhances the extraction of crucial features within the receptive field so as to better capture subtle details of breast lesions. In extensive experimental evaluation, DDRA-net demonstrates strong performance on a publicly available breast ultrasound dataset, exhibiting higher segmentation accuracy and stability than contemporary mainstream deep learning methods. Notably, the flexibility of this method allows easy integration with other network architectures to further improve the performance and applicability of breast tumor segmentation. On the Breast Ultrasound Image dataset, DDRA-net achieved precision, recall, IoU, F1 score, Dice coefficient, and Hausdorff distance values of 95.31%, 90.79%, 88.00%, 92.39%, 95.46%, and 3.02, respectively. Compared with the original UPerNet, DDRA-net improved these six metrics by 2.92%, 4.64%, 5.52%, 4.97%, 3.4%, and 24.5%, respectively.
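The abstract names two building blocks of the DDRA module, depth-wise separable convolution and CBAM, without implementation details. As a hedged illustration only (the actual DDRA-net layer sizes, reduction ratios, and residual wiring are assumptions, not taken from the paper), the following minimal NumPy sketch shows what each block computes: a depthwise 3x3 filter per channel followed by a pointwise 1x1 channel mix, then CBAM's channel and spatial attention gates.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise separable convolution: one 3x3 filter per input channel
    (depthwise), then a 1x1 convolution mixing channels (pointwise).
    x: (C, H, W); dw_kernels: (C, 3, 3); pw_weights: (C_out, C)."""
    C, H, W = x.shape
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    dw = np.zeros_like(x)
    for c in range(C):                      # each channel filtered independently
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(pad[c, i:i + 3, j:j + 3] * dw_kernels[c])
    # pointwise 1x1 convolution: mix channels at every spatial location
    return np.einsum('oc,chw->ohw', pw_weights, dw)

def cbam_channel_attention(x, w1, w2):
    """CBAM channel attention: a shared two-layer MLP applied to global
    average- and max-pooled descriptors, summed and passed through a
    sigmoid gate. x: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    avg, mx = x.mean(axis=(1, 2)), x.max(axis=(1, 2))   # (C,) descriptors
    relu = lambda v: np.maximum(v, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ relu(w1 @ avg) + w2 @ relu(w1 @ mx))))
    return x * gate[:, None, None]          # rescale each channel

def cbam_spatial_attention(x, k):
    """CBAM spatial attention: channel-wise average and max maps are
    stacked and convolved with a 7x7 kernel, then sigmoid-gated.
    x: (C, H, W); k: (2, 7, 7)."""
    maps = np.stack([x.mean(axis=0), x.max(axis=0)])    # (2, H, W)
    pad = np.pad(maps, ((0, 0), (3, 3), (3, 3)))
    H, W = x.shape[1:]
    att = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            att[i, j] = np.sum(pad[:, i:i + 7, j:j + 7] * k)
    return x * (1.0 / (1.0 + np.exp(-att)))             # rescale each position

# Hypothetical composition of the two named blocks (shapes chosen for
# illustration; not the paper's configuration):
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
y = depthwise_separable_conv(x, rng.standard_normal((4, 3, 3)),
                             rng.standard_normal((6, 4)))
y = cbam_channel_attention(y, rng.standard_normal((2, 6)),
                           rng.standard_normal((6, 2)))
y = cbam_spatial_attention(y, rng.standard_normal((2, 7, 7)))
```

Depthwise separable convolution factorizes a full convolution into per-channel spatial filtering plus a 1x1 channel mix, cutting parameters and FLOPs, while CBAM sequentially reweights channels and spatial positions, which matches the abstract's goal of emphasizing crucial features within the receptive field.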