The sustainable development of marine fisheries depends on accurate measurement of fish stock data. Semantic segmentation methods based on deep learning can automatically produce segmentation masks of fish in images, from which measurement data can be obtained. However, general semantic segmentation methods cannot accurately segment fish objects in underwater images. In this study, a Dual Pooling-aggregated Attention Network (DPANet) is proposed to adaptively capture long-range dependencies in an efficient, computation-friendly manner, enhancing feature representation and improving segmentation performance. Specifically, a novel pooling-aggregated position attention module and a pooling-aggregated channel attention module are designed to aggregate contexts along the spatial dimension and the channel dimension, respectively. To reduce computational costs, these two modules adopt pooling operations along the channel dimension and along the spatial dimension, respectively, to aggregate information. In each module, attention maps are generated by four different paths and aggregated into one. The authors conduct extensive experiments to validate the effectiveness of DPANet and achieve new state-of-the-art segmentation performance on the well-known fish image dataset DeepFish as well as on the underwater image dataset SUIM, reaching Mean IoU scores of 91.08% and 85.39%, respectively, while reducing the FLOPs of the attention modules by about 93%.
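As a concrete illustration of the position branch, the PyTorch sketch below pools the query/key features along the channel dimension before computing the spatial affinity, and sums four avg/max pooling paths into one attention map. This is only a minimal sketch of one plausible realization: the class name, the reduction ratio, and the specific avg/max four-path design are assumptions for illustration, not the authors' verified architecture.

```python
import torch
import torch.nn as nn

class PoolingAggregatedPositionAttention(nn.Module):
    """Illustrative position attention with channel-wise pooling.

    The (HW x HW) spatial affinity is computed from query/key features
    whose channels are pooled down by `reduction`, so each of the four
    affinity paths costs HW*HW*(C/r) multiply-adds instead of HW*HW*C.
    All names and the avg/max four-path design are assumptions.
    """

    def __init__(self, in_channels, reduction=8):
        super().__init__()
        assert in_channels % reduction == 0
        self.reduced = in_channels // reduction
        self.query = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual scale

    def _pool_channels(self, x):
        # Pool along the channel axis: (B, C, N) -> two (B, C/r, N) maps,
        # one from average pooling and one from max pooling.
        b, c, n = x.shape
        grouped = x.view(b, self.reduced, c // self.reduced, n)
        return grouped.mean(dim=2), grouped.amax(dim=2)

    def forward(self, x):
        b, c, h, w = x.shape
        q_paths = self._pool_channels(self.query(x).flatten(2))
        k_paths = self._pool_channels(self.key(x).flatten(2))
        # Four affinity paths (avg/max query x avg/max key), aggregated
        # into a single (B, HW, HW) attention map.
        energy = sum(q.transpose(1, 2) @ k for q in q_paths for k in k_paths)
        attn = torch.softmax(energy, dim=-1)
        v = self.value(x).flatten(2)                     # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out
```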
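A matching sketch of the channel branch, under the same caveats: here the features are pooled along the spatial dimension (a hypothetical `pool_size` grid) before the channel affinity is formed, which shrinks the cost of the (C x C) affinity from C*C*HW to C*C*S multiply-adds.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoolingAggregatedChannelAttention(nn.Module):
    """Illustrative channel attention with spatial pooling.

    The (C x C) channel affinity is computed from features pooled down
    to a small S = pool_size**2 spatial grid. Pool size and the avg/max
    four-path design are assumptions for illustration.
    """

    def __init__(self, pool_size=8):
        super().__init__()
        self.pool_size = pool_size
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual scale

    def _pool_spatial(self, x):
        # Pool along the spatial axes: (B, C, H, W) -> two (B, C, S) maps.
        s = self.pool_size
        return (F.adaptive_avg_pool2d(x, s).flatten(2),
                F.adaptive_max_pool2d(x, s).flatten(2))

    def forward(self, x):
        b, c, h, w = x.shape
        q_paths = self._pool_spatial(x)
        # Four (B, C, C) affinity paths aggregated into one attention map.
        energy = sum(q @ k.transpose(1, 2) for q in q_paths for k in q_paths)
        attn = torch.softmax(energy, dim=-1)
        out = (attn @ x.flatten(2)).view(b, c, h, w)
        return x + self.gamma * out
```

The abstract does not specify how the two branches are combined; one common choice in dual-attention heads is parallel fusion by summation, e.g.:

```python
x = torch.randn(2, 512, 32, 32)
pam = PoolingAggregatedPositionAttention(512)
cam = PoolingAggregatedChannelAttention()
y = pam(x) + cam(x)   # assumed parallel fusion, not confirmed by the abstract
print(y.shape)        # torch.Size([2, 512, 32, 32])
```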