Object detection is a fundamental yet long-standing challenge in computer vision. However, current detection models fail to attend to salient features when fusing the lateral connections and top-down information flows in feature pyramid networks (FPNs). To address this, we propose an object detection method based on an enhanced bi-directional attention feature pyramid network, which strengthens the feature representation capability of the lateral connections and top-down pathways in the FPN. The method adopts a triplet module that attends to salient features of the original multi-scale information along the spatial and channel dimensions, forming an enhanced triplet attention. In addition, it introduces an improved top-down attention that fuses contextual information by exploiting the correlation of features between adjacent scales. Furthermore, adaptively spatial feature fusion and self-attention are introduced to enlarge the receptive field and improve detection performance at the deeper pyramid levels. Extensive experiments on the PASCAL VOC, MS COCO, KITTI, and CrowdHuman datasets show that our method achieves performance gains of 1.8%, 0.8%, 0.5%, and 0.2% on the respective datasets. These results indicate that our method is effective and competitive with advanced detectors.
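To make the triplet-attention component concrete, the following is a minimal sketch of the kind of three-branch spatial/channel attention the abstract refers to, written in PyTorch under the standard rotate-to-attend formulation; the module and class names (ZPool, AttentionGate, TripletAttention) are illustrative assumptions and do not reproduce the authors' exact implementation.

```python
import torch
import torch.nn as nn


class ZPool(nn.Module):
    """Concatenate max- and mean-pooled features along the channel axis."""
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True)[0],
                          x.mean(dim=1, keepdim=True)], dim=1)


class AttentionGate(nn.Module):
    """Z-pool followed by a 7x7 conv and sigmoid to produce an attention map."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(1)

    def forward(self, x):
        return torch.sigmoid(self.bn(self.conv(self.pool(x))))


class TripletAttention(nn.Module):
    """Three-branch attention over (C,W), (C,H), and (H,W) interactions."""
    def __init__(self):
        super().__init__()
        self.cw_gate = AttentionGate()  # channel-width interaction
        self.ch_gate = AttentionGate()  # channel-height interaction
        self.hw_gate = AttentionGate()  # plain spatial attention

    def forward(self, x):                            # x: (B, C, H, W)
        # Branch 1: rotate so H acts as the "channel" axis, attend over (C, W)
        x_cw = x.permute(0, 2, 1, 3)                 # (B, H, C, W)
        x_cw = (x_cw * self.cw_gate(x_cw)).permute(0, 2, 1, 3)
        # Branch 2: rotate so W acts as the "channel" axis, attend over (C, H)
        x_ch = x.permute(0, 3, 2, 1)                 # (B, W, H, C)
        x_ch = (x_ch * self.ch_gate(x_ch)).permute(0, 3, 2, 1)
        # Branch 3: ordinary spatial attention over (H, W)
        x_hw = x * self.hw_gate(x)
        # Average the three refined tensors
        return (x_cw + x_ch + x_hw) / 3.0


# Usage: refine one FPN level (e.g. a hypothetical P3 map with 256 channels).
feat = torch.randn(2, 256, 32, 32)
refined = TripletAttention()(feat)   # same shape, salient features emphasized
```

In an FPN setting, a module of this form could be applied to the lateral-connection features before fusion with the top-down pathway, which is consistent with the role the abstract describes for the enhanced triplet attention.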