Compared with point features, line features provide richer geometric information in vision tasks. Although handcrafted line descriptors have been studied for a long time, learning-based line descriptors remain comparatively underdeveloped. Inspired by the message passing mechanism of graph neural networks, we propose a new neural network architecture, LDAM, which alternates between two attention mechanisms to augment line descriptors and extract more line correspondences. In contrast to previous methods, ours learns the geometric properties of the images and prior knowledge through mutual feature aggregation between the image pair. Experiments on real data show that LDAM achieves strong matching accuracy and remains robust under viewpoint changes and occlusion.
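To make the alternating aggregation concrete, the following is a minimal sketch assuming the two attention mechanisms are self- and cross-attention blocks with residual connections, as in typical attention-based message passing; the function names `attention` and `aggregate`, the layer count, and the descriptor dimensions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def attention(query, key, value):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ value

def aggregate(desc_a, desc_b, num_layers=4):
    """Alternately apply self- and cross-attention so the line descriptors
    of each image aggregate context from both images (illustrative sketch)."""
    for _ in range(num_layers):
        # Self-attention: each line attends to lines in its own image.
        desc_a = desc_a + attention(desc_a, desc_a, desc_a)
        desc_b = desc_b + attention(desc_b, desc_b, desc_b)
        # Cross-attention: each line attends to lines in the other image.
        desc_a = desc_a + attention(desc_a, desc_b, desc_b)
        desc_b = desc_b + attention(desc_b, desc_a, desc_a)
    return desc_a, desc_b

# Toy usage: 50 and 60 line descriptors of dimension 128 for the two images.
da, db = np.random.randn(50, 128), np.random.randn(60, 128)
da_aug, db_aug = aggregate(da, db)
```

In this sketch the self-attention step propagates context among lines within one image, while the cross-attention step lets descriptors absorb information from the other image, which is the mutual aggregation the abstract refers to; a learned implementation would add trainable projections and normalization.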