The task of multi-label image classification is to accurately recognize multiple objects in an input image. Most recent works rely on a label co-occurrence matrix computed from the training data to construct the graph structure, which is inflexible and may degrade model generalizability. In addition, these methods fail to capture the semantic correlation between channel feature maps, which could further improve model performance. To address these issues, we propose a Double Attention framework based on Graph Attention neTwork (DA-GAT) that effectively learns the correlation between labels from the training data. First, we devise a new channel attention mechanism that enhances the semantic correlation between channel feature maps, so as to implicitly capture the correlation between labels. Second, we propose a new label attention mechanism that avoids the adverse impact of a manually constructed label co-occurrence matrix: it takes only the label embeddings as input and automatically constructs a label relation matrix to explicitly establish the correlation between labels. Finally, we fuse the outputs of the two attention mechanisms to further improve model performance. Extensive experiments are conducted on three public multi-label classification benchmarks. Our DA-GAT model achieves mAPs of 87.1%, 96.6%, and 64.3% on MS-COCO 2014, PASCAL VOC 2007, and NUS-WIDE, respectively, clearly outperforming existing state-of-the-art methods. In addition, visualization experiments demonstrate that each attention mechanism captures the correlation between labels well and significantly improves model performance.
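The abstract describes the two attention branches only at a high level. The PyTorch sketch below illustrates one plausible reading of them: a channel attention that re-weights channel feature maps by pairwise similarity, and a GAT-style label attention that learns a label relation matrix from label embeddings instead of a hand-built co-occurrence matrix. The module names, dimensions, and the simple dot-product fusion at the end are illustrative assumptions, not the paper's actual DA-GAT implementation.

```python
# Minimal sketch of the two attention branches described in the abstract.
# All names, shapes, and the fusion step are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Hypothetical channel attention: re-weights channel feature maps by their
    pairwise similarity so that semantically related channels reinforce each other."""
    def __init__(self, channels: int):
        super().__init__()
        self.scale = channels ** -0.5

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, channels, H, W)
        b, c, h, w = feats.shape
        flat = feats.view(b, c, h * w)                                  # (b, c, hw)
        sim = torch.softmax(flat @ flat.transpose(1, 2) * self.scale, dim=-1)  # (b, c, c)
        out = sim @ flat                                                # (b, c, hw)
        return out.view(b, c, h, w) + feats                             # residual connection


class LabelGraphAttention(nn.Module):
    """Hypothetical GAT-style label attention: learns a label relation matrix
    directly from label embeddings rather than a counted co-occurrence matrix."""
    def __init__(self, embed_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(embed_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, label_embed: torch.Tensor) -> torch.Tensor:
        # label_embed: (num_labels, embed_dim), e.g. word vectors of label names
        h = self.proj(label_embed)                                      # (L, d)
        num_labels = h.size(0)
        # Concatenate every pair (h_i, h_j) and score it, as in a single-head GAT layer.
        pairs = torch.cat(
            [h.unsqueeze(1).expand(num_labels, num_labels, -1),
             h.unsqueeze(0).expand(num_labels, num_labels, -1)],
            dim=-1,
        )                                                               # (L, L, 2d)
        relation = torch.softmax(
            F.leaky_relu(self.attn(pairs)).squeeze(-1), dim=-1
        )                                                               # learned (L, L) relation matrix
        return F.elu(relation @ h)                                      # aggregated label features (L, d)


if __name__ == "__main__":
    # Toy shapes only; backbone features and label embeddings would come from
    # a CNN and pretrained word vectors in a real pipeline.
    feats = torch.randn(2, 512, 14, 14)
    label_embed = torch.randn(80, 300)                    # 80 labels, 300-d embeddings
    img_feat = ChannelAttention(512)(feats).mean(dim=(2, 3))   # (2, 512) pooled image feature
    label_feat = LabelGraphAttention(300, 512)(label_embed)    # (80, 512) label features
    logits = img_feat @ label_feat.t()                          # (2, 80) multi-label scores
```

The final dot product is a deliberately simple stand-in for the fusion step mentioned in the abstract; the paper's actual fusion of the two attention outputs may be more elaborate.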