2020
DOI: 10.1609/aaai.v34i04.5842
DIANet: Dense-and-Implicit Attention Network

Abstract: Attention networks have successfully boosted performance in various vision problems. Previous works lay emphasis on designing new attention modules and plugging them individually into networks. Our paper proposes a novel and simple framework that shares an attention module across different network layers to encourage the integration of layer-wise information; this parameter-sharing module is referred to as the Dense-and-Implicit-Attention (DIA) unit. Many choices of modules can be used in the DIA unit.…
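The parameter-sharing idea in the abstract can be sketched minimally. The snippet below is an illustrative stand-in, not the paper's implementation: it uses a hypothetical SE-style channel gate with one shared weight matrix (the actual DIA unit uses an LSTM-based module), and shows that the same module instance, hence the same parameters, is reused after different layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SharedAttention:
    """One attention module reused at every layer, in the spirit of the
    DIA unit. Illustrative only: a single shared weight matrix replaces
    the paper's LSTM-based module."""
    def __init__(self, channels, rng):
        self.w = rng.standard_normal((channels, channels)) * 0.1

    def __call__(self, feat):
        squeezed = feat.mean(axis=(1, 2))   # global average pool -> (C,)
        mask = sigmoid(self.w @ squeezed)   # per-channel gate in (0, 1)
        return feat * mask[:, None, None]   # reweight the feature map

rng = np.random.default_rng(1)
att = SharedAttention(4, rng)
x = rng.standard_normal((4, 8, 8))
# the SAME module (same parameters) is applied after successive "layers"
y1 = att(x)
y2 = att(y1)
```

Because the gate values lie in (0, 1), each application can only attenuate channels, never amplify them, which makes the reuse across layers stable in this toy setting.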

Cited by 35 publications (17 citation statements)
References 19 publications
“…SGN has also outperformed recent work such as MIM [47], CLS-GAN [48], DSN [49], and BinaryConnect [50]. On the CIFAR-100, SGN has achieved 84.71%, outperforming recent studies such as MixMatch [51], Mish [52], DIANet [53], and ResNet-1001 [54]. Table V shows more experimental results on the CIFAR-100.…”
Section: Model
confidence: 72%
“…Through previous work, we found that the results of some models can be effectively improved by introducing low-level features, even for simple aggregations [22]. Beyond that, we were inspired by [9], which shows that ConvLSTM is a very powerful module for connecting and integrating information across multiple layers. Therefore, we consider introducing a ConvLSTM module between multi-layer feature maps to help feature fusion between different levels.…”
Section: ConvLSTM for Navigation From High-Level to Low-Level
confidence: 99%
“…First, the added-in module extracts internal information of a network, which can be squeezed channel-wise information (Hu, Shen, and Sun 2018) or spatial information. Next, the module processes the extraction and generates a mask to measure the importance of features via a fully connected layer (Hu, Shen, and Sun 2018), a convolution layer (Wang et al. 2018), or an LSTM (Huang et al. 2019b). Last, the mask is applied back to the features to adjust feature importance.…”
Section: Related Work
confidence: 99%
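The three steps quoted above (extract squeezed information, generate a mask, apply it back) can be sketched as an SE-style channel attention in NumPy. This is a minimal illustration with hypothetical shapes and weights, not any cited paper's exact implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Three steps from the passage above, SE-style:
    1) extract: global average pool squeezes channel-wise information
    2) mask: two fully connected layers score each channel in (0, 1)
    3) apply: the mask reweights the original feature map"""
    squeezed = feat.mean(axis=(1, 2))        # (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)  # ReLU bottleneck, (C // r,)
    mask = sigmoid(w2 @ hidden)              # per-channel importance, (C,)
    return feat * mask[:, None, None]        # broadcast over H and W

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                      # hypothetical sizes
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))        # squeeze weights
w2 = rng.standard_normal((C, C // r))        # excitation weights
out = channel_attention(feat, w1, w2)
```

Swapping the two fully connected layers for a convolution or an LSTM, as the passage notes, changes only how the mask is generated; the extract and apply steps stay the same.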