2023
DOI: 10.1109/tcyb.2021.3087662
L₁ Sparsity-Regularized Attention Multiple-Instance Network for Hyperspectral Target Detection

Cited by 17 publications (8 citation statements)
References 52 publications
“…To solve this problem, we use attention thresholding after Equation (4). [30]
$$A_{a,g} = \begin{cases} A_{a,g} & \text{if } A_a > \gamma \\ 0 & \text{else} \end{cases}, \quad g \in 1, \dots, G;\ a \in 1, \dots, A$$…”
Section: Attentional Filtering Block
confidence: 99%
“…To solve this problem, we use attention thresholding after Equation (4). [30]
$$A_{a,g} = \begin{cases} A_{a,g} & \text{if } A_a > \gamma \\ 0 & \text{else} \end{cases}, \quad g \in 1, \dots, G;\ a \in 1, \dots, A$$
where γ is the threshold value, and g is the index of packets.…”
Section: Proposed Deep Learning-Based BLE 5.1 AoA IPS
confidence: 99%
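
As a concrete illustration of the thresholding rule quoted above, the sketch below zeroes attention weights that do not exceed γ. The array shape, the element-wise form of the comparison, and the optional renormalisation are assumptions made for this example, not details taken from the cited papers.

```python
import numpy as np

def threshold_attention(A, gamma=0.05):
    """Zero out attention weights below a threshold gamma (illustrative sketch).

    A     : array of shape (num_heads, G) -- attention weights per head and packet
            (shape and naming are assumptions for this example).
    gamma : threshold; weights not exceeding gamma are treated as irrelevant.
    """
    # The quoted rule writes the condition as A_a > gamma; the element-wise
    # comparison used here is an assumption for illustration.
    A_thr = np.where(A > gamma, A, 0.0)
    # Optional: renormalise each row so the surviving weights sum to 1.
    # Whether the cited method renormalises is not stated in the quote.
    row_sums = A_thr.sum(axis=-1, keepdims=True)
    return np.divide(A_thr, row_sums, out=np.zeros_like(A_thr), where=row_sums > 0)

# Toy usage: small weights are suppressed, dominant ones are kept.
A = np.array([[0.02, 0.50, 0.03, 0.45],
              [0.10, 0.10, 0.70, 0.10]])
print(threshold_attention(A, gamma=0.05))
```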
“…However, the localness limits the ability to model long-term and non-consecutive dependencies. Subsequently, non-localized sparsity mechanisms were proposed, including DropAttention [17], Cluster-Former [18], and L₁ sparsity-regularized attention (L1-attention) [19], [20]. DropAttention randomly sets attention weights to zero, interpreted as dropping a set of neurons along different dimensions.…”
Section: Introduction
confidence: 99%
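
DropAttention, as summarised in the statement above, randomly sets attention weights to zero during training. Below is a minimal element-wise sketch of that idea; the drop rate, the renormalisation step, and the array shapes are assumptions for illustration and may differ from the variants described in [17].

```python
import numpy as np

def drop_attention(weights, drop_rate=0.1, training=True, rng=None):
    """Randomly zero attention weights (element-wise DropAttention-style sketch).

    weights   : array of shape (..., seq_len), attention weights after softmax.
    drop_rate : probability of dropping each weight (illustrative value).
    """
    if not training or drop_rate == 0.0:
        return weights
    rng = rng if rng is not None else np.random.default_rng()
    keep_mask = rng.random(weights.shape) >= drop_rate   # keep with prob 1 - drop_rate
    dropped = weights * keep_mask
    # Renormalise each row so the kept weights still sum to 1; the rescaling
    # used in the original DropAttention paper may differ from this choice.
    row_sums = dropped.sum(axis=-1, keepdims=True)
    return np.divide(dropped, row_sums, out=np.zeros_like(dropped), where=row_sums > 0)
```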
“…The second type (Cluster-Former Layer) encodes global information beyond the initial chunked sequences [18]. The L₁ sparsity-regularized attention introduced an L₁ sparse prior, which minimizes the contributions of irrelevant connections in the feature learning process [19]. These methods encourage the model to make decisions relying on the full context of the input sequences rather than a few pieces of input.…”
Section: Introduction
confidence: 99%
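
The L₁ sparse prior described here can be read as adding the L₁ norm of the attention weights to the training objective, which drives irrelevant connections toward zero. The sketch below shows that idea with a generic task loss; the weighting coefficient λ and the exact form of the penalty are assumptions for illustration rather than the formulation in [19].

```python
import numpy as np

def l1_sparse_attention_loss(task_loss, attention_weights, lam=1e-3):
    """Add an L1 penalty on attention weights to a task loss (illustrative sketch).

    task_loss         : scalar value of the main objective (e.g. detection loss).
    attention_weights : array of attention weights produced by the model.
    lam               : regularisation strength (illustrative value).
    The L1 term pushes small, irrelevant attention weights toward zero, which is
    the sparsity effect the quoted passage attributes to the L1 prior.
    """
    return task_loss + lam * np.abs(attention_weights).sum()
```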
“…In summary, to overcome the obstacles in table tennis ball detection tasks, this study designed a dedicated detection network that reduces the number of network layers by efficiently reusing feature information, meeting the lightweight requirement. The implementation relies mainly on a feature reuse module, which stores the feature information extracted in the previous iteration and passes it to the corresponding network layer in the next iteration, so that feature information can be fully extracted with only a few network layers [13, 14, 15]. At the same time, a Transformer module was added to the network to exploit its strong global feature extraction, combined with the local feature extraction capability of the convolutional network, in order to improve the network's capacity for perceiving small targets [16, 17, 18, 19].…”
Section: Introduction
confidence: 99%
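
The feature reuse module described in the statement above stores features from the previous pass and hands them back to the corresponding layer on the next pass. The sketch below shows one possible buffer of that kind; the class name, the keying by layer name, and the averaging fusion are all assumptions for illustration, not the design of the cited network.

```python
import numpy as np

class FeatureReuseCache:
    """Minimal sketch of a feature-reuse buffer (assumed design, not the cited one)."""

    def __init__(self):
        self._store = {}

    def fetch(self, layer_name):
        # Features cached for this layer in the previous pass, or None.
        return self._store.get(layer_name)

    def update(self, layer_name, features):
        # Overwrite the cache with the features from the current pass.
        self._store[layer_name] = features

# Toy usage: fuse the current layer output with the previous pass's output.
cache = FeatureReuseCache()
for step in range(3):
    current = np.random.rand(1, 8, 16, 16)        # stand-in for a layer's feature map
    previous = cache.fetch("layer3")
    fused = current if previous is None else 0.5 * (current + previous)
    cache.update("layer3", fused)
```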