2022
DOI: 10.1109/access.2022.3204706
Learning Embedding Features Based on Multisense-Scaled Attention Architecture to Improve the Predictive Performance of Air Combat Intention Recognition

Abstract: In modern air combat, acquiring the opponent's air combat intention is one of the essential prerequisites for evaluating the air combat situation effectively and mastering the battlefield initiative. On account of the multi-dimensional and temporal characteristics of the target state, a recognition model based on a multi-sense-scaled attention architecture is proposed to identify the tactical intention of an aerial target. First of all, the multi-dimensional feature information, including target state attributes, battlefield envir…
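The truncated abstract names a scaled attention architecture applied to temporal, multi-dimensional target-state features. For orientation only, a minimal single-head scaled dot-product attention over one sliding window of state features might look like the sketch below; the class name, dimensions, and single-head design are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ScaledTemporalAttention(nn.Module):
    """Illustrative single-head scaled dot-product attention over a
    sliding window of target-state feature vectors (assumed dims)."""

    def __init__(self, feat_dim: int, embed_dim: int):
        super().__init__()
        self.query = nn.Linear(feat_dim, embed_dim)
        self.key = nn.Linear(feat_dim, embed_dim)
        self.value = nn.Linear(feat_dim, embed_dim)
        self.scale = embed_dim ** -0.5  # the "scaled" part: 1/sqrt(d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_len, feat_dim), one sliding window of
        # target state attributes per time step
        q, k, v = self.query(x), self.key(x), self.value(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v  # (batch, window_len, embed_dim)

# Example: a window of 12 time steps, 8 state features per step
x = torch.randn(4, 12, 8)
out = ScaledTemporalAttention(feat_dim=8, embed_dim=32)(x)
print(out.shape)  # torch.Size([4, 12, 32])
```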

Cited by 6 publications (2 citation statements)
References 14 publications (23 reference statements)
“…The highest recall, precision, and F1 score are obtained for the attack intention because the maneuvers and tactical actions of the attack intention are apparent, and the model can learn its characteristics better. The methods compared are XGBoost [40], SVM [37], Random Forest (RF) [38], FCN [43], CNN [43], LSTM [43], BiGRU-ATTENTION [52], and CNN-BiLSTM-ATTENTION [53]. Under the same sliding window length and intention feature space, the methods are trained and the final recognition performance is obtained.…”
Section: Ablation Experiments (mentioning)
confidence: 99%
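The protocol this statement describes (every baseline trained on the same sliding-window length and intention feature space, then scored per class) is straightforward to reproduce for any classifier. Below is a minimal sketch in Python with scikit-learn; the window length, feature count, toy labels, and the Random Forest stand-in are illustrative assumptions, not the cited papers' exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_fscore_support

def sliding_windows(track: np.ndarray, labels: np.ndarray, win: int = 12):
    """Cut a (T, n_features) target track into overlapping windows;
    each window is labelled with the intention at its final step."""
    X = np.stack([track[i:i + win].ravel() for i in range(len(track) - win)])
    y = labels[win:]
    return X, y

# Toy data: 500 time steps, 8 state features, 4 intention classes
rng = np.random.default_rng(0)
track = rng.normal(size=(500, 8))
labels = rng.integers(0, 4, size=500)

X, y = sliding_windows(track, labels)
clf = RandomForestClassifier(n_estimators=100).fit(X[:300], y[:300])
pred = clf.predict(X[300:])

# Per-class precision / recall / F1, as reported in the comparison
p, r, f1, _ = precision_recall_fscore_support(y[300:], pred, zero_division=0)
print(f1)
```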
“…Finally, it is worth noting that more complex network structures do not necessarily lead to performance gains; for example, the recognition accuracy of CNN-BiLSTM-ATTENTION is 91.17%, which is lower than that of LSTM and BiGRU-ATTENTION.

Method                        Accuracy (%)   F1 score
XGBoost [40]                  92.33          0.9206
SVM [37]                      80.20          0.7901
RF [38]                       81.75          0.7912
FCN [43]                      85.25          0.8319
CNN [43]                      89.84          0.8934
LSTM [43]                     92.67          0.9205
BiGRU-ATTENTION [52]          95.11          0.9491
CNN-BiLSTM-ATTENTION [53]     91.17          0.9058
WSPTCTE-IR                    96.90          0.9676…”
Section: Ablation Experiments (mentioning)
confidence: 99%
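Among the baselines in the table, BiGRU-ATTENTION [52] is the strongest, and the statement's point is that attention pooling rather than sheer depth drives the gain. A minimal PyTorch sketch of that kind of model follows; the layer sizes and the learned attention pooling are assumptions, and the cited paper's exact architecture may differ.

```python
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    """Bidirectional GRU encoder with attention pooling, a common
    layout for the BiGRU-ATTENTION baseline (assumed dimensions)."""

    def __init__(self, feat_dim: int, hidden: int, n_classes: int):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True,
                          bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.gru(x)                      # (batch, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over T
        ctx = (w * h).sum(dim=1)                # weighted context vector
        return self.head(ctx)                   # intention logits

# Example: batch of 4 windows, 12 steps, 8 features, 4 intentions
logits = BiGRUAttention(8, 64, 4)(torch.randn(4, 12, 8))
print(logits.shape)  # torch.Size([4, 4])
```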