2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)
DOI: 10.1109/trustcom50675.2020.00042

Densely Connected Residual Network for Attack Recognition

Cited by 11 publications (3 citation statements)
References 20 publications

“…Random forests then use the obtained prediction results for the final classification. The authors of [19] proposed a densely connected residual network (Densely-ResNet) to recognize attacks. It was built from residual core modules, each comprising several Conv-GRU subnets connected by wide links.…”
Section: Background and Related Work
confidence: 99%
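
To make the cited design concrete, below is a minimal sketch of one Conv-GRU subnet with a "wide link", read here as concatenating the subnet's input onto its output so later subnets can reuse earlier features. PyTorch is assumed, and every name and size (ConvGRUSubnet, kernel sizes, channel counts) is illustrative rather than the paper's exact architecture.

```python
# Hypothetical sketch: a Conv-GRU subnet with a dense "wide link".
# PyTorch assumed; all names and sizes are illustrative, not the
# exact Densely-ResNet design from [19].
import torch
import torch.nn as nn

class ConvGRUSubnet(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1)
        self.gru = nn.GRU(input_size=out_ch, hidden_size=out_ch,
                          batch_first=True)

    def forward(self, x):                    # x: (batch, channels, time)
        h = torch.relu(self.conv(x))         # local feature extraction
        h, _ = self.gru(h.transpose(1, 2))   # temporal modelling
        h = h.transpose(1, 2)                # back to (batch, channels, time)
        return torch.cat([x, h], dim=1)      # wide link: reuse the input

# A residual core module would then stack several such subnets; the
# channel count grows as each wide link concatenates earlier features.
```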
“…However, as proved in our previous work [26], as the depth of the aforementioned plain networks increases, they suffer from performance degradation, that is, a reduction in the detection accuracy of the model. Densely-ResNet [27] and DualNet [28] are two state-of-the-art designs that reuse features to handle this issue. Densely-ResNet is a densely connected residual network.…”
Section: Supervised Machine Learning Approaches
confidence: 99%
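
The degradation remedy the passage refers to can be illustrated with a standard identity-shortcut block; this is a minimal sketch under assumed shapes, and the class name and layer sizes are hypothetical rather than taken from [27] or [28].

```python
import torch
import torch.nn as nn

class ResidualCore(nn.Module):
    """Hypothetical residual core: nonlinear body plus identity shortcut.
    Illustrates feature reuse in general, not the exact block of [27]/[28]."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # The shortcut lets features and gradients bypass the body, so
        # stacking many modules avoids the plain network's degradation.
        return torch.relu(self.body(x) + x)
```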
“…In addition, a max-pooling (MP) layer follows the DSC, and dropout [33], a powerful regularizer, follows the GRU to prevent overfitting and further reduce the computational cost: MP down-samples its inputs, while dropout randomly removes neurons. Besides, linear bridging (LB) [27] is appended to transform a series of nonlinear parameter layers into a linear space, stabilizing the learning process.…”
Section: Plain Block
confidence: 99%
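
Read literally, the block chains DSC -> MP and GRU -> dropout, then appends a linear bridge. The sketch below follows that reading; the depthwise-plus-pointwise DSC factorization, the pooling window, the dropout rate, and the realization of LB as a plain nn.Linear layer are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class PlainBlock(nn.Module):
    """Hypothetical reading of the described plain block:
    DSC -> max-pooling, GRU -> dropout, then a linear bridge (LB)."""
    def __init__(self, in_ch: int, out_ch: int, p_drop: float = 0.5):
        super().__init__()
        # Depthwise separable convolution: per-channel conv + 1x1 pointwise.
        self.dsc = nn.Sequential(
            nn.Conv1d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),
            nn.Conv1d(in_ch, out_ch, kernel_size=1),
        )
        self.mp = nn.MaxPool1d(2)        # down-sampling: halves the time axis
        self.gru = nn.GRU(out_ch, out_ch, batch_first=True)
        self.drop = nn.Dropout(p_drop)   # randomly zeroes neurons
        # Linear bridge: a parameterized layer with no nonlinearity, mapping
        # the nonlinear features into a linear space to stabilize learning.
        self.lb = nn.Linear(out_ch, out_ch)

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.mp(torch.relu(self.dsc(x)))
        h, _ = self.gru(h.transpose(1, 2))   # (batch, time, out_ch)
        return self.lb(self.drop(h))
```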