2020
DOI: 10.1007/978-3-030-41579-2_43
A Character-Level BiGRU-Attention for Phishing Classification

Cited by 11 publications (6 citation statements)
References 18 publications
“…The scores of the targets are given to the BiGRU layer. BiGRU [16] is a combination of two GRUs (forward and backward) that extracts key features using two gates: i) a reset gate, which discards irrelevant words, and ii) an update gate, which retains the important information. Each gate takes a value between 0 and 1, where 0 means the resulting data is unimportant and 1 means it is important.…”
Section: BiGRU (mentioning)
confidence: 99%
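The excerpt above describes the building blocks named in the paper's title: a character-level bidirectional GRU followed by an attention layer. The sketch below is an illustrative reconstruction of that kind of model, not the authors' exact architecture; the embedding size, hidden size, and additive attention form are assumptions.

```python
# Minimal sketch (assumed hyperparameters) of a character-level BiGRU with
# attention for binary phishing classification.
import torch
import torch.nn as nn

class CharBiGRUAttention(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bidirectional GRU: a forward and a backward pass over the characters.
        self.bigru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Additive attention: one scalar score per time step.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, char_ids):                      # char_ids: (batch, seq_len)
        h, _ = self.bigru(self.embed(char_ids))       # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        context = (weights * h).sum(dim=1)            # weighted sum of states
        return self.classifier(context).squeeze(-1)   # phishing logit

# Usage: encode a URL as character ids and score it.
model = CharBiGRUAttention()
url = "http://example.com/login"
ids = torch.tensor([[min(ord(c), 127) for c in url]])
print(torch.sigmoid(model(ids)))  # probability-like phishing score
```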
“…Gated Recurrent Unit (GRU) is another variant of RNN and a lightweight version of LSTM [23]. On small datasets, the performance of GRU is similar to that of LSTM [48]. Previous studies that implemented GRU in their phishing detection models are listed in Table A4…”
Section: Gated Recurrent Unit (GRU) (mentioning)
confidence: 99%
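The "lightweight" claim in the excerpt above follows from the gate count: a GRU has three weight sets (reset gate, update gate, candidate state) versus the LSTM's four. A quick, library-based check (not taken from the cited study; sizes are arbitrary) makes the roughly 3:4 parameter ratio concrete:

```python
# GRU vs. LSTM parameter count for the same input and hidden sizes.
import torch.nn as nn

def n_params(module):
    return sum(p.numel() for p in module.parameters())

lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
print(n_params(lstm), n_params(gru))  # GRU uses roughly 3/4 of the LSTM parameters
```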
“…There are a limited number of studies on the implementation of GRU for phishing detection. GRU and Bidirectional GRU can be employed as a single classifier [41,48] or as a replacement for the max-pooling layer in a CNN model [34]. As with LSTM, GRU-based phishing detection models specified only the neural network architecture, learning rate, and number of epochs, but not the batch size or dropout rate [41,48].…”
Section: Gated Recurrent Unit (GRU) (mentioning)
confidence: 99%
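The second design the excerpt mentions, a BiGRU standing in for the max-pooling layer of a CNN, can be sketched as follows. This is a hedged illustration of that general idea, not the model of [34]; the filter count, kernel size, and hidden size are assumptions.

```python
# Sketch: character CNN where a BiGRU summarizes the convolutional feature
# sequence instead of a global max-pooling layer.
import torch
import torch.nn as nn

class CNNBiGRU(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=32, n_filters=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=3, padding=1)
        # BiGRU replaces max-pooling: it reads the conv features in both
        # directions and its final hidden states act as the sequence summary.
        self.bigru = nn.GRU(n_filters, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, char_ids):                          # (batch, seq_len)
        x = self.embed(char_ids).transpose(1, 2)          # (batch, embed, seq)
        feats = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq, filters)
        _, h_n = self.bigru(feats)                        # h_n: (2, batch, hidden)
        summary = torch.cat([h_n[0], h_n[1]], dim=-1)     # both directions
        return self.classifier(summary).squeeze(-1)       # phishing logit
```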