2020 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn48605.2020.9207521
Similitude Attentive Relation Network for Click-Through Rate Prediction

Cited by 3 publications (2 citation statements)
References 20 publications
“…where W_s and W_t are learnable parameters, ReLU(·) is the Rectified Linear Unit, which guarantees that the output is a nonnegative vector [20], and ‖·‖_2 is the two-norm. It is worth mentioning that mapping embedding vectors to nonnegative vectors is usually beneficial [21, 22]. On the one hand, it can be considered an efficient dimension reduction via Nonnegative Matrix Factorization [23]; on the other hand, it restricts e_{i,j} to [0, 1] and avoids negative values, which could neutralize the final relational features in Eq. (10).…”
Section: Similarity Based Relational Module (mentioning, confidence: 99%)
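The quoted passage is all that is visible of the model here, so the following is only a minimal PyTorch sketch of the described operation: a learnable linear map (the role of W_s or W_t), a ReLU to force nonnegativity, and division by the two-norm. The class name, dimensions, and the use of two separate projections are assumptions for illustration, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NonnegativeProjection(nn.Module):
    """Learnable linear map followed by ReLU and L2 normalization,
    so every entry of the output lies in [0, 1]."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        # Plays the role of W_s or W_t in the quoted formula (bias-free, assumed).
        self.W = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = F.relu(self.W(x))               # ReLU guarantees nonnegative entries
        return F.normalize(h, p=2, dim=-1)  # divide by the two-norm

# Because both projected vectors are nonnegative and unit-length, their
# inner product e_ij lies in [0, 1], matching the range noted in the quote.
proj_s = NonnegativeProjection(64, 32)
proj_t = NonnegativeProjection(64, 32)
x_i, x_j = torch.randn(8, 64), torch.randn(8, 64)
e_ij = (proj_s(x_i) * proj_t(x_j)).sum(dim=-1)

This also makes the reasoning in the quote concrete: with negative entries allowed, positive and negative similarity terms could cancel ("neutralize") when aggregated into the final relational features, whereas the [0, 1] restriction prevents that.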
“…It is possible to take h_{i,j} as the output of the module directly. However, the MLP itself is believed to be not so good at handling entangled tasks in CTR prediction [14, 22, 24]. Instead, we separate relational reasoning from representation learning by wrapping h_{i,j} with items via an inner-product operation.…”
Section: MLP Based Relational Module (mentioning, confidence: 99%)
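Read literally, the quote says the MLP output h_{i,j} is not used directly but is combined with item embeddings through an inner product. A minimal sketch of that wrapping, assuming illustrative layer sizes and that h_{i,j} and the item embedding share a dimension (neither detail is given in the excerpt):

import torch
import torch.nn as nn

# Hypothetical MLP producing the relational vector h_ij from a pair input;
# the architecture and sizes are assumptions, not the paper's.
mlp = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))

def relational_feature(pair_input: torch.Tensor, item_emb: torch.Tensor) -> torch.Tensor:
    """Wrap the MLP output h_ij with an item embedding via inner product:
    the MLP handles relational reasoning, while the item embedding carries
    the learned representation."""
    h_ij = mlp(pair_input)                # relational reasoning
    return (h_ij * item_emb).sum(dim=-1)  # inner product with the item

scores = relational_feature(torch.randn(8, 64), torch.randn(8, 32))

The design point being cited is the separation of concerns: rather than asking one MLP to solve the entangled task end to end, the inner product lets the embedding space and the relational module specialize independently.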