Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019)
DOI: 10.18653/v1/n19-1103

Adaptive Convolution for Multi-Relational Learning

Abstract: We consider the problem of learning distributed representations for entities and relations of multi-relational data so as to predict missing links therein. Convolutional neural networks have recently shown their superiority for this problem, bringing increased model expressiveness while remaining parameter efficient. Despite the success, previous convolution designs fail to model full interactions between input entities and relations, which potentially limits the performance of link prediction. In this work we…

Cited by 126 publications (55 citation statements). References 22 publications (21 reference statements).
“…Variants of relation-specific parameters θ_r: RAGAT restricts θ_r to be W_r for training speed. In fact, θ_r can be extended to more parameterized forms, such as a multi-layer perceptron, a convolution kernel [31], etc. More interestingly, we find our idea associated with two most recent works, ParamE [45] and CoPER [46].…”
Section: Discussion
confidence: 99%
“…InteractE [29] improves the performance of ConvE by feature permutation, checkered reshaping, and circular convolution. More CNN-based approaches include ConvKB [30], ConvR [31], CapsE [32].…”
Section: Models
confidence: 99%
“…This type of method is more suitable for single-hop reasoning. Tensor decomposition models [26], such as DistMult [27], ComplEx [28], Analogy [29], SimplE [30], HolE [31], TuckER [32], and deep learning models, such as ConvE [33], ConvKB [34], ConvR [35], CapsE [36], RSN [37], are also used to represent entities and relations.…”
Section: Related Work
confidence: 99%
“…ConvE (Dettmers et al, 2018) models the entity inference process via 2D convolution over the reshaped, then concatenated, embeddings of the known entity and relation. ConvR (Jiang et al, 2019) further adaptively constructs convolution filters from the relation embedding and applies these filters across the entity embedding to generate convolutional features. SENN (Guan et al, 2018) models the inference processes of head entities, tail entities, and relations via fully-connected neural networks, and integrates them into a unified framework.…”
Section: Knowledge Inference on Binary Facts
confidence: 99%
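The ConvR mechanism the excerpt above describes, slicing the relation embedding into convolution filters and sliding them over the reshaped entity embedding, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, the embedding and filter shapes, and the use of plain valid cross-correlation are all assumptions for exposition.

```python
import numpy as np

def adaptive_conv_features(entity_emb, relation_emb,
                           ent_shape=(10, 10), filt_shape=(3, 3)):
    """Sketch of ConvR-style adaptive convolution (hypothetical shapes).

    The relation embedding is split into relation-specific filters,
    which are then cross-correlated over the reshaped entity embedding.
    """
    # Reshape the entity embedding vector into a 2D feature map.
    ent = entity_emb.reshape(ent_shape)

    # Split the relation embedding into k filters of size filt_shape,
    # so the filters themselves are a function of the relation.
    fh, fw = filt_shape
    k = relation_emb.size // (fh * fw)
    filters = relation_emb[: k * fh * fw].reshape(k, fh, fw)

    # Valid (no padding) cross-correlation of each relation-specific
    # filter over the entity map, producing k convolutional feature maps.
    oh, ow = ent_shape[0] - fh + 1, ent_shape[1] - fw + 1
    out = np.empty((k, oh, ow))
    for c in range(k):
        for i in range(oh):
            for j in range(ow):
                out[c, i, j] = np.sum(ent[i:i + fh, j:j + fw] * filters[c])
    return out
```

In the full model these feature maps would be flattened, projected, and matched against candidate entity embeddings to score a link; the sketch stops at the feature-extraction step, which is the part that distinguishes adaptive convolution from ConvE's fixed, relation-independent filters.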