Deep Relational Machines (2013)
DOI: 10.1007/978-3-642-42042-9_27

Cited by 24 publications (28 citation statements)
References 11 publications
“…Incorporating domain-knowledge into deep neural networks has shown considerable success over the years. Possibly the earliest approach is propositionalisation [31,32], leading to the construction of deep relational machines [33,34], where the deep network is a multi-layered perceptron. Recent studies on domain-knowledge inclusion include vertex-enrichment [35] and bottom-graph construction [4].…”
Section: Related Work
confidence: 99%
“…It is a technique to transform a relational representation into a propositional single-table representation, where each column in the table corresponds to a feature that represents a relation constructed from data and domain-knowledge. Propositionalisation is the core technique in the construction of deep relational machines [34,35,36]: these are multi-layered perceptrons constructed from a propositionalised representation of relational data and domain-knowledge. Recent studies on domain-knowledge inclusion include the construction of graph neural networks (GNNs) that can learn not only from relational (graph-structured) data but also from symbolic domain-knowledge.…”
Section: Related Work
confidence: 99%
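The excerpt above describes propositionalisation as flattening relational data into a single 0/1 table that a multi-layered perceptron can consume. A minimal, hypothetical sketch (the molecules, relations, and feature list below are invented for illustration; real propositionalisation typically uses bodies of first-order clauses as features):

```python
# Hypothetical relational data: molecule -> set of (relation, argument) facts.
relational_data = {
    "m1": {("has_atom", "carbon"), ("has_bond", "aromatic")},
    "m2": {("has_atom", "oxygen"), ("has_bond", "single")},
    "m3": {("has_atom", "carbon"), ("has_bond", "single")},
}

# Each propositional feature here is a single fact, for brevity; in practice
# a feature is the body of a clause constructed from data and domain-knowledge.
features = [
    ("has_atom", "carbon"),
    ("has_atom", "oxygen"),
    ("has_bond", "aromatic"),
    ("has_bond", "single"),
]

def propositionalise(facts):
    """Return one 0/1 table row: entry j is 1 iff the example satisfies feature j."""
    return [1 if f in facts else 0 for f in features]

# The resulting single-table representation, one row per molecule,
# is what an ordinary MLP would then be trained on.
table = {mol: propositionalise(facts) for mol, facts in relational_data.items()}
print(table["m1"])  # [1, 0, 1, 0]
```

The key design point is that all relational structure is compiled away before learning: the network never sees the relations themselves, only the truth values of the pre-constructed features.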
“…Deep neural networks are effective learners in numeric space, capable of constructing intermediate knowledge constructs and thereby improving the semantics of the baseline input representation. Training deep neural networks on propositionalized relational data was explored by Srinivasan et al. (2019), following the work of Lodhi (2013), where Deep Relational Machines (DRMs) were first introduced. In Lodhi's work, the DRMs used bodies of first-order Horn clauses as input to restricted Boltzmann machines, where conjuncts of bonds and other molecular-structure information compose individual complex features; when all structural properties are present in a given instance, the target's value is true, and false otherwise.…”
Section: Deep Relational Machines
confidence: 99%
“…Note that the propositionalized data set P is usually a sparse matrix, which can pose an additional challenge for neural networks. The DRMs proposed by Lodhi (2013) were used for the prediction of protein-folding properties, as well as mutagenicity assessment of small molecules. This approach used feature selection with information-theoretic measures such as information gain, as the sparse matrix resulting from the propositionalization was not suitable as a direct input to the neural network.…”
Section: Deep Relational Machines
confidence: 99%
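The excerpt above mentions filtering the sparse propositionalised matrix with information gain before training. A sketch of that scoring step, under the assumption of binary features and a binary target (the toy table below is invented; this is not the paper's actual pipeline):

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def information_gain(column, labels):
    """H(labels) minus the weighted entropy after splitting on a 0/1 feature."""
    gain = entropy(labels)
    n = len(labels)
    for value in (0, 1):
        subset = [y for x, y in zip(column, labels) if x == value]
        if subset:
            gain -= len(subset) / n * entropy(subset)
    return gain

# Toy propositionalised table: rows are examples, columns are 0/1 features.
X = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1],
     [0, 0, 0]]
y = [1, 1, 0, 0]

columns = list(zip(*X))
gains = [information_gain(list(col), y) for col in columns]
# Feature 0 perfectly predicts y (gain 1.0); features 1 and 2 carry no
# information about y (gain 0.0), so a filter would keep only feature 0.
```

Only the highest-scoring columns would then be passed to the network, which is one way to cope with a sparse matrix whose raw form is unsuitable as input.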