2022
DOI: 10.48550/arxiv.2201.10222
Preprint

Explanatory Learning: Beyond Empiricism in Neural Networks

Abstract: We introduce Explanatory Learning (EL), a framework to let machines use existing knowledge buried in symbolic sequences, e.g. explanations written in hieroglyphics, by autonomously learning to interpret them. In EL, the burden of interpreting symbols is not left to humans or rigid human-coded compilers, as done in Program Synthesis. Rather, EL calls for a learned interpreter, built upon a limited collection of symbolic sequences paired with observations of several phenomena. This interpreter can be used to make…

Cited by 1 publication (1 citation statement)
References 17 publications
“…(ii) The recognition tag transitioned from being an output pulled out of the image by the neural stack (the label) to become an input that should be interpreted, and therefore processed by its own encoder (the free text caption). This corresponds to an epistemological perspective shift, as discussed by Norelli et al (2022).…”
Section: Closely Related Work
Citation type: mentioning, confidence: 96%