2020
DOI: 10.3390/fi12120218
Pat-in-the-Loop: Declarative Knowledge for Controlling Neural Networks

Abstract: The dazzling success of neural networks over natural language processing systems is imposing an urgent need to control their behavior with simpler, more direct declarative rules. In this paper, we propose Pat-in-the-Loop as a model to control a specific class of syntax-oriented neural networks by adding declarative rules. In Pat-in-the-Loop, distributed tree encoders make it possible to exploit parse trees in neural networks, heat parse trees visualize the activation of parse trees, and parse subtrees are used as declarative…
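Since the abstract only names the three components (distributed tree encoders, heat parse trees, and subtree-based declarative rules), the sketch below illustrates how such a pipeline could look. It is a minimal, hypothetical Python reconstruction, not the authors' implementation: the Tree class, encode_tree function, and RULES table are assumptions, and the encoding scheme (summing random vectors over complete subtrees) is a deliberate simplification of distributed tree encoders.

```python
# Illustrative sketch only: parse trees are embedded into fixed-size vectors,
# a small linear classifier scores the encoding, and a declarative rule
# expressed as a parse subtree can override the classifier's decision.
import numpy as np

DIM = 128
rng = np.random.default_rng(0)
_vecs = {}  # cache of random vectors, one per subtree signature


class Tree:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

    def signature(self):
        # String signature of the complete subtree rooted at this node.
        if not self.children:
            return self.label
        return f"({self.label} {' '.join(c.signature() for c in self.children)})"


def subtrees(tree):
    """Enumerate all complete subtrees rooted at each node."""
    yield tree
    for child in tree.children:
        yield from subtrees(child)


def encode_tree(tree):
    """Distributed tree encoding: sum of random vectors assigned to subtrees.

    This approximates a tree-kernel feature space in a low-dimensional vector;
    the exact scheme here is a simplification, not the paper's encoder.
    """
    vec = np.zeros(DIM)
    for sub in subtrees(tree):
        sig = sub.signature()
        if sig not in _vecs:
            _vecs[sig] = rng.standard_normal(DIM) / np.sqrt(DIM)
        vec += _vecs[sig]
    return vec


# Hypothetical declarative rule: if this parse subtree occurs, force class 1.
RULES = {"(VP (VB block))": 1}


def classify(tree, weights, bias):
    """Linear classifier on the tree encoding, overridden by declarative rules."""
    for sub in subtrees(tree):
        if sub.signature() in RULES:
            return RULES[sub.signature()]
    logits = encode_tree(tree) @ weights + bias
    return int(np.argmax(logits))


if __name__ == "__main__":
    t = Tree("S", [Tree("NP", [Tree("NN", [Tree("firewalls")])]),
                   Tree("VP", [Tree("VB", [Tree("block")])])])
    W = rng.standard_normal((DIM, 2)) / np.sqrt(DIM)
    b = np.zeros(2)
    print(classify(t, W, b))  # the declarative rule fires, so the output is 1
```

In this toy version the rule simply overrides the network at prediction time; how the rules interact with the training of the syntax-oriented network in the actual system is not reconstructed here.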

Cited by 6 publications (2 citation statements)
References 32 publications
“…In order to try this test, we need to build machines that are partially innate and therefore, able to learn from experience, as a machine learning paradigm, but at the same time act by human hand. This architecture should be similar to Pat-in-the-Loop [63], a system that allows humans to input rules into a neural network. The results must be interpretable, so humans must be able to understand the decisions made by the system.…”
Section: Measuring Knowledge
confidence: 99%
“…The rationale behind this trend is that deep architecture training allows rules and structural information about language to emerge directly from sentences in the target language, sacrificing the interpretable and transparent definition of language regularities. Some exceptions exist where structural syntactic information is explicitly encoded in multilayer perceptrons (Zanzotto et al., 2020) with relevant results on unseen sentences (Onorati et al., 2023). Yet, pre-trained transformers (Vaswani et al., 2017; Devlin et al., 2019) are offered as versatile universal sentence/text encoders that contain whatever is needed to solve any downstream task.…”
Section: Introduction
confidence: 99%