1988
DOI: 10.21236/ada218903
Using Rules and Task Division to Augment Connectionist Learning

Cited by 7 publications (4 citation statements)
References 15 publications
“…This processing is slow, serial, and effortful, requiring many shifts among those modules that are allowed to transmit. The control information necessary to limit message interference also enables verbal rules to be executed (Oliver and Schneider, 1988). With each execution of a verbal rule, associative connections between the modules change such that the input will evoke the output without moderation by control processing.…”
Section: October 1988 (mentioning)
confidence: 99%
“…Figure 2 contains a simple example. KBANN has been successfully applied to refining domain theories for real-world problems such as gene finding (Towell et al., 1990), protein folding (Maclin & Shavlik, 1993), and the control of a simple chemical plant (Scott, Shavlik, & Ray, 1992). Various groups have found that knowledge-based neural networks train faster than do "standard" neural networks (Berenji, 1991; Oliver & Schneider, 1988; Omlin & Giles, 1992; Shavlik & Towell, 1989), presumably because the initial information is used to choose a good starting point for the network. More importantly, experiments have shown that knowledge-based networks generalize better to future examples than do standard networks, as well as several other methods for inductive learning and theory refinement (Omlin & Giles, 1992; Maclin & Shavlik, 1993; McMillan et al., 1992; Roscheisen, Hofmann, & Tresp, 1992; Scott et al., 1992; Towell, 1992; Towell et al., 1990; Tresp, Hollatz, & Ahmad, 1993).…”
Section: How Can We Get Symbolic Information Into Neural Networks? (mentioning)
confidence: 99%
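The citation statement above notes that knowledge-based networks train faster because prior rules choose a good starting point for the weights. A minimal sketch of that idea, assuming a simplified version of the KBANN translation (the weight value and unit names here are illustrative, not taken from the cited papers): a conjunctive rule such as "fire :- spark, fuel" becomes a sigmoid unit whose weights and bias are set so the unit is active only when all antecedents are true.

```python
import math

# Assumed fixed link weight for each positive antecedent (illustrative value).
W = 4.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rule_unit(inputs, n_antecedents, w=W):
    """Activation of a unit encoding a conjunctive rule over its inputs.

    The bias places the threshold halfway between (n-1) and n active
    antecedents, so the unit fires only when every antecedent is on.
    """
    bias = -(n_antecedents - 0.5) * w
    net = sum(w * x for x in inputs) + bias
    return sigmoid(net)

# Hypothetical rule "fire :- spark, fuel":
print(round(rule_unit([1, 1], 2), 2))  # both antecedents true: high (~0.88)
print(round(rule_unit([1, 0], 2), 2))  # one antecedent false: low (~0.12)
```

Training then proceeds by ordinary gradient descent from these informed initial weights rather than from random ones, which is the "good starting point" the citing authors describe.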