2019
DOI: 10.1007/s40732-019-00337-6
Identifying Accurate and Inaccurate Stimulus Relations: Human and Computer Learning

Cited by 6 publications (3 citation statements: 0 supporting, 3 mentioning, 0 contrasting)
References 44 publications
“…Notably, they have developed a research agenda for extending the application of EVA to analyzing both theoretical and applied aspects of stimulus equivalence and derived stimulus control. For example, they have used EVA to explore the basic training requirements for human participants to derive stimulus relations and generalize to other training sets in a task directed at establishing stimulus relations between algebraic expressions (Ninness et al., 2019), and more recently they have discussed the implications of simulating more challenging human performances in neural networks (Ninness & Ninness, 2020). In particular, they have added hidden layers to the EVA architecture, so that the final network comprises four layers (one input layer, two hidden layers, and one output layer) instead of the typical three-layer architecture; this gives the network additional computational power, approaching the methods used in deep neural networks (where "deep" indicates the addition of processing layers).…”
Section: Feedforward Network Using Compound Stimuli
Classification: mentioning (confidence: 99%)
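To make the four-layer topology described above concrete, the following is a minimal sketch of a forward pass through one input layer, two hidden layers, and one output layer. It is illustrative only: the layer widths, sigmoid activations, and random weights are assumptions for demonstration, not EVA's actual implementation, and no training procedure is shown.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical layer widths; the excerpt specifies only the layer count.
n_in, n_h1, n_h2, n_out = 6, 8, 8, 3

# Weight matrices for the four-layer topology:
# input -> hidden 1 -> hidden 2 -> output.
W1 = rng.normal(scale=0.5, size=(n_in, n_h1))
W2 = rng.normal(scale=0.5, size=(n_h1, n_h2))
W3 = rng.normal(scale=0.5, size=(n_h2, n_out))

def forward(x):
    # One pass: each layer applies a linear map then a sigmoid.
    h1 = sigmoid(x @ W1)
    h2 = sigmoid(h1 @ W2)
    return sigmoid(h2 @ W3)

# Example: one stimulus vector in, one relation-activation vector out.
x = rng.random(n_in)
print(forward(x))

The second hidden layer is what distinguishes this topology from the typical three-layer network the excerpt contrasts it with; everything else is standard feedforward machinery.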
“…A good overview of existing CMs is provided by Ninness et al. (2018), along with a working example of a neural network called emergent virtual analytics (EVA) (see Ninness et al., 2019, for more simulations with EVA). Through EVA, the process of applying neural-network simulations in behavior-analytic research is demonstrated.…”
Section: Computational Models of Formation of Stimulus Equivalence Classes
Classification: mentioning (confidence: 99%)
“…Some of the computational analogs informed by RFT have been drawn from machine-learning expert systems and artificial-intelligence neural networks. Examples include the use of deep neural networks (DNNs) to detect effects in multiple-baseline single-case design graphed data (Lanovaz and Bailey, 2020) and to forecast human participants' learning of trigonometry (Ninness et al., 2019; Ninness and Ninness, 2020); the use of Kohonen self-organizing maps (SOMs; Kohonen, 1988) for behavioral pattern detection in legislative voting and breast cancer diagnosis (Ninness et al., 2012), visual symmetry detection (Dresp-Langley and Wandeto, 2021), and surgical expertise detection (Dresp-Langley et al., 2021); and blends of DNN and SOM architectures to model decision making in child welfare systems (Ninness et al., 2021). Additional work with connectionist models (CMs) has provided confirmatory validation of methodological nuances in relational training sequencing for humans (Lyddy and Barnes-Holmes, 2007).…”
Section: Introduction
Classification: mentioning (confidence: 99%)
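For readers unfamiliar with the SOM technique named above, here is a minimal sketch of classic Kohonen self-organizing-map training. It follows the generic algorithm (Kohonen, 1988) rather than any of the cited implementations; the grid size, learning-rate and radius schedules, and random data are placeholder assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical settings; the cited studies' grids and schedules differ.
grid_w, grid_h, n_features = 5, 5, 4
weights = rng.random((grid_w * grid_h, n_features))
coords = np.array([(i, j) for i in range(grid_w)
                   for j in range(grid_h)], dtype=float)

def train(data, epochs=20, lr0=0.5, sigma0=2.0):
    # Classic Kohonen updates: find the best-matching unit (BMU),
    # then pull its grid neighborhood toward the input, with the
    # learning rate and neighborhood radius shrinking over time.
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)

data = rng.random((100, n_features))
train(data)
# Map a new observation to its best-matching unit (cluster index),
# which is how a trained SOM is used for pattern detection.
print(np.argmin(np.linalg.norm(weights - data[0], axis=1)))

Because training is unsupervised, similar inputs end up mapped to nearby grid units, which is what makes SOMs suited to the pattern-detection applications listed in the excerpt.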