The effects of five kinds of questioning, two interpersonal atmospheres of interviewing, and five levels of item difficulty on the accuracy and completeness of testimony about a short film were tested in a legal interrogation setting. Subjects enjoyed the supportive style of interviewing more than the challenging style, but atmosphere had no important effect on recall performance. The type of questioning produced almost no differences in affective or cognitive reactions. However, as the specificity of questions increased, so did the completeness of testimony. Accuracy of testimony showed slight decreases for more specific questions. The trade‐off between accuracy and completeness was mediated by item difficulty. It was very pronounced for items of high difficulty and not apparent for items of low difficulty. Leading questions by themselves or in interaction with atmosphere did not produce special distortions in accuracy.
We propose an intrinsic developmental algorithm that is designed to allow a mobile robot to incrementally progress through levels of increasingly sophisticated behavior. We believe that the core ingredients for such a developmental algorithm are abstractions, anticipations, and self-motivations. We describe a multilevel, cascaded discovery and control architecture that includes these core ingredients. As a first step toward implementing the proposed architecture, we explore two novel mechanisms: a governor for automatically regulating the training of a neural network and a path-planning neural network driven by patterns of "mental states" that represent protogoals.
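The governor described above regulates when training should proceed. A minimal sketch of such a regulator is shown below; the class name, window size, improvement threshold, and error trace are all illustrative assumptions, not the paper's implementation, which governs training of an actual neural network.

```python
class Governor:
    """Illustrative training regulator: signals that training should
    stop once the error has not improved enough over a sliding window.
    Window size and threshold are assumed values, not from the paper."""

    def __init__(self, window=3, min_improvement=0.01):
        self.window = window
        self.min_improvement = min_improvement
        self.history = []  # recorded error values, one per training epoch

    def should_continue(self, error):
        """Record the latest error and decide whether to keep training."""
        self.history.append(error)
        if len(self.history) <= self.window:
            return True  # not enough history to judge progress yet
        # Compare the current error with the error `window` epochs ago.
        improvement = self.history[-1 - self.window] - self.history[-1]
        return improvement >= self.min_improvement


# Simulated error curve that flattens out near the end.
gov = Governor()
errors = [1.0, 0.5, 0.3, 0.29, 0.285, 0.284, 0.2835]
decisions = [gov.should_continue(e) for e in errors]
# Training continues while error drops, then halts once progress stalls.
```

In this sketch the governor keeps training alive through the steep part of the error curve and vetoes further epochs once the three-epoch improvement falls below the threshold, which captures the regulatory role the abstract assigns to it.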
This paper introduces Interactional Motivation (IM) as a way to implement self-motivation in artificial systems. An interactionally motivated agent selects behaviors for the sake of enacting the behavior itself rather than for the value of the behavior's outcome. IM contrasts with extrinsic motivation by the fact that it defines the agent's motivation independently from the environment's state. Because IM does not refer to the environment's states, we argue that IM is a form of self-motivation on the same level as intrinsic motivation. IM, however, differs from intrinsic motivation by the fact that IM allows specifying the agent's inborn value system explicitly. This paper introduces a formal definition of the IM paradigm and compares it to the reinforcement-learning paradigm as traditionally implemented in Partially Observable Markov Decision Processes (POMDPs).
A novel way to model an agent interacting with an environment is introduced, called an Enactive Markov Decision Process (EMDP). An EMDP keeps perception and action embedded within sensorimotor schemes rather than dissociated, in compliance with theories of embodied cognition. Rather than seeking a goal associated with a reward, as in reinforcement learning, an EMDP agent learns to master the sensorimotor contingencies offered by its coupling with the environment. In doing so, the agent exhibits a form of intrinsic motivation related to the autotelic principle (Steels, 2004), and a value system attached to interactions called interactional motivation. This modeling approach allows the design of agents capable of autonomous self-programming, which provides rudimentary constitutive autonomy, a property that theoreticians of enaction consider necessary for autonomous sense-making (e.g., Froese & Ziemke, 2009). A cognitive architecture is presented that allows the agent to autonomously discover, memorize, and exploit spatio-sequential regularities of interaction, called the Enactive Cognitive Architecture (ECA). In our experiments, behavioral analysis shows that ECA agents develop active perception and begin to construct their own ontological perspective on the environment.
This paper describes Metacat, an extension of the Copycat model of analogy-making. The development of Copycat focused on modelling context-sensitive concepts and the ways in which they interact with perception within an abstract microworld of analogy problems. This approach differs from most other models of analogy in its insistence that concepts acquire their semantics from within the system itself, through perception, rather than being imposed from the outside. The present work extends these ideas by incorporating self-perception, episodic memory, and reminding into the model. These mechanisms enable Metacat to explain the similarities and differences that it perceives between analogies, and to monitor and respond to patterns that occur in its own behaviour as it works on analogy problems. This introspective capacity overcomes several limitations inherent in the earlier model, and affords the program a powerful degree of self-control. Metacat's architecture includes aspects of both symbolic and connectionist systems. The paper outlines the principal components of the architecture, analyses several sample runs and examples of program-generated commentary about analogies, and discusses Metacat's relation to some other well-known models of analogy.