Three experiments investigated the effects of information order and representativeness on schema abstraction in a category learning task. A set of category members, in which the variability and frequency of member types were correlated, was divided into four study samples. In the high-variance condition, each sample was representative of the allowable variation in the category and the frequency with which it occurred. In the low-variance condition, the initial study sample focused only on the most frequently occurring category members. Subsequent samples gradually introduced exemplars, and hence additional variance, from the remaining member types. After the fourth study sample, all subjects in all conditions had seen the same category members. Experiment 1 revealed that transfer performance was better if subjects began with a low-variance sample and were gradually introduced to the allowable variation on subsequent samples than if they consistently saw representative samples. Experiments 2 and 3 suggested that this information-order effect may interact with learning mode: Subjects induced to be more analytic about the material performed better if their initial and subsequent samples were representative of the category variation.

To a large extent, learning involves the incorporation of new information into some existing knowledge structure. A learner's first exposure to some domain may determine the nature of that structure, which in turn can influence how subsequent information in that domain is processed and incorporated into what is already known. This study investigated how abstraction of information about ill-defined categories is affected by varying the nature of the initial category exemplars that a learner encounters and how subsequent exemplars are introduced.

The idea that the order in which information is received could affect both the learning process and the ultimate knowledge representation is not particularly new.
General learning theories, such as Rumelhart and Norman's (1978) model of accretion, tuning, and restructuring, as well as social cognition models of impression formation (N. H. Anderson, 1968; Asch, 1946), are sensitive to the notion that initial information can affect the manner in which the learner incorporates subsequent knowledge into what he or she already knows. Early concept-identification research demonstrated that information order can affect the discovery of simple classification rules. Bruner, Goodnow, and Austin (1956) suggested that the learner uses some aspects of the first instances encountered to form a set of hypotheses. Subsequent in...

Author note: This research was supported in part by an NSF graduate fellowship to the first author, who is now at the Alberta Research Council, and ONR Contract N00014-81-0335 and NSF Grant IST-80-15357 to the second author. We would like to thank two anonymous reviewers for their helpful comments and suggestions. Reprint requests should be sent to Renee Elio, Computing Department, Alberta Research Council, 11315 87th Avenue, Edmonton, Alberta T6G 2C2, Canada.
Continued practice on a task is characterized by several quantitative and qualitative changes in performance. The most salient is the speed-up in the time to execute the task. To account for these effects, some models of skilled performance have proposed automatic mechanisms that merge knowledge structures associated with the task into fewer, larger structures. The present study investigated how the representation of similar cognitive procedures might interact with the success of such automatic mechanisms. In five experiments, subjects learned complex, multistep mental arithmetic procedures. These procedures included two types of knowledge thought to characterize most cognitive procedures: "component" knowledge for achieving intermediate results and "integrative" knowledge for organizing and integrating intermediate results. Subjects simultaneously practiced two procedures that had either the same component steps or the same integrative structure. Practice-effect models supported a procedure-independent representation for common component steps. The availability of these common steps for use in a new procedure was also measured. Steps practiced in the context of two procedures were expected to show greater transfer to a new procedure than steps learned in the context of a single procedure. This did not always occur. A model using the component/integrative knowledge distinction reconciled these results by proposing that integrative knowledge operated on all steps of the procedure: An integral part of the knowledge associated with achieving an intermediate result or state includes how it contributes to later task demands. These results are discussed in the context of automatic mechanisms for skill acquisition.
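The practice speed-up mentioned above is conventionally summarized by a power function of trial number (the well-known "power law of practice" from the skill-acquisition literature; the particular parameter values below are invented for illustration and are not estimates from this study):

```python
# Illustrative sketch of the power law of practice, T(N) = a * N**(-b):
# execution time falls steeply on early trials and flattens with practice.
# The parameters a (initial time) and b (learning rate) are made up here.
def practice_time(n, a=10.0, b=0.4):
    """Predicted task time in seconds on the nth practice trial."""
    return a * n ** (-b)

# Time decreases monotonically but with diminishing returns.
times = [practice_time(n) for n in (1, 10, 100)]
assert times[0] > times[1] > times[2]
```

Practice-effect models of the kind the abstract describes fit curves like this one separately to shared and unshared steps to test whether common component steps accrue practice independently of the procedure they occur in.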
This study examines the problem of belief revision, defined as deciding which of several initially accepted sentences to disbelieve when new information presents a logical inconsistency with the initial set. In the first three experiments, the initial sentence set included a conditional sentence, a non-conditional (ground) sentence, and an inferred conclusion drawn from the first two. The new information contradicted the inferred conclusion. Results indicated that conditional sentences were more readily abandoned than ground sentences, even when either choice would lead to a consistent belief state, and that this preference was more pronounced when problems used natural language cover stories rather than symbols. The pattern of belief revision choices differed depending on whether the contradicted conclusion from the initial belief set had been a modus ponens or modus tollens inference. Two additional experiments examined alternative model-theoretic definitions of minimal change to a belief state, using problems that contained multiple models of the initial belief state and of the new information that provided the contradiction. The results indicated that people did not follow any of four formal definitions of minimal change on these problems. The new information and the contradiction it offered was not, for example, used to select a particular model of the initial belief state as a way of reconciling the contradiction. The preferred revision was to retain only those initial sentences that had the same, unambiguous truth value within and across both the initial and new information sets. The study and results are presented in the context of certain logic-based formalizations of belief revision, syntactic and model-theoretic representations of belief states, and performance models of human deduction.
Principles by which some types of sentences might be more "entrenched" than others in the face of contradiction are also discussed from the perspective of induction and theory revision.
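The structure of the modus ponens problems described above can be made concrete with a small truth-table sketch (our own illustration, not the authors' materials): an initial belief set {if p then q; p; q} is contradicted by new information not-q, and consistency can be restored by abandoning either the conditional or the ground sentence.

```python
# Hypothetical belief-revision problem in the style described above.
# Sentences are predicates over truth assignments to (p, q).
from itertools import product

def models(sentences):
    """Enumerate truth assignments over (p, q) satisfying every sentence."""
    return [(p, q) for p, q in product([True, False], repeat=2)
            if all(s(p, q) for s in sentences)]

conditional = lambda p, q: (not p) or q   # "if p then q"
ground      = lambda p, q: p              # "p"
conclusion  = lambda p, q: q              # modus ponens conclusion: "q"
new_info    = lambda p, q: not q          # contradicts the conclusion

# The full initial set plus the new information is unsatisfiable ...
assert models([conditional, ground, conclusion, new_info]) == []
# ... but dropping EITHER the conditional or the ground sentence restores
# consistency; subjects preferentially abandoned the conditional.
assert models([ground, new_info]) == [(True, False)]
assert models([conditional, new_info]) == [(False, False)]
```

The finding is that although both revisions are logically admissible, people reliably prefer the second, giving up the conditional rather than the ground sentence.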
This research presents a computer model called EUREKA that begins with novice-like strategies and knowledge organizations for solving physics word problems and acquires features of knowledge organizations and basic approaches that characterize experts in this domain. EUREKA learns a highly interrelated network of problem-type schemas with associated solution methodologies. Initially, superficial features of the problem statement form the basis for both the problem-type schemas and the discriminating features that organize them in the P-MOP (Problem Memory Organization Packet) network. As EUREKA solves more problems, the content of the schemas and the discriminating features change to reflect more fundamental physics principles. This changing network allows EUREKA to shift from a novice-like means-ends strategy to a more expert-like "knowledge development" strategy in which abstract concepts are triggered by problem features. In this model, the strategy shift emerges as a natural consequence of the evolving expert-like organization of problem-type schemas. EUREKA captures many of the descriptive models of novice-expert differences, and also suggests a number of empirically testable assumptions regarding problem-solving strategies and the representation of problem-solving knowledge.
This is a position paper concerning the role of empirical studies of human default reasoning in the formalization of AI theories of default reasoning. We note that AI motivates its theoretical enterprise by reference to human skill at default reasoning, but that the actual research does not make any use of this sort of information and instead relies on intuitions of individual investigators. We discuss two reasons theorists might not consider human performance relevant to formalizing default reasoning: (a) that intuitions are sufficient to describe a model, and (b) that human performance in this arena is irrelevant to a competence model of the phenomenon. We provide arguments against both these reasons. We then bring forward three further considerations against the use of intuitions in this arena: (a) it leads to an unawareness of predicate ambiguity, (b) it presumes an understanding of ordinary language statements of typicality, and (c) it is similar to discredited views in other fields. We advocate empirical investigation of the range of human phenomena that intuitively embody default reasoning. Gathering such information would provide data with which to generate formal default theories and against which to test the claims of proposed theories. Our position is that such data are the very phenomena that default theories are supposed to explain.
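To make the phenomenon at issue concrete, a toy sketch of default inference (our own illustration, not taken from the paper): a typicality rule such as "birds typically fly" licenses a conclusion in the absence of more specific contradicting information, and is defeated when an exception is known.

```python
# Toy default-reasoning sketch: a default rule with a defeating exception.
# The facts dictionary and category names are invented for illustration.
def flies(animal, facts):
    """Apply the default 'birds fly' unless a more specific rule defeats it."""
    properties = facts.get(animal, set())
    if "penguin" in properties:
        return False    # the more specific rule defeats the default
    if "bird" in properties:
        return True     # the default applies
    return None         # no basis for a conclusion either way

facts = {"tweety": {"bird"}, "opus": {"bird", "penguin"}}
assert flies("tweety", facts) is True    # default conclusion drawn
assert flies("opus", facts) is False     # exception blocks the default
```

The paper's point is that even in examples this simple, how people actually resolve the competition between the default and the exception, and what they take "typically" to mean, are empirical questions rather than matters for investigators' intuitions.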