The ability to evaluate scientific claims and evidence is an important aspect of scientific literacy and requires various epistemic competences. Readers spontaneously validate presented information against their knowledge and beliefs but differ in their ability to strategically evaluate the soundness of informal arguments. The present research investigated how students of psychology, compared to scientists working in psychology, evaluate informal arguments. Using a think-aloud procedure, we identified the specific strategies students and scientists apply when judging the plausibility of arguments and classifying common argumentation fallacies. Results indicate that students, compared to scientists, have difficulty forming these judgements and base them on intuition and opinion rather than on the internal consistency of arguments. Our findings are discussed within the framework of mental model theory. Although introductory students validate scientific information against their knowledge and beliefs, their judgements are often erroneous, in part because their strategy use is immature. Implications for systematic training of epistemic competences are discussed.

ARTICLE HISTORY Received 14 April 2015; Accepted 27 November 2015
KEYWORDS Informal argument evaluation; epistemic competences; mental model theory; think-aloud procedure; competences in higher education

Arguments can affect our daily lives in many ways, whether we think of politicians trying to persuade us to vote for a particular party, a newspaper article providing a certain perspective on a societal issue, or decisions about which kind of career to pursue. In scientific discourse, arguments also play a central role because they link theoretical claims to supporting empirical evidence. Students entering university are confronted with scientific literature that presents different and at times conflicting theories, backed up by more or less compelling evidence.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://creativecommons.org/Licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way. THINKING & REASONING, 2016, VOL. 22, NO. 2, 221–249, http://dx.doi.org/10.1080/13546783.2015

The ability to evaluate scientific claims and evidence is an important aspect of scientific literacy and requires various epistemic competences (Britt, Richter, & Rouet, 2014). The present research investigated how students of psychology, compared to scientists working in psychology, evaluate arguments and which strategies they use to judge their plausibility. Successful readers possess a broad repertoire of general processing strategies that they use flexibly, depending on the processing goal (Wyatt et al., 1993). Although argumentation skills are generally not formally taught in higher education, we expect scie...
Background: The evaluation of informal arguments is a key component of comprehending scientific texts and of scientific literacy.
Aim: The present study examined the nomological network of university students' ability to evaluate informal arguments in scientific texts and the relevance of this ability for academic success.
Sample: A sample of 225 university students from the social and educational sciences participated in the study.
Methods: Judgements of plausibility and the ability to recognize argumentation fallacies were assessed with a novel computer-based diagnostic instrument (Argument Judgement Test; AJT).
Results: The items of the AJT partly conform to a 1-PL model, and test scores were systematically related to epistemological beliefs and verbal intelligence. Item-by-item analyses of responses and response times showed that implausible arguments were more difficult to process and that correct responses to these items required increased cognitive effort. Finally, the AJT scores predicted academic success at university even when verbal intelligence and grade point average were controlled for.
Conclusion: These findings suggest that the ability to evaluate arguments in scientific texts is an aspect of rationality, relies on reflective processes, and is relevant for academic success.
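The 1-PL (Rasch) model mentioned above relates the probability of a correct response to the difference between a person's ability and an item's difficulty. The following is a minimal illustrative sketch, not code from the study; the function name and example values are assumptions for demonstration only:

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """1-PL (Rasch) model: probability that a test-taker with
    ability theta answers an item of difficulty b correctly,
    P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the success probability is 0.5;
# it rises as ability exceeds difficulty and falls as difficulty exceeds ability.
p_matched = rasch_probability(theta=0.0, b=0.0)
p_able = rasch_probability(theta=1.0, b=-1.0)
```

Under this model, each item contributes a single difficulty parameter, which is what permits the item-by-item difficulty comparisons reported above.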
The ability to comprehend informal arguments is essential for scientific literacy, but students often lack structural knowledge about these arguments, especially when the arguments are more complex. This study used a pretest-posttest design with a follow-up 4 weeks later to investigate whether computerised training in identifying the structural components of informal arguments can improve university students' competence in understanding complex arguments. The training was embedded in a constructivist learning environment, and its contents were based on the Toulmin model of argument structure, according to which arguments can be deconstructed into several functional components: claim, datum, warrant, backing evidence, and rebuttal. Being able to identify the warrant is central for scientific literacy, as the warrant determines whether a conclusion is justified given the data. Results indicate that training in argument structure did not generally improve performance for all students and argument types, but that it was particularly helpful for identifying more complex arguments with a less typical structure and relational aspects between key components (i.e., warrants). High-achieving students profited the most from this intervention, and the intervention was also helpful for students with high pretest accuracy scores. Our results suggest that interventions to foster argumentation skills should be included in the curriculum and should be designed to match learners' ability level.
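The Toulmin components named above (claim, datum, warrant, backing, rebuttal) can be represented as a simple data structure. This is a hypothetical Python sketch: the class name, fields, and example text are illustrative assumptions, not materials from the study:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToulminArgument:
    """One informal argument decomposed into Toulmin's functional components."""
    claim: str                      # the conclusion being advanced
    datum: str                      # evidence offered in support of the claim
    warrant: str                    # rule licensing the step from datum to claim
    backing: Optional[str] = None   # support for the warrant itself (optional)
    rebuttal: Optional[str] = None  # conditions under which the claim fails (optional)

# Illustrative example (invented content):
arg = ToulminArgument(
    claim="Regular retrieval practice improves exam performance.",
    datum="Students who self-tested weekly scored higher on the final exam.",
    warrant="A controlled comparison of study strategies licenses a causal "
            "conclusion about their effect on performance.",
    rebuttal="Unless the self-testing group differed in prior ability.",
)
```

Making the warrant an explicit, mandatory field mirrors the point above: whether the step from datum to claim is justified is exactly what the warrant encodes, while backing and rebuttal may be absent from a given argument.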
Abstract. Informal arguments are ubiquitous in scientific texts. To understand and evaluate such arguments, students must decode their structure. To assess this competence, the computer-based Argumentstrukturtest (AST; Argument Structure Test) was developed for students of the social and educational sciences and for teacher-education students. Test-takers read short texts containing informal arguments and identify their functional components (e.g., claim, reason, warrant). Using a sample of 225 students, the AST underwent an initial examination of its reliability and validity. The AST proved internally valid, with a broad spread of item difficulties. In an explanatory item-response model, item difficulties were predicted very precisely by theoretically relevant item features that influence argument comprehension. Correlations with verbal intelligence and with school and university achievement further support the criterion validity of the instrument.