Large language models (LLMs) have demonstrated strong numerical and logical reasoning, yet their abilities in higher-order cognitive tasks, particularly causal reasoning, remain less explored. Current research on LLMs in causal reasoning has focused primarily on tasks such as identifying simple cause-effect relationships, answering basic “what-if” questions, and generating plausible causal explanations. However, these models often struggle with complex causal structures, confounding variables, and distinguishing correlation from causation. This work addresses these limitations by systematically evaluating LLMs’ causal reasoning abilities across three representative scenarios: analyzing causation from effects, tracing effects back to causes, and assessing the impact of interventions on causal relationships. These scenarios are designed to push LLMs beyond simple associative reasoning and test their ability to handle more nuanced causal problems. For each scenario, we construct four paradigms and employ three prompt schemes, namely zero-shot prompting, few-shot prompting, and Chain-of-Thought (CoT) prompting, across a set of 36 test cases. Our findings reveal that most LLMs encounter challenges in causal cognition under all prompt schemes, which underscores the need to enhance the cognitive reasoning capabilities of LLMs to better support complex causal reasoning tasks. By identifying these limitations, our study helps guide future research and development efforts toward improving LLMs’ higher-order reasoning abilities.
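The abstract names three prompt schemes. As a rough illustration only, the sketch below shows how such prompts might be assembled for a hypothetical cause-from-effect test case; the question text, the exemplar, and the helper names are assumptions made for illustration, not the authors' actual prompts or code.

```python
# Minimal sketch of the three prompt schemes named in the abstract,
# applied to a hypothetical cause-from-effect question. All strings
# and helper names here are illustrative assumptions.

QUESTION = (
    "The lawn is wet this morning, but it did not rain overnight. "
    "What is the most likely cause?"
)

def build_zero_shot(question: str) -> str:
    # Zero-shot: the bare question with no demonstrations.
    return f"Question: {question}\nAnswer:"

def build_few_shot(question: str) -> str:
    # Few-shot: prepend one or more worked examples before the question.
    exemplar = (
        "Question: The streets are wet but the sky has been clear all day. "
        "What is the most likely cause?\n"
        "Answer: A street-cleaning truck passed recently.\n\n"
    )
    return exemplar + f"Question: {question}\nAnswer:"

def build_cot(question: str) -> str:
    # Chain-of-Thought: instruct the model to reason step by step
    # before committing to an answer.
    return f"Question: {question}\nLet's think step by step.\nAnswer:"

if __name__ == "__main__":
    for build in (build_zero_shot, build_few_shot, build_cot):
        print(build(QUESTION))
        print("---")
```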
Recent developments in neuroscience research renew the question of the relationship between neuroscience and freedom. This article takes the form of an encounter between scientific and philosophical points of view. The basic hypothesis is that, if human beings can be described as free, language plays a decisive role in the emergence of that freedom. We focus on language acquisition. This process is first considered from the neuroscientist's point of view, which describes the different phases of language acquisition. The philosopher of science then analyzes this process in relation to the question of reductionism, to the philosophy of language, and to the question of determinism. The whole leads to a conception in which human beings articulate their behavior with the system of meanings they adopt. The way is thus open for a reconciliation between neuroscience and an anthropology that makes room for freedom.
Post- and transhumanist discourses have evolved out of the humanist discourse and deal with the social, economic, and ethical challenges that arise in the wake of technological advances. These are questions such as the following: if the technical prerequisites are met, should we support human cloning, radical life extension, or the creation of artificial bodies into which we can upload our minds? Although there are numerous publications by Muslim legal scholars contributing to the posthumanist discourse within the framework of the bioethical discourse, there is still little to no scholarship tackling the post- and transhumanist discourse from the perspective of kalām, the systematic theology of Islam. So far, only a few articles that approach this task can be identified. This article gives an overview of the core theological positions of Muslim researchers in post- and transhumanist discourse and offers a systematic critical analysis of the presented contributions from the perspective of kalām. It argues that the Critical Posthumanist approach provides fertile ground for Muslim scholars to contribute to. Hence, it can also be read as a contribution to the emerging scholarly field of new kalām (kalām al-jadīd).