We analyse Hutto and Myin's three arguments against computationalism (Hutto and Myin 2012, 2017; Hutto et al. forthcoming). The Hard Problem of Content targets computationalism that relies on a semantic notion of computation, claiming that it cannot account for the natural origins of content. The Intentionality Problem targets computationalism that uses non-semantic accounts of computation, arguing that it fails to explain intentionality. The Abstraction Problem claims that causal interaction between concrete physical processes and abstract computational properties is problematic. We argue that these arguments are flawed and are not sufficient to rule out computationalism.
The so-called integration problem concerning mechanistic and computational explanation asks how the two are related to each other. One approach holds that computational explanation is a species of mechanistic explanation. On this view, computational or mathematical descriptions are mechanism sketches or macroscopic descriptions that include computationally relevant and exclude computationally irrelevant physical properties. Some suggest that this results in a so-called single hierarchy view of physical computation, where computational or mathematical properties sit in the same mechanistic hierarchy as the implementational properties. This view can be contrasted with a separate hierarchy view, according to which computational and physical descriptions have their own hierarchies, related to each other via a bridging implementation relation. The single hierarchy view has been criticized for downplaying the explanatory value of computational explanations and for not being hospitable to the multiple realization of cognitive processes. In this paper, I argue that (1) these criticisms fail, and (2) there may be a deeper problem with the single hierarchy view: it might collapse into a separate hierarchy view. The kind of abstraction used by the single hierarchy view does not seem to yield mathematical or computational descriptions but only more stripped-down physical or implementational descriptions.
Descriptive abstraction means the omission of information from descriptions of phenomena. In this paper, I introduce a distinction between vertical and horizontal descriptive abstraction. Vertical abstraction omits levels of mechanism or organization, while horizontal abstraction omits details within a single level of organization. The distinction is implicit in parts of the literature, but it has received insufficient attention and gone mainly unnoticed. I suggest that the distinction can be used to clarify how computational descriptions are formed in some variants of the mechanistic account of physical computation. Furthermore, if this proposal is adopted, it can be used to resolve what I call the abstraction, hierarchy, and generality problems raised against the mechanistic account of physical computation. According to the abstraction problem, the mechanistic account of physical computation is conceptually confused in claiming that physical systems process computational, abstract properties. An existing solution distinguishes between descriptive and metaphysical abstraction, suggesting that the abstraction problem unnecessarily postulates metaphysically abstract entities. This solution has been criticized for leading to what I call the hierarchy and generality problems: it results in two separate hierarchies, one physical and one computational, making it problematic both to account for the generality of computational descriptions and to specify how the two hierarchies are related to each other. Adopting the vertical-horizontal distinction and the view that computational descriptions are achieved by horizontal abstraction allows one to account for the generality of computational descriptions and to form a single hierarchy in which there are no separate hierarchies in need of integration.
According to radical enactivists, the cognitive sciences should abandon the representational framework. Perceptuomotor cognition and action control are often offered as paradigmatic examples of nonrepresentational cognitive phenomena. In this article, we illustrate how motor and action control are studied in research that uses reinforcement learning algorithms. Crucially, this approach can be given a representational interpretation. Hence, reinforcement learning provides a way to explicate action-oriented views of cognitive systems in representational terms.
The fundamental cognitive problem for active organisms is to decide what to do next in a changing environment. In this article, we analyze motor and action control in computational models that utilize reinforcement learning (RL) algorithms. In reinforcement learning, action control is governed by an action selection policy that maximizes the expected future reward in light of a predictive world model. We argue that RL provides a way to explicate so-called action-oriented views of cognitive systems in representational terms.
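To make the kind of action selection described above concrete, the following minimal sketch (not taken from the paper; the world model and all names are hypothetical) shows a policy that chooses, in each state, the action with the highest expected future reward under a simple predictive model, here a small Markov decision process solved by value iteration.

import numpy as np

# Minimal sketch (not from the paper): action selection that maximizes
# expected future reward under a predictive world model, modeled here as
# a small Markov decision process solved by value iteration.

# Hypothetical world model: P[a, s, s'] is the predicted probability of
# moving from state s to s' under action a; R[s, a] is the predicted reward.
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # transition model
R = rng.normal(size=(n_states, n_actions))                        # reward model

# Compute state values V(s) by iterating over the predictive model.
V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * np.einsum("ast,t->sa", P, V)  # expected future reward Q(s, a)
    V = Q.max(axis=1)

# The action selection policy: in each state, pick the action with the
# highest expected future reward according to the model.
policy = Q.argmax(axis=1)
print("greedy policy:", policy)

The only point of the sketch is that the policy is computed over the model's predicted transitions and rewards rather than over the environment directly, which is the feature that invites a representational reading.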