NetPath, a novel community resource of curated human signaling pathways, is presented, and its utility is demonstrated using immune signaling data.
Word associations have been used widely in psychology, but the validity of their application strongly depends on the number of cues included in the study and the extent to which they probe all associations known by an individual. In this work, we address both issues by introducing a new English word association dataset. We describe the collection of word associations for over 12,000 cue words, currently the largest such English-language resource in the world. Our procedure allowed subjects to provide multiple responses for each cue, which permits us to measure weak associations. We evaluate the utility of the dataset in several different contexts, including lexical decision and semantic categorization. We also show that measures based on a mechanism of spreading activation derived from this new resource are highly predictive of direct judgments of similarity. Finally, a comparison with existing English word association sets further highlights systematic improvements provided through these new norms.
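The abstract above mentions measures based on spreading activation over the association network. As an illustration only, one simple spreading-activation scheme can be sketched as follows; the decay factor, the two-step propagation, and the toy graph are assumptions for demonstration, not the measure actually used with these norms:

```python
from collections import defaultdict

def spread_activation(graph, source, steps=2, decay=0.5):
    """Propagate activation from a source word through a weighted,
    directed association network. Activation starts at 1.0 on the
    source and flows along edges, attenuated by `decay` each step."""
    activation = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(steps):
        nxt = defaultdict(float)
        for node, act in frontier.items():
            for neigh, weight in graph.get(node, {}).items():
                nxt[neigh] += decay * act * weight
        for node, act in nxt.items():
            activation[node] = activation.get(node, 0.0) + act
        frontier = nxt
    return activation

# Hypothetical toy network: edge weights are forward association strengths.
toy_graph = {
    "dog": {"cat": 0.7, "bone": 0.3},
    "cat": {"dog": 0.8, "milk": 0.2},
}
```

The resulting activation vectors for two cue words could then be compared (e.g., by cosine similarity) to predict direct similarity judgments, which is the kind of derived measure the abstract evaluates.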
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of "rational process models" that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) Rational Model of Categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose two alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.

Rational approximations to rational models: Alternative algorithms for category learning

Rational models of cognition aim to explain human thought and behavior as an optimal solution to the computational problems that are posed by our environment (Anderson, 1990; Chater & Oaksford, 1999; Marr, 1982; Oaksford & Chater, 1998).
This approach has been used to model several aspects of cognition, including memory (Anderson, 1990; Shiffrin & Steyvers, 1997), reasoning (Oaksford & Chater, 1994), generalization (Shepard, 1987; Tenenbaum & Griffiths, 2001a), and causal induction (Anderson, 1990; Griffiths & Tenenbaum, 2005). However, executing optimal solutions to these problems can be extremely computationally expensive, a point that is commonly raised as an argument against the validity of rational models (e.g., Gigerenzer & Todd, 1999; Tversky & Kahneman, 1974). This establishes a basic challenge for advocates of rational models of cognition: identifying psychologically plausible mechanisms that would allow the human mind to approximate optimal performance. The question of how rational models of cognition can be approximated by psychologically plausible mechanisms addresses a fundamental issue in cognitive science: bridging levels of analysis. Rational models provide answers to questions posed at Marr's (1982) computational level: questions about the abstract computational problems involved in cognition. Th...
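The single-particle approximation described in the abstract above can be sketched roughly as follows. This is a minimal illustration under assumed modeling choices: binary stimulus features with a Beta-Bernoulli predictive likelihood, and a CRP-style prior over cluster assignments with coupling parameter `c`. The function names and parameter values are hypothetical; the published model's details may differ:

```python
import random

def crp_prior(cluster_sizes, c=0.5):
    """CRP-style prior over assignments: an existing cluster k gets
    weight c * n_k, and a brand-new cluster gets weight (1 - c)."""
    weights = [c * size for size in cluster_sizes]
    weights.append(1.0 - c)
    total = sum(weights)
    return [w / total for w in weights]

def likelihood(stimulus, cluster, beta=1.0):
    """Beta-Bernoulli posterior predictive for binary features,
    given the stimuli already assigned to the cluster."""
    p = 1.0
    for d, x in enumerate(stimulus):
        ones = sum(member[d] for member in cluster)
        n = len(cluster)
        p_one = (ones + beta) / (n + 2 * beta)
        p *= p_one if x == 1 else (1.0 - p_one)
    return p

def single_particle_filter(stimuli, c=0.5, seed=0):
    """One-particle particle filter: each incoming stimulus is
    assigned to a cluster by sampling once from the local posterior,
    and that single hypothesis is carried forward."""
    rng = random.Random(seed)
    clusters = []      # each cluster is a list of member stimuli
    assignments = []
    for s in stimuli:
        prior = crp_prior([len(cl) for cl in clusters], c)
        weights = [prior[k] * likelihood(s, cl) for k, cl in enumerate(clusters)]
        weights.append(prior[-1] * likelihood(s, []))  # new-cluster option
        total = sum(weights)
        probs = [w / total for w in weights]
        k = rng.choices(range(len(probs)), probs)[0]
        if k == len(clusters):
            clusters.append([])
        clusters[k].append(s)
        assignments.append(k)
    return assignments, clusters
```

Because only one sample of the assignment history is retained, the algorithm is both cheap and order-dependent, which is part of what makes it a psychologically interesting approximation to the full posterior.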
In this article, we describe the most extensive set of word associations collected to date. The database contains over 12,000 cue words for which more than 70,000 participants generated three responses in a multiple-response free association task. The goal of this study was (1) to create a semantic network that covers a large part of the human lexicon, (2) to investigate the implications of a multiple-response procedure by deriving a weighted directed network, and (3) to show how measures of centrality and relatedness derived from this network predict both lexical access in a lexical decision task and semantic relatedness in similarity judgment tasks. First, our results show that the multiple-response procedure results in a more heterogeneous set of responses, which leads to better predictions of lexical access and semantic relatedness than do single-response procedures. Second, the directed nature of the network leads to a decomposition of centrality that primarily depends on the number of incoming links, or in-degree, of each node, rather than its set size or number of outgoing links. Both studies indicate that adequate representation formats and sufficiently rich data derived from word associations represent a valuable type of information in both lexical and semantic processing.

Keywords: Word associations · Semantic network · Lexical decision · Semantic relatedness · Lexical centrality

Associative knowledge is a central component in many accounts of recall, recognition, and semantic representations in word processing. There are multiple ways to tap into this knowledge, but word associations are considered to be the most direct route for gaining insight into our semantic knowledge (Nelson, McEvoy, & Schreiber, 2004; Mollin, 2009) and human thought in general (Deese, 1965). The type of information produced by word associations is capable of expressing any kind of semantic relationship between words.
Because of this flexibility, networks are considered the natural representation of word associations, where nodes correspond to lexicalized concepts and links indicate semantic or lexical relationships between two nodes. These networks correspond to an idealized localist representation of our mental lexical network. The properties derived from such a network have been instrumental in three different research traditions, which will be described below. These traditions have focused on (1) direct association strength, (2) second-order strength and distributional similarity, and (3) network topology and centrality measures.

The first tradition has used word associations to calculate a measure of associative strength and was inspired by a behaviorist view of language in terms of stimulus-response patterns. This notion of associative strength plays an important role in studies that have focused on inhibition and facilitation in list learning (e.g., Roediger & Neely, 1982), studies on episodic memory (e.g., Nelson et al., 2004), and studies that have tried to distinguish semantic and associative priming (for a recent over...
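As a concrete sketch of the representations described above, a weighted directed network and its in-degree centrality can be derived from cue-response pairs. The toy data and function names here are hypothetical; edge weights are the forward association strength p(response | cue), estimated from response counts:

```python
from collections import defaultdict

# Hypothetical (cue, response) pairs, as produced by a
# multiple-response free association task.
pairs = [
    ("dog", "cat"), ("dog", "bone"), ("dog", "cat"),
    ("cat", "dog"), ("cat", "milk"),
    ("bone", "dog"),
]

def build_network(pairs):
    """Weighted directed network: graph[cue][response] holds the
    forward association strength p(response | cue)."""
    counts = defaultdict(lambda: defaultdict(int))
    for cue, resp in pairs:
        counts[cue][resp] += 1
    graph = {}
    for cue, resps in counts.items():
        total = sum(resps.values())
        graph[cue] = {r: n / total for r, n in resps.items()}
    return graph

def in_degree(graph):
    """In-degree centrality: the number of distinct cues that
    elicit each word as a response."""
    deg = defaultdict(int)
    for cue, resps in graph.items():
        for r in resps:
            deg[r] += 1
    return dict(deg)
```

In this scheme the directedness matters: a word can have high in-degree (many cues elicit it) while generating few distinct responses itself, which is exactly the decomposition of centrality the study exploits.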
We propose a new method for quickly calculating the probability density function for first passage times in simple Wiener diffusion models, extending an earlier method used by Van Zandt, Colonius, and Proctor (2000). The method relies on the observation that there are two distinct infinite series expansions of this probability density, one of which converges quickly for small time values, while the other converges quickly at large time values. By deriving error bounds associated with finite truncation of either expansion, we are able to determine analytically which of the two versions should be applied in any particular context. The bounds indicate that, even for extremely stringent error tolerances, no more than 8 terms are required to calculate the probability density. By making the calculation of this distribution tractable, the goal is to allow more complex extensions of Wiener diffusion models to be developed.
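The two series expansions can be sketched as follows, using the standard parameterization of the Wiener first-passage density at the lower boundary (drift v, boundary separation a, relative starting point w). This didactic version evaluates both expansions with a fixed number of terms to show that they converge to the same value; the paper's method instead uses analytically derived error bounds to pick the cheaper expansion for a given t:

```python
import math

def wiener_fpt_density(t, v, a, w, k_terms=10):
    """First-passage density at the lower boundary of a Wiener
    diffusion, computed via both the large-time and small-time
    series expansions. Returns (large_time_value, small_time_value),
    which should agree when k_terms is large enough."""
    tt = t / a**2  # time rescaled to unit boundary separation
    # Large-time expansion: converges quickly when tt is large.
    large = math.pi * sum(
        k * math.exp(-(k**2) * math.pi**2 * tt / 2) * math.sin(k * math.pi * w)
        for k in range(1, k_terms + 1)
    )
    # Small-time expansion: converges quickly when tt is small.
    small = (2 * math.pi * tt**3) ** -0.5 * sum(
        (w + 2 * k) * math.exp(-((w + 2 * k) ** 2) / (2 * tt))
        for k in range(-k_terms, k_terms + 1)
    )
    # Common scale factor restoring the drift and boundary separation.
    scale = math.exp(-v * a * w - v**2 * t / 2) / a**2
    return scale * large, scale * small
```

In practice one would evaluate only the expansion whose truncation bound is satisfied first, which is what makes the 8-term guarantee quoted in the abstract useful.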