2022
DOI: 10.1037/rev0000358
Probabilistic analogical mapping with semantic relation networks.

Abstract: The human ability to flexibly reason using analogies with domain-general content depends on mechanisms for identifying relations between concepts, and for mapping concepts and their relations across analogs. Building on a recent model of how semantic relations can be learned from nonrelational word embeddings, we present a new computational model of mapping between two analogs. The model adopts a Bayesian framework for probabilistic graph matching, operating on semantic relation networks constructed from distr…
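The abstract is truncated before the model's details, but the core ingredients it names are relation representations derived from word embeddings and a matching process that scores candidate correspondences between analogs. As a loose illustrative sketch only (not the authors' algorithm, and using tiny hypothetical embeddings), one can compare relations by forming relation vectors from embeddings and measuring their similarity:

```python
import numpy as np

# Hypothetical 3-d word embeddings for illustration; real systems use
# pretrained embeddings (e.g., Word2Vec) with hundreds of dimensions.
emb = {
    "hot":   np.array([1.0, 0.0,  1.0]),
    "cold":  np.array([1.0, 0.0, -1.0]),
    "tall":  np.array([0.0, 1.0,  1.0]),
    "short": np.array([0.0, 1.0, -1.0]),
}

def relation_vector(a, b):
    """Crude stand-in for a learned relation: the embedding difference."""
    return emb[a] - emb[b]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Pairs instantiating the same relation ("opposite ends of a scale")
# should yield more similar relation vectors than mismatched pairs.
sim = cosine(relation_vector("hot", "cold"), relation_vector("tall", "short"))
mismatch = cosine(relation_vector("hot", "cold"), relation_vector("hot", "tall"))
```

Here `sim` is high and `mismatch` near zero, capturing the intuition that analogical mapping favors correspondences whose relations align; the actual model learns relation vectors rather than taking raw differences, and embeds the matching step in a probabilistic graph-matching framework.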


Cited by 21 publications (11 citation statements)
References 130 publications
“…Our findings also suggest that competition from the relation itself as well as the semantic information associated with that relation must be inhibited in order to resolve analogical mapping problems to which more than one relation may be associated. This is consistent with empirical evidence and recent computational models demonstrating that representations of analogies must involve both abstract relations (i.e., relation vectors) and semantic elements (i.e., semantic vectors) encoded together in order to produce consistent, human-like reasoning performance (Lu et al, 2022).…”
Section: Discussion (supporting)
confidence: 90%
“…That is, few false alarms would be expected if a test pair instantiated a familiar relation, but was composed of unstudied words. Moreover, even complex analogical reasoning by humans appears to be guided by lexical similarity of words in addition to similarity of explicit relations between words (Lu et al, 2022). It appears that a complete account of both reasoning and episodic memory will require integration of multiple types of similarity.…”
Section: Discussion (mentioning)
confidence: 99%
“…One strength of this approach is that because the learned relations are represented implicitly in the networks’ weight matrices, they are functional in the sense that they directly impact the model’s behavior: Given one term of a relation, for instance, along with a weight matrix representing a relation, a network of this kind can produce the other term of the relation as an output (see, e.g., Leech et al, 2008; Lu et al, 2012; but cf. Lu et al, 2021). By contrast, models based on more explicit representations of relations (e.g., Anderson, 2007; Doumas et al, 2008; Falkenhainer et al, 1989; Hummel & Holyoak, 1997, 2003), including the model presented here, must explicitly decide how to apply the relations it knows to the task at hand (e.g., by adding an inferred proposition to a database of known facts; see Anderson, 2007).…”
Section: Relational Representations and Generalization (mentioning)
confidence: 99%
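The last quoted passage describes relations represented implicitly as weight matrices that are "functional": given one term and the matrix, the network outputs the other term. A minimal sketch of that idea, with synthetic data standing in for real word embeddings (the pairing and dimensions here are hypothetical, not taken from the cited models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 5-d "embeddings" for 8 word pairs that all instantiate
# one relation; Y is generated from X by a hidden linear relation.
X = rng.normal(size=(8, 5))        # first terms of each pair
W_true = rng.normal(size=(5, 5))   # the hidden relation
Y = X @ W_true.T                   # second terms produced by the relation

# Learn the relation implicitly as a weight matrix via least squares,
# mirroring the quoted idea that such relations directly drive behavior.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # solves X @ W ≈ Y

# Given a new first term, the learned matrix produces the second term.
x_new = rng.normal(size=5)
y_pred = x_new @ W
```

Because the training pairs here are noiseless and span the embedding space, the recovered matrix reproduces the hidden relation exactly; the contrast drawn in the quote is that models with explicit relational representations must instead decide deliberately how to apply a known relation to the task at hand.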