2022
DOI: 10.24251/hicss.2022.161
Analogical Reasoning: An Algorithm Comparison for Natural Language Processing

Abstract: There is a continual push to make Artificial Intelligence (AI) as human-like as possible; however, this is a difficult task. A significant limitation is the inability of AI to learn beyond its current comprehension. Analogical reasoning (AR), whereby learning by analogy occurs, has been proposed as one method to achieve this goal. Current AR models have their roots in symbolist, connectionist, or hybrid approaches which indicate how analogies are evaluated. No current studies have compared psychologically-insp…

Cited by 6 publications (6 citation statements)
References 28 publications
“…The modified Sternberg and Nigro dataset is particularly fascinating due to its inclusion of abstract and ambiguous concepts such as "true" and "false". The inability to visually represent these concepts has limited visual analogical reasoning research, which is intended to be expanded through the application of AIGC [114]. However, for this research, the individual words within the analogies were used as inputs to the text-to-image model.…”
Section: Textual Prompts: Modified Sternberg and Nigro Dataset
confidence: 99%
“…The text-to-image model selected was Craiyon V3 (formerly known as DALL-E Mini), which uses a transformer and generator to create images from a textual prompt [35][36][37]. Craiyon was selected due to having a free tier (unlike Midjourney and DALL-E 2) and considering its previous success established in the literature [114][115][116]. Internally, Craiyon creates its prompt based on the initial prompt to generate nine images per prompt.…”
Section: Text-to-image Model: Craiyon
confidence: 99%
“…The success of analogical reasoning in solving analogy problems has been proven in both the visual/pictorial (Polya, 1990;Zhang, Gao, Baoxiong, Zhu, & Song-Chun, 2019) and text/verbal space (French, 2002;Rogers, Drozd, & Li, 2017). Considerable emphasis has been on the development of analogical reasoning for text-based analogies with many algorithms developed to address the wide range of text-based analogy problems (Combs, Bihl, Ganapathy, & Staples, 2022). These text-based problems range from novel word problems (e.g., king:queen::man:woman) to mapping sentence elements (e.g., "She is growing like a weed") to drawing parallels between stories (Ichien, Lu, & Holyoak, 2020).…”
Section: Introduction
confidence: 99%
“…These text-based problems range from novel word problems (e.g., king:queen::man:woman) to mapping sentence elements (e.g., "She is growing like a weed") to drawing parallels between stories (Ichien, Lu, & Holyoak, 2020). Initially, analogical reasoning started as psychologically-based algorithms (see (Gentner, 1983); (Holyoak & Thagard, 1989); (Hofstadter & Mitchell, 1995)) but recently, with the rise of natural language processing, vector space models and artificial neural network approaches have increased in popularity (Combs, Bihl, Ganapathy, & Staples, 2022). To date, the most prominent vector space models include Word2Vec (Mikolov, Sutskever, Chen, Corrado, & Dean, 2013;Mikolov, Tomas, Yih, & Zweig, 2013), Global Vectors (GloVe) (Pennington, Socher, & Manning, 2014), and fastText (Bojanowski, Grave, Joulin, & Mikolov, 2017).…”
Section: Introduction
confidence: 99%
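The vector space models named above (Word2Vec, GloVe, fastText) all solve word analogies of the form a:b::c:? by the same vector-offset method: compute b − a + c in embedding space and return the nearest vocabulary word by cosine similarity. A minimal sketch of that method, using tiny hypothetical 4-dimensional embeddings rather than vectors from a trained model:

```python
import numpy as np

# Hypothetical toy embeddings for illustration only; real models such as
# Word2Vec or GloVe learn vectors with hundreds of dimensions from large corpora.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.5, 0.9, 0.0, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9, 0.1]),
}

def solve_analogy(a, b, c, vocab):
    """Solve a:b::c:? via the vector-offset method: find the word whose
    embedding is most cosine-similar to vocab[b] - vocab[a] + vocab[c],
    excluding the three input words (the standard evaluation convention)."""
    target = vocab[b] - vocab[a] + vocab[c]
    best_word, best_sim = None, -1.0
    for word, vec in vocab.items():
        if word in (a, b, c):
            continue
        sim = float(vec @ target) / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

print(solve_analogy("king", "queen", "man", embeddings))  # -> woman
```

Excluding the input words from the candidate set is important: the offset vector is often closest to one of the inputs themselves, and standard analogy benchmarks score only the remaining vocabulary.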