2022
DOI: 10.48550/arxiv.2202.04728
Preprint

Predicting Human Similarity Judgments Using Large Language Models

Abstract: Similarity judgments provide a well-established method for accessing mental representations, with applications in psychology, neuroscience and machine learning. However, collecting similarity judgments can be prohibitively expensive for naturalistic datasets as the number of comparisons grows quadratically in the number of stimuli. One way to tackle this problem is to construct approximation procedures that rely on more accessible proxies for predicting similarity. Here we leverage recent advances in language …
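The quadratic growth the abstract refers to can be made concrete: with n stimuli, every unordered pair needs a judgment, giving n(n-1)/2 comparisons. A minimal sketch (the function name is illustrative, not from the paper):

```python
def num_pairs(n: int) -> int:
    """Number of unordered stimulus pairs requiring a similarity judgment."""
    return n * (n - 1) // 2

# 100 stimuli already require 4950 comparisons; 1000 stimuli require 499500,
# which is why exhaustive human data collection becomes prohibitive.
print(num_pairs(100), num_pairs(1000))
```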

Cited by 4 publications (8 citation statements) · References 18 publications
“…Recent studies have used large language models (LLMs) to investigate the extent to which language contains information about human perception across sensory modalities by examining the accuracy of LLMs' inferences about human similarity judgments [14,15,8]. These studies have successfully applied LLMs such as GPT-3 to elicit similarity scores between stimuli, whose underlying psychological spaces are then visualized through multidimensional scaling (MDS) [8].…”
Section: Similarity Judgment Tasks Using Large Language Models
confidence: 99%
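The second half of that pipeline — turning a matrix of pairwise similarity scores into a low-dimensional psychological space via MDS — can be sketched with classical (Torgerson) MDS in plain numpy. The similarity matrix below is a hypothetical placeholder; in the cited work these scores would come from prompting an LLM such as GPT-3:

```python
import numpy as np

def classical_mds(dissim: np.ndarray, k: int = 2) -> np.ndarray:
    """Classical (Torgerson) MDS: embed an n x n dissimilarity matrix in k dims."""
    n = dissim.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (dissim ** 2) @ J          # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)          # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:k]         # keep the top-k components
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))

# Hypothetical LLM-elicited similarity scores in [0, 1] for 4 stimuli:
sim = np.array([
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
])
coords = classical_mds(1.0 - sim, k=2)  # MDS operates on dissimilarities
print(coords.shape)                      # one 2-D point per stimulus
```

Stimuli judged similar (e.g. the first two) end up close together in the recovered 2-D space, which is what the visualizations in the cited studies display.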
“…When an approximate solution is acceptable, Peterson et al. (2018) showed that pre-trained computer vision models can be used to approximate human perceptual similarity judgments over images. Marjieh et al. (2022a) showed that human perceptual similarity can be approximated more accurately, yet still efficiently, from human-produced natural language descriptions of the stimuli of interest (for example, by using large language models to estimate similarity over pairs of these descriptions). Marjieh et al. (2022b) extended this result to a variety of domains (vision, audio, and video) and measured alignment for hundreds of pre-trained models.…”
Section: Related Work
confidence: 99%
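The description-based setup can be illustrated with a deliberately simple proxy: represent each stimulus by its text description and score every unordered pair by cosine similarity of bag-of-words vectors. (The cited work prompts a large language model for the pairwise scores; this toy vectorizer, and the example descriptions, only illustrate the pair-based structure of the procedure.)

```python
import numpy as np

def bow_vector(text: str, vocab: list) -> np.ndarray:
    """Bag-of-words count vector for a description over a fixed vocabulary."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

descriptions = [
    "a small red apple on a table",
    "a large green apple in a bowl",
    "a wooden chair next to a table",
]
vocab = sorted({w for d in descriptions for w in d.lower().split()})
vecs = [bow_vector(d, vocab) for d in descriptions]

# One proxy similarity score per unordered pair of stimuli:
scores = {(i, j): cosine(vecs[i], vecs[j])
          for i in range(len(vecs)) for j in range(i + 1, len(vecs))}
```

Swapping the bag-of-words scorer for an LLM prompt over the same description pairs recovers the structure of the approach described above: the expensive human judgment per pair is replaced by a cheap model-derived score.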
“…Although several recent papers have proposed ways to measure (Marjieh et al., 2022a), explain (Muttenthaler et al., 2022; Kumar et al., 2022), and even improve (Peterson et al., 2018; Fel et al., 2022) the representational alignment of models, few have focused on studying the downstream impact of a model being representationally aligned with humans, and many studies simply rely on the intuition that better alignment leads to better performance to justify pursuing increased alignment. While there is recent evidence suggesting that alignment may help humans learn across domains and perform zero-shot generalization (Aho et al., 2022), there is also evidence that alignment may not always be beneficial for models, with models scoring low on alignment metrics achieving higher performance on downstream tasks such as image classification (Kumar et al., 2022; Muttenthaler et al., 2022; Fel et al., 2022).…”
Section: Related Work
confidence: 99%
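One common way to quantify the "representational alignment" these papers measure is to correlate the upper triangles of a model's and humans' pairwise similarity matrices, a Spearman-rank variant of representational similarity analysis. This is a generic sketch under that assumption, not the specific metric of any cited paper; the small matrices are illustrative:

```python
import numpy as np

def upper_tri(m: np.ndarray) -> np.ndarray:
    """Flatten the strict upper triangle of a square similarity matrix."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def spearman(a: np.ndarray, b: np.ndarray) -> float:
    """Spearman rank correlation (no tie correction) via Pearson on ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

human = np.array([[1.0, 0.8, 0.1],
                  [0.8, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
model = np.array([[1.0, 0.7, 0.3],
                  [0.7, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
alignment = spearman(upper_tri(human), upper_tri(model))
```

Here the model's pairwise similarities differ in magnitude from the human ones but preserve their ordering, so the rank-based alignment score is perfect, which is exactly the distinction such metrics are designed to capture.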