2022
DOI: 10.31234/osf.io/6dfgk
Preprint
Using cognitive psychology to understand GPT-3

Abstract: We study GPT-3, a recent large language model, using tools from cognitive psychology. More specifically, we assess GPT-3's decision-making, information search, deliberation, and causal reasoning abilities on a battery of canonical experiments from the literature. We find that much of GPT-3's behavior is impressive: it solves vignette-based tasks similarly or better than human subjects, is able to make decent decisions from descriptions, outperforms humans in a multi-armed bandit task, and shows signatures of m…

Cited by 28 publications (25 citation statements)
References 40 publications
“…One can view this as a form of weak supervision, where we do not force the top-level GWT representations to be human-specified labels, only that there is a simple relationship between these representations and utterances which humans would often associate with the corresponding meaning. Our discussion about causality should also suggest that passive observation may be insufficient: in order to capture the causal structure understood by humans, it may be necessary for learning agents to be embedded in an environment in which they can act and thus discover its causal structure [188,189]. Studying this kind of set-up was the motivation for our work on the Baby AI environment [130].…”
Section: High-level Representations Describe Verbalizable Concepts (mentioning)
Confidence: 99%
“…Our work stands in contrast to the prevailing research trend of studying humanlike aspects of LLMs, which views each LLM as a single entity with its own knowledge base (Petroni et al 2019), its own ability to pass as human (e.g., Uchendu et al 2021), and its own personality (Karra, Nguyen, and Tulabandhula 2022). Binz and Schulz (2022) use cognitive psychology to address the question of whether LLMs "learn and think like people." Jones and Steinhardt (2022) use human cognitive biases, such as anchoring and framing effects, to evaluate an LLM's "errors" where it deviates from rational behavior.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Currently, researchers are using numerous theories and tools to investigate these language systems. For example, Binz and Schulz (2022) use cognitive psychology to investigate GPT‐3. This paper used the well‐known psychology text analysis software LIWC-2015 (Linguistic Inquiry and Word Count) to review LaMDA’s responses provided as part of the Lemoine (2022) interview.…”
Section: LIWC Text Analysis of LaMDA’s Dialogue (mentioning)
Confidence: 99%