2024
DOI: 10.1101/2024.08.15.608196
Preprint

Instruction-tuned large language models misalign with natural language comprehension in humans

Changjiang Gao,
Zhengwu Ma,
Jiajun Chen
et al.

Abstract: Transformer-based language models have significantly advanced our understanding of meaning representation in the human brain. Prior research utilizing smaller models like BERT and GPT-2 suggests that "next-word prediction" is a computational principle shared between machines and humans. However, recent advancements in large language models (LLMs) have highlighted the effectiveness of instruction tuning beyond next-word prediction. It remains to be tested whether instruction tuning can further align the model w…

Cited by 0 publications
References 23 publications