2024
DOI: 10.1038/s41586-024-07930-y

Larger and more instructable language models become less reliable

Lexin Zhou,
Wout Schellaert,
Fernando Martínez-Plumed
et al.

Abstract: The prevailing methods to make large language models more powerful and amenable have been based on continuous scaling up (that is, increasing their size, data volume and computational resources [1]) and bespoke shaping up (including post-filtering [2,3], fine-tuning or use of human feedback [4,5]). However, larger and more instructable large language models may have become less reliable. By studying the relationship between difficulty concordance, task avoidance and prompting stability of several language model familie…

Cited by 9 publications
References 28 publications