Fine-tuning Llama For Better Performance With the MMLU Benchmark

Mei-Ling Yim, Chun-Hei Yip

Preprint, 2024. DOI: 10.31219/osf.io/e3v5x

Abstract: Enhancements in the performance of Llama 2 on the Massive Multitask Language Understanding (MMLU) benchmark reflect a significant leap forward in language model development. The application of sophisticated fine-tuning techniques, including adaptive learning strategies and advanced data preprocessing, has resulted in notable increases in accuracy and adaptability across diverse domains. These results not only underscore the model's improved proficiency in handling complex language tasks but also enhance its po…
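
The record above does not include the authors' code, and the abstract's "adaptive learning strategies" and "advanced data preprocessing" are not specified here. As a rough illustration of the kind of pipeline the abstract describes, the sketch below attaches LoRA adapters to a Llama 2 checkpoint and fine-tunes on MMLU's auxiliary_train split with Hugging Face transformers, peft, and datasets. The base checkpoint, adapter ranks, prompt format, and hyperparameters are all assumptions for illustration, not the paper's recipe.

```python
# Minimal LoRA fine-tuning sketch in the spirit of the abstract.
# Everything below (checkpoint, ranks, prompt format, hyperparameters)
# is an illustrative assumption, not the authors' published method.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

MODEL = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama ships without a pad token

model = AutoModelForCausalLM.from_pretrained(MODEL)
# Train low-rank adapters only; rank/targets are common defaults, not the paper's.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

def format_example(ex):
    # MMLU rows carry: question (str), choices (4 strings), answer (index 0-3).
    letters = "ABCD"
    choices = "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(ex["choices"]))
    text = (f"Question: {ex['question']}\n{choices}\n"
            f"Answer: {letters[ex['answer']]}")
    return tokenizer(text, truncation=True, max_length=512)

# auxiliary_train is MMLU's designated training pool; test/dev stay held out.
train = load_dataset("cais/mmlu", "all", split="auxiliary_train")
train = train.map(format_example, remove_columns=train.column_names)

Trainer(
    model=model,
    args=TrainingArguments("llama2-mmlu-lora",
                           per_device_train_batch_size=4,
                           learning_rate=2e-4, num_train_epochs=1),
    train_dataset=train,
    # mlm=False yields standard causal-LM labels copied from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Evaluation would then score the adapted model on the held-out MMLU test split by comparing the predicted answer letter against the gold label for each question.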
