2024
DOI: 10.1073/pnas.2405840121

Democratizing protein language models with parameter-efficient fine-tuning

Samuel Sledzieski,
Meghana Kshirsagar,
Minkyung Baek
et al.

Abstract: Proteomics has been revolutionized by large protein language models (PLMs), which learn unsupervised representations from large corpora of sequences. These models are typically fine-tuned in a supervised setting to adapt the model to specific downstream tasks. However, the computational and memory footprint of fine-tuning (FT) large PLMs presents a barrier for many research groups with limited computational resources. Natural language processing has seen a similar explosion in the size of models, where these c…
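To make the memory argument concrete, a back-of-the-envelope comparison of trainable parameter counts illustrates why parameter-efficient methods such as LoRA (one common PEFT technique) reduce the fine-tuning footprint. This is a hedged sketch, not the paper's exact setup: the hidden size of 1280 (roughly that of an ESM-2 variant) and the rank r = 8 are illustrative assumptions.

```python
# Hedged sketch: trainable-parameter comparison for full fine-tuning vs. a
# LoRA-style low-rank update of a single frozen d x k weight matrix W.
# LoRA adds B @ A, with A of shape (r, k) and B of shape (d, r); only A and B train.

def full_ft_params(d: int, k: int) -> int:
    """Trainable parameters when the full d x k weight is updated."""
    return d * k

def lora_trainable_params(d: int, k: int, r: int) -> int:
    """Trainable parameters for the low-rank update: r*k (A) + d*r (B)."""
    return r * (d + k)

d = k = 1280  # assumed hidden size, e.g. an ESM-2-scale layer (illustrative)
r = 8         # assumed LoRA rank (illustrative)

print(full_ft_params(d, k))            # 1638400 per layer
print(lora_trainable_params(d, k, r))  # 20480 per layer, ~1.25% of full FT
```

Summed across all attention and feed-forward layers of a large PLM, this roughly hundredfold reduction in trainable (and optimizer-state) parameters is what puts fine-tuning within reach of modest hardware.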

Cited by 10 publications
References 44 publications