2023
DOI: 10.3390/math11204317

Structure-Aware Low-Rank Adaptation for Parameter-Efficient Fine-Tuning

Yahao Hu, Yifei Xie, Tianfeng Wang, et al.

Abstract: With the growing scale of pre-trained language models (PLMs), full parameter fine-tuning becomes prohibitively expensive and practically infeasible. Therefore, parameter-efficient adaptation techniques for PLMs have been proposed to learn through incremental updates of pre-trained weights, such as in low-rank adaptation (LoRA). However, LoRA relies on heuristics to select the modules and layers to which it is applied, and assigns them the same rank. As a consequence, any fine-tuning that ignores the structural…
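For context, the following is a minimal sketch of the standard LoRA update that the abstract refers to, not the paper's structure-aware variant: the pre-trained weight W is frozen and the effective weight becomes W + (alpha/r)·BA with small trainable factors A and B. The layer sizes, rank, and scaling values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained weight plus a trainable low-rank increment B @ A."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Frozen pre-trained weight (in practice loaded from the PLM).
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)
        nn.init.normal_(self.weight, std=0.02)
        # Trainable low-rank factors; only these receive gradient updates.
        self.lora_A = nn.Parameter(torch.zeros(r, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        nn.init.normal_(self.lora_A, std=0.02)  # B stays zero, so the initial increment is zero
        self.scaling = alpha / r

    def forward(self, x):
        # y = x W^T + scaling * x A^T B^T, i.e. the incremental update B A added to W.
        return x @ self.weight.T + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T

# Usage sketch: swap selected projections for LoRALinear. Note that plain LoRA
# assigns the same rank r to every adapted module, the limitation the abstract highlights.
layer = LoRALinear(768, 768, r=8, alpha=16)
out = layer(torch.randn(2, 768))
```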

Cited by 2 publications
References 26 publications