Fine-tuning pretrained protein language models (PLMs)
has emerged
as a prominent strategy for enhancing downstream prediction tasks,
often outperforming traditional supervised learning approaches. Parameter-efficient
fine-tuning, a powerful technique widely applied in natural language processing,
could potentially enhance the performance of PLMs further. However, directly
transferring such techniques to life science tasks is nontrivial because of
differences in training strategies and data formats. To address this gap,
we introduce SES-Adapter, a simple,
efficient, and scalable adapter method for enhancing the representation
learning of PLMs. SES-Adapter fuses PLM embeddings with structural
sequence embeddings to create structure-aware representations (illustrated
by the sketch below). We show that the proposed method is compatible with
different PLM architectures and generalizes across diverse tasks. Extensive
evaluations are conducted on 2
types of folding structures with notable quality differences, 9 state-of-the-art
baselines, and 9 benchmark data sets across distinct downstream tasks.
Results show that, compared to vanilla PLMs, SES-Adapter improves downstream
task performance by a maximum of 11% and an average of 3%, and significantly
accelerates convergence by a maximum of 1034% and an average
of 362%; training efficiency is also improved by approximately
2 times. Moreover, performance gains are observed even with low-quality
predicted structures. The source code for SES-Adapter is available
at https://github.com/tyang816/SES-Adapter.
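
To make the core idea concrete, the following is a minimal sketch of how PLM residue embeddings could be fused with structural sequence embeddings (e.g., discrete FoldSeek 3Di tokens) via cross-attention. The class name, dimensions, vocabulary size, and attention layout here are illustrative assumptions, not the released SES-Adapter implementation.

```python
# Minimal sketch (PyTorch) of fusing PLM residue embeddings with structural
# sequence embeddings via cross-attention. Names and hyperparameters are
# illustrative assumptions, not the exact SES-Adapter code.
import torch
import torch.nn as nn

class StructureFusionAdapter(nn.Module):
    def __init__(self, plm_dim=1280, struct_vocab=26, struct_dim=128, num_heads=8):
        super().__init__()
        # Embed discrete structural-sequence tokens (e.g., FoldSeek 3Di codes)
        self.struct_embed = nn.Embedding(struct_vocab, struct_dim)
        self.struct_proj = nn.Linear(struct_dim, plm_dim)
        # PLM embeddings attend to the projected structural embeddings
        self.cross_attn = nn.MultiheadAttention(plm_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(plm_dim)

    def forward(self, plm_emb, struct_tokens):
        # plm_emb: (batch, seq_len, plm_dim) residue embeddings from a frozen PLM
        # struct_tokens: (batch, seq_len) discrete structural-sequence tokens
        struct_emb = self.struct_proj(self.struct_embed(struct_tokens))
        fused, _ = self.cross_attn(query=plm_emb, key=struct_emb, value=struct_emb)
        # Residual connection keeps the original PLM representation accessible
        return self.norm(plm_emb + fused)

# Toy usage with random tensors standing in for real PLM outputs and 3Di tokens
adapter = StructureFusionAdapter()
plm_emb = torch.randn(2, 50, 1280)
struct_tokens = torch.randint(0, 26, (2, 50))
out = adapter(plm_emb, struct_tokens)  # (2, 50, 1280) structure-aware embeddings
```

Because only the lightweight adapter parameters are trained while the PLM backbone stays frozen, a fusion module of this kind keeps the parameter-efficient character described above while injecting structural context into the representations.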