Transformer-based language models, including the vanilla Transformer, BERT and GPT-3, have achieved revolutionary breakthroughs in natural language processing (NLP). Because biological sequences share inherent similarities with natural language, the remarkable interpretability and adaptability of these models have prompted a new wave of applications in bioinformatics research. To provide a timely and comprehensive review, we introduce key developments in Transformer-based language models by describing the detailed structure of the Transformer, and we summarize their contributions to a wide range of bioinformatics tasks, from basic sequence analysis to drug discovery. While Transformer-based applications in bioinformatics are diverse and multifaceted, we identify and discuss common challenges, including the heterogeneity of training data, computational expense and model interpretability, as well as opportunities in the context of bioinformatics research. We hope this review will bring together the broader community of NLP researchers, bioinformaticians and biologists to foster future research and development in Transformer-based language models, and to inspire novel bioinformatics applications that are unattainable by traditional methods.
Supplementary information
Supplementary data are available at Bioinformatics Advances online.