2023
DOI: 10.3390/biology12071033
Transformer Architecture and Attention Mechanisms in Genome Data Analysis: A Comprehensive Review

Abstract: The emergence and rapid development of deep learning, specifically transformer-based architectures and attention mechanisms, have had transformative implications across several domains, including bioinformatics and genome data analysis. The analogous nature of genome sequences to language texts has enabled the application of techniques that have exhibited success in fields ranging from natural language processing to genomic data. This review provides a comprehensive analysis of the most recent advancements in …

Cited by 40 publications (11 citation statements)
References: 161 publications

“…Firstly, BERT’s bidirectional nature enables it to capture nuanced information within molecular structures, including interactions among atoms and the molecule’s overall configuration. This capability offers a highly detailed representation for complex drug molecules [ 75–77 ]. Furthermore, BERT employs a pre-train/fine-tune paradigm, allowing it to be initially trained on large datasets and later fine-tuned for specific tasks.…”
Section: Attention-Based Models and Their Advantages in Drug Discovery
confidence: 99%
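
As a concrete illustration of the pre-train/fine-tune paradigm this excerpt describes, the sketch below loads a pre-trained BERT encoder and attaches a fresh classification head via the Hugging Face transformers library. The checkpoint name, the two-class task, and the toy input are illustrative assumptions, not details taken from the cited work.

```python
# Minimal pre-train/fine-tune sketch (assumed checkpoint and task, for
# illustration only): reuse pre-trained weights, add a new task head.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",  # encoder weights from large-scale pre-training
    num_labels=2,         # freshly initialized head for the downstream task
)

inputs = tokenizer("a toy input sequence", return_tensors="pt")
logits = model(**inputs).logits  # (1, 2); fine-tuning would train both parts
```
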
“…The transformer is the most novel and ubiquitous DL model, consisting of encoder and decoder blocks, which in turn contain self-attention modules that can handle various input sizes and effectively capture long-range dependencies. It currently dominates natural language processing (NLP) but is also utilized in various domains, such as computer vision, audio processing, and generative AI [47][48][49].…”
Section: Weather and Variable Renewable Energy Types
confidence: 99%
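
The self-attention module described in this excerpt can be made concrete with a minimal sketch. The version below (PyTorch, with illustrative dimensions) omits the learned query/key/value projections and multiple heads of a full transformer layer, but it shows why any sequence length is accepted and why distant positions interact in a single step.

```python
import torch

def self_attention(x: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product self-attention: (batch, seq_len, d) -> same shape."""
    d = x.size(-1)
    scores = x @ x.transpose(-2, -1) / d ** 0.5  # every position vs. every other
    weights = torch.softmax(scores, dim=-1)      # rows sum to 1 over positions
    return weights @ x                           # context-mixed representations

x = torch.randn(1, 512, 64)     # any seq_len works; no recurrence involved
print(self_attention(x).shape)  # torch.Size([1, 512, 64])
```
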
“…Transformer-based models are fundamentally structured around the attention mechanism and its derived framework, multi-head attention ( Choi & Lee, 2023 ). Leveraging this core architecture, transformer-based models can overcome the limitations of RNN models, which cannot parallelize input processing across all time steps, while still effectively capturing the positional dependencies inherent in sequential data in tasks such as language translation, text summarization, image captioning, and speech recognition.…”
Section: Artificial Intelligence
confidence: 99%
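
As a hedged sketch of the multi-head attention framework named here, PyTorch's built-in module can stand in for a hand-written implementation; the embedding size, head count, and batch shape below are assumptions for illustration.

```python
import torch

mha = torch.nn.MultiheadAttention(embed_dim=64, num_heads=8, batch_first=True)

x = torch.randn(2, 100, 64)   # (batch, seq_len, d_model)
out, attn = mha(x, x, x)      # self-attention: queries = keys = values
# All positions are processed in parallel -- the contrast with RNNs the
# excerpt draws -- and `attn` holds each position's weights over the others.
print(out.shape, attn.shape)  # torch.Size([2, 100, 64]), torch.Size([2, 100, 100])
```
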
“…The transformer includes encoder and decoder structures for processing sequence inputs and generating corresponding outputs, respectively (Choi & Lee, 2023). The architectural variant BERT, which only incorporates the encoder structure, employs random sequence masking during its pre-training tasks, leading to superior outcomes in protein 3D structure prediction (Lin et al, 2022b) and single-cell annotation (Yang et al, 2022).…”
Section: Artificial Intelligence
confidence: 99%
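
The random sequence masking that this excerpt credits for BERT's pre-training results can be sketched in a few lines. The 15% masking rate and the [MASK] token id follow common BERT conventions and are assumptions here, not details from the cited studies.

```python
import torch

def mask_tokens(ids: torch.Tensor, mask_id: int, p: float = 0.15):
    """Hide ~p of the tokens; the encoder learns to predict them back."""
    hide = torch.rand(ids.shape) < p
    labels = ids.clone()
    labels[~hide] = -100          # conventional "ignore" index for the loss
    masked = ids.clone()
    masked[hide] = mask_id        # replace the selected tokens with [MASK]
    return masked, labels

ids = torch.randint(5, 30000, (1, 16))          # toy token ids
masked, labels = mask_tokens(ids, mask_id=103)  # 103: BERT's usual [MASK] id
```
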