2020 2nd International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI)
DOI: 10.1109/mlbdbi51377.2020.00019
Predicting Features of Human Mental Disorders through Methylation Profile and Machine Learning Models

Cited by 7 publications (5 citation statements). References 6 publications.
“…This block consists of two sub-layers: a multi-head self-attention mechanism and a point-wise fully connected feed-forward network [23]. Layer normalization was applied before each sub-layer to stabilize the training process [27]. Additionally, dropout and residual addition were implemented after each sub-layer to reduce the risk of overfitting and mitigate the vanishing gradient problem.…”
Section: Methods
confidence: 99%
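The block described in the excerpt above maps directly onto code. Below is a minimal sketch of such a Pre-LN Transformer encoder block, assuming PyTorch; the model width, head count, feed-forward size, and dropout rate are illustrative placeholders, not the citing paper's actual hyperparameters.

import torch
import torch.nn as nn

class PreLNTransformerBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=8, d_ff=1024, dropout=0.1):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)  # normalization *before* attention (Pre-LN)
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)  # normalization *before* the feed-forward network
        self.ff = nn.Sequential(            # point-wise fully connected feed-forward network
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        # Sub-layer 1: self-attention, then dropout and residual addition
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + self.drop(attn_out)
        # Sub-layer 2: feed-forward, then dropout and residual addition
        h = self.norm2(x)
        x = x + self.drop(self.ff(h))
        return x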
“…These features are then input into the Transformer block, which captures complex dependencies between them by modeling positional relationships through positional encoding and the attention mechanism. The Pre-LN configuration stabilizes the training process, eliminating the need for the learning-rate warm-up stage required by the original Post-Layer Normalization (Post-LN) setting [23,27]. Pre-trained with vast amounts of data, this model can function as a foundational model that can be adapted to various new diseases and tissues through fine-tuning.…”
Section: Main
confidence: 99%
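The excerpt mentions positional encoding without naming the variant, so the sketch below assumes the sinusoidal encoding from the original Transformer paper; treat it as one concrete possibility rather than the citing paper's confirmed choice.

import math
import torch

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    # Returns a (seq_len, d_model) table of position features; d_model is assumed even.
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                         * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
    pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
    return pe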
“…Fig. 1-b illustrates the design of LucaOne, which utilizes the Transformer-Encoder (21) architecture with the following enhancements: the vocabulary of LucaOne comprises 39 tokens, including both nucleotide and amino acid symbols (refer to Supplementary B); the model employs Pre-Layer Normalization over Post-Layer Normalization, facilitating the training of deeper networks (45); Rotary Position Embedding (RoPE (46)) is implemented instead of absolute positional encoding, enabling the model to handle sequences longer than those seen during training; it incorporates mixed training of nucleic acid and protein sequences by introducing token-type embeddings, assigning 0 for nucleotides and 1 for amino acids; and besides the pre-training masking tasks for nucleic acid and protein sequences, eight supervised pre-training tasks have been implemented based on selected annotation information (refer to Supplementary D).…”
Section: Model Architecture
confidence: 99%
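Of the enhancements listed, the token-type embedding is the most mechanical to reproduce: each position receives a second learned embedding keyed by sequence type. A minimal sketch, assuming PyTorch; only the 39-token vocabulary and the 0/1 type convention come from the excerpt, while the embedding width and class name are hypothetical.

import torch
import torch.nn as nn

class MixedSequenceEmbedding(nn.Module):
    def __init__(self, vocab_size=39, d_model=256):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.type_emb = nn.Embedding(2, d_model)  # 0 = nucleotide, 1 = amino acid

    def forward(self, token_ids, token_type_ids):
        # Sum token and token-type embeddings so nucleic acid and protein
        # sequences can share one backbone during mixed training.
        return self.token_emb(token_ids) + self.type_emb(token_type_ids)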
“…The model employs Pre-Layer Normalization over Post-Layer Normalization, facilitating the training of deeper networks (45).…”
Section: Model Architecture
confidence: 99%
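The distinction these excerpts keep returning to is only where normalization sits relative to the residual addition. Schematically, with x, sublayer, and norm as placeholders:

# Post-LN (original Transformer): normalize *after* the residual addition
x = norm(x + sublayer(x))

# Pre-LN: normalize the sub-layer input; the residual path stays unnormalized,
# which is what makes deeper stacks easier to train
x = x + sublayer(norm(x))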
“…Classification of cases and controls for certain diseases is also performed using DNA methylation data. Examples of machine learning applications using epigenetic data include classification of coronary heart disease, neurodevelopmental syndromes, schizophrenia, Alzheimer's disease, psychiatric disorders, and others [33–39].…”
Section: Introduction, Background
confidence: 99%
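To make the kind of application described above concrete, here is a minimal case/control classification sketch using scikit-learn on a CpG beta-value matrix. The data are synthetic placeholders (so the AUC will sit near chance), not results from any of the cited studies.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 5000))  # 200 samples x 5000 CpG beta values
y = rng.integers(0, 2, size=200)             # 0 = control, 1 = case

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {scores.mean():.3f}")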