2019
DOI: 10.1186/s12859-019-3259-6
Research on predicting 2D-HP protein folding using reinforcement learning with full state space

Abstract: Background: Protein structure prediction has always been an important issue in bioinformatics. Prediction of the two-dimensional structure of proteins based on the hydrophobic polarity (HP) model is a typical non-deterministic polynomial-time hard problem. Currently reported optimization methods for the hydrophobic polarity model, such as the greedy method, the brute-force method, and the genetic algorithm, usually cannot converge robustly to the lowest-energy conformations. Reinforcement learning, with the advantages of continuous Markov optimal deci…
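For readers unfamiliar with the model the abstract refers to, the following is a minimal sketch of the standard 2D HP lattice energy that such optimization methods try to minimize. The function name and move encoding are illustrative, not taken from the paper.

```python
# Minimal sketch of the standard 2D HP lattice model (illustrative, not the paper's code).
DIRS = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def hp_energy(sequence, moves):
    """Energy of a 2D HP conformation.

    sequence : string of 'H'/'P' residues, e.g. "HPPH"
    moves    : string of lattice moves for residues 1..n-1, e.g. "RUL"

    Returns None if the walk collides with itself (an invalid fold);
    otherwise the standard HP energy: -1 per H-H contact between
    residues that are adjacent on the lattice but not in the chain.
    """
    assert len(moves) == len(sequence) - 1
    pos = [(0, 0)]
    for m in moves:
        dx, dy = DIRS[m]
        x, y = pos[-1]
        pos.append((x + dx, y + dy))
    if len(set(pos)) != len(pos):        # self-collision: invalid conformation
        return None
    occupied = {p: i for i, p in enumerate(pos)}
    energy = 0
    for i, (x, y) in enumerate(pos):
        if sequence[i] != "H":
            continue
        for dx, dy in DIRS.values():
            j = occupied.get((x + dx, y + dy))
            # count each topological H-H contact once (j > i + 1)
            if j is not None and j > i + 1 and sequence[j] == "H":
                energy -= 1
    return energy

# Example: a short sequence folded into a U-shape
print(hp_energy("HPPH", "RUL"))   # -> -1 (one H-H contact)
```

Finding the move string that minimizes this energy over all self-avoiding walks is the NP-hard search problem the abstract describes.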

Cited by 3 publications (2 citation statements). References 32 publications.
“…Cao and Lu (Lu et al., 2021) avoided loss of information due to truncation by introducing a flag vector and used a variable-length dynamic bidirectional gated recurrent unit model to predict protein structure. Yang (Wu et al., 2019) designed a reward function to model the protein input under full-state reinforcement learning. Wu and Huang (Wu et al., 2019) used random forests to build their own model and used binary reordering to make their predictions more efficient.…”
Section: Introduction
confidence: 99%
“…Because once there is a conflict, the episode ends immediately and the agent receives a bad reward. However, current reinforcement-learning-based research still suffers from low long-term prediction accuracy and cannot fold sequences well when the length is larger than 30 [4][5][6].…”
Section: Introduction
confidence: 99%
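The conflict rule quoted above (an invalid move terminates the episode at once with a penalty) can be sketched as a toy environment step function. This is an assumed, illustrative design, not the cited papers' implementation; the class name, state encoding, and penalty value are all hypothetical.

```python
# Illustrative sketch (not the cited papers' code) of the episode-termination
# rule described above: a move that collides with the already-placed chain
# ends the episode immediately with a penalty reward.
DIRS = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

class HPFoldEnv:
    """Toy full-state RL environment for 2D HP folding (assumed design)."""

    def __init__(self, sequence, collision_penalty=-5.0):
        self.sequence = sequence
        self.collision_penalty = collision_penalty  # assumed value
        self.reset()

    def reset(self):
        self.positions = [(0, 0)]       # first residue fixed at the origin
        return tuple(self.positions)    # full state = all placed coordinates

    def step(self, action):
        dx, dy = DIRS[action]
        x, y = self.positions[-1]
        nxt = (x + dx, y + dy)
        if nxt in self.positions:
            # conflict: terminate immediately with a bad reward
            return tuple(self.positions), self.collision_penalty, True
        self.positions.append(nxt)
        done = len(self.positions) == len(self.sequence)
        # reward left at 0 until the fold completes; a terminal reward
        # could be derived from the HP energy of the finished conformation
        return tuple(self.positions), 0.0, done
```

Treating the tuple of all placed coordinates as the observation is one plausible reading of the "full state space" in the paper's title.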