2021
DOI: 10.1016/j.csbj.2021.08.041

Interpreting a black box predictor to gain insights into early folding mechanisms

Abstract: Graphical abstract

Cited by 6 publications (2 citation statements). References: 35 publications.
“…With the breakthroughs in protein structure prediction techniques, the exploration and prediction of protein folding pathways have garnered substantial attention from the computational structural biology community. Various methods have been proposed, including the simulation of an inverse folding pathway from the native state to the unfolded state, the prediction of early folding residues using machine learning, and the prediction of protein folding intermediates based on templates. Although these methods have shown promising results to some extent, accurately predicting protein folding pathways remains a major challenge.…”
Section: Challenges for Protein Structure Prediction Methods (mentioning)
confidence: 99%
“…By and large, post-hoc explanation methods can either rely on the inner structures of the explained model or be completely model-agnostic. Prominent approaches in the latter category include feature attribution methods such as Shapley additive explanations (SHAP) [1], feature importance [2], [3], local interpretable model-agnostic explanations (LIME) [4], and global surrogate models [5]-[7]. On the other hand, post-hoc explanation methods specifically designed for neural networks cover multilayer [8], [9], convolutional [10]-[13], graph [14], [15], as well as recurrent [16], [17] architectures.…”
Section: Introduction (mentioning)
confidence: 99%
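As a concrete illustration of the model-agnostic feature attribution this citing statement refers to, here is a minimal sketch using the shap library on a generic scikit-learn classifier. The dataset and model are illustrative stand-ins only, not the early-folding predictor analysed in the cited paper.

```python
# Minimal sketch of model-agnostic feature attribution with SHAP.
# Assumes the `shap` and `scikit-learn` packages are installed; the
# classifier and dataset below are placeholders for any black-box predictor.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a stand-in black-box model on a toy dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Passing only the predict function keeps the explanation model-agnostic;
# shap.Explainer selects a suitable attribution algorithm automatically.
explainer = shap.Explainer(model.predict, X)
shap_values = explainer(X.iloc[:50])

# Summarize which input features drive the predictions overall.
shap.plots.bar(shap_values)
```

The same pattern applies to any predictor exposing a prediction function, which is what makes SHAP (and LIME or global surrogates) attractive for interpreting otherwise opaque models such as the early-folding predictor discussed here.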