2013
DOI: 10.1186/1758-2946-5-33
Inferring multi-target QSAR models with taxonomy-based multi-task learning

Abstract: Background: A plethora of studies indicate that the development of multi-target drugs is beneficial for complex diseases like cancer. Accurate QSAR models for each of the desired targets assist the optimization of a lead candidate by the prediction of affinity profiles. Often, the targets of a multi-target drug are sufficiently similar such that, in principle, knowledge can be transferred between the QSAR models to improve the model accuracy. In this study, we present two different multi-task algorithms from the…

Cited by 44 publications (24 citation statements)
References 45 publications
“…The present work uses transfer learning 49,50 to train an ML potential that is accurate, transferable, extensible, and therefore, broadly applicable. In transfer learning, one begins with a model trained on data from one task and then retrains the model on data from a different, but related task, often yielding high accuracy predictions [51][52][53] even when data is sparsely available. In our application, we begin by training a neural network on a large quantity of lower-accuracy DFT data (the ANI-1x dataset with 5M non-equilibrium molecular conformations 25 ), and then we retrain to a much smaller dataset (about 500k intelligently selected conformations from ANI-1x) at the CCSD(T)/CBS level of accuracy.…”
Section: Introduction
confidence: 99%
“…The present work uses transfer learning 49,50 to train an ML potential that is accurate, transferable, extensible, and therefore, broadly applicable. In transfer learning, one begins with a model trained on data from one task and then retrains the model on data from a different, but related task, often yielding high accuracy predictions [51][52][53] even when data is sparsely available. In our application, we begin by training a neural network on a large quantity of lower-accuracy DFT data (the ANI-1x dataset with 5M molecular conformations 45 ), and then we retrain to a much smaller dataset (about 500k select conformations from ANI-1x) at the CCSD(T)/CBS level of accuracy.…”
Section: Main Text: Introduction
confidence: 99%
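The pretrain-then-fine-tune recipe quoted in the excerpt above can be illustrated with a minimal numerical sketch. This is a toy linear model with invented data sizes and noise levels, not the cited authors' code or datasets; "low-accuracy" and "high-accuracy" here simply mean noisy versus clean synthetic labels:

```python
import numpy as np

rng = np.random.default_rng(0)

def gd_fit(X, y, w, lr=0.1, steps=500):
    """Plain gradient descent on mean-squared error, warm-started from w."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

true_w = np.array([1.0, -2.0, 0.5])

# Stage 1: pretrain on a large, noisier dataset
# (stand-in for the abundant lower-accuracy DFT labels).
X_big = rng.normal(size=(2000, 3))
y_big = X_big @ true_w + rng.normal(scale=0.3, size=2000)
w = gd_fit(X_big, y_big, np.zeros(3))

# Stage 2: fine-tune the *same* weights on a much smaller, cleaner dataset
# (stand-in for the scarce CCSD(T)/CBS-level labels).
X_small = rng.normal(size=(50, 3))
y_small = X_small @ true_w
w = gd_fit(X_small, y_small, w, lr=0.05, steps=200)

print(np.round(w, 2))
```

The key transfer-learning step is the warm start in stage 2: rather than training from scratch on the small high-accuracy set, optimization begins from the pretrained weights.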
“…The Bayesian network is a probabilistic graphical model that represents a set of biological measurements and their dependencies. Multitask learning, which learns several related problems simultaneously through a shared representation of multiple data modalities, has proved to be a valuable approach for inferring multitarget quantitative structure-activity relationship models for lead optimization (62). The advances in big data technology provide new opportunities for model combination.…”
Section: Model Integration
confidence: 99%
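One simple way to realize the shared-representation idea in the excerpt above is a low-rank, shared-subspace scheme. The sketch below is illustrative only (it is not the taxonomy-based algorithm of the cited paper): it fits several related regression tasks independently, extracts a common low-dimensional subspace from their weight vectors, and then re-fits small per-task heads inside that shared representation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, h = 200, 8, 2  # samples per task, features, shared-subspace dim

# Three related tasks whose targets depend on the same 2-dim latent structure.
B_true = rng.normal(size=(d, h))
tasks = []
for c in (np.array([1.0, -1.0]), np.array([0.5, 2.0]), np.array([-1.5, 0.3])):
    X = rng.normal(size=(n, d))
    y = X @ B_true @ c + rng.normal(scale=0.1, size=n)
    tasks.append((X, y))

# Step 1: independent per-task least squares.
W = np.column_stack([np.linalg.lstsq(X, y, rcond=None)[0] for X, y in tasks])

# Step 2: shared representation = top-h singular subspace of the task weights.
B = np.linalg.svd(W, full_matrices=False)[0][:, :h]

# Step 3: re-fit small per-task heads in the shared h-dim representation.
heads = [np.linalg.lstsq(X @ B, y, rcond=None)[0] for X, y in tasks]

mses = [np.mean((X @ B @ v - y) ** 2) for (X, y), v in zip(tasks, heads)]
print([round(m, 3) for m in mses])
```

Because all tasks constrain the same subspace `B`, information is pooled across tasks, which is the mechanism the quoted passage credits for improved multitarget QSAR models.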