2003
DOI: 10.1002/chin.200322232
Active Learning with Support Vector Machines in the Drug Discovery Process.

Abstract: Computers in Chemistry V 0380. Active Learning with Support Vector Machines in the Drug Discovery Process. (Warmuth, M. K.; Liao, J.; Raetsch, G.; Mathieson, M.; Putta, S.; Lemmen, C. J. Chem. Inf. Comput. Sci. 43 (2003) 2, 667-673; Comp. Sci. Dep., Univ. Calif., Santa Cruz, CA 95064, USA; Eng.) - Lindner 22-232


Cited by 152 publications (207 citation statements); references 3 publications.
“…13 SVMs were originally developed for binary classification problems and have become popular in the chemoinformatics field. [16][17][18] In a typical SVM analysis, training compounds belonging to two different classes (e.g., active versus inactive) are projected into chemical reference space and a separating hyperplane is derived. Then, test compounds are evaluated in this reference space to predict their class labels dependent on which side of the hyperplane they fall.…”
Section: Introduction
confidence: 99%
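The excerpt above describes the standard SVM workflow in chemoinformatics: training compounds from two classes (active versus inactive) are projected into a chemical reference space, a separating hyperplane is derived, and test compounds are labeled by which side of the hyperplane they fall on. A minimal sketch of that workflow, assuming scikit-learn and using invented toy descriptor values rather than real chemical data:

```python
# Hypothetical sketch of the workflow described in the excerpt:
# two classes of training compounds in a descriptor ("chemical reference")
# space, a linear separating hyperplane, and side-of-hyperplane prediction.
# All descriptor values are invented toy data.
from sklearn.svm import SVC

# Each row is one compound's 2-D descriptor vector.
X_train = [[0.1, 0.2], [0.2, 0.1],   # "inactive" region
           [0.9, 0.8], [0.8, 0.9]]   # "active" region
y_train = ["inactive", "inactive", "active", "active"]

# Fit a linear SVM: this derives the separating hyperplane.
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

# A test compound is classified by the side of the hyperplane it falls on.
label = clf.predict([[0.85, 0.85]])[0]
```

Here the test compound lies in the "active" region of the toy space, so the hyperplane assigns it the active class.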
“…SVMs have also been applied in chemistry, for example, the prediction of retention index of protein, and other QSAR studies. [13][14][15][16][17][18][19][20][21] Compared with traditional regression and neural networks methods, SVMs have some advantages, including global optimum, good generalization ability, simple implementation, few free parameters, and dimensional independence. [22][23][24] The flexibility in classification and ability to approximate continuous function make SVMs very suitable for QSAR and QSPR studies.…”
Section: Introduction
confidence: 99%
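This excerpt notes that the SVM's ability to approximate continuous functions makes it suitable for QSAR/QSPR regression, not only classification. A minimal sketch of that regression use, assuming scikit-learn's SVR; the descriptor/property values are invented toy data:

```python
# Hypothetical QSAR-style regression sketch: approximate a continuous
# property (e.g., a retention index) from a 1-D descriptor. Toy data only.
from sklearn.svm import SVR

X = [[1.0], [2.0], [3.0], [4.0], [5.0]]   # descriptor values
y = [1.1, 2.0, 2.9, 4.1, 5.0]             # roughly linear toy "property"

# Epsilon-insensitive SVR fits a function within an epsilon tube of the data.
reg = SVR(kernel="linear", C=10.0, epsilon=0.1)
reg.fit(X, y)

# Predict the property for an unseen descriptor value.
pred = reg.predict([[3.5]])[0]
```

On this near-linear toy set the prediction lands close to the interpolated value, illustrating the continuous-function approximation the excerpt refers to.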
“…Due to its remarkable generalization performance, the SVM has attracted attention and gained extensive application. [9][10][11][12][13][14][15] Based on the Structural Risk Minimization principle which seeks to minimize an upper bound of the generalization error rather than minimize the empirical error commonly implemented in other neural networks, SVMs achieve a higher generalization performance than traditional neural networks in solving these machine learning problems. Another key property is that unlike the training of other networks, which requires nonlinear optimization with the danger of getting stuck in local minima, training SVMs is equivalent to solving a linearly constrained quadratic programming problem.…”
Section: Introduction
confidence: 99%