Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2011
DOI: 10.1145/2020408.2020519

Bounded coordinate-descent for biological sequence classification in high dimensional predictor space

Abstract: We present a framework for discriminative sequence classification where the learner works directly in the high dimensional predictor space of all subsequences in the training set. This is possible by employing a new coordinate-descent algorithm coupled with bounding the magnitude of the gradient for selecting discriminative subsequences fast. We characterize the loss functions for which our generic learning algorithm can be applied and present concrete implementations for logistic regression (binomial log-like…

Cited by 42 publications (51 citation statements)
References 30 publications
“…In this work we study a sequence classification method, the Sequence Learner (SEQL), introduced in [10], [11]. Due to its greedy optimization approach, SEQL can quickly capture the distinct patterns of sequence data in very high-dimensional spaces.…”
Section: Related Work
“…Sequence Learner SEQL learns discriminative subsequences from training data by exploiting the all-subsequence space using a coordinate gradient descent approach [10], [11]. The key idea is to exploit the structure of the subsequence space in order to efficiently optimize a classification loss function, such as the binomial log-likelihood loss of Logistic Regression or squared hinge loss of Support Vector Machines.…”
Section: Classification With Sequence Learner
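The bounded coordinate-descent idea described in the quoted statement can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it restricts features to contiguous substrings (SEQL also supports gapped subsequences), uses binary occurrence features and a fixed step size instead of a line search, and all function names are hypothetical. The key point it demonstrates is the anti-monotonicity bound: any extension of a pattern occurs in a subset of the sequences containing it, so the gradient magnitude of every extension is bounded, which lets the search prune whole branches of the subsequence space.

```python
import math

def logistic_gradient(seqs, y, scores, pattern):
    # Gradient of the binomial log-likelihood loss sum_i log(1 + exp(-y_i f_i))
    # with respect to the weight of a binary occurrence feature for `pattern`.
    g = 0.0
    for s_i, y_i, f_i in zip(seqs, y, scores):
        if pattern in s_i:
            g += -y_i / (1.0 + math.exp(y_i * f_i))
    return g

def gradient_bound(seqs, y, scores, pattern):
    # Anti-monotonicity bound: any extension of `pattern` occurs in a subset
    # of the sequences containing `pattern`, so the magnitude of its gradient
    # is at most the larger of the two one-sided (per-class) sums.
    pos = neg = 0.0
    for s_i, y_i, f_i in zip(seqs, y, scores):
        if pattern in s_i:
            t = 1.0 / (1.0 + math.exp(y_i * f_i))
            if y_i > 0:
                pos += t
            else:
                neg += t
    return max(pos, neg)

def best_feature(seqs, y, scores, alphabet, max_len=10):
    # Breadth-first search over substrings, pruning branches whose bound
    # cannot beat the best gradient magnitude found so far.
    best, best_g = None, 0.0
    frontier = list(alphabet)
    while frontier:
        nxt = []
        for p in frontier:
            if not any(p in s for s in seqs):
                continue
            g = logistic_gradient(seqs, y, scores, p)
            if abs(g) > abs(best_g):
                best, best_g = p, g
            if len(p) < max_len and gradient_bound(seqs, y, scores, p) > abs(best_g):
                nxt.extend(p + c for c in alphabet)
        frontier = nxt
    return best, best_g

def seql_sketch(seqs, y, iters=20, step=0.5):
    # Greedy coordinate descent: each iteration updates only the single
    # subsequence feature with the largest gradient magnitude.
    alphabet = sorted({c for s in seqs for c in s})
    weights, scores = {}, [0.0] * len(seqs)
    for _ in range(iters):
        p, g = best_feature(seqs, y, scores, alphabet)
        if p is None or abs(g) < 1e-6:
            break
        weights[p] = weights.get(p, 0.0) - step * g
        scores = [f + (-step * g) * (1.0 if p in s else 0.0)
                  for s, f in zip(seqs, scores)]
    return weights
```

Because only one coordinate changes per iteration, the per-sequence scores can be updated incrementally, and the bound keeps the search over the (exponentially large) all-subsequence space tractable in practice.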
“…Studies in [76], [77] designed linear classifiers to train explicit mappings of sequence data, where features correspond to subsequences. Using the relation between subsequences, they are able to design efficient training methods for very high dimensional mappings.…”
Section: A Training and Testing Explicit Data Mappings Via Linear CL