2010
DOI: 10.1007/s10994-010-5221-8
Dual coordinate descent methods for logistic regression and maximum entropy models

Abstract: Most optimization methods for logistic regression or maximum entropy solve the primal problem. They range from iterative scaling and coordinate descent to quasi-Newton and truncated Newton methods. Less effort has been made to solve the dual problem. In contrast, for linear support vector machines (SVM), dual methods have been shown to be very effective. In this paper, we apply coordinate descent methods to solve the dual form of logistic regression and maximum entropy. Interestingly, many detail…
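The dual coordinate descent idea summarized in the abstract can be illustrated with a short sketch. The following is a minimal sketch, not the authors' implementation: it assumes the standard L2-regularized logistic regression dual, min over alpha of (1/2) alpha' Q alpha + sum_i [alpha_i log alpha_i + (C - alpha_i) log(C - alpha_i)] with 0 <= alpha_i <= C and Q_ij = y_i y_j x_i' x_j, and updates one coordinate at a time with a clipped one-variable Newton step while maintaining w = sum_i alpha_i y_i x_i. Function and variable names are illustrative.

```python
# Minimal sketch (not the authors' code) of dual coordinate descent for
# L2-regularized logistic regression, assuming the dual
#   min_a  0.5 * a' Q a + sum_i [ a_i log a_i + (C - a_i) log(C - a_i) ],
#   0 <= a_i <= C,   Q_ij = y_i y_j x_i' x_j,   w = sum_i a_i y_i x_i.
import numpy as np

def dual_cd_logreg(X, y, C=1.0, n_epochs=50, eps=1e-12):
    """X: (n, d) feature matrix, y: labels in {-1, +1}."""
    n, d = X.shape
    alpha = np.full(n, C / 2.0)          # start strictly inside (0, C)
    w = X.T @ (alpha * y)                # maintain w = sum_i alpha_i y_i x_i
    Qii = np.einsum('ij,ij->i', X, X)    # diagonal of Q (y_i^2 = 1)
    for _ in range(n_epochs):
        for i in np.random.permutation(n):
            a = alpha[i]
            # gradient and second derivative of the dual in alpha_i
            g = y[i] * (w @ X[i]) + np.log(a / (C - a))
            h = Qii[i] + C / (a * (C - a))
            a_new = np.clip(a - g / h, eps, C - eps)  # clipped Newton step
            w += (a_new - a) * y[i] * X[i]            # keep w consistent
            alpha[i] = a_new
    return w, alpha

# tiny usage example on synthetic data
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
    w, _ = dual_cd_logreg(X, y, C=1.0)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))
```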

Cited by 303 publications (204 citation statements)
References 16 publications
“…In this research, we picked popular classification algorithms as classifiers covering various types, i.e., Logistic Regression [8], Support Vector Machine [9], Random Forest [10], Gradient Boosting [11] and Nearest Neighbors [12]. In this study, all the algorithms are implemented using Scikit-learn, a powerful machine learning library in Python [13].…”
Section: Model Learning (mentioning)
confidence: 99%
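The kind of classifier comparison described in the statement above can be sketched with scikit-learn. This is a minimal, hypothetical setup; the dataset, hyperparameters, and evaluation protocol below are placeholders, not those of the cited study.

```python
# Hypothetical sketch of comparing the classifiers named above with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier

# placeholder data standing in for the study's dataset
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

classifiers = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Support Vector Machine": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "Nearest Neighbors": KNeighborsClassifier(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```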
“…Examples of scenes, their descriptions, and corresponding templates are shown in Table 3 (template nsubj,aux,verb,det,dobj is the most frequent in the data). We train a logistic regression classifier (Yu et al, 2011) on scene-template pairs, and learn to assign a template for a new unseen scene. The "templatepredictor" uses variety of features based on the alignment between clip-art objects and POS-tags as well as objects and dependency roles.…”
Section: Model Comparison (mentioning)
confidence: 99%
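A hypothetical sketch of such a "template predictor" follows: a multi-class logistic regression that maps scene features to a sentence template. The features and data here are invented for illustration; the cited work derives its features from alignments between clip-art objects, POS tags, and dependency roles.

```python
# Hypothetical template-predictor sketch: multi-class logistic regression
# over placeholder scene features (not the cited work's feature set).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy scene -> template training pairs (feature dicts are placeholders)
scenes = [
    {"obj:boy": 1, "obj:ball": 1, "verb_like": 1},
    {"obj:girl": 1, "obj:dog": 1, "verb_like": 1},
    {"obj:tree": 1, "obj:sun": 1},
]
templates = ["nsubj,aux,verb,det,dobj", "nsubj,aux,verb,det,dobj", "det,nsubj,verb"]

predictor = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
predictor.fit(scenes, templates)

new_scene = {"obj:boy": 1, "obj:dog": 1, "verb_like": 1}
print(predictor.predict([new_scene])[0])   # predicted template for an unseen scene
```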
“…This is because the objective function (9) is not well defined at α ik = 0 due to the logarithm appearance. Finally, the optimal dual variables are achieved when the following condition is satisfied for all examples (Yu et al, 2011):…”
Section: The Dual Problem (mentioning)
confidence: 99%
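The statement above refers to the cited paper's equation (9) and its optimality condition. As a sketch only (not a verbatim reproduction of that equation; scaling constants may differ), the regularized maximum entropy dual typically has a quadratic-plus-entropy form over one probability simplex per example, which shows why the objective is problematic at α_ik = 0 and what the per-example optimality condition looks like:

```latex
% Sketch of the usual quadratic-plus-entropy form of the regularized
% maximum entropy dual (assumed form; not a verbatim copy of equation (9)):
\begin{align*}
\min_{\alpha}\quad & D(\alpha)
   = \frac{1}{2\sigma^{2}}\,\bigl\lVert \mathbf{w}(\alpha)\bigr\rVert^{2}
   + \sum_{i}\sum_{k}\alpha_{ik}\log\alpha_{ik} \\
\text{s.t.}\quad & \sum_{k}\alpha_{ik}=1,\qquad \alpha_{ik}\ge 0
   \quad\text{for all } i,k,
\end{align*}
% where, under a Gaussian prior with variance sigma^2,
% w(alpha) = sigma^2 * sum_{i,k} alpha_{ik} ( f(x_i, y_i) - f(x_i, k) ).
%
% Because the derivative of t*log(t) is unbounded as t -> 0+, the objective
% is not well defined at alpha_{ik} = 0, and the optimum lies strictly
% inside each simplex.  The KKT conditions then require, for every example
% i, a multiplier nu_i such that
\begin{equation*}
\frac{\partial D(\alpha^{\ast})}{\partial \alpha_{ik}} \;=\; \nu_{i}
\qquad \text{for all classes } k,
\end{equation*}
% i.e. the partial derivatives are equal across classes for each example.
```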
“…Similarly, Yu et al (2011) proposed a two-level dual coordinate descent method for maximum entropy classifier.…”
Section: Introduction (mentioning)
confidence: 99%