2016
DOI: 10.1007/s10994-016-5574-8
$$\text{ALR}^n$$: accelerated higher-order logistic regression

Abstract: This paper introduces Accelerated Logistic Regression: a hybrid generative-discriminative approach to training Logistic Regression with higher-order features. We present two main results: (1) our combined generative-discriminative approach significantly improves the efficiency of Logistic Regression, and (2) incorporating higher-order features (i.e. features that are the Cartesian products of the original features) reduces the bias of Logistic Regression, which in turn significantly reduces its error on …
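The "higher-order features" in the abstract are products of the original attributes. The following is a minimal sketch of that idea only: it trains an ordinary (purely discriminative) logistic regression with and without order-2 interaction features using scikit-learn, and is not the paper's $$\text{ALR}^n$$ hybrid training procedure.

```python
# Sketch: logistic regression on original vs. higher-order (interaction) features.
# NOT the paper's ALR^n algorithm; scikit-learn is assumed as a stand-in.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Plain logistic regression (order-1 features only).
plain = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))

# Logistic regression with all pairwise feature products added (order-2).
higher_order = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    StandardScaler(),
    LogisticRegression(max_iter=5000),
)

for name, model in [("order-1 LR", plain), ("order-2 LR", higher_order)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```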

Cited by 13 publications (5 citation statements)
References 25 publications
“…Recently it has been shown that for big datasets, one can train an LR by building all higher-order features [19]. One can speed up the convergence of this higher-order LR by integrating the softmax TRON algorithm proposed in this work.…”
Section: Discussion (mentioning)
confidence: 99%
“…Furthermore, it is important to address the impact of high correlation among predictor variables, which can lead to challenges related to multicollinearity, potentially affecting the stability and interpretability of models [39]. However, it is essential to acknowledge that removing predictor variables from the dataset may result in information loss [40].…”
Section: Literature Review (mentioning)
confidence: 99%
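One common way to screen for the multicollinearity mentioned above is the variance inflation factor (VIF). The sketch below is illustrative only; the cited works may use different diagnostics, and the synthetic data and the usual VIF > 10 rule of thumb are assumptions.

```python
# Sketch: variance inflation factors as a multicollinearity check (illustrative).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + rng.normal(scale=0.1, size=200)  # nearly collinear with x1
x3 = rng.normal(size=200)
X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))

# VIF > ~10 is a common rule of thumb for problematic multicollinearity.
for i, col in enumerate(X.columns):
    if col != "const":
        print(f"{col}: VIF = {variance_inflation_factor(X.values, i):.1f}")
```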
“…Again, we follow standard practices for classifier comparison [19] and perform a Nemenyi test to compare pairs of methods. (Footnote 1: In case of ties, we assign the average (or fractional) ranking. For example, if there is one winner, two seconds and a loser [1, 2, 2, 4], then the fractional ranking will be [1, 2.5, 2.5, 4].)…”
Section: A Cold Start (mentioning)
confidence: 99%
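The tie-handling described in that footnote is the standard average ("fractional") ranking. A minimal sketch, assuming SciPy is available, reproduces the quoted example:

```python
# Average ("fractional") ranking of tied methods, matching the quoted example.
from scipy.stats import rankdata

positions = [1, 2, 2, 4]                    # one winner, two tied seconds, one loser
ranks = rankdata(positions, method="average")
print(ranks)                                # -> [1.  2.5 2.5 4. ]
```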
“…We can influence the variance by adding/removing representation bias to the classifier (see e.g. [4]); adding representation bias typically reduces variance, and vice versa. However, in many cases it can be easier to express our knowledge of the problem (i.e.…”
Section: Introduction (mentioning)
confidence: 99%