2020
DOI: 10.3390/e22091036
Regularization Methods Based on the Lq-Likelihood for Linear Models with Heavy-Tailed Errors

Abstract: We propose regularization methods for linear models based on the Lq-likelihood, which is a generalization of the log-likelihood using a power function. Regularization methods are popular for estimation in the normal linear model. However, heavy-tailed errors are also important in statistics and machine learning. We assume q-normal distributions as the errors in linear models. A q-normal distribution is heavy-tailed and is defined using a power function rather than the exponential function. We find that the pro…
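To make the objective in the abstract concrete, the sketch below is a minimal, illustrative implementation and is not taken from the paper: it assumes the usual q-logarithm L_q(u) = (u^(1-q) - 1)/(1 - q) (which recovers log u as q -> 1), an unnormalized q-normal error kernel exp_q(-r^2 / (2*scale^2)) with exp_q(x) = [1 + (1 - q)x]_+^{1/(1-q)}, and an L1 (lasso-type) penalty; the paper's actual regularizers, normalization, and optimization algorithm may differ.

```python
import numpy as np
from scipy.optimize import minimize

def lq(u, q):
    """q-logarithm L_q(u) = (u**(1-q) - 1) / (1 - q); tends to log(u) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.log(u)
    return (u ** (1.0 - q) - 1.0) / (1.0 - q)

def q_normal_kernel(r, q, scale=1.0):
    """Unnormalized q-normal kernel exp_q(-r^2 / (2 scale^2)),
    with exp_q(x) = [1 + (1 - q) x]_+ ** (1 / (1 - q)).
    The normalizing constant is dropped; for a fixed scale it does not depend on beta."""
    x = -r ** 2 / (2.0 * scale ** 2)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def penalized_neg_lq_likelihood(beta, X, y, q, lam, scale=1.0):
    """Negative Lq-likelihood of the residuals plus an L1 penalty (illustrative choice)."""
    r = y - X @ beta
    return -np.sum(lq(q_normal_kernel(r, q, scale), q)) + lam * np.sum(np.abs(beta))

# Toy usage with heavy-tailed (Student-t) noise standing in for q-normal errors.
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)

# A generic smooth optimizer will not give exact zeros at the L1 kink;
# a coordinate-descent or proximal solver would be the natural choice in practice.
fit = minimize(penalized_neg_lq_likelihood, x0=np.zeros(p), args=(X, y, 1.3, 0.5))
print(np.round(fit.x, 2))
```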

Cited by 1 publication (1 citation statement); citing publication year: 2023. References: 21 publications.
“…In each penalty function, two tuning parameters were to be determined: the tuning parameter for the negative binomial component λ_NB and the tuning parameter for the zero component λ_BI. In addition, there was an additional tuning parameter in the SCAD and MCP penalty functions [19][20][21].…”
Section: Results
Confidence: 99%
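For context on the "additional tuning parameter" mentioned in the citing statement: SCAD and MCP are non-convex penalties whose taper on large coefficients is governed by a second parameter besides λ (usually a > 2 for SCAD, often fixed at 3.7, and γ > 1 for MCP). The sketch below gives the standard textbook forms of the two penalties; it is illustrative and not code from either the cited or citing paper.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty, elementwise; `a` is the extra tuning parameter (a > 2)."""
    t = np.abs(beta)
    return np.where(
        t <= lam,
        lam * t,
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1)),
            lam ** 2 * (a + 1) / 2,
        ),
    )

def mcp_penalty(beta, lam, gamma=3.0):
    """MCP penalty, elementwise; `gamma` is the extra tuning parameter (gamma > 1)."""
    t = np.abs(beta)
    return np.where(t <= gamma * lam, lam * t - t ** 2 / (2 * gamma), gamma * lam ** 2 / 2)

# Both level off for large coefficients, reducing the bias of a plain L1 penalty.
print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))
print(mcp_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))
```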