2022
DOI: 10.1155/2022/6769421
More on the Ridge Parameter Estimators for the Gamma Ridge Regression Model: Simulation and Applications

Abstract: The gamma ridge regression estimator (GRRE) is commonly used to address multicollinearity when the response variable follows the gamma distribution. Choosing the ridge parameter is an important issue for the GRRE, as it is for other ridge-type models. Numerous ridge parameter estimators have been proposed for the linear and other regression models, so in this study we generalize these estimators to the gamma ridge regression model. A Monte Carlo simulation study and two real-life applications a…
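As a rough illustration of the idea in the abstract (not the paper's GRRE or any of its ridge parameter estimators), the sketch below fits a gamma GLM with an L2 (ridge-type) penalty using scikit-learn's GammaRegressor on synthetic, nearly collinear predictors; the data, penalty grid, and seed are illustrative assumptions.

# Minimal sketch, assuming synthetic data: a gamma GLM (log link) with an
# L2 penalty, the same general setup the GRRE addresses. Not the paper's method.
import numpy as np
from sklearn.linear_model import GammaRegressor

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)        # nearly collinear with x1
X = np.column_stack([x1, x2])
mu = np.exp(0.5 + 0.3 * x1 + 0.3 * x2)     # mean via log link
y = rng.gamma(shape=2.0, scale=mu / 2.0)   # gamma response with mean mu

for alpha in [0.01, 0.1, 1.0]:             # increasing ridge penalty
    model = GammaRegressor(alpha=alpha).fit(X, y)
    print(alpha, model.coef_)              # penalization stabilizes the coefficients

With collinear columns, the unpenalized coefficients are highly variable across resamples; increasing the penalty shrinks them toward each other, which is the effect the abstract's ridge parameter estimators are meant to tune.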

Cited by 4 publications (4 citation statements)
References 31 publications
“…As a result, the variances of the estimated weight coefficients are very large when using the ordinary least-squares method. Ridge regression is designed to address this problem by imposing a penalty on the size of the coefficients. Then, the solution of the original normal equations is $w(\xi) = (X^{T}X + \xi I)^{-1}X^{T}y$, where $\xi$ is the ridge coefficient, also known as the penalty coefficient. From another perspective, the optimal solution of the above equation can be obtained by minimizing the following loss function: $L = \min_{w}\left(\|Xw - y\|_{2}^{2} + \xi\|w\|_{2}^{2}\right)$. As shown, the linear least-squares term and the regularization term are given by the $\ell_2$-norm in ridge regression.…”
Section: Multiple-Output Machine Learning Models
confidence: 99%
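The closed-form solution quoted above is easy to verify numerically. The following sketch (illustrative data; $\xi$ written as xi) computes $w(\xi) = (X^{T}X + \xi I)^{-1}X^{T}y$ directly and confirms that it is a stationary point of the quoted loss.

# Sketch of the quoted ridge solution; X, y, and xi are illustrative.
# np.linalg.solve avoids forming an explicit matrix inverse.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(size=100)
xi = 2.0                                    # ridge (penalty) coefficient

p = X.shape[1]
w = np.linalg.solve(X.T @ X + xi * np.eye(p), X.T @ y)

# The same w minimizes L(w) = ||Xw - y||_2^2 + xi * ||w||_2^2:
# the gradient 2 X^T (Xw - y) + 2 xi w vanishes at the solution.
grad = 2 * X.T @ (X @ w - y) + 2 * xi * w
print(np.allclose(grad, 0.0, atol=1e-8))    # True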
“…Ridge regression is designed to address this problem by imposing a penalty on the size of the coefficients [41, 42]. Then, the solution of the original normal equations is $w(\xi) = (X^{T}X + \xi I)^{-1}X^{T}y$…”
Section: Ridge Regression Model
confidence: 99%
“…As λ increases, the intensity of regularization increases and the complexity of the model decreases. Through cross-validation, we can find the best λ value, the one that minimizes the cross-validation error [23–26]. In this paper, we use five-fold cross-validation to obtain the best λ value for the different datasets, as shown in Table 8.…”
Section: Modeling
confidence: 99%
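As a sketch of the λ-selection step described in that quote (illustrative data, not the cited paper's datasets), scikit-learn's RidgeCV can run five-fold cross-validation over a grid of λ values; note that scikit-learn names the penalty "alpha" rather than λ.

# Sketch: picking the ridge penalty (lambda) by five-fold cross-validation.
# The data and the lambda grid are illustrative assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 8))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=150)

lambdas = np.logspace(-3, 3, 25)                 # candidate penalty values
model = RidgeCV(alphas=lambdas, cv=5).fit(X, y)  # 5-fold CV over the grid
print(model.alpha_)                              # lambda minimizing CV error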
“…The ridge, Liu, Liu-type, and other estimators given by several authors are alternatives to MLE to overcome multicollinearity in the linear regression model [18, 19]. These estimators have been extended to the GLMs [20–29]. Also, robust estimators have been proposed to handle the problems of multicollinearity and outliers together (see [30, 31]).…”
Section: Introduction
confidence: 99%