2020
DOI: 10.2139/ssrn.3726609
Maximizing the Sharpe Ratio: A Genetic Programming Approach

Cited by 8 publications (5 citation statements)
References 49 publications
“…We consider three dimension-reduction techniques: the first takes the cross-sectional average of the individual predictors; the second extracts the first principal component from the set of predictors (Bai and Ng (2007, 2009)); and the third, an implementation of partial least squares (PLS; Wold (1966)), extracts the first target-relevant factor from the set of predictors (Kelly and Pruitt (2013, 2015), Huang et al. (2015)). As indicated previously, the forecasts based on strategies… [Footnote 3:] Our study complements recent studies that employ machine-learning techniques to predict stock returns using alternative predictor variables in high-dimensional settings, including Rapach, Strauss, and Zhou (2013); Chinco, Clark-Joseph, and Ye (2019); Rapach et al. (2019); Freyberger, Neuhierl, and Weber (2020); Gu, Kelly, and Xiu (2020); Kozak, Nagel, and Santosh (2020); Avramov, Cheng, and Metzker (2021); Chen, Pelger, and Zhu (2021); Cong et al. (2021); Han et al. (2021); and Liu, Zhou, and Zhu (2021).…”
supporting
confidence: 52%
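The first and third techniques in the statement above can be sketched in a few lines. This is a minimal illustration, not the cited authors' implementation: `cross_sectional_average` averages the predictors at each date, and `pls_first_factor` builds a target-relevant factor by weighting each demeaned predictor by its covariance with the target (the first-pass idea behind PLS-style forecasting). The data and function names are hypothetical.

```python
from statistics import mean

def cross_sectional_average(predictors):
    """predictors: list of equal-length predictor series -> their per-date average."""
    return [mean(vals) for vals in zip(*predictors)]

def pls_first_factor(predictors, target):
    """First target-relevant factor: covariance-weighted sum of demeaned predictors."""
    t_bar = mean(target)
    factor = [0.0] * len(target)
    for series in predictors:
        x_bar = mean(series)
        # weight = sample covariance of this predictor with the target
        w = sum((x - x_bar) * (t - t_bar) for x, t in zip(series, target)) / len(target)
        for i, x in enumerate(series):
            factor[i] += w * (x - x_bar)
    return factor
```

A predictor that is uncorrelated with the target receives (near) zero weight, so it drops out of the factor, whereas the plain cross-sectional average treats all predictors equally.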
“…It reduces overfitting in a complex model with many predictors and helps mitigate the problem of multicollinearity. LASSO regression automates the variable selection process in multiple linear models by penalizing large coefficients in absolute values [12].…”
Section: LASSO Regression
confidence: 99%
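The penalty on absolute coefficient values described above is what drives some coefficients exactly to zero. A minimal sketch of LASSO via cyclic coordinate descent with soft-thresholding, on small toy data (not from the cited paper):

```python
def soft_threshold(z, gamma):
    """Soft-thresholding operator: shrink z toward zero by gamma, clip to 0 inside."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """LASSO by cyclic coordinate descent; X is an n x p list-of-lists design matrix."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual (excluding feature j)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                      for k in range(p) if k != j)) for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta
```

When the target depends only on the first feature, the second coefficient is set exactly to zero rather than merely shrunk, which is the variable-selection behaviour the citing study refers to.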
“…Given the choice of target return, most ML studies focus on minimizing a loss function such as the mean squared error between predicted and realized returns. Wang et al. (2019) and Liu, Zhou, and Zhu (2021) propose alternative ML methods that attempt to directly maximize portfolio risk-adjusted returns and find that these give superior Sharpe ratios. Standard loss functions may result in top-minus-bottom quintile portfolios that are not necessarily optimal from a Sharpe ratio perspective.…”
Section: Choice of Target
confidence: 99%
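The distinction above is easy to see numerically: a strategy with a lower average return can still dominate on a Sharpe basis if its volatility is much smaller, so a Sharpe-based objective and a pure return (or MSE-style) criterion can rank strategies differently. A minimal sketch with illustrative monthly return series (the data are invented, not from any cited study):

```python
from statistics import mean, stdev

def sharpe_ratio(returns, periods_per_year=12):
    """Annualized Sharpe ratio of a periodic return series (risk-free rate taken as zero)."""
    return mean(returns) / stdev(returns) * periods_per_year ** 0.5

# Strategy A: higher average return, but volatile.
a = [0.05, -0.04, 0.06, -0.03, 0.07]
# Strategy B: lower average return, but nearly constant.
b = [0.02, 0.015, 0.025, 0.018, 0.022]
```

Here `mean(a) > mean(b)`, yet `sharpe_ratio(b) > sharpe_ratio(a)`, which is why optimizing a Sharpe objective directly can select different portfolios than minimizing prediction error.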