2020
DOI: 10.1093/rfs/hhz123
Dissecting Characteristics Nonparametrically

Abstract: We propose a nonparametric method to study which characteristics provide incremental information for the cross-section of expected returns. We use the adaptive group LASSO to select characteristics and to estimate how selected characteristics affect expected returns nonparametrically. Our method can handle a large number of characteristics and allows for a flexible functional form. Our implementation is insensitive to outliers. Many of the previously identified return predictors don't provide incremental infor…
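The group LASSO mentioned in the abstract treats all basis functions of one characteristic as a single group, so a characteristic is either selected together with a flexible functional form or dropped entirely. A minimal sketch of this selection idea, using the plain (not the paper's adaptive) group LASSO solved by proximal gradient descent on a toy polynomial basis; all variable names, tuning values, and the simulated data are illustrative:

```python
import numpy as np

def group_lasso_select(X_groups, y, lam, lr=0.01, n_iter=2000):
    """Plain group LASSO via proximal gradient descent: each group is one
    characteristic's basis expansion; a coefficient block shrunk exactly
    to zero means the characteristic is deselected. (Sketch only; the
    paper uses an adaptive, weighted version.)"""
    X = np.hstack(X_groups)
    X = X - X.mean(axis=0)            # center so no intercept is needed
    y = y - y.mean()
    sizes = [g.shape[1] for g in X_groups]
    idx = np.cumsum([0] + sizes)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta -= lr * X.T @ (X @ beta - y) / len(y)   # gradient step
        for j in range(len(sizes)):                  # block soft-threshold
            b = beta[idx[j]:idx[j + 1]]
            norm = np.linalg.norm(b)
            thr = lr * lam * np.sqrt(sizes[j])
            beta[idx[j]:idx[j + 1]] = 0.0 if norm <= thr else (1 - thr / norm) * b
    selected = [np.linalg.norm(beta[idx[j]:idx[j + 1]]) > 1e-8
                for j in range(len(sizes))]
    return beta, selected

# Toy example: expected returns depend (nonlinearly) on characteristic 0 only.
rng = np.random.default_rng(0)
c0, c1 = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
basis = lambda c: np.column_stack([c, c ** 2, c ** 3])  # cubic polynomial basis
y = 1.5 * c0 ** 2 - c0 + 0.1 * rng.standard_normal(500)
_, selected = group_lasso_select([basis(c0), basis(c1)], y, lam=0.05)
print(selected)   # the irrelevant characteristic's whole block is zeroed out
```

The block-level soft threshold is what distinguishes group from ordinary LASSO: the penalty acts on the norm of each characteristic's entire coefficient block, so nonlinearity is preserved for selected characteristics.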

Cited by 481 publications (63 citation statements)
References 114 publications
“…We consider three dimension-reduction techniques: the first takes the cross-sectional average of the individual predictors, the second extracts the first principal component from the set of predictors (Ng (2007, 2009)), and the third, an implementation of partial least squares (PLS; Wold (1966)), extracts the first target-relevant factor from the set of predictors (Pruitt (2013, 2015), Huang et al (2015)). As indicated previously, the forecasts based on strategies… [3] Our study complements recent studies that employ machine-learning techniques to predict stock returns using alternative predictor variables in high-dimensional settings, including Rapach, Strauss, and Zhou (2013); Chinco, Clark-Joseph, and Ye (2019); Rapach et al (2019); Freyberger, Neuhierl, and Weber (2020); Gu, Kelly, and Xiu (2020); Kozak, Nagel, and Santosh (2020); Avramov, Cheng, and Metzker (2021); Chen, Pelger, and Zhu (2021); Cong et al (2021); Han et al (2021); and Liu, Zhou, and Zhu (2021).…”
supporting
confidence: 53%
“…Nonetheless, extending our work to alternative SDF specifications and applying it to shed light on other cross-sectional and possibly also time-series patterns in asset prices is likely to be a fruitful avenue for future research. It would be particularly interesting to study whether accounting for coskewness can contribute to recent efforts to organize the factor "zoo" (e.g., Cochrane (2011), Freyberger, Neuhierl, and Weber (2020), Feng, Giglio, and Xiu (2020)).…”
Section: Discussion and Additional Results
mentioning
confidence: 99%
“…However, sparsity could be an issue with our data set, and a careful imposition of structure in the statistical modelling process is helpful. Note further that nonlinear functions f in (1) have shown evidence of much stronger stock return predictability than their linear counterparts (Lettau & Van Nieuwerburgh, 2008; Chen & Hong, 2010; Yang et al., 2010; Cheng et al., 2019; Caldeira et al., 2020; Freyberger et al., 2020). Thus, the local-linear smoother based on the standard L2-loss function is ideally suited.…”
Section: Literature Review
mentioning
confidence: 99%
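The local-linear smoother mentioned in that excerpt fits a kernel-weighted least-squares line at each evaluation point and reports its intercept, which is what "based on the standard L2-loss function" refers to. A self-contained sketch with a Gaussian kernel; the bandwidth and simulated data are illustrative:

```python
import numpy as np

def local_linear(x_grid, x, y, h):
    """Local-linear smoother under squared (L2) loss: at each grid point,
    solve a kernel-weighted least-squares line fit and return its
    intercept as the fitted value."""
    fits = []
    for x0 in x_grid:
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # Gaussian kernel weights
        A = np.column_stack([np.ones_like(x), x - x0])  # local intercept + slope
        beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
        fits.append(beta[0])                            # intercept = f_hat(x0)
    return np.array(fits)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, np.pi, 300))
y = np.sin(x) + 0.1 * rng.standard_normal(300)          # noisy nonlinear signal
grid = np.linspace(0.5, np.pi - 0.5, 5)
fhat = local_linear(grid, x, y, h=0.2)
print(np.max(np.abs(fhat - np.sin(grid))))              # small recovery error
```

Relative to the simpler kernel average (Nadaraya-Watson), the local-linear fit removes first-order bias at the boundaries, which is one standard reason it is preferred for estimating smooth nonlinear f.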