2017
DOI: 10.1016/j.ejor.2016.10.031
Deep neural networks, gradient-boosted trees, random forests: Statistical arbitrage on the S&P 500

Abstract: Standard terms of use: Documents on EconStor may be saved and copied for your own scholarly purposes and for private use. You may not reproduce the documents for public or commercial purposes, publicly exhibit them, make them publicly available, distribute them, or otherwise use them. If the authors have made the documents available under open-content licenses (in particular CC licenses), the terms of use specified in those licenses apply instead of these terms…

Cited by 495 publications (311 citation statements)
References 54 publications
“…Specifically, we expand on the recent work of Krauss et al (2017) on the same data sample for the sake of comparability. The authors use deep learning, random forests, gradient-boosted trees, and different ensembles as forecasting methods on all S&P 500 constituents from 1992 to 2015.…”
Section: Introduction
confidence: 99%
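The setup the quoted passage describes can be summarized in a minimal sketch, assuming synthetic stock-day features and illustrative hyperparameters; the feature construction, network architecture, and equal ensemble weights below are assumptions for illustration, not the authors' actual configuration. Three base learners each forecast the probability that a stock outperforms the cross-sectional median over the next day, and a simple ensemble averages their probabilities:

```python
# Hedged sketch of a Krauss et al. (2017)-style forecasting pipeline.
# All data, features, and hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per stock-day; columns could be
# multi-period lagged returns (e.g., 1, 2, ..., 20 days).
X_train = rng.normal(size=(2000, 20))
y_train = (rng.random(2000) > 0.5).astype(int)  # 1 = outperforms cross-sectional median
X_test = rng.normal(size=(500, 20))

models = {
    "deep_net": MLPClassifier(hidden_layer_sizes=(31, 10, 5), max_iter=300, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=500, max_depth=20, random_state=0),
    "boosted_trees": GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, random_state=0),
}

probs = []
for name, model in models.items():
    model.fit(X_train, y_train)
    probs.append(model.predict_proba(X_test)[:, 1])

# Equal-weight ensemble: average the class-1 probabilities of all base learners.
ensemble_prob = np.mean(probs, axis=0)

# A long-short rule would go long the top-k and short the bottom-k
# stocks per day by ensemble probability; here we only rank them.
ranking = np.argsort(-ensemble_prob)
print("Top 5 candidate longs (row indices):", ranking[:5])
```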
“…The reason is that it affects the correlation among tree outputs and, therefore, the generalization error of the model. Krauss et al [11] show that random forests are the dominant single method for deriving trading decisions on the Standard and Poor's (S&P) 500 stock universe vis-à-vis deep neural networks and gradient-boosted trees. Other general properties that are relevant in the IPO underpricing prediction domain are its relative robustness with regard to outliers in training data and the fact that it does not overfit.…”
confidence: 99%
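The quoted point, that the number of features sampled per split drives the correlation among trees and hence the forest's generalization error, can be illustrated with a toy sketch; the synthetic data (scikit-learn's make_classification) and all settings are illustrative assumptions, not taken from the cited study:

```python
# Toy illustration: max_features controls how correlated the individual
# trees' predictions are, which in turn affects generalization (here
# proxied by out-of-bag accuracy). Data and settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)

for max_features in (1, 4, 20):  # 20 = consider all features -> more correlated trees
    rf = RandomForestClassifier(
        n_estimators=100, max_features=max_features,
        oob_score=True, random_state=0,
    ).fit(X, y)

    # Per-tree class-1 probabilities on the training inputs.
    per_tree = np.array([tree.predict_proba(X)[:, 1] for tree in rf.estimators_])

    # Mean pairwise correlation across trees (exclude the diagonal).
    corr = np.corrcoef(per_tree)
    n = len(corr)
    mean_corr = (corr.sum() - n) / (n * (n - 1))

    print(f"max_features={max_features:2d}  mean tree correlation={mean_corr:.2f}  "
          f"OOB accuracy={rf.oob_score_:.3f}")
```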
“…As in Krauss et al (2016), our choice is motivated by market efficiency, computational feasibility, and liquidity. The S&P 500 contains the leading 500 companies of the U.S. stock market, comprising approximately 80 percent of available market capitalization (S&P Dow Jones Indices, 2015).…”
Section: Data
confidence: 99%
“…This highly liquid subset serves as an acid test for any trading strategy, in light of significant investor scrutiny and intense analyst coverage. We proceed along the lines of Krauss et al (2016). Strategy variants: in particular, we benchmark four strategy variants against each other.…”
Section: Data
confidence: 99%