Deep neural networks (DNNs) are powerful types of artificial neural networks (ANNs) that use several hidden layers. They have recently gained considerable attention in the speech transcription and image recognition communities (Krizhevsky et al., 2012) for their superior predictive properties, including robustness to overfitting. However, their application to algorithmic trading has not previously been studied, partly because of their computational complexity. This paper describes the application of DNNs to predicting financial market movement directions. In particular, we describe the configuration and training approach and then demonstrate their application to backtesting a simple trading strategy over 43 different commodity and FX futures mid-prices at 5-minute intervals. All results in this paper are generated using a C++ implementation on the Intel Xeon Phi co-processor, which runs 11.4x faster than the serial version, and a Python strategy-backtesting environment, both of which are available as open-source code written by the authors.
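The paper's own C++ and Python code is referenced above; as a rough illustration of the prediction task, the Python sketch below trains a small feed-forward network with two hidden layers by mini-batch stochastic gradient descent to classify the direction of the next interval's return from lagged returns. The synthetic data, feature construction, layer sizes, and learning rate are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 5-minute mid-price returns; the paper's futures data is not reproduced here.
n, lags = 5000, 20
returns = rng.standard_normal(n + lags) * 1e-3
X = np.stack([returns[i:i + lags] for i in range(n)])   # lagged-return features
y = (returns[lags:lags + n] > 0).astype(float)          # next-interval direction (up = 1)

def relu(z): return np.maximum(z, 0.0)
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

# Two hidden layers of 64 units each -- an illustrative size, not the paper's configuration.
sizes = [lags, 64, 64, 1]
W = [rng.standard_normal((a, b)) * np.sqrt(2.0 / a) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(s) for s in sizes[1:]]

lr, batch = 0.05, 128
for epoch in range(20):
    for start in range(0, n, batch):
        xb, yb = X[start:start + batch], y[start:start + batch]
        # forward pass
        h1 = relu(xb @ W[0] + b[0])
        h2 = relu(h1 @ W[1] + b[1])
        p = sigmoid(h2 @ W[2] + b[2]).ravel()
        # backward pass (binary cross-entropy gradient)
        d3 = (p - yb)[:, None] / len(yb)
        d2 = (d3 @ W[2].T) * (h2 > 0)
        d1 = (d2 @ W[1].T) * (h1 > 0)
        grads = [(xb.T @ d1, d1.sum(0)), (h1.T @ d2, d2.sum(0)), (h2.T @ d3, d3.sum(0))]
        for i, (gW, gb) in enumerate(grads):
            W[i] -= lr * gW
            b[i] -= lr * gb

pred = sigmoid(relu(relu(X @ W[0] + b[0]) @ W[1] + b[1]) @ W[2] + b[2]).ravel() > 0.5
print(f"in-sample direction accuracy: {(pred == y).mean():.3f}")
```

A backtest like the one described in the abstract would then map these direction forecasts to long/short positions and account for transaction costs, which this toy example omits.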
Deep learning applies hierarchical layers of hidden variables to construct nonlinear, high-dimensional predictors. Our goal is to develop and train deep learning architectures for spatio-temporal modeling. Training a deep architecture is achieved by stochastic gradient descent with dropout for parameter regularization, with the goal of minimizing out-of-sample predictive mean squared error. To illustrate our methodology, we first predict the sharp discontinuities in traffic flow data, and second, we develop a classification rule to predict short-term futures market prices using order book depth. Finally, we conclude with directions for future research.
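The training procedure named here (stochastic gradient descent plus dropout, minimizing predictive mean squared error) can be sketched generically. The snippet below uses inverted dropout on a single hidden layer over a synthetic regression problem; the data, layer width, keep probability, and learning rate are placeholders rather than the traffic-flow or order-book configurations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression stand-in: the traffic-flow and order-book data sets are not reproduced here.
n, d, hidden, p_keep, lr = 2000, 10, 50, 0.8, 0.01
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

W1 = rng.standard_normal((d, hidden)) * np.sqrt(2.0 / d)
w2 = rng.standard_normal(hidden) * np.sqrt(1.0 / hidden)

for step in range(5000):
    idx = rng.integers(0, n, 64)                      # stochastic mini-batch
    h = np.maximum(X[idx] @ W1, 0.0)
    mask = (rng.random(h.shape) < p_keep) / p_keep    # inverted dropout: rescale kept units
    h_drop = h * mask
    err = h_drop @ w2 - y[idx]
    # gradients of the mean squared error
    g2 = h_drop.T @ err / len(idx)
    g1 = X[idx].T @ (np.outer(err, w2) * mask * (h > 0)) / len(idx)
    w2 -= lr * g2
    W1 -= lr * g1

# At prediction time dropout is switched off; the inverted scaling keeps expectations consistent.
mse = np.mean((np.maximum(X @ W1, 0.0) @ w2 - y) ** 2)
print(f"in-sample MSE: {mse:.4f}")
```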
We propose a simple non-equilibrium model of a financial market as an open system with a possible exchange of money with the outside world and with market frictions (trade impacts) incorporated into asset price dynamics via a feedback mechanism. Using a linear market impact model, this produces a non-linear, two-parameter extension of the classical Geometric Brownian Motion (GBM) model, which we call the "Quantum Equilibrium-Disequilibrium" (QED) model. The QED model incorporates non-linear mean-reverting dynamics, broken scale invariance, and corporate defaults. In the simplest one-stock (1D) formulation, our parsimonious model has only one degree of freedom, yet it calibrates to both equity returns and credit default swap spreads. Defaults and market crashes are associated with dissipative tunneling events and correspond to instanton (saddle-point) solutions of the model. When market frictions and inflows/outflows of money are neglected altogether, "classical" GBM scale-invariant dynamics with exponential asset growth and without defaults are formally recovered from the QED dynamics. However, we argue that this is only a formal mathematical limit; in reality, the GBM limit is non-analytic due to non-linear effects that produce both defaults and a divergence of perturbation theory in a small market friction parameter.
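The abstract does not state the QED drift explicitly, so the sketch below only illustrates the general idea: an Euler-Maruyama simulation of GBM with a hypothetical nonlinear mean-reverting correction controlled by two extra parameters (here called g and lam), where setting them to zero recovers ordinary GBM and absorbed paths stand in crudely for defaults. The functional form, parameter values, and default rule are assumptions for illustration, not the model's actual specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# mu, sigma as in GBM; g and lam are hypothetical stand-ins for the two extra friction/flow
# parameters of the QED-style extension (the paper's exact drift is not reproduced here).
mu, sigma, g, lam = 0.05, 0.2, 0.1, 0.5
s0, T, n_steps, n_paths = 1.0, 5.0, 1250, 10000
dt = T / n_steps

s = np.full(n_paths, s0)
alive = np.ones(n_paths, dtype=bool)
for _ in range(n_steps):
    dw = rng.standard_normal(n_paths) * np.sqrt(dt)
    # GBM drift plus an assumed nonlinear mean-reverting correction; g -> 0, lam -> 0 recovers GBM
    drift = mu * s - g * s * (s - lam)
    s = np.where(alive, s + drift * dt + sigma * s * dw, 0.0)
    alive &= s > 1e-6   # absorb paths near zero: a crude proxy for default

print(f"default fraction: {1 - alive.mean():.3f}")
print(f"mean terminal price of surviving paths: {s[alive].mean():.3f}")
```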
Recurrent neural networks (RNNs) are types of artificial neural networks (ANNs) that are well suited to forecasting and sequence classification. They have been applied extensively to forecasting univariate financial time series; however, their application to high-frequency trading has not previously been considered. This paper solves a sequence classification problem in which a short sequence of observations of limit order book depths and market orders is used to predict the next-event price-flip. The capability to adjust quotes according to this prediction reduces the likelihood of adverse price selection. Our results demonstrate the ability of the RNN to capture the non-linear relationship between near-term price-flips and a spatio-temporal representation of the limit order book. The RNN compares favorably with other classifiers, including a linear Kalman filter, on S&P 500 E-mini futures level II data over the month of August 2016. Further results assess the effect of retraining the RNN daily and the sensitivity of performance to trade latency.
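As a rough sketch of this sequence classification setup, the Python/PyTorch code below trains a small recurrent network on synthetic sequences standing in for order-book features and predicts a three-class next price-flip label (down, none, up). The feature dimension, sequence length, class layout, and network size are illustrative assumptions; the paper's actual spatio-temporal representation and E-mini level II data are not reproduced.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for sequences of limit order book features (depths at several price levels
# plus market-order flow); the paper's level II data is not reproduced here.
n_seq, seq_len, n_feat, n_classes = 2000, 10, 12, 3   # classes: down-flip, no change, up-flip
X = torch.randn(n_seq, seq_len, n_feat)
y = torch.randint(0, n_classes, (n_seq,))

class FlipClassifier(nn.Module):
    """Single-layer RNN over the order-book sequence; last hidden state feeds a softmax head."""
    def __init__(self, n_feat, hidden, n_classes):
        super().__init__()
        self.rnn = nn.RNN(n_feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, h = self.rnn(x)        # h: (1, batch, hidden), state after the last observed event
        return self.head(h[-1])   # class logits for the next price-flip

model = FlipClassifier(n_feat, hidden=32, n_classes=n_classes)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for i in range(0, n_seq, 64):
        xb, yb = X[i:i + 64], y[i:i + 64]
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()

with torch.no_grad():
    acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"in-sample accuracy on synthetic data: {acc:.3f}")
```

In a market-making application of the kind described, the predicted class would then be used to decide whether to requote or pull resting orders before the anticipated price-flip, subject to the latency effects the paper examines.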