2012 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr)
DOI: 10.1109/cifer.2012.6327783

Behavior based learning in identifying High Frequency Trading strategies

Cited by 25 publications (12 citation statements)
References 19 publications
“…Once we have the order book at any given event tick, we take the market depth at five different levels as our base variables and then discretize these variables to generate an MDP model state space. This study extends the MDP model documented by Yang et al (2012) to obtain five variables, i.e. the order volume imbalance between the best bid and the best ask prices, the order volume imbalance between the 2nd best bid and the 2nd best ask prices, the order volume imbalance between the 3rd best bid and the 3rd best ask prices, the order book imbalance at the 5th best bid and the 5th best ask prices, and the inventory level/holding position (see figure 2(b)).…”
Section: Constructing an MDP Model from Order Book Data (mentioning)
Confidence: 96%
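
To make the state construction above concrete, the sketch below discretizes the order volume imbalances at the 1st, 2nd, 3rd, and 5th depth levels, plus the holding position, into a state tuple. This is not the authors' code: the bucket thresholds, inventory bins, and example volumes are assumptions chosen purely for illustration.

```python
import numpy as np

# Illustrative sketch (not the authors' code): discretize order book depth
# into the five MDP state variables described above. Bucket thresholds and
# inventory bins are assumptions chosen for illustration.

def imbalance(bid_vol, ask_vol):
    """Signed order volume imbalance in [-1, 1] at one price level."""
    total = bid_vol + ask_vol
    return 0.0 if total == 0 else (bid_vol - ask_vol) / total

def discretize(x, bins=(-0.5, -0.1, 0.1, 0.5)):
    """Map a continuous imbalance to one of len(bins)+1 buckets."""
    return int(np.digitize(x, bins))

def order_book_state(bid_vols, ask_vols, inventory, inv_bins=(-100, 0, 100)):
    """State tuple: imbalance at the 1st, 2nd, 3rd and 5th depth levels,
    plus the discretized inventory/holding position (five variables)."""
    levels = [discretize(imbalance(bid_vols[i], ask_vols[i])) for i in (0, 1, 2, 4)]
    inv = int(np.digitize(inventory, inv_bins))
    return tuple(levels + [inv])

# Example: a book with heavier bids near the top of the book, flat position.
state = order_book_state([500, 300, 200, 100, 50],
                         [200, 250, 220, 120, 60], inventory=0)
print(state)  # -> (3, 2, 2, 2, 2) with these volumes
```

Discretizing each variable into a small number of buckets keeps the joint state space finite, which is what makes the MDP formulation tractable.
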
“…Consequently, strategies developed under certain value frameworks can be observed, learned and even reproduced in a different environment (such as a simulated financial market, where the impact of these strategies can be readily assessed). As documented by Yang et al (2012), Hayes et al (2012) and Paddrik et al (2012), manipulative or disruptive algorithmic strategies can be studied and monitored by market operators and regulators to prevent unfair trading practices. Furthermore, new and emerging algorithmic trading practices can be assessed, and new regulations and policies can be evaluated, to maintain the overall health of the financial markets.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Hayes et al (2013) used supervised and unsupervised learning to reverse engineer fund allocation strategies for groups of human participants in a simulated trading competition. As documented by Yang et al (2012) and Hayes et al (2012), algorithmic trading strategies can be monitored by market operators and regulators to prevent unfair trading practices and improve the health of the financial markets. Qiao and Beling (2013) proposed a general approach to behavior recognition in sequential decision problems that is based on Markov decision process (MDP) models and Gaussian process inverse reinforcement learning (cf.…”
Section: Trading Strategy Recognition (mentioning)
Confidence: 99%
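
As a loose illustration of the behavior-recognition setting described above, the toy sketch below attributes an observed state-action trajectory to whichever candidate policy assigns it the highest likelihood. This is a deliberately simplified stand-in, not Qiao and Beling's method: the candidate policies here are random placeholders, whereas in the cited work they would be recovered via inverse reinforcement learning.

```python
import numpy as np

# Toy sketch of MDP-based behavior recognition: attribute an observed
# state-action trajectory to the candidate policy under which it is most
# likely. The candidate policies here are random placeholders; in practice
# they could come from inverse reinforcement learning, as in the cited work.

def log_likelihood(policy, trajectory):
    """Sum of log pi(a|s) over observed (state, action) pairs."""
    return sum(np.log(policy[s, a]) for s, a in trajectory)

n_states, n_actions = 4, 3
rng = np.random.default_rng(0)

# Rows are states; each row is an action probability distribution.
candidates = {
    "market_making": rng.dirichlet(np.ones(n_actions), size=n_states),
    "spoofing": rng.dirichlet(np.ones(n_actions), size=n_states),
}

trajectory = [(0, 1), (2, 0), (3, 2), (1, 1)]  # observed (state, action) pairs
best = max(candidates, key=lambda k: log_likelihood(candidates[k], trajectory))
print("Most likely behavior:", best)
```
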
“…Furthermore, Naïve Bayes is a good classifier for predicting potential trades associated with market manipulation [16]. In the case of spoofing, detection can be performed with supervised learning algorithms [17], or spoofing can be identified by modelling trading decisions as MDPs and using apprenticeship learning to learn the reward function [18]. Though research on market manipulation is extensive, few studies develop generative models of what encourages these economic agents to follow disruptive strategies.…”
Section: Related Work (mentioning)
Confidence: 99%
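
A minimal sketch of the supervised route mentioned above, assuming scikit-learn is available. The two features (cancel-to-trade ratio, order size relative to depth) and the synthetic labels are illustrative assumptions, not data or features from [16] or [17].

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Synthetic sketch: flag potentially manipulative orders with Naive Bayes.
# Features (cancel-to-trade ratio, order size relative to depth) and labels
# are illustrative assumptions, not data from the cited studies.

rng = np.random.default_rng(1)
n = 200
benign = np.column_stack([rng.normal(2.0, 1.0, n), rng.normal(0.1, 0.05, n)])
spoof = np.column_stack([rng.normal(8.0, 2.0, n), rng.normal(0.6, 0.20, n)])

X = np.vstack([benign, spoof])
y = np.array([0] * n + [1] * n)  # 0 = benign, 1 = manipulative

clf = GaussianNB().fit(X, y)
print(clf.predict([[7.5, 0.55], [1.8, 0.12]]))  # expected: [1 0]
```
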
“…A model that fits the problem of manipulation under the representation in Fig. 1 is that of a Markov Decision Process (MDP) [18], [22]. In general, an MDP is defined by the tuple {S, A, T, R}, where S and A are sets of states and actions, respectively (s ∈ S and a ∈ A), R is the set of rewards (r ∈ R), and T is a set of transition probabilities ({P(s′|s, a)} ∈ T, where P(s′|s, a) represents the probability of transitioning to state s′ from state s after action a).…”
Section: A. Spoofing as a Markov Decision Process (mentioning)
Confidence: 99%
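
The tuple {S, A, T, R} quoted above maps directly onto a small data structure. The sketch below stores P(s′|s, a) as a 3-D array and samples one transition; the state/action counts and the random kernel are illustrative only, and R is represented here as an expected reward per (state, action) pair.

```python
from dataclasses import dataclass
import numpy as np

# Minimal container mirroring the tuple {S, A, T, R} defined above:
# T[s, a, s'] stores P(s'|s, a) and R[s, a] the reward for taking action a
# in state s. Sizes and the random kernel below are illustrative only.

@dataclass
class MDP:
    n_states: int
    n_actions: int
    T: np.ndarray  # shape (n_states, n_actions, n_states); rows sum to 1
    R: np.ndarray  # shape (n_states, n_actions)

    def step(self, s, a, rng):
        """Sample s' ~ P(.|s, a) and return (s', reward)."""
        s_next = int(rng.choice(self.n_states, p=self.T[s, a]))
        return s_next, self.R[s, a]

rng = np.random.default_rng(2)
T = rng.dirichlet(np.ones(3), size=(3, 2))  # valid random transition kernel
R = rng.normal(size=(3, 2))
mdp = MDP(n_states=3, n_actions=2, T=T, R=R)
print(mdp.step(0, 1, rng))
```
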