2015
DOI: 10.1057/jors.2014.137

Strategy selection and outcome prediction in sport using dynamic learning for stochastic processes

Abstract: Stochastic processes are natural models for the progression of many individual and team sports. Such models have been applied successfully to select strategies and to predict outcomes in the context of games, tournaments and leagues. This information is useful to participants and gamblers, who often need to make decisions while the sports are in progress. In order to apply these models, much of the published research uses parameters estimated from historical data, thereby ignoring the uncertainty of the parame…
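As a rough illustration of the dynamic-learning idea in the abstract, the sketch below updates a point-winning probability from within-game observations using a Beta-Bernoulli conjugate prior. The prior counts and observed points are invented for illustration; this is not the model or data from the paper.

```python
# Hedged sketch: Bayesian "dynamic learning" of a point-winning probability
# during a match, rather than fixing it from historical data alone.
# The Beta-Bernoulli setup and all numbers below are illustrative assumptions,
# not the model used in Percy (2015).

from dataclasses import dataclass


@dataclass
class BetaBelief:
    alpha: float  # prior "wins" pseudo-count (e.g. from historical data)
    beta: float   # prior "losses" pseudo-count

    def update(self, won_point: bool) -> None:
        # Conjugate update: each observed point shifts the posterior.
        if won_point:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self) -> float:
        # Posterior mean of the point-winning probability.
        return self.alpha / (self.alpha + self.beta)


if __name__ == "__main__":
    belief = BetaBelief(alpha=6.0, beta=4.0)  # historical prior, roughly 0.6
    within_game = [True, True, False, True, True, False, True]  # live points
    for outcome in within_game:
        belief.update(outcome)
    print(f"updated point-winning probability: {belief.mean():.3f}")
```

The point of the conjugate choice is that the posterior stays in closed form, so the estimate can be refreshed after every point while the match is in progress.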

Cited by 19 publications (12 citation statements)
References 40 publications
“…In the author's opinion, modeling baseball games as a stochastic process and applying dynamic learning using "within-game" data should bring better predictive results than general models (Percy, 2015). In this sense, data mining methods have achieved success for selecting strategies and predicting outcomes in the context of some specific baseball game situations.…”
Section: Discussion
confidence: 99%
“…In simple terms, the Ergodic theorem for a Markov chain states that the chain can go from any state to any other state and that the chain does not repeat the same loop. This means that no samples are replicated [11].…”
Section: Markov Chains
confidence: 99%
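The ergodicity property quoted above can be illustrated numerically: for an irreducible, aperiodic chain, repeatedly applying the transition matrix drives any starting distribution to the same stationary distribution. The 3-state transition matrix below is an assumed toy example, not one from the cited work.

```python
# Hedged sketch: an ergodic (irreducible, aperiodic) Markov chain has a unique
# stationary distribution that is reached from any starting state.
# The transition matrix is an illustrative assumption.

import numpy as np

P = np.array([
    [0.5, 0.3, 0.2],  # row i gives P(next state = j | current state = i)
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4],
])

# Every state reaches every other state with positive probability in one step,
# so this chain is irreducible and aperiodic, hence ergodic.

dist = np.array([1.0, 0.0, 0.0])  # start deterministically in state 0
for _ in range(100):
    dist = dist @ P               # one-step evolution of the distribution

print("approximate stationary distribution:", np.round(dist, 4))
```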
“…Other refinements of the basic model have been proposed in which, for example, strengths are time-varying (e.g. Crowder et al., 2002; Owen, 2011; Koopman and Lit, 2015; Percy, 2015), or in which scores are dependent (e.g. Dixon and Coles, 1997; Karlis and Ntzoufras, 2003; Scarf, 2007, 2011).…”
Section: The Poisson-match
confidence: 99%
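The basic Poisson-match model that these refinements build on can be sketched as two independent Poisson score distributions whose rates reflect team strengths. The rates below are illustrative assumptions, and the refinements mentioned above (time-varying strengths, dependent scores) are deliberately not implemented.

```python
# Hedged sketch of a basic "Poisson-match" model: home and away scores are
# treated as independent Poisson counts. The scoring rates are invented for
# illustration only.

from math import exp, factorial


def poisson_pmf(k: int, rate: float) -> float:
    # Probability of exactly k goals under a Poisson(rate) distribution.
    return rate ** k * exp(-rate) / factorial(k)


def match_probabilities(home_rate: float, away_rate: float, max_goals: int = 10):
    # Sum the joint probabilities over a truncated score grid to get
    # P(home win), P(draw), P(away win).
    home_win = draw = away_win = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, home_rate) * poisson_pmf(a, away_rate)
            if h > a:
                home_win += p
            elif h == a:
                draw += p
            else:
                away_win += p
    return home_win, draw, away_win


if __name__ == "__main__":
    # Illustrative rates: home advantage pushes the home scoring rate up.
    print(match_probabilities(home_rate=1.6, away_rate=1.1))
```

Truncating the grid at ten goals per side is a practical shortcut; the neglected tail probability is negligible for rates of this size.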