2016
DOI: 10.1111/mafi.12125
Convergence of a Least‐squares Monte Carlo Algorithm for American Option Pricing With Dependent Sample Data

Abstract: We analyze the convergence of the Longstaff-Schwartz algorithm relying on only a single set of independent Monte Carlo sample paths that is repeatedly reused for all exercise time-steps. We prove new estimates on the stochastic component of the error of this algorithm whenever the approximation architecture is any uniformly bounded set of L² functions of finite Vapnik-Chervonenkis dimension (VC-dimension), which in particular need not be either convex or closed. We also establish new overall error e…
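The algorithm the abstract refers to can be sketched as follows. This is a generic least-squares Monte Carlo (Longstaff-Schwartz) pricer for an American put under geometric Brownian motion, not code from the paper; the polynomial basis, the payoff, and all parameter values are illustrative assumptions. Note that a single set of sample paths is simulated once and then reused at every exercise date, which is exactly the dependent-sample setting the paper analyzes.

```python
import numpy as np

def longstaff_schwartz_put(S0=100.0, K=100.0, r=0.06, sigma=0.2,
                           T=1.0, n_steps=50, n_paths=10_000,
                           degree=3, seed=0):
    """Price an American put by least-squares Monte Carlo.

    One set of GBM paths is simulated up front and reused for the
    regressions at every exercise time-step (dependent sample data).
    Continuation values are approximated by a polynomial regression
    of discounted future cash flows on the current asset price.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate all paths once; they are reused at every time step.
    z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)),
                               np.cumsum(increments, axis=1)]))

    payoff = np.maximum(K - S[:, -1], 0.0)  # cash flow if held to maturity
    disc = np.exp(-r * dt)

    # Backward induction over exercise dates.
    for t in range(n_steps - 1, 0, -1):
        payoff *= disc                       # value at time t of future cash flows
        itm = (K - S[:, t]) > 0              # regress on in-the-money paths only
        if itm.sum() > degree + 1:
            coeffs = np.polyfit(S[itm, t], payoff[itm], degree)
            continuation = np.polyval(coeffs, S[itm, t])
            exercise = K - S[itm, t]
            payoff[itm] = np.where(exercise > continuation,
                                   exercise, payoff[itm])
    return disc * payoff.mean()
```

With these illustrative parameters the estimate lands a little above the Black-Scholes European put value, reflecting the early-exercise premium.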

Cited by 23 publications (29 citation statements)
References 27 publications
“…Assumptions (H1-H2) are well-known in the classical case of the Longstaff-Schwartz algorithm based on Markov chains. See, e.g., Egloff [12], Zanger [40, 41] and other references therein. for 1 ≤ j ≤ e(k,T) − 1, N ≥ 2 and k ≥ 1.…”
Section: Error Estimates
confidence: 99%
“…for 1 ≤ j ≤ e(k,T) − 1, N ≥ 2 and k ≥ 1. There are some conditions for the existence of both minimizers, as discussed in Remark 5.4 of Zanger [40], for instance compactness of H^k_{N,j}. We observe that we may assume that the sets S^k_j, 1 ≤ j ≤ e(k,T), are compact, because all the hitting times (T^k_n)_{n=1}^{e(k,T)}…”
Section: Error Estimates
confidence: 99%
“…The main obstacle is obtaining the continuation values (expressed as F_{t_i}-conditional expectations) in the dynamic programming algorithm. In this case, least-squares Monte Carlo methods can be employed using non-parametric regression techniques based on a suitable choice of regression polynomials (see, e.g., [34] and other references therein). Another popular approach is to use available representations of conditional expectations as suitable ratios of unconditional expectations obtained via Malliavin calculus (see, e.g., [12]).…”
Section: Introduction
confidence: 99%
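The regression step this citation statement describes can be illustrated in isolation. The toy example below (not taken from any of the cited works; the model E[Y | X = x] = x² and the quadratic basis are assumptions for illustration) shows how a conditional expectation is approximated by least-squares regression on a polynomial basis, which is exactly the role the continuation-value regression plays in the dynamic programming algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 5000)
y = x**2 + 0.1 * rng.standard_normal(5000)  # true E[Y | X = x] = x^2

# Least-squares fit on the polynomial basis {1, x, x^2}.
coeffs = np.polyfit(x, y, 2)

# Pointwise estimate of the conditional expectation at x = 0.5
# (true value 0.25).
est = np.polyval(coeffs, 0.5)
```

In the American option context, Y is the discounted future cash flow along a path and X the current state, so the fitted polynomial serves as the estimated continuation value.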