Recent advances in mobile health (mHealth) technology provide an effective way to monitor individuals' health status and deliver just-in-time personalized interventions. However, the practical use of mHealth technology raises unique challenges for existing methodologies for learning an optimal dynamic treatment regime. Many mHealth applications involve decision-making with a large number of intervention options in an infinite-horizon setting, where the number of decision stages diverges to infinity. In addition, temporary medication shortages may make the optimal treatment unavailable, and it is unclear which alternatives should be used instead. To address these challenges, we propose a Proximal Temporal consistency Learning (pT-Learning) framework to estimate an optimal regime that adaptively adjusts between deterministic and stochastic sparse policy models. The resulting minimax estimator avoids the double sampling issue of existing algorithms. It can be further simplified and can easily incorporate off-policy data without mismatched distribution corrections. We study theoretical properties of the sparse policy and establish finite-sample bounds on the excess risk and performance error. The proposed method is implemented in our proximalDTR package and is evaluated through extensive simulation studies and the OhioT1DM mHealth dataset.
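The abstract does not spell out the sparse policy model, so the following is only an illustrative sketch of the general idea of a policy that interpolates between deterministic and stochastic behavior while assigning exact zero probability to low-value actions. It uses the sparsemax transform (a Euclidean projection onto the probability simplex); the temperature `tau`, the `q_values`, and the function names are hypothetical and not taken from the paper.

```python
import numpy as np

def sparsemax(q, tau=1.0):
    # Euclidean projection of q/tau onto the probability simplex.
    # Small tau -> nearly deterministic (argmax-like) policy;
    # larger tau -> flatter stochastic policy. In both regimes,
    # sufficiently low-value actions receive exactly zero mass.
    z = np.sort(q / tau)[::-1]          # values sorted in descending order
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(z)
    support = z + 1.0 / k > cssv / k    # actions kept in the support
    k_star = k[support][-1]
    threshold = (cssv[k_star - 1] - 1.0) / k_star
    return np.maximum(q / tau - threshold, 0.0)

q_values = np.array([2.0, 1.9, 0.1, -1.0])   # hypothetical action values
policy = sparsemax(q_values, tau=0.5)        # mass only on the top actions
```

In this toy example the two low-value actions get probability exactly zero, so the policy is sparse yet still randomizes over the near-optimal actions, which is the kind of behavior useful when the single optimal treatment may be unavailable.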
Learning an individualized dose rule in personalized medicine is a challenging statistical problem. Existing methods often suffer from the curse of dimensionality, especially when the decision function is estimated nonparametrically. To tackle this problem, we propose a dimension reduction framework that effectively reduces the estimation to a lower-dimensional subspace of the covariates. We exploit the fact that the individualized dose rule can be defined in a subspace spanned by a few linear combinations of the covariates, leading to a more parsimonious model. Because it directly maximizes the value function, the proposed framework does not require inverse weighting by the propensity score in observational studies. This distinguishes it from the outcome-weighted learning framework, which also estimates decision rules directly. Under the same framework, we further propose a pseudo-direct learning approach that focuses more on estimating the dimensionality-reduced subspace of the treatment outcome. Parameters in both approaches can be estimated efficiently using an orthogonality-constrained optimization algorithm on the Stiefel manifold. Under mild regularity assumptions, we establish the asymptotic normality of the proposed estimators. We also derive the consistency and convergence rate of the value function under the estimated optimal dose rule. We evaluate the performance of the proposed approaches through extensive simulation studies and a warfarin pharmacogenetic dataset.
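The abstract does not give the optimization algorithm itself; as a minimal sketch of what orthogonality-constrained optimization on the Stiefel manifold typically looks like, the code below takes a Riemannian gradient step (project the Euclidean gradient onto the tangent space, then retract via a QR decomposition) on a toy quadratic objective. The objective, step size, and function names are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def retract_qr(B):
    # QR-based retraction: map an arbitrary matrix back onto the
    # Stiefel manifold (orthonormal columns), fixing column signs
    # so the retraction is uniquely defined.
    Q, R = np.linalg.qr(B)
    return Q * np.sign(np.diag(R))

def stiefel_gradient_step(B, grad, lr=0.1):
    # Project the Euclidean gradient onto the tangent space of the
    # Stiefel manifold at B, then retract the updated point.
    sym = (B.T @ grad + grad.T @ B) / 2.0
    riem_grad = grad - B @ sym
    return retract_qr(B - lr * riem_grad)

# Toy usage: maximize tr(B' A B) over 5x2 matrices with orthonormal columns.
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 5))
A = G @ G.T                                  # symmetric positive semidefinite
B = retract_qr(rng.standard_normal((5, 2)))  # random feasible start
for _ in range(200):
    B = stiefel_gradient_step(B, -(A + A.T) @ B, lr=0.01)  # grad of -tr(B'AB)
# B retains orthonormal columns throughout: B.T @ B is the identity.
```

The QR retraction keeps every iterate exactly feasible, which is why such schemes are attractive for estimating an orthonormal basis of a dimension-reduction subspace.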