2022
DOI: 10.5705/ss.202022.0047

Data-guided Treatment Recommendation with Feature Scores

Abstract: Despite the availability of large amounts of genomics data, medical treatment recommendations have not successfully made use of them. In this paper, we consider the utility of high-dimensional genomic-clinical data and nonparametric methods for making cancer treatment recommendations. This builds upon the framework of the individualized treatment rule [Qian and Murphy 2011], but we aim to overcome that method's limitations, specifically in instances when the method encounters a large number of covariates and an i…

Cited by 1 publication (3 citation statements)
References: 39 publications
“…The most commonly used one is to fit regression models for μ1(x) and μ0(x), and then τ(x) = μ1(x) − μ0(x). As an addition to the rich literature, [Chen et al 2022] considered very general regression models and applied dimension reduction for high-dimensional covariates. Moreover, supervised learning algorithms are used to estimate μ1 and μ0 by the machine learning community [Hu et al 2021], including Bayesian Additive Regression Trees (BART) [Chipman et al 2010] and Random Forest (RF) [Wager and Athey 2018].…”
Section: Recent Development on Estimation of HTE
confidence: 99%
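The "fit two outcome regressions and take their difference" recipe quoted above can be made concrete with a minimal T-learner sketch. This is an illustration only, not the estimator of [Chen et al 2022] or the causal forest of [Wager and Athey 2018]; plain random-forest regressions stand in for the outcome models, and the function estimate_hte and the synthetic data are hypothetical.

```python
# Minimal T-learner sketch: fit mu1(x) and mu0(x) separately, then take their
# difference as the estimated heterogeneous treatment effect tau(x).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def estimate_hte(X, y, treatment):
    """Return tau_hat(x) = mu1_hat(x) - mu0_hat(x) evaluated at each row of X."""
    mu1 = RandomForestRegressor(n_estimators=500, random_state=0)
    mu0 = RandomForestRegressor(n_estimators=500, random_state=0)
    mu1.fit(X[treatment == 1], y[treatment == 1])  # outcome model for treated units
    mu0.fit(X[treatment == 0], y[treatment == 0])  # outcome model for control units
    return mu1.predict(X) - mu0.predict(X)

# Hypothetical usage on synthetic data: the true effect is 1 + x2, so tau_hat
# should track that quantity on average.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
treatment = rng.integers(0, 2, size=500)
y = X[:, 0] + treatment * (1.0 + X[:, 1]) + rng.normal(scale=0.5, size=500)
tau_hat = estimate_hte(X, y, treatment)
```

An individualized treatment rule can then be read off as d(x) = 1{tau_hat(x) > 0}, which connects this estimation step to the ITR framework discussed in the paper.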
“…We can further justify this by providing a relationship between the value function and the estimation error. More specifically, [Chen et al 2022] have shown that for any ITR d, the reduction in value is upper bounded by the estimation error (see Lemma 1 in [Chen et al 2022]):…”
Section: Connection Between the Two Research Areas
confidence: 99%
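The bound itself is truncated in the quotation above. For context, results of this kind typically take the shape sketched below; this follows the standard plug-in argument and is not a verbatim statement of Lemma 1 in [Chen et al 2022], and the constant C and the L1-type error norm are assumptions made here for illustration.

```latex
% Sketch only: d* is an optimal ITR, d the plug-in rule induced by an outcome
% estimate Q-hat, V(.) the value function, and Q(x,a) the true conditional mean
% outcome under treatment a. The constant C and the norm are illustrative.
\[
  V(d^{*}) - V(d)
  \;\le\;
  C\,\mathbb{E}\Bigl[\,\bigl|\widehat{Q}(X,1) - Q(X,1)\bigr|
      + \bigl|\widehat{Q}(X,0) - Q(X,0)\bigr|\Bigr].
\]
```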