2023
DOI: 10.3390/a16050226
Heterogeneous Treatment Effect with Trained Kernels of the Nadaraya–Watson Regression

Abstract: A new method for estimating the conditional average treatment effect is proposed in this paper. It is called TNW-CATE (the Trainable Nadaraya–Watson regression for CATE) and is based on the assumption that the number of controls is rather large while the number of treatments is small. TNW-CATE uses the Nadaraya–Watson regression for predicting outcomes of patients from the control and treatment groups. The main idea behind TNW-CATE is to train kernels of the Nadaraya–Watson regression by using a weight sharing neural n…
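As background for the abstract above, the following is a minimal sketch of a plain (untrained) Nadaraya–Watson estimator with a Gaussian kernel. The function name, the fixed bandwidth, and the toy data are illustrative assumptions, not the paper's TNW-CATE architecture, which replaces the fixed kernel with a trained one.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.05):
    """Nadaraya-Watson regression: each prediction is a kernel-weighted
    average of the training outcomes (Gaussian kernel, illustrative)."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2   # pairwise squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))          # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)              # normalized weighted average

# Recover a smooth function from noiseless samples.
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x)
pred = nadaraya_watson(x, y, np.array([0.25, 0.5]))
```

With a fixed bandwidth the estimator simply smooths locally; TNW-CATE's point is that the kernel itself can be learned from the (large) control group and reused for the (small) treatment group.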

Cited by 1 publication (4 citation statements)
References 83 publications
“…A new method called BENK for solving the CATE problem under the condition of censored data has been presented. It extends the idea behind TNW-CATE, proposed in [16], to the case of censored data. Although TNW-CATE and BENK share many similar parts, they differ because BENK is based on the Beran estimator for training and can be successfully applied to survival analysis of controls and treatments.…”
Section: Discussion
confidence: 90%
“…At the same time, a small amount of training data may lead to overfitting and unsatisfactory results. The problem of overcoming this possible limitation therefore motivated researchers to introduce a neural network with a special architecture that implements trainable kernels in the Nadaraya–Watson regression [16].…”
Section: Related Work
confidence: 99%
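The statement above points at the core idea of [16]: making the Nadaraya–Watson kernel adapt to the data rather than fixing it in advance. The paper does this with a weight-sharing neural network; as a much simpler, hypothetical illustration of fitting the kernel to data, the sketch below selects a Gaussian bandwidth by leave-one-out error on noisy samples (the function names, grid values, and data are assumptions, not the authors' method).

```python
import numpy as np

def loo_error(x, y, bandwidth):
    """Leave-one-out squared error of a Gaussian Nadaraya-Watson fit:
    each point is predicted from a kernel-weighted average of the others."""
    d2 = (x[:, None] - x[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    np.fill_diagonal(w, 0.0)          # exclude each point from its own prediction
    pred = (w @ y) / w.sum(axis=1)
    return float(np.mean((pred - y) ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 80)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(80)

# "Train" the kernel in the crudest possible sense: pick the bandwidth
# with the smallest leave-one-out error over a small grid.
grid = [0.005, 0.02, 0.05, 0.3]
best = min(grid, key=lambda h: loo_error(x, y, h))
```

A heavily oversmoothing bandwidth (here 0.3) flattens the sine and loses badly; a neural network, as in [16], replaces this one-parameter search with a fully learned kernel shape.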