2009
DOI: 10.2197/ipsjjip.17.138
Direct Density Ratio Estimation for Large-scale Covariate Shift Adaptation

Abstract: Covariate shift is a situation in supervised learning where training and test inputs follow different distributions even though the functional relation remains unchanged. A common approach to compensating for the bias caused by covariate shift is to reweight the loss function according to the importance, which is the ratio of test and training densities. We propose a novel method that allows us to directly estimate the importance from samples without going through the hard task of density estimation. An advant…
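To make the reweighting idea concrete, below is a minimal sketch of importance-weighted ridge regression under covariate shift. It assumes the importance weights w(x_i) = p_test(x_i) / p_train(x_i) have already been obtained by some direct estimator such as the method proposed in the paper; the function name and the ridge penalty are illustrative choices, not the paper's.

```python
import numpy as np

# Minimal sketch of importance-weighted least squares under covariate shift.
# `w` is assumed to hold estimated importance weights
# w(x_i) = p_test(x_i) / p_train(x_i); the estimator producing them is not
# shown here. Names and the ridge penalty are illustrative.

def importance_weighted_least_squares(X, y, w, lam=1e-3):
    """Solve min_theta sum_i w_i (y_i - x_i^T theta)^2 + lam ||theta||^2."""
    XtW = X.T * w                              # column i of X.T scaled by w_i
    A = XtW @ X + lam * np.eye(X.shape[1])     # X^T W X + lam I
    b = XtW @ y                                # X^T W y
    return np.linalg.solve(A, b)

# Toy usage: with uniform weights this reduces to ordinary ridge regression.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
theta = importance_weighted_least_squares(X, y, np.ones(100))
```

Under covariate shift, replacing the uniform weights with estimated density ratios corrects the bias of the empirical loss, at the cost of higher variance when the weights take extreme values.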

Cited by 105 publications, with 66 citation statements (1 supporting, 65 mentioning, 0 contrasting) published between 2009 and 2022. References 9 publications.

Citation statements, ordered by relevance:
“…Tsuboi et al. (2008) have pointed out a similar relation for the M-estimator based on the Kullback-Leibler divergence. Now we give an interpretation of (15) through an analogous optimization example in the Euclidean space.…”
Section: Kernel Mean Matching (KMM); citation type: mentioning
confidence: 57%
“…Covariate shift is a situation in supervised learning where the training and test input distributions are different while the conditional distribution of output remains unchanged [2]. In many real-world applications such as robot control [3], bioinformatics [4], spam filtering [5], natural language processing [6], brain-computer interfacing [7], and speaker identification [8], covariate shift adaptation has been shown to be useful. Covariate shift is also naturally induced in selective sampling or active learning scenarios and adaptation improves the generalization performance [9]- [12].…”
Section: Introduction; citation type: mentioning
confidence: 99%
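The following toy simulation illustrates the definition quoted above: the training and test input densities differ, while the conditional relation from input to output is shared. All distribution parameters are arbitrary illustrative choices, not values from the cited works.

```python
import numpy as np
from scipy.stats import norm

# Covariate shift toy example: p_train(x) != p_test(x), same p(y|x).
rng = np.random.default_rng(42)

def noisy_response(x, rng):
    # The same conditional relation y|x is used in both domains.
    return np.sinc(x) + 0.1 * rng.normal(size=x.shape)

x_train = rng.normal(loc=1.0, scale=0.5, size=200)   # training input density
x_test = rng.normal(loc=2.0, scale=0.3, size=200)    # shifted test input density
y_train = noisy_response(x_train, rng)
y_test = noisy_response(x_test, rng)

# Because both input densities are Gaussian here, the true importance
# w(x) = p_test(x) / p_train(x) is available for reference:
w_true = norm.pdf(x_train, loc=2.0, scale=0.3) / norm.pdf(x_train, loc=1.0, scale=0.5)
```

In real applications the two input densities are unknown, which is exactly why direct importance estimation methods are needed.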
“…Similarly, applying the direct density-ratio estimation method based on the log-loss called the Kullback-Leibler Importance Estimation Procedure (KLIEP) [28], [29], [37], [38], [43], [44], [48], we can obtain a log-loss variant of the proposed method. A variant of the KLIEP method explored in the papers [43], [44] uses a log-linear model (a.k.a. a maximum entropy model [16]) for density ratio estimation:…”
Section: Discussion; citation type: mentioning
confidence: 99%
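As a rough sketch of what such a log-linear estimator can look like, the ratio may be modeled as a normalized exponential of linear features and fitted by maximizing the average log-importance over test inputs, which is a concave problem. The Gaussian feature map, step size, and iteration count below are assumptions for illustration, not the exact formulation of the cited papers.

```python
import numpy as np

# KLIEP-style estimation with a log-linear (maximum-entropy) ratio model:
#   w(x) = exp(theta^T phi(x)) / ( (1/n_tr) sum_i exp(theta^T phi(x_i^tr)) ).
# Fitting maximizes the mean log-importance over test inputs (concave in theta).

def gaussian_features(X, centers, sigma=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def loglinear_kliep(X_tr, X_te, centers, sigma=1.0, lr=0.1, n_iter=500):
    Phi_tr = gaussian_features(X_tr, centers, sigma)   # (n_tr, b)
    Phi_te = gaussian_features(X_te, centers, sigma)   # (n_te, b)
    theta = np.zeros(Phi_tr.shape[1])
    for _ in range(n_iter):
        s = Phi_tr @ theta
        p = np.exp(s - s.max())
        p /= p.sum()                                   # softmax from the normalizer
        grad = Phi_te.mean(axis=0) - p @ Phi_tr        # gradient of the concave objective
        theta += lr * grad
    w_tr = np.exp(Phi_tr @ theta)
    return theta, w_tr * len(w_tr) / w_tr.sum()        # rescale so mean weight is 1
```

In KLIEP-type methods the basis centers are commonly chosen as a random subset of the test inputs, since the ratio needs to be accurate where the test density is large.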
“…Other methods of direct density ratio estimation [28], [29], [37], [38], [43], [44], [48] employ the Kullback-Leibler divergence [22] as the loss function instead of the squared loss. It is possible to use these methods for conditional density estimation in the same way as the proposed method, but doing so is computationally rather inefficient [17], [18].…”
Section: Other Methods of Density Ratio Estimation; citation type: mentioning
confidence: 99%
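For contrast, a squared-loss direct estimator in the style of unconstrained least-squares importance fitting (uLSIF) admits a closed-form solution, which is one source of the computational advantage alluded to above. The kernel width and regularization constant below are illustrative placeholders, not tuned values.

```python
import numpy as np

# Squared-loss direct density-ratio estimation (uLSIF-style sketch):
# model w(x) = alpha^T phi(x) and minimize the squared error against the
# true ratio, which reduces to a ridge-regularized linear system.

def squared_loss_ratio(X_tr, X_te, centers, sigma=1.0, lam=1e-3):
    def phi(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    Phi_tr, Phi_te = phi(X_tr), phi(X_te)
    H = Phi_tr.T @ Phi_tr / len(X_tr)          # approximates E_train[phi phi^T]
    h = Phi_te.mean(axis=0)                    # approximates E_test[phi]
    alpha = np.linalg.solve(H + lam * np.eye(len(h)), h)
    return np.maximum(Phi_tr @ alpha, 0.0)     # clip negatives to keep weights valid
```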