2018
DOI: 10.1527/tjsai.d-h92

Learning Future Classifiers without Additional Data

Abstract: We propose probabilistic models for predicting future classifiers given labeled data with timestamps collected until the current time. In some applications, the decision boundary changes over time. For example, in activity recognition using sensor data, the decision boundary can shift as user activity patterns change dynamically. Existing methods require additional labeled and/or unlabeled data to learn a time-evolving decision boundary. However, collecting these data can be expensive or impossible. B…

Cited by 5 publications (8 citation statements) · References 22 publications
“…Our work also closely relates to the field of Domain Adaptation, specifically the task of predicting future classifiers given past labeled data with timestamps [6], [8]. These are complementary to our work, as we use such solutions as a component of our framework.…”
Section: System and Demonstration Overview
confidence: 94%
“…Hence, the method cannot discover relationships between domains in an end-to-end fashion. Other methods, such as [7] and [8], consider only a logistic regression classifier and model its parameters as stochastic processes such as vector autoregression (VAR) or a Gaussian process (GP). These methods require computing the posterior of the GP-smoothed parameters, and this inference does not scale to multi-layered neural networks.…”
Section: Related Work
confidence: 99%
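To make the "model the parameters as a stochastic process" idea concrete, here is a minimal numpy-only sketch (not any cited paper's implementation; the rotating synthetic boundary, sample sizes, and all names are illustrative assumptions): a logistic regression is refit at each past time step, a VAR(1) model is estimated on the resulting weight sequence by least squares, and the weights are extrapolated one step into the future without any new data.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logreg(X, y, lr=0.5, steps=500):
    """Plain gradient-descent logistic regression; returns the weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Synthetic drifting task: the true decision boundary rotates each time step.
T, n, d = 8, 200, 2
weights = []
for t in range(T):
    theta = 0.3 * t                        # slow rotation of the boundary
    w_true = np.array([np.cos(theta), np.sin(theta)])
    X = rng.normal(size=(n, d))
    y = (X @ w_true > 0).astype(float)
    weights.append(fit_logreg(X, y))
W = np.vstack(weights)                     # shape (T, d): one weight row per step

# VAR(1): w_{t+1} ≈ w_t A, with A estimated by least squares over past steps.
A, *_ = np.linalg.lstsq(W[:-1], W[1:], rcond=None)
w_future = W[-1] @ A                       # one-step extrapolation, no new data

# Evaluate on data drawn from the *next* (unseen) time step.
theta = 0.3 * T
w_true = np.array([np.cos(theta), np.sin(theta)])
X_test = rng.normal(size=(500, d))
y_test = (X_test @ w_true > 0).astype(float)
acc = np.mean((X_test @ w_future > 0) == y_test)
```

Because a rotating boundary is exactly linear in the weights, a VAR(1) point estimate tracks the dynamics well on this toy; real drift is noisier, which is why the cited methods place a posterior over the process rather than relying on a single least-squares fit.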
“…However, these domain adaptation approaches assume the presence of unlabeled data in the future, which is not available in our case. Another set of approaches aims to make network parameters a function of time [7,8,9], but they require external smoothness kernels as hyper-parameters for each parameter type, which cannot be easily trained end-to-end on deep neural networks.…”
Section: Introduction
confidence: 99%
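The "external smoothness kernel" objection above can be illustrated with a tiny Gaussian-process sketch (an assumption-laden toy, not the cited methods' implementation): a single scalar parameter is treated as a smooth function of time, and its value at a future timestamp is the GP posterior mean under an RBF kernel. The length-scale `ell` is exactly the kind of hand-set smoothness hyper-parameter the quote objects to.

```python
import numpy as np

def rbf(a, b, ell=2.0):
    """Squared-exponential kernel; ell controls assumed temporal smoothness."""
    diff = a[:, None] - b[None, :]
    return np.exp(-0.5 * (diff / ell) ** 2)

# Observed trajectory of one classifier parameter at past timestamps.
t_obs = np.arange(8.0)
w_obs = np.sin(0.3 * t_obs)            # stand-in for fitted weights over time

# GP posterior mean at a future timestamp (near-noiseless observations;
# the small jitter term keeps the kernel matrix well conditioned).
t_new = np.array([9.0])
K = rbf(t_obs, t_obs) + 1e-6 * np.eye(len(t_obs))
k_star = rbf(t_new, t_obs)
w_pred = k_star @ np.linalg.solve(K, w_obs)
```

Maintaining such a posterior for every weight of a deep network, each with its own kernel hyper-parameters, is what the quoted statement argues does not scale.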
“…In contrast, the proposed method can infer domain-specific models by using the set of feature vectors in each domain. Some methods assume that additional semantic descriptors representing domains are available, such as time information [19,17,18], device and location information [42], and instructions for reinforcement learning [28]. Although these semantic descriptors enhance the generalization ability, such data cannot be used under some circumstances.…”
Section: Related Work
confidence: 99%
“…It is important to infer an appropriate domain-specific model for each domain in the training phase, since the characteristics of each domain differ. Existing methods infer such models by using additional semantic descriptors characterizing domains, such as time information [19,17,18], device and location information [42], and pose and illumination information [29]. Although they improve performance, assuming semantic descriptors restricts the applicability of zero-shot domain adaptation because semantic descriptors cannot always be obtained.…”
Section: Introduction
confidence: 99%