2014
DOI: 10.1007/s00357-014-9147-x
Adaptive Mixture Discriminant Analysis for Supervised Learning with Unobserved Classes

Abstract: In supervised learning, an important issue usually not taken into account by classical methods is the possibility that the test set contains individuals belonging to a class that was not observed during the learning phase. Classical supervised algorithms will automatically label such observations as belonging to one of the known classes in the training set and will not be able to detect new classes. This work introduces a model-based discriminant analysis method, called adaptive mixture discriminant analysis…
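The abstract's core idea, that a classifier should be able to reject observations that fit none of the known classes rather than force them into one, can be illustrated with a minimal sketch. This shows only the general principle, not the paper's actual AMDA algorithm: the per-class Gaussian fit, the log-density threshold, and all numeric values below are illustrative assumptions.

```python
import numpy as np

def fit_class_gaussians(X, y):
    """Fit one Gaussian (mean, covariance) per observed class."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0),
                     np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1]))
    return params

def log_gauss(x, mu, cov):
    """Log-density of a multivariate normal at point x."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

def classify_with_rejection(x, params, threshold):
    """Return the most likely known class, or 'novel' if every class density is too low."""
    scores = {c: log_gauss(x, mu, cov) for c, (mu, cov) in params.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "novel"

# Two known classes in the training data; a far-away test point fits neither.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
params = fit_class_gaussians(X, y)

print(classify_with_rejection(np.array([0.1, -0.2]), params, threshold=-8.0))
print(classify_with_rejection(np.array([20.0, 20.0]), params, threshold=-8.0))
```

A plain maximum-likelihood classifier would assign the second point to class 1 regardless of how implausible it is; the rejection threshold is what makes novel-class detection possible.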

Cited by 37 publications (42 citation statements); references 39 publications.
“…By letting these two parameters tend to infinity, we can also recover the degenerate case P_j^Tr = δ_{θ̂_j}, where the Dirac delta denotes a point mass centered at θ̂_j. That is, the prior beliefs extracted from the training set can be flexibly updated by gradually transitioning from transductive to inductive inference by increasing λ^Tr and ν^Tr (Bouveyron 2014). Similarly, we set H ≡ NIW(m₀, λ₀, ν₀, S₀), where the hyperparameters are chosen to induce a flat prior for the novel components.…”
Section: Stage II: BNP Novelty Detection in Test Data
confidence: 99%
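The transition described in this excerpt, from a diffuse prior toward a point mass at the training estimate, can be sketched for the mean part of a normal-inverse-Wishart prior, where conditionally μ | Σ ~ N(m̂, Σ/λ). The specific values of m̂ and Σ below are illustrative assumptions, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)
m_hat = np.array([2.0, -1.0])   # mean estimated from the training set (assumed)
Sigma = np.eye(2)               # covariance held fixed for illustration

def draw_means(lam, n=2000):
    """Draw class means from the conditional prior mu ~ N(m_hat, Sigma / lam)."""
    return rng.multivariate_normal(m_hat, Sigma / lam, size=n)

# Increasing lambda shrinks the prior spread around the training estimate,
# moving from a diffuse prior toward a point mass (the degenerate delta case).
for lam in (1, 100, 10000):
    spread = draw_means(lam).std(axis=0).mean()
    print(f"lambda={lam:>5}: average std of drawn means = {spread:.4f}")
```

The spread decays like 1/sqrt(λ), so in the limit every draw coincides with m̂, which is exactly the point-mass (Dirac delta) case the excerpt describes.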
“…In the training set, we identify as outliers units with implausible labels and/or values. Cappozzo et al (2020) extend the work of Bouveyron (2014), addressing this problem with a robust estimator that relies on impartial trimming (Gordaliza 1991). In short, the data points that are most unlikely under the currently estimated model are discarded.…”
Section: Introduction
confidence: 99%
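The impartial-trimming idea quoted above, repeatedly discarding the points that are most unlikely under the current fit, can be illustrated for a single Gaussian. This is a simplified sketch, not the estimator of Cappozzo et al (2020); the trimming level alpha, the iteration count, and the simulated data are assumptions.

```python
import numpy as np

def trimmed_gaussian_fit(X, alpha=0.1, n_iter=10):
    """Concentration steps: repeatedly fit a Gaussian to the kept points,
    then discard the fraction alpha of points least likely under the fit."""
    keep = np.ones(len(X), dtype=bool)
    for _ in range(n_iter):
        mu = X[keep].mean(axis=0)
        cov = np.cov(X[keep], rowvar=False) + 1e-6 * np.eye(X.shape[1])
        d = X - mu
        mahal = np.einsum("ij,ij->i", d @ np.linalg.inv(cov), d)
        cutoff = np.quantile(mahal, 1 - alpha)   # trim the alpha most unlikely points
        keep = mahal <= cutoff
    return mu, keep

rng = np.random.default_rng(2)
inliers = rng.normal(0.0, 1.0, (95, 2))
outliers = rng.normal(12.0, 0.5, (5, 2))   # units with implausible values
X = np.vstack([inliers, outliers])

mu, keep = trimmed_gaussian_fit(X, alpha=0.1)
print("estimated centre:", mu.round(2))
print("outliers still kept:", keep[95:].sum())
```

Without trimming, the five contaminating points would drag the estimated centre and inflate the covariance; the trimmed fit recovers the centre of the clean bulk of the data.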
“…Accordingly, an EDDA model is designed and evaluated for the prediction of student risk within course environments, treated as the hard categories "at risk" and "not at risk". Classical supervised classification algorithms do not take into consideration that unlabeled data may belong to an unobserved class [22]. EDDA is capable of autonomously discovering unobserved latent engagement and assigning these unlabeled data to one of the classes.…”
Section: Experiments Setup
confidence: 99%
“…EDDA is capable of autonomously discovering unobserved latent engagement and assigning these unlabeled data to one of the classes. The mixture model is a powerful inference framework that can approximate high-dimensional data as a linear combination of multiple Gaussian components [22] [23].…”
Section: Experiments Setup
confidence: 99%
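The description of a mixture model as a linear combination of Gaussian components can be made concrete with a short one-dimensional sketch; the weights, means, and variances below are arbitrary illustrative values, not parameters from any cited work.

```python
import numpy as np

def gauss_pdf(x, mu, var):
    """Density of a univariate normal N(mu, var)."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mixture_pdf(x, weights, means, variances):
    """Gaussian mixture density: a weighted sum of component densities."""
    return sum(w * gauss_pdf(x, m, v) for w, m, v in zip(weights, means, variances))

# Two-component mixture: 30% of the mass around -2, 70% around +3.
weights, means, variances = [0.3, 0.7], [-2.0, 3.0], [1.0, 0.5]
x = np.linspace(-6.0, 8.0, 1401)
density = mixture_pdf(x, weights, means, variances)

# The weights sum to one, so the mixture is itself a valid density.
print("integral ~", round(float(density.sum() * (x[1] - x[0])), 3))
```

Because each component is a valid density and the weights sum to one, the weighted sum integrates to one as well; with more components, the same construction can approximate highly complex, multimodal distributions.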
“…Applications of GMM in data monitoring can also be found in the literature, for example in sensor monitoring (Zhu et al 2014) and in fault detection and diagnosis (Jiang et al 2016; Yu 2012). Other applications include data classification (Bouveyron 2014), image segmentation (Greggio et al 2011), and many more.…”
Section: Introduction
confidence: 99%