2015
DOI: 10.1007/s13042-015-0390-1
Semi-supervised classification with privileged information

Abstract: Privileged information, which is available only for the training examples and not for the test examples, is a concept proposed by Vapnik and Vashist (Neural Netw 22(5-6):544-557, 2009). With its help, learning using privileged information (LUPI) (Neural Netw 22(5-6):544-557, 2009) can significantly accelerate the speed of learning. However, LUPI is a standard supervised learning method, while many real-world problems also provide a large amount of unlabeled data. This…
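The training-time-only nature of privileged information can be illustrated with a toy sketch. The example below is a hypothetical, simplified stand-in for the LUPI idea (a confidence-weighted perceptron, not the paper's actual algorithm or SVM+): each training example carries regular features x, a privileged feature x_star, and a label y, while prediction uses x alone. All names are illustrative assumptions.

```python
import random

rng = random.Random(0)

def make_example(rng):
    # Regular features x and label y; the privileged feature x_star is the
    # distance to the true decision boundary, known only at training time.
    x = (rng.gauss(0, 1), rng.gauss(0, 1))
    y = 1 if x[0] + x[1] > 0 else -1
    x_star = abs(x[0] + x[1])
    return x, x_star, y

def train_lupi(data, epochs=20, lr=0.1):
    """Toy LUPI-style learner: the privileged feature acts as a 'teacher',
    scaling each perceptron update by how reliable the example is
    (far from the boundary = trustworthy, near = likely slack)."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, x_star, y in data:
            if y * (w[0] * x[0] + w[1] * x[1]) <= 0:
                conf = x_star / (1.0 + x_star)  # teacher's confidence in [0, 1)
                w[0] += lr * conf * y * x[0]
                w[1] += lr * conf * y * x[1]
    return w

def predict(w, x):
    # Test-time prediction uses only the regular features x.
    return 1 if w[0] * x[0] + w[1] * x[1] > 0 else -1

train = [make_example(rng) for _ in range(200)]
test = [make_example(rng) for _ in range(100)]
w = train_lupi(train)
acc = sum(predict(w, x) == y for x, _, y in test) / len(test)
```

The key design point, mirroring the abstract, is the asymmetry: `train_lupi` consumes `x_star`, but `predict` never sees it.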


Year Published: 2017-2023

Cited by 8 publications (2 citation statements)
References 10 publications
“…These mechanisms can be used for accelerating the speed of learning. In Qi et al (2015), a semi-supervised learning approach using privileged information is proposed. This approach can exploit both the distribution information in unlabeled data and privileged information, to improve the efficiency of the learning.…”
Section: Related Work
confidence: 99%
“…In our future work, we intend to pursue extensive empirical experiments in order to compare the proposed self-labeled method AAST with various methods, belonging to other SSL classes such as generative mixture models [14,41], transductive SVMs [42][43][44], graph-based methods [45][46][47][48][49], extreme learning methods [50][51][52], expectation maximization with generative mixture models [14,53]. Furthermore, since our experimental results are quite encouraging, our next step is the use of other supervised classifiers as base learners, such as neural networks [54] and support vector machines [55] or ensemble-based learners [26] aiming to enhance our proposed framework with more sophisticated and theoretically motivated selection criteria for the most promising classifier in order to study the behavior of AAST at each cycle.…”
Section: Conclusion and Future Research
confidence: 99%