2021
DOI: 10.1109/tnnls.2020.2979607

Probabilistic Semi-Supervised Learning via Sparse Graph Structure Learning

Abstract: We focus on developing a novel scalable graph-based semi-supervised learning (SSL) method for a small number of labeled data and a large amount of unlabeled data. Due to the lack of labeled data and the availability of large-scale unlabeled data, existing SSL methods usually encounter either suboptimal performance because of an improper graph or the high computational complexity of the large-scale optimization problem. In this paper, we propose to address both challenging problems by constructing a proper grap…

Cited by 16 publications (3 citation statements)
References 44 publications
“…Li Wang presented a probabilistic semi-supervised learning (SSL) framework based on sparse graph structure learning, and proposed a simple inference approach for the embeddings of unlabeled data based on point estimation and kernel representation, which achieves promising results in the SSL setting compared with many existing methods, with significant improvements on small amounts of labeled data [11]. Wang Hong proposed a supervised discrete hashing algorithm that learns more stable hash codes by learning mutual similarity and exploiting the relationships between different semantic tags [12].…”
Section: Related Work and Our Contributions
confidence: 99%
“…In addition, the true positive rate (TPR) and false positive rate (FPR) indicators are usually introduced to evaluate the sensitivity and misjudgment of the machine learning model [20], as shown in formula (11).…”
Section: Performance Indicator
confidence: 99%
“…Sparse representation is an extremely effective method for exploiting the intrinsic structure of high-dimensional data [21], [25], [37], [43], [45]. The goal of sparse representation is to find a compact representation of high-dimensional data by selecting a small subset of a given dictionary, thereby identifying low-dimensional structures in high-dimensional data.…”
Section: Introduction
confidence: 99%