2022
DOI: 10.1007/s10115-022-01702-8

Graph-based PU learning for binary and multiclass classification without class prior

Cited by 4 publications (5 citation statements)
References 23 publications
“…The results are presented in Tables VI and VII, respectively. It can be seen that the generating ability of our proposed style loss is significantly better than that of 3C-GAN [32] and KEGNET [31], with respect to solving the problem of source domain generation in UniDA without source data. In addition, our style loss maintains a more prominent and uniform performance in per-class accuracy.…”
Section: Comparison of Different Data Diversity Generation Schemes
confidence: 94%
“…Thus, D_f = {arg max_x p(x | y, z)}. Based on the Bayesian theory [31], [53], arg max_x p(x | y, z) can be expressed as follows:…”
Section: B. Source Data Generation
confidence: 99%
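The quoted snippet is cut off before the cited paper's actual expansion. Purely as a general illustration of the Bayes-rule step it alludes to (not the specific factorization used in that work), one can write

arg max_x p(x | y, z) = arg max_x [ p(y | x, z) p(x | z) / p(y | z) ] = arg max_x p(y | x, z) p(x | z),

since p(y | z) does not depend on x and can be dropped from the maximization.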
“…On the other hand, PU-LP [Jaemin et al. 2022], which has a time complexity of O(n^2 log n), is a graph-based algorithm that utilizes a similarity matrix created using the k-Nearest Neighbors approach. It identifies the unlabeled nodes with the lowest similarity scores and designates them as reliable negative examples.…”
Section: Positive Unlabeled Learning
confidence: 99%
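To make the reliable-negative selection step described in that snippet concrete, here is a minimal Python sketch under stated assumptions: it builds a k-Nearest Neighbors similarity graph and flags the unlabeled nodes least similar to the labeled positives. The function name, the Gaussian weighting of k-NN distances, and all parameters are illustrative choices, not the PU-LP authors' implementation.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_reliable_negatives(X, positive_idx, k=10, n_negatives=50):
    # X: (n_samples, n_features) feature matrix; positive_idx: indices of labeled positives.
    n = X.shape[0]
    dist, ind = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)

    # Symmetric k-NN similarity matrix with a Gaussian kernel on neighbor distances.
    sigma = np.median(dist[:, 1:]) + 1e-12
    W = np.zeros((n, n))
    for i in range(n):
        for j, d in zip(ind[i, 1:], dist[i, 1:]):        # skip self (first neighbor)
            w = max(W[i, j], W[j, i], np.exp(-(d ** 2) / (2 * sigma ** 2)))
            W[i, j] = W[j, i] = w

    # Score each unlabeled node by its total similarity to the positive set;
    # the lowest-scoring nodes are taken as reliable negatives.
    positive_idx = np.asarray(positive_idx)
    unlabeled_idx = np.setdiff1d(np.arange(n), positive_idx)
    scores = W[np.ix_(unlabeled_idx, positive_idx)].sum(axis=1)
    return unlabeled_idx[np.argsort(scores)[:n_negatives]]

In a full PU pipeline, the returned reliable negatives would typically feed a downstream classifier or propagation step; that part goes beyond what the quoted snippet describes and is omitted here.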