2022
DOI: 10.1007/s00362-022-01298-9
Admissible linear estimators in the general Gauss–Markov model under generalized extended balanced loss function

Cited by 2 publications (2 citation statements). References 38 publications.
“…For the silver dataset, if it is directly merged with the gold dataset, on the one hand it will change the label distribution and may even introduce outliers, and on the other hand the noisy samples will impair the effectiveness of the relation classifier. For this reason, this paper constructs a contrastive learning loss function for the silver dataset so as to use its label information indirectly [32][33]. First of all, the dataset includes samples of three quality levels (gold, silver, and bronze), which carry real labels and pseudo-labels obtained through templates, respectively.…”
Section: Classification of Relationships Based on Template Quality
confidence: 99%
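The idea quoted above, using label information only indirectly through a contrastive objective that pulls same-label samples together and pushes different-label samples apart, can be illustrated with a minimal sketch. This is a generic supervised contrastive loss on toy data, not the cited paper's exact formulation; all names and the temperature value are illustrative assumptions.

```python
import numpy as np

def contrastive_loss(embeddings, labels, temperature=0.1):
    """Generic supervised contrastive loss (a sketch, not the cited
    paper's exact objective): for each anchor, same-label samples are
    treated as positives and all other samples as the contrast set."""
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    n = len(labels)
    total, anchors = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # an anchor with no positive pair contributes nothing
        others = [j for j in range(n) if j != i]
        # log of the softmax denominator over every non-anchor sample
        log_denom = np.log(np.exp(sim[i, others]).sum())
        # average negative log-probability of the positives
        total += -sum(sim[i, j] - log_denom for j in positives) / len(positives)
        anchors += 1
    return total / anchors
```

With tightly clustered same-label embeddings the loss is small; when positives are far apart and negatives are close, it grows, which is what lets noisy "silver" samples be used without merging them into the gold set directly.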
“…For logistic regression, the loss function is usually given two interpretations: cross-entropy, grounded in information theory, and maximum likelihood estimation used to solve for the parameters [26][27].…”
Section: Loss Function
confidence: 99%
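The equivalence noted in the excerpt above can be checked numerically: for binary logistic regression, the cross-entropy loss and the negative average Bernoulli log-likelihood are the same function of the parameters. The sketch below uses synthetic data and assumed function names.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def cross_entropy_loss(w, X, y):
    """Binary cross-entropy (information-theoretic view)."""
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def neg_log_likelihood(w, X, y):
    """Negative average log-likelihood of the Bernoulli model
    P(y=1 | x) = sigmoid(w @ x) (maximum-likelihood view)."""
    p = sigmoid(X @ w)
    return -np.mean(np.log(np.where(y == 1, p, 1 - p)))
```

For any weights `w`, the two functions return identical values, so minimizing cross-entropy is exactly maximum likelihood estimation for this model.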