2020
DOI: 10.1016/j.knosys.2019.105330

Clustering by transmission learning from data density to label manifold with statistical diffusion

Cited by 24 publications (19 citation statements)
References 29 publications
“…(2) based on the designed regression model and the semantic distribution matching between each pair of domains, it not only provides robustness in the loss function but also preserves the domain distribution structures (both local and global), while maintaining a high dependence on the (pseudo-)label knowledge of the source domains and the target domain (Zhang et al, 2020a), so as to achieve better generalization performance; and (3) through our constructed correlation metric function, we can make full use of the correlative information among multiple sources and transfer more discriminative knowledge to the target domain. To implement these properties, in the following part we detail the objective formulation of the proposed method.…”
Section: Problem Statement
Confidence: 99%
“…The method exploits the correlated knowledge among domains and features through a joint ℓ2,1-norm and a correlation-metric regularization, and can simultaneously handle high-dimensional, sparse, outlier-contaminated, and non-i.i.d. EEG data. The designed method has three characteristics, which are integrated into a unified optimization formulation to find an effective emotion recognition model and align the feature distributions between the source and target domains: (1) by employing ℓ2,1-norm minimization, a robust loss term is introduced to suppress the influence of noise and outliers in the EEG signal, and a sparse regularization term is designed to mitigate over-fitting and select a sparse feature subset; (2) based on the designed regression model and the semantic distribution matching between each pair of domains, it not only provides robustness in the loss function but also preserves the domain distribution structures (both local and global), while maintaining a high dependence on the (pseudo-)label knowledge of the source domains and the target domain (Zhang et al, 2020a), so as to achieve better generalization performance; and (3) through our constructed correlation metric function, we can make full use of the correlative information among multiple sources and transfer more discriminative knowledge to the target domain. To implement these properties, in the following part we detail the objective formulation of the proposed method.…”
Section: Proposed Framework
Confidence: 99%
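The ℓ2,1-norm regularizer mentioned in the excerpt above is the sum of the Euclidean norms of a matrix's rows; minimizing it drives whole rows toward zero, which is why it selects a sparse feature subset. A minimal sketch (the function name `l21_norm` and the sample matrix are ours, not from the cited paper):

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: sum of the Euclidean (l2) norms of the rows of W.
    Used as a regularizer, it encourages row-sparsity, i.e. entire
    features are zeroed out rather than individual entries."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

W = np.array([[3.0, 4.0],   # row norm 5
              [0.0, 0.0],   # row norm 0 (a "deselected" feature)
              [1.0, 0.0]])  # row norm 1
print(l21_norm(W))  # 6.0
```

Unlike the Frobenius norm, which spreads shrinkage over all entries, the ℓ2,1-norm's row-wise grouping is what makes it suitable for feature selection in the robust loss described above.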
“…Before performing the numerical analysis, we need to reorganize the constraints among the variables to obtain meaningful conclusions. Statistical big data based on advanced product identification can capture the basic relationships among the variables [36–45]. Provided that all constraints are satisfied, it is reasonable to assume the following parameter values: potential initial market D = 1000, original production cost c = 10, price-sensitivity coefficient of the market k = 10, advertising sensitivity coefficient α = 0.7, and innovation sensitivity coefficient β = 0.065.…”
Section: Numerical Analysis
Confidence: 99%
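The excerpt above fixes the parameter values but does not reproduce the citing paper's demand model. A minimal sketch under an assumed linear demand form (the functional forms `demand` and `profit`, and the trial inputs, are our illustrative assumptions, not the paper's model):

```python
# Parameter values quoted in the excerpt
D = 1000      # potential initial market
c = 10        # original production cost
k = 10        # sensitivity coefficient of market to price
alpha = 0.7   # advertising sensitivity coefficient
beta = 0.065  # innovation sensitivity coefficient

def demand(p, a, i):
    """Hypothetical linear demand: base market D reduced by price p,
    boosted by advertising effort a and innovation effort i."""
    return D - k * p + alpha * a + beta * i

def profit(p, a, i):
    """Per-unit margin (p - c) times realized demand."""
    return (p - c) * demand(p, a, i)

print(profit(p=50.0, a=100.0, i=200.0))  # ≈ 23320 under these assumed inputs
```

This kind of parameterized sketch is what a numerical analysis section typically varies: sweep one input (e.g. price p) while holding the quoted coefficients fixed, and observe how the objective responds.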
“…Early studies detected lanes using mathematical models and traditional computer vision algorithms; for instance, many supervised and unsupervised approaches have been developed [4–7]. The research paradigm has since shifted towards nontraditional machine learning methods, namely deep learning.…”
Section: Introduction
Confidence: 99%