2014
DOI: 10.1016/j.ins.2014.06.017
Multi-granularity distance metric learning via neighborhood granule margin maximization

Cited by 34 publications (9 citation statements)
References 44 publications
“…In this paper, the multi-granularity structure will be constructed by employing different radii. As reported in [16,27], a radius-based neighborhood forms an information granule; the neighborhood-based single granularity can then be constructed by applying one and only one radius, and it follows that a neighborhood-based multi-granularity can be constructed by applying a set of different radii. In this process, applying a smaller radius generates a finer information granule, while applying a greater radius generates a coarser one.…”
Section: Introduction
confidence: 98%
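The radius-based construction described in the quote above can be sketched in a few lines. This is a minimal illustration, not code from the paper: the function name `neighborhood_granule` and the toy data are assumptions. A granule is simply the index set of samples within distance `radius` of a chosen sample, so a smaller radius yields a granule that is a subset of the one produced by a larger radius.

```python
import numpy as np

def neighborhood_granule(X, i, radius):
    """Indices of samples within `radius` of sample i (Euclidean distance)."""
    dists = np.linalg.norm(X - X[i], axis=1)
    return set(np.flatnonzero(dists <= radius))

# Toy data: four 2-D samples.
X = np.array([[0.0, 0.0],
              [0.1, 0.0],
              [0.5, 0.5],
              [1.0, 1.0]])

fine = neighborhood_granule(X, 0, radius=0.2)    # finer granule: {0, 1}
coarse = neighborhood_granule(X, 0, radius=0.8)  # coarser granule: {0, 1, 2}
assert fine <= coarse  # a smaller radius always yields a finer (nested) granule
```

Sweeping a set of radii, and collecting one granule per radius, gives the multi-granularity structure the quoted passage refers to.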
“…However, it is worth noting that they all focus on one and only one fixed parameter in attribute reduction [9][10][11], such as the fixed Gaussian kernel parameter in fuzzy rough sets or the fixed radius in neighborhood rough sets. From the viewpoint of Granular Computing (GrC) [12][13][14][15][16], applying one and only one parameter in a rough set can only reflect the information over a fixed granularity [17][18][19]. Therefore, attribute reduction over a fixed granularity can be termed single-granularity attribute reduction.…”
Section: Introduction
confidence: 99%
“…According to this property, many knowledge discovery and data mining approaches involving parameter-based granularity have been explored. For example, Liu et al. [21] designed a multi-granularity feature selection framework that considers the variation of parameters; Zhu et al. [40] learned an effective distance metric based on neighborhood granule margin maximization over a group of parameters; Zhu et al. [41] also presented an adaptive selection scheme for neighborhood granularity (i.e., parameters) and provided a solution to margin distribution optimization. Nevertheless, these approaches only exploited the external representations of the parameters, rather than describing the reflected granularity and the associated discrimination ability of knowledge.…”
Section: Introduction
confidence: 99%
“…However, different data usually prefer different distance metrics to reflect different semantic concepts of dissimilarity or similarity in the context of the problem, and hence adapting the distance metric to the data can be expected to improve the classification performance of NSM. On the other hand, distance metric learning methods emerging in the machine learning community provide a tool to learn tailored distance metrics automatically from data and thereby improve classification performance [23,21,26,19,24].…”
Section: Introduction
confidence: 99%
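The idea of "adapting the distance metric to the data" usually means parameterizing the distance by a positive semidefinite matrix M and learning M, i.e., a Mahalanobis distance d_M(x, y) = sqrt((x − y)ᵀ M (x − y)). The sketch below is illustrative only (it is not the margin-maximization algorithm of the cited paper): the function name and the hand-picked M are assumptions, chosen to show how a learned M can reweight features relative to the Euclidean metric (M = I).

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Mahalanobis distance under PSD matrix M; M = I recovers Euclidean."""
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

x = np.array([0.0, 0.0])
y = np.array([1.0, 1.0])

M_euclidean = np.eye(2)              # plain Euclidean metric
M_learned = np.diag([1.0, 0.01])     # hypothetical learned metric that
                                     # downweights the second feature

d_euc = mahalanobis_dist(x, y, M_euclidean)  # sqrt(2)
d_lrn = mahalanobis_dist(x, y, M_learned)    # sqrt(1.01)
```

A metric-learning method searches for the M (often factored as M = LᵀL) under which same-class samples fall inside a margin and different-class samples fall outside it; once learned, M simply replaces the identity in the nearest-neighbor distance computation.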