2004
DOI: 10.1016/s0031-3203(04)00041-x

Dissimilarity learning for nominal data

Abstract: Defining a good distance (dissimilarity) measure between patterns is of crucial importance in many classification and clustering algorithms. While a lot of work has been performed on continuous attributes, nominal attributes are more difficult to handle. A popular approach is to use the value difference metric (VDM) to define a real-valued distance measure on nominal values. However, VDM treats the attributes separately and ignores any possible interactions among attributes. In this paper, we propose the use o…
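
Since VDM is central to both the abstract and the citation excerpts below, a minimal Python sketch of the classic value difference metric (Wilson and Martinez 1997) may help make the idea concrete. This is an illustrative reconstruction, not code from the paper: the function name, the toy data, and the exponent q=2 are assumptions.

```python
from collections import Counter, defaultdict

def vdm(values, labels, a, b, q=2):
    """Value Difference Metric between two nominal values a and b of one
    attribute: vdm(a, b) = sum_c |P(c|a) - P(c|b)|^q, where P(c|a) is the
    fraction of training samples with value a that belong to class c.
    Assumes both a and b occur at least once in `values`."""
    # Count how often each value occurs, and each (value, class) pair.
    n_value = Counter(values)
    n_value_class = defaultdict(Counter)
    for v, c in zip(values, labels):
        n_value_class[v][c] += 1
    classes = set(labels)
    return sum(
        abs(n_value_class[a][c] / n_value[a]
            - n_value_class[b][c] / n_value[b]) ** q
        for c in classes
    )

# Toy example: one nominal attribute "color" with class labels.
colors = ["red", "red", "blue", "blue", "green", "green"]
labels = ["pos", "pos", "neg", "neg", "pos", "neg"]
print(vdm(colors, labels, "red", "blue"))   # 2.0: opposite class profiles
print(vdm(colors, labels, "red", "green"))  # 0.5: partly similar profiles
```

Note how "red" and "green" come out closer than "red" and "blue": two values are similar exactly when they induce similar class distributions, which is the behaviour the excerpts below describe.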

Cited by 8 publications (12 citation statements)
References 10 publications
“…However, this metric implies a loss of information because it considers all attribute values to be equidistant from each other, not taking into account different degrees of similarity. There exist alternative dissimilarity measures such as the Value Difference Metric (VDM) (Wilson and Martinez 1997) and the Adaptive Dissimilarity Matrix (ADM) (Cheng et al 2004), among others. For the VDM, two feature values are considered to be closer if they have similar classifications (i.e.…”
Section: RBFNs and the Classification Problem Framework (mentioning)
confidence: 99%
“…On the other hand, ADM takes into account the possible correlation between the attributes. In Cheng et al (2004), the RBFN model is used to test the efficiency of the OM, VDM and ADM dissimilarity measures and it is demonstrated that VDM and ADM outperform OM. It is more difficult to conclude which dissimilarity measure is the most efficient when comparing VDM and ADM results.…”
Section: RBFNs and the Classification Problem Framework (mentioning)
confidence: 99%
“…Of course, one could define a degree of quasi-oppositeness and determine the slope of gradual increase/decrease based on problem specifications. This, however, would fall into the scope of similarity/dissimilarity measures, which have been extensively investigated in many other research fields [5,10,19,20].…”
Section: Definition 4 (Type-I Quasi-Opposition) (mentioning)
confidence: 99%
“…For example, the well-known bisection method [4] for solving equations can be considered an I-OBC algorithm since it shrinks the search interval by looking at a positive versus negative sign change. Likewise, using Bayes' theorem [2] is an implicit usage of the oppositional relationship between the conditional probabilities p(A|B) and p(B|A). A different, stronger version of implicit incorporation of oppositional concepts is Bayesian Yin-Yang Harmony Learning, which is based on the alternative viewpoints of Bayes' rule (Chapter 10).…”
Section: Definition 11 (Implicit OBC Algorithms) (mentioning)
confidence: 99%
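
For reference on the oppositional pairing this excerpt points at: Bayes' theorem relates the two conditionals as P(A|B) = P(B|A) · P(A) / P(B), so each conditional probability can be recovered from its "opposite" together with the marginals P(A) and P(B).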
“…However, for nominal attributes (such as rank information), definitions of (dis)similarity become non-trivial [55]. A commonly used approach is the overlap metric [202], where, for two possible values v_x and v_y, the distance is assigned as zero when v_x and v_y are identical and one if they are different.”
Section: Determining Similarity (mentioning)
confidence: 99%
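
To make the quoted rule concrete, here is a minimal Python sketch of the overlap metric applied to whole feature vectors. The per-attribute rule (0 if the values are identical, 1 otherwise) is exactly as the excerpt describes; summing the per-attribute contributions over a vector is a common convention, and the example records and the function name are hypothetical.

```python
def overlap_distance(x, y):
    """Overlap metric for nominal feature vectors: each attribute
    contributes 0 if the two values are identical and 1 otherwise,
    so the total is simply the number of mismatched attributes."""
    if len(x) != len(y):
        raise ValueError("vectors must have the same length")
    return sum(int(a != b) for a, b in zip(x, y))

# Hypothetical records: (color, shape, size)
print(overlap_distance(("red", "round", "small"),
                       ("red", "square", "small")))  # -> 1
```

This is the baseline that VDM and ADM improve on: every pair of distinct values is treated as equally far apart, which is the loss of information the first excerpt above criticises.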