2005
DOI: 10.1007/11492542_8

Dynamic and Static Weighting in Classifier Fusion

Cited by 19 publications (11 citation statements) · References 11 publications

“…If the classifier outputs are interpreted as fuzzy membership values, fuzzy approaches could be used. It is also possible to train the combiner separately, using outputs of the base classifiers as new features [6]. One of the earliest works on MCS dates back to Dasarathy and Sheela's 1979 work, which discussed the idea of partitioning the feature space using two or more classifiers [7].…”
Section: Fig. 1 Hierarchy of MCS Fusion Methods (mentioning)
confidence: 99%
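The trained-combiner idea in this excerpt is commonly known as stacked generalization. Below is a minimal sketch under our own assumptions (scikit-learn models, the Iris toy dataset, class-probability outputs as meta-features); it illustrates the general technique, not the specific method of [6]:

```python
# Minimal stacking sketch: base-classifier outputs become features for a combiner.
# Model and dataset choices are illustrative only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the base classifiers on the original features.
bases = [DecisionTreeClassifier(random_state=0), KNeighborsClassifier(n_neighbors=5)]
for clf in bases:
    clf.fit(X_train, y_train)

# Their class-probability outputs form the new feature space for the combiner.
meta_train = np.hstack([clf.predict_proba(X_train) for clf in bases])
meta_test = np.hstack([clf.predict_proba(X_test) for clf in bases])

combiner = LogisticRegression(max_iter=1000).fit(meta_train, y_train)
print("combiner accuracy:", combiner.score(meta_test, y_test))
```

Note that fitting the combiner on the same data the base classifiers saw during training can overfit; stacking implementations usually build the meta-features from out-of-fold predictions instead.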
“…A weighted k-NN rule for classifying new patterns was first proposed by Dudani [16]. The votes of the k nearest neighbors are weighted by a function of their distance to the test pattern.…”
Section: Fuzzy Match Score of Semantic Service Match (mentioning)
confidence: 99%
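Dudani's rule assigns each of the k neighbors a weight that decreases linearly with distance, from 1 for the nearest neighbor to 0 for the farthest. A small sketch, assuming NumPy; the function and variable names are ours:

```python
import numpy as np

def dudani_weighted_knn(dists, labels):
    """Classify one test pattern from its k nearest neighbors.

    dists  -- distances of the k neighbors, sorted ascending
    labels -- class labels of those neighbors, in the same order
    Implements Dudani's linear weighting: the nearest neighbor gets
    weight 1, the farthest gets 0; if all are equidistant, weights are equal.
    """
    dists = np.asarray(dists, dtype=float)
    labels = np.asarray(labels)
    d1, dk = dists[0], dists[-1]
    if dk == d1:                          # all neighbors equidistant
        weights = np.ones_like(dists)
    else:
        weights = (dk - dists) / (dk - d1)
    # Sum the weights per class and return the class with the largest vote.
    classes = np.unique(labels)
    votes = [weights[labels == c].sum() for c in classes]
    return classes[int(np.argmax(votes))]

# Example: three neighbors at distances 0.2, 0.5, 1.0 with labels A, B, A.
print(dudani_weighted_knn([0.2, 0.5, 1.0], ["A", "B", "A"]))  # -> 'A'
```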
“…The input patterns are partitioned, and the best classifier is nominated for each partition. While a static fusion method employs constant weights for each classifier based on training, a dynamic fusion method changes the weights of each classifier based on the observed test pattern [12]. For example, the distance of a test pattern to its nearest neighbor for each individual classifier may be used to compute the dynamic weights.…”
Section: Previous Research (mentioning)
confidence: 99%
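To make the dynamic scheme concrete: one simple realization weights each classifier by the inverse of the distance from the test pattern to its nearest training neighbor, so a classifier whose training data lies close to the observed pattern dominates the fusion. This is only a sketch of that idea, not the exact rule of [12]; the inverse-distance normalization and all names are our assumptions:

```python
import numpy as np

def dynamic_weights(test_x, reference_sets):
    """Per-classifier weights from nearest-neighbor distances.

    test_x         -- the observed test pattern, shape (d,)
    reference_sets -- one training-pattern array per classifier,
                      each of shape (n_i, d)
    A classifier whose training data lies close to the test pattern
    receives a larger weight; weights are normalized to sum to 1.
    """
    nn_dists = np.array([
        np.min(np.linalg.norm(refs - test_x, axis=1))
        for refs in reference_sets
    ])
    inv = 1.0 / (nn_dists + 1e-12)    # avoid division by zero
    return inv / inv.sum()

# Example: classifier 0's data is closer to the test pattern, so it dominates.
refs0 = np.array([[0.0, 0.1], [0.2, 0.0]])
refs1 = np.array([[2.0, 2.0], [3.0, 1.5]])
print(dynamic_weights(np.array([0.0, 0.0]), [refs0, refs1]))
```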
“…The process involves selecting the minimum value in each column of the DP matrix and then declaring the classifier with the maximum of these column minima the best classifier. This method is used in [12] to classify meeting events.…”
Section: Previous Research (mentioning)
confidence: 99%
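Read as a max-min rule, the selection can be stated in a few lines. In this sketch we assume each column of the DP matrix holds one classifier's support values; that layout, like the names below, is an assumption for illustration:

```python
import numpy as np

def max_min_select(dp):
    """Max-min selection over a decision-profile (DP) matrix.

    dp -- 2-D array whose column j is assumed to hold the support
          values produced by classifier j.
    Take the minimum of each column, then return the index of the
    classifier whose column minimum is largest.
    """
    col_mins = dp.min(axis=0)          # worst-case support per classifier
    return int(np.argmax(col_mins))    # best of the worst cases

# Example with three classifiers (columns): column minima are
# 0.3, 0.4, 0.5, so classifier 2 is selected.
dp = np.array([[0.9, 0.4, 0.6],
               [0.3, 0.8, 0.5],
               [0.7, 0.6, 0.9]])
print(max_min_select(dp))  # -> 2
```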