2007
DOI: 10.1007/s10994-007-5015-9

Structured large margin machines: sensitive to data distributions

Abstract: This paper proposes a new large margin classifier, the structured large margin machine (SLMM), that is sensitive to the structure of the data distribution. The SLMM approach incorporates the merits of "structured" learning models, such as radial basis function networks and Gaussian mixture models, with the advantages of "unstructured" large margin learning schemes, such as support vector machines and maxi-min margin machines. We derive the SLMM model from the concepts of "structured degree" and "homospace", base…
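As a loose illustration of the two ingredients the abstract names, and not the authors' actual formulation, the sketch below captures data structure with a Gaussian mixture model, whitens each point by its cluster's covariance, and then trains an ordinary large margin classifier on the transformed data. All data, names, and parameters are illustrative.

```python
# A minimal sketch of the SLMM idea (not the paper's exact model):
# "structured" part = GMM over the data distribution;
# "unstructured" part = a plain large margin classifier.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

# Toy data: four elongated Gaussian blobs folded into two classes.
X, y = make_blobs(n_samples=400, centers=4, cluster_std=1.5, random_state=0)
y = y % 2

# Step 1: model the data distribution with a full-covariance GMM.
gmm = GaussianMixture(n_components=4, covariance_type="full",
                      random_state=0).fit(X)
labels = gmm.predict(X)

# Step 2: whiten each point by its cluster's covariance (Mahalanobis-style),
# so the margin is measured relative to the local data structure.
Xw = np.empty_like(X)
for k in range(gmm.n_components):
    L = np.linalg.cholesky(gmm.covariances_[k])
    W = np.linalg.inv(L)                      # whitening transform
    idx = labels == k
    Xw[idx] = gmm.means_[k] + (X[idx] - gmm.means_[k]) @ W.T

# Step 3: train a standard large margin classifier on the whitened data.
clf = LinearSVC(dual=False).fit(Xw, y)
print("training accuracy:", clf.score(Xw, y))
```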

Cited by 66 publications (41 citation statements) | References 24 publications
“…3). That is, RBFNs achieve better overall performance than MLPs for the given spherical/Gaussian-distributed data [19]. This is consistent with the no-free-lunch (NFL) theorem [20], which states that, for good generalization, there are no context-independent or usage-independent reasons to favor one classification method over another unless appropriate prior information is incorporated in model selection.…”
Section: Introduction (supporting)
confidence: 75%
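As a rough, hedged illustration of the cited claim, the snippet below compares an RBF-based model against an MLP on Gaussian blobs. scikit-learn ships no RBF network, so an RBF-kernel SVM stands in for one here; any accuracy gap is purely illustrative of the setup, not a reproduction of [19].

```python
# Compare an RBF-kernel SVM (stand-in for an RBF network) with an MLP
# on Gaussian-distributed clusters; all parameters are illustrative.
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_blobs(n_samples=600, centers=3, cluster_std=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rbf = SVC(kernel="rbf").fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print("RBF-kernel SVM accuracy:", rbf.score(X_te, y_te))
print("MLP accuracy:           ", mlp.score(X_te, y_te))
```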
“…The binary-tree ("two-fork tree") SVM multi-class method comprises a training phase and a testing phase [18]. During training, it starts from the leaf nodes and moves toward the root node.…”
Section: Two Fork Tree-SVM Multiple Classification Method (mentioning)
confidence: 99%
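A minimal sketch of a binary-tree multi-class SVM, with hypothetical helper names: each internal node holds a binary SVM that routes a sample toward one of two groups of classes, and each leaf holds a single class label. The cited method trains bottom-up, from leaves toward the root; for simplicity this sketch builds the tree by top-down recursion instead.

```python
# Binary-tree multi-class SVM sketch (illustrative, not the cited method).
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

def train_tree(X, y, classes):
    if len(classes) == 1:
        return {"label": classes[0]}              # leaf: a single class
    left, right = classes[:len(classes) // 2], classes[len(classes) // 2:]
    mask = np.isin(y, classes)
    X_node = X[mask]
    y_node = np.isin(y[mask], right).astype(int)  # 0 -> left, 1 -> right
    svm = SVC(kernel="rbf").fit(X_node, y_node)   # binary split at this node
    return {"svm": svm,
            "children": (train_tree(X, y, left), train_tree(X, y, right))}

def predict_one(tree, x):
    while "label" not in tree:                    # route from root to a leaf
        branch = int(tree["svm"].predict(x.reshape(1, -1))[0])
        tree = tree["children"][branch]
    return tree["label"]

X, y = make_blobs(n_samples=400, centers=4, random_state=0)
tree = train_tree(X, y, sorted(set(y)))
preds = np.array([predict_one(tree, x) for x in X])
print("training accuracy:", np.mean(preds == y))
```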
“…To observe the energy distribution of language, this paper proposes an energy distribution diagram of language [11]. It can more intuitively show the…”
Section: Energy Distribution of Language (mentioning)
confidence: 99%
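The cited passage is truncated, so the following is only a loose guess at what such a diagram plots: assuming "energy distribution of language" refers to the short-time energy of a speech waveform, it can be computed frame by frame as below. The signal, frame size, and hop are all illustrative.

```python
# Hypothetical short-time energy computation for a toy "speech" signal.
import numpy as np

sr = 16000                                        # assumed sample rate (Hz)
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)  # toy decaying tone

frame, hop = 400, 160                             # 25 ms frames, 10 ms hop
energy = np.array([np.sum(signal[i:i + frame] ** 2)
                   for i in range(0, len(signal) - frame, hop)])
print("frames:", len(energy), "peak energy:", round(float(energy.max()), 2))
```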
“…Labeled input-output pairs may be fed to the algorithms during the training phase (Stankovic et al. 2012; Ganguly 2008; Bishop 2006). The performance of a classifier depends on its ability to classify unseen data based on the learned model, which is more generally known as its generalization ability (Nguyen et al. 2008; Yeung et al. 2007; Steinwart and Christmann 2008; Tax and Duin 2004). Depending on the type of model learned during the training phase, these techniques can be divided into two types:…”
Section: Classification-Based Outlier and Event Detection for WSNs De… (mentioning)
confidence: 99%
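A small sketch of "generalization ability" as the quoted text uses it: a classifier is fit on labeled input-output pairs, then scored on held-out data it never saw during training. The dataset and split are illustrative.

```python
# Estimate generalization by scoring a classifier on unseen held-out data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC().fit(X_tr, y_tr)                       # learn from labeled pairs
print("train accuracy:", clf.score(X_tr, y_tr))   # fit to seen data
print("test accuracy: ", clf.score(X_te, y_te))   # generalization estimate
```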