2009
DOI: 10.4218/etrij.apr2009.rp080276

Fast Training of Structured SVM Using Fixed-Threshold Sequential Minimal Optimization

Abstract: In this paper, we describe a fixed-threshold sequential minimal optimization (FSMO) for structured SVM problems. FSMO is conceptually simple, easy to implement, and faster than the standard support vector machine (SVM) training algorithms for structured SVM problems. Because the structured SVM formulation has no bias (that is, the threshold b is fixed at zero), FSMO breaks down the quadratic programming (…).
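The abstract's key idea can be sketched in code: with the threshold fixed at b = 0, the SVM dual has no equality constraint coupling the multipliers, so each one can be optimized on its own as a one-variable QP. Below is a minimal sketch for an ordinary (non-structured) bias-free linear SVM; the function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def fsmo_train(X, y, C=1.0, epochs=50, tol=1e-4):
    """Coordinate-wise SMO sketch for a bias-free linear SVM.

    With the threshold b fixed at zero there is no sum(alpha_i * y_i) = 0
    constraint, so each alpha_i has a closed-form one-variable update,
    clipped to the box [0, C].
    """
    n = X.shape[0]
    alpha = np.zeros(n)
    K = X @ X.T  # linear kernel Gram matrix; K[i, i] > 0 assumed
    for _ in range(epochs):
        max_step = 0.0
        for i in range(n):
            # decision value f(x_i) with b = 0
            f_i = np.sum(alpha * y * K[:, i])
            # analytic minimizer of the one-variable QP, clipped to [0, C]
            new_alpha = np.clip(alpha[i] + (1.0 - y[i] * f_i) / K[i, i], 0.0, C)
            max_step = max(max_step, abs(new_alpha - alpha[i]))
            alpha[i] = new_alpha
        if max_step < tol:  # no multiplier moved appreciably
            break
    w = (alpha * y) @ X  # recover the primal weight vector
    return w, alpha
```

The paper's actual algorithm additionally handles the exponentially large constraint set of structured outputs and working-set selection, which this sketch omits.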

Cited by 4 publications (6 citation statements)
References 12 publications
“…We will call this model "Blog.Adapt Model". The training was done using the fixed-threshold sequential minimal optimization method [16]. We also trained a model for tweets using the news model as the basis.…”
Section: Experiments and Results
confidence: 99%
“…His work, however, was applied to ACE 2003 RDC data, which uses 24 relation types that are too general and ambiguous for our task. While maximum entropy is a very popular classifier, we employ structured SVM, which shows better performance in various academic fields [16].…”
Section: Related Work
confidence: 99%
“…We modified the original cutting plane algorithm [15] and solved the dual form of the quadratic problem of line 11 in Alg. 1 through the Sequential Minimal Optimization (SMO) approach, inspired by the work of Lee and Jang [19]. Let L P be the Lagrangian of the problem in Eq.…”
Section: Shape-aware Loss Function
confidence: 99%
“…In this figure, the precision indicates the ratio of correct candidate documents from candidate documents identified as duplicates by the proposed model; while the recall indicates the ratio of correct candidate documents from all of 924 duplicated documents. F-measure indicates the harmonic mean of the precision and the recall [12].…”
Section: Performance of Proposed Model
confidence: 99%
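The metrics quoted in the last citation statement are standard: precision is the fraction of flagged duplicates that are correct, recall is the fraction of the 924 true duplicates that were found, and the F-measure is their harmonic mean. A minimal sketch, using hypothetical counts rather than figures from the cited paper:

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall.

    tp: true positives (correctly flagged duplicates)
    fp: false positives (flagged but not actually duplicates)
    fn: false negatives (duplicates the model missed)
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, with 8 true positives, 2 false positives, and 2 false negatives, precision and recall are both 0.8, so the F-measure is 0.8.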