2018
DOI: 10.1007/978-3-030-03658-4_4
Integration of Linear SVM Classifiers in Geometric Space Using the Median

Cited by 3 publications (3 citation statements). References 17 publications.
“…Based on operations in the geometrical space generated by real-valued features, this procedure has proven effective in comparison with other commonly used integration techniques such as majority voting [7]. In one of their previous papers, the authors showed a significant improvement in classification by applying weighted mean and median functionals to the decision boundaries of the SVM classifiers [8].…”
Section: Related Work
confidence: 99%
“…Based on transformations in the geometrical space spanned by real-valued, non-categorical features, this procedure has proven more effective than other commonly used integration techniques such as majority voting [21]. The authors have studied and proved the effectiveness of an integration algorithm based on averaging and on taking the median of the decision boundary values of the SVM classifiers [22]. Next, two algorithms for decision trees were proposed and evaluated [23, 24].…”
Section: Related Work
confidence: 99%
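The integration scheme these statements refer to, combining linear SVM classifiers in geometric space via the median of their decision boundaries, can be sketched briefly. The snippet below is a minimal illustration, not the method as published in the cited paper: the use of scikit-learn's LinearSVC, the bootstrap resampling, the normalization of the hyperplane parameters and the synthetic data are all assumptions made for the example.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Synthetic binary problem with real-valued features (illustrative only).
X, y = make_classification(n_samples=600, n_features=10, random_state=0)

# Train an ensemble of linear SVMs on bootstrap samples of the data.
rng = np.random.default_rng(0)
weights, biases = [], []
for _ in range(7):
    idx = rng.choice(len(X), size=len(X), replace=True)
    clf = LinearSVC(dual=False).fit(X[idx], y[idx])
    w, b = clf.coef_.ravel(), clf.intercept_[0]
    norm = np.linalg.norm(w)          # normalize so the hyperplanes are comparable
    weights.append(w / norm)
    biases.append(b / norm)

# Integrate in geometric space: element-wise median of the hyperplane parameters.
w_med = np.median(np.vstack(weights), axis=0)
b_med = np.median(biases)

# The combined classifier is the single hyperplane (w_med, b_med).
pred = (X @ w_med + b_med > 0).astype(int)
print("training accuracy of the median hyperplane:", (pred == y).mean())

Replacing np.median with a weighted np.average over the stacked parameters would give the weighted-mean variant mentioned in the same statements.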
“…A common approach to dealing with dispersed data is to build a separate local model based on each local table and then combine the local prediction results [5, 6, 7]. In the stage of combining the prediction results, we can use fusion methods [8] at three different levels (the measurement, rank and abstract levels).…”
Section: Introduction
confidence: 99%
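The abstract-level fusion mentioned in this statement can likewise be sketched as a majority vote over local models, each built from its own local table. This is again an illustrative sketch under assumed choices: the decision-tree base learner, the way the data is split into local tables, and the synthetic data are not taken from the cited work.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Dispersed data: three local tables holding disjoint subsets of the samples.
X, y = make_classification(n_samples=600, n_features=10, random_state=1)
local_tables = np.array_split(np.arange(len(X)), 3)

# Build a separate local model on each local table.
local_models = [
    DecisionTreeClassifier(max_depth=5, random_state=1).fit(X[idx], y[idx])
    for idx in local_tables
]

# Abstract-level fusion: each local model casts a class label (a vote),
# and the final decision is the majority label.
votes = np.stack([m.predict(X) for m in local_models])   # shape (3, n_samples)
fused = (votes.sum(axis=0) > len(local_models) / 2).astype(int)
print("accuracy of the majority vote:", (fused == y).mean())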