2017
DOI: 10.1007/s10044-017-0677-9
Improving optimum-path forest learning using bag-of-classifiers and confidence measures

Abstract: Machine learning techniques have been actively pursued in recent years, mainly due to the great number of applications that rely on some form of intelligent mechanism for decision-making. In this work, we present an ensemble of optimum-path forest (OPF) classifiers, which consists of combining different instances that compute a score-based confidence level for each training sample in order to make the classification process "smarter", i.e., more reliable. Such a confidence level encodes the l…
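The ensemble mechanism the abstract describes can be sketched roughly as follows. This is a minimal illustration of confidence-weighted voting with hypothetical names; it does not reimplement OPF itself, and the per-classifier confidence scores are illustrative, not the paper's exact scheme:

```python
# Minimal sketch of a confidence-weighted bag of classifiers.
# Each base classifier is anything callable that returns a label;
# its vote is weighted by a confidence score in [0, 1].
from collections import defaultdict

def ensemble_predict(classifiers, confidences, x):
    """Combine predictions, weighting each classifier's vote by its confidence."""
    votes = defaultdict(float)
    for clf, conf in zip(classifiers, confidences):
        votes[clf(x)] += conf
    return max(votes, key=votes.get)

# Usage: three toy "classifiers" (plain functions) with confidences.
clfs = [lambda x: "A", lambda x: "B", lambda x: "A"]
confs = [0.9, 0.8, 0.3]
print(ensemble_predict(clfs, confs, None))  # "A" wins: 0.9 + 0.3 = 1.2 > 0.8
```

In this toy run the two lower-confidence "A" voters jointly outweigh the single "B" voter, which is the point of score-based combination.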


Cited by 12 publications (3 citation statements)
References 31 publications (39 reference statements)
“…Assigning scores is straightforward using the OPF algorithm, as demonstrated by Fernandes and Papa [35]. During validation, we computed how many times a training sample labeled a validation node correctly.…”
Section: Undersampling with Optimum-Path Forest
Citation type: mentioning
confidence: 99%
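The scoring step quoted above (counting how often a training sample labels a validation node correctly) might be sketched like this. All names here are illustrative, and a 1-NN rule stands in for OPF's optimum-path competition:

```python
# Sketch: for each training sample, count how often it "conquers"
# a validation sample and assigns the correct label. The nearest-
# neighbor rule below is a stand-in for OPF's path-cost competition.
import numpy as np

def training_scores(X_train, y_train, X_val, y_val):
    wins = np.zeros(len(X_train))
    totals = np.zeros(len(X_train))
    for x, y in zip(X_val, y_val):
        # training sample that labels this validation node
        i = int(np.argmin(np.linalg.norm(X_train - x, axis=1)))
        totals[i] += 1
        if y_train[i] == y:
            wins[i] += 1
    # score = fraction of correct labelings (0 for samples that never won)
    return np.divide(wins, totals, out=np.zeros_like(wins), where=totals > 0)
```

Samples with low scores are the ones an undersampling or confidence-weighting step would down-weight or discard.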
“…The OPF presented results similar to SVM and better than neural networks and Bayesian classifiers. The biggest difference is the execution time, which can be faster depending on the size of the database [52], [53].…”
Section: OPF Classifiers
Citation type: mentioning
confidence: 99%
“…After the completion of 10 experiments, the average of 10 experiments of each indicator was used to evaluate the performance of the model. This paper uses four indicators to evaluate the performance of the model, namely accuracy, precision, recall, and F-measure values (harmonic average of precision and recall) [43]. The recall rate indicates the ratio of the number of unsafe connections detected to the actual number of connections.…”
Section: A. Experimental Data Set
Citation type: mentioning
confidence: 99%
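The four metrics named in the statement above follow directly from confusion-matrix counts; a minimal sketch for the binary case, with illustrative names:

```python
# Accuracy, precision, recall, and F-measure from confusion-matrix
# counts for a binary classification task.
def metrics(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0   # detected / actually positive
    # F-measure: harmonic mean of precision and recall
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, recall, f_measure
```

For example, with `y_true = [1, 1, 0, 0]` and `y_pred = [1, 0, 1, 0]` each count is 1, so all four metrics come out to 0.5.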