2011
DOI: 10.1007/s00500-011-0773-5

A K-nearest neighbours method based on imprecise probabilities

Abstract: K-nearest neighbours algorithms are among the most popular existing classification methods, due to their simplicity and good performance. Over the years, several extensions of the initial method have been proposed. In this paper, we propose a K-nearest neighbours approach that uses the theory of imprecise probabilities, and more specifically lower previsions. We show that the proposed approach has several assets: it can handle uncertain data in a very generic way, and decision rules developed within this theo…
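To give a concrete feel for how lower and upper probabilities can enter a nearest-neighbour classifier, here is a purely illustrative Python sketch. It assumes a simple exponential discounting of each neighbour's vote with distance and an interval-dominance decision rule; neither is claimed to be the exact construction of the paper, which relies on lower previsions.

```python
import numpy as np

def knn_interval_classify(X_train, y_train, x, k=5, eps0=0.99, beta=0.75):
    """Illustrative k-NN with interval-valued class probabilities.

    Each of the k nearest neighbours commits a confidence eps to its own
    class and leaves the remaining 1 - eps vacuous (compatible with any
    class); eps decays with distance.  The exact discounting and decision
    rules of the paper differ; this is only a sketch of the general idea.
    """
    classes = np.unique(y_train)
    d = np.linalg.norm(X_train - x, axis=1)       # distances to the query point
    idx = np.argsort(d)[:k]                       # indices of the k nearest neighbours

    lower = {c: 0.0 for c in classes}
    upper = {c: 0.0 for c in classes}
    for i in idx:
        eps = eps0 * np.exp(-beta * d[i])         # assumed distance discounting
        for c in classes:
            if y_train[i] == c:
                lower[c] += eps                   # mass committed to the neighbour's class
                upper[c] += 1.0                   # committed mass plus the vacuous part
            else:
                upper[c] += 1.0 - eps             # only the vacuous part can reach c
    lower = {c: v / k for c, v in lower.items()}  # average the k neighbour contributions
    upper = {c: v / k for c, v in upper.items()}

    # interval dominance: a class stays in the answer unless some other class
    # is certainly more probable, so the output may be a *set* of classes
    kept = [c for c in classes
            if not any(lower[o] > upper[c] for o in classes if o != c)]
    return kept, lower, upper
```

Because the two bounds coincide only when every neighbour is very close, such a classifier can abstain between several classes when evidence is weak, which illustrates the kind of cautious decision-making that rules developed within this theory allow.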

Cited by 23 publications (12 citation statements) · References 39 publications
“…ε₀ settles the initial imprecision, while β determines how much imprecision increases with distance (details about the role of these parameters can be found in [14]). …”
Section: Results (mentioning, confidence: 99%)
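As a rough illustration of the role these two parameters play (the actual expression is the one defined in [14], which may differ), one can picture the imprecision attached to a neighbour as a function of its distance, for instance:

```python
import numpy as np

def imprecision(d, eps0=0.99, beta=0.75):
    """Illustrative distance-dependent imprecision (assumed exponential form).

    eps0 fixes the imprecision at distance 0 (here 1 - eps0), while beta
    controls how fast imprecision grows as the neighbour gets farther away.
    The actual expression used in [14] may differ.
    """
    return 1.0 - eps0 * np.exp(-beta * d)

# a nearby neighbour is trusted almost fully, a distant one much less
print(imprecision(0.0))   # ~0.01 -> almost precise
print(imprecision(3.0))   # ~0.90 -> nearly vacuous
```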
“…The method we used was to apply, label-wise, the k-nn method using lower probabilities introduced in [14]. This means that from an initial training data set D, m data sets D_j corresponding to binary classification problems are built, this decomposition being illustrated in Fig.…”
Section: Methods (mentioning, confidence: 99%)
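The label-wise decomposition described in this statement is the usual binary-relevance transformation of a multilabel problem. A minimal sketch, assuming the labels are given as a binary matrix Y (the function and variable names below are not taken from the cited work):

```python
import numpy as np

def binary_relevance_split(X, Y):
    """Label-wise decomposition of a multilabel problem (binary relevance).

    X is an (n, p) feature matrix and Y an (n, m) binary label matrix.
    One binary data set D_j = (X, Y[:, j]) is produced per label, and a
    separate binary classifier can then be trained on each of them.
    """
    m = Y.shape[1]
    return [(X, Y[:, j]) for j in range(m)]

# tiny usage example with 3 labels
X = np.array([[0.1, 0.2], [0.4, 0.3], [0.9, 0.8]])
Y = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
datasets = binary_relevance_split(X, Y)
print(len(datasets))          # 3 binary problems, one per label
print(datasets[0][1])         # targets of the first binary problem: [1 0 1]
```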
“…In the experiments, the parameters of the k-nn algorithm were set to β = 0.75 and ε₀ = 0.99, so that results obtained when fixing the number k of neighbors to 1 display a sufficient completeness. ε₀ settles the initial imprecision, while β determines how much imprecision increases with distance (details about the role of these parameters can be found in [6]). We ran experiments on well-known multilabel data sets having real-valued features.…”
Section: Results (mentioning, confidence: 99%)
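Plugging the reported parameter values into the illustrative sketch given after the abstract (the function name knn_interval_classify and the toy data are hypothetical, not from the cited papers), the experimental setting would look roughly like:

```python
import numpy as np

# hypothetical re-use of the earlier sketch with the values reported above
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1]])
y_train = np.array([0, 1, 0])
x_new = np.array([0.1, 0.1])

kept, lower, upper = knn_interval_classify(
    X_train, y_train, x_new,
    k=1,          # a single neighbour, as in the reported experiments
    eps0=0.99,    # initial imprecision of 1 - 0.99 = 0.01
    beta=0.75,    # growth rate of imprecision with distance
)
print(kept)       # the set of classes that remain plausible for x_new
```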
“…The method we used was to apply, label-wise, the k-nn method using lower probabilities introduced in [6] (in which details can be found). This means that from an initial training data set D, m data sets D_j corresponding to binary classification problems are built, this decomposition being illustrated in Figure 1.…”
Section: Methods (mentioning, confidence: 99%)
“…In addition, the sliding mode controller is removed, which can reduce the computational burden. The theory of imprecise probability provides a formal framework to determine an optimal decision under uncertainties of the state of the system, which makes it suitable for a wide range of application areas [23,24]. In this paper, the theory of imprecise probability was used to design the stopping criteria.…”
Section: Introduction (mentioning, confidence: 99%)
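The statement does not detail the stopping criterion itself, so the following is only a minimal sketch of one standard imprecise-probability decision rule, interval dominance, applied to a stop/continue choice; the function name and the utility bounds are assumptions, not taken from the cited work:

```python
def stop_is_forced(lower_stop, upper_continue):
    """Interval dominance for a stop/continue choice (illustrative only).

    Stopping is chosen only when its lower expected-utility bound exceeds
    the upper bound of continuing, i.e. when stopping is better under every
    probability distribution compatible with the imprecise model.
    """
    return lower_stop > upper_continue

# overlapping utility intervals: no decision is forced, keep running
print(stop_is_forced(0.4, 0.5))   # False
# disjoint intervals: stopping dominates continuing
print(stop_is_forced(0.6, 0.5))   # True
```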