2002
DOI: 10.1109/72.991426

Adaptive resolution min-max classifiers

Abstract: A high degree of automation is one of the most important features of data-driven modeling tools, and it should be taken into account when designing classification systems. In this regard, constructive training algorithms are essential to improve the automation degree of a modeling system. Among neuro-fuzzy classifiers, Simpson's (1992) min-max networks have the advantage of being trained in a constructive way. The use of the hyperbox, as a frame on which different membership functions can be tailored, makes the m…
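The abstract's key construct is the hyperbox, on which a membership function is tailored. As a rough illustration, the minimal Python sketch below computes a hyperbox membership degree in the style commonly attributed to Simpson (1992); the function name, the default sensitivity value, and the exact decay formula are assumptions for illustration, not code from the paper.

# Minimal sketch (not the paper's code) of a hyperbox membership function
# of the kind used in Simpson-style min-max networks. The hyperbox is defined
# by a min point v and a max point w; gamma controls how quickly membership
# decays outside the box.
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Degree of membership of pattern x in the hyperbox [v, w], in [0, 1]."""
    x, v, w = map(np.asarray, (x, v, w))
    # Penalty for exceeding the max point in each dimension.
    over = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, x - w)))
    # Penalty for falling below the min point in each dimension.
    under = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, v - x)))
    return float(np.mean((over + under) / 2.0))

# Points inside the box get membership 1; membership decays outside it.
box_v, box_w = [0.2, 0.2], [0.5, 0.6]
print(hyperbox_membership([0.3, 0.4], box_v, box_w))  # -> 1.0
print(hyperbox_membership([0.9, 0.9], box_v, box_w))  # -> less than 1.0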

Cited by 97 publications (34 citation statements)
References 18 publications
“…At this stage, a generic classifier conceived to work in a vector space can easily deal with the classification task. We decided to use a k-NN classifier, but different methods could be adopted, for example, several types of single layer feed-forward networks (e.g., neurofuzzy networks or random vector functional-links [34,38]). As in the case of the Gk-NN classifier, the parameter k is chosen a-priori by the user.…”
Section: Classification By Embedding In a Vector Space
confidence: 99%
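The quoted passage describes classifying embedded objects with any generic vector-space classifier, in that case k-NN with a user-chosen k. Below is a minimal sketch of that idea, assuming scikit-learn's KNeighborsClassifier and a placeholder embed function standing in for the cited embedding procedure; neither is taken from the cited work.

# Sketch: once objects are embedded in a vector space, a generic classifier
# such as k-NN can handle the classification task; k is chosen a priori.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def embed(objects):
    # Placeholder assumption: in the cited work, structured objects would be
    # mapped to fixed-length vectors by a dedicated embedding procedure.
    # Here the objects are assumed to already be vectors.
    return np.asarray(objects, dtype=float)

X_train = embed([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y_train = [0, 1, 0, 1]

k = 3  # chosen a priori by the user, as in the quoted passage
clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
print(clf.predict(embed([[0.15, 0.15], [0.85, 0.85]])))  # -> [0 1]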
“…To date, several studies have proposed improving the performance of the FMNN learning algorithm. Rizzi improved Simpson's FMNN by applying adaptive resolution classification techniques [17], pruning [18], and the generalized PARC (Pruning Adaptive Resolution Classifier) [19] implemented recursively. However, these solutions incur a high computational cost, owing to the limitations of recursion [18]…”
Section: Introduction
“…Abe et al [3][4][5] presented an efficient method for extracting rules directly from a series of activation hyperboxes, which capture the existence region of data for a given class and inhibition hyperboxes, which inhibit the existence of data of that class. Rizzi et al [6,7] proposed an adaptive resolution classifier (ARC) and its pruned version (PARC) in order to enhance the constructs introduced by Simpson. ARC/PARC generates a regularized min-max network by a series of hyperbox cuts.…”
Section: Introductory Comments
confidence: 99%
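The passage above summarizes ARC/PARC as building a regularized min-max network through a series of hyperbox cuts. The sketch below illustrates only the basic geometric operation, splitting one hyperbox into two along a chosen dimension; the cut-selection, relabeling, and pruning logic of the actual ARC/PARC algorithms is not reproduced here, and all names are assumptions for illustration.

# Heavily simplified sketch of a single "hyperbox cut".
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Hyperbox:
    v: List[float]  # min point
    w: List[float]  # max point

def cut_hyperbox(box: Hyperbox, dim: int, t: float) -> Tuple[Hyperbox, Hyperbox]:
    """Split `box` along dimension `dim` at a threshold `t` strictly inside the box."""
    assert box.v[dim] < t < box.w[dim], "cut point must be strictly inside the box"
    left = Hyperbox(v=list(box.v), w=list(box.w))
    right = Hyperbox(v=list(box.v), w=list(box.w))
    left.w[dim] = t   # left part keeps everything below the threshold
    right.v[dim] = t  # right part keeps everything above it
    return left, right

box = Hyperbox(v=[0.0, 0.0], w=[1.0, 1.0])
lo, hi = cut_hyperbox(box, dim=0, t=0.4)
print(lo, hi)  # two adjacent boxes covering the original hyperbox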
“…Each classifier comes with its geometry and this predominantly determines its capabilities [1][2][3][4][5][6][7][8][9]. While linear classifiers (built on a basis of some hyperplanes) and nonlinear classifiers (such as neural networks) are two popular alternatives, there is another point of view at the development of the classifiers that dwells on the concept of information granules.…”
Section: Introductory Comments
confidence: 99%