2015
DOI: 10.5120/21837-4095

Extracting the Classification Rules from General Fuzzy Min-Max Neural Network

Abstract: The general fuzzy min-max neural network (GFMMN) can perform both classification and clustering of data. In addition, it is able to learn in very few passes with a very short training time. However, like other artificial neural networks, GFMMN is a black box, with its knowledge expressed only as min-max values and the associated class labels. A justification of the classification results produced by GFMMN is therefore needed to make it more adaptive to real-world applicatio…
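The abstract notes that GFMMN knowledge is stored only as hyperbox min-max points with associated class labels. As background, the sketch below illustrates how such a hyperbox classifier is commonly evaluated, using the standard GFMM membership function with a sensitivity parameter gamma. It is a minimal, hypothetical example (the function names, the example hyperboxes, and gamma = 1 are assumptions), not the rule-extraction procedure proposed in the paper.

```python
import numpy as np

def ramp(z, gamma):
    """Ramp threshold f(z, gamma): 0 for negative z, rising linearly, capped at 1."""
    return np.clip(z * gamma, 0.0, 1.0)

def hyperbox_membership(x, V, W, gamma=1.0):
    """Membership of a crisp pattern x (features scaled to [0, 1]) in m hyperboxes.

    V and W are (m, n) arrays holding the min and max points of the hyperboxes;
    the result is a length-m vector of membership grades in [0, 1].
    """
    upper = 1.0 - ramp(x - W, gamma)   # penalty for exceeding the max point
    lower = 1.0 - ramp(V - x, gamma)   # penalty for falling below the min point
    # A pattern belongs to a hyperbox to the degree of its worst-fitting dimension.
    return np.minimum(upper, lower).min(axis=1)

def classify(x, V, W, labels, gamma=1.0):
    """Return the class label of the best-matching hyperbox and its membership."""
    b = hyperbox_membership(x, V, W, gamma)
    j = int(np.argmax(b))
    return int(labels[j]), float(b[j])

# Hypothetical example: two 2-D hyperboxes, one per class.
V = np.array([[0.1, 0.1], [0.6, 0.6]])   # min points
W = np.array([[0.4, 0.4], [0.9, 0.9]])   # max points
labels = np.array([0, 1])
print(classify(np.array([0.25, 0.30]), V, W, labels))   # -> (0, 1.0)
```

Each hyperbox (min point, max point, class label) can be read as an interval condition on every feature, which is why hyperbox networks lend themselves to the kind of rule extraction the paper addresses.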

Cited by 1 publication (1 citation statement)
References: 33 publications
“…Most of the decoders use the attention-dependent process to each created word by considering both seen terms like "cake", "plate" as well as unseen terms like "are" and etc. Although the unseen terms are effortlessly detected considering a model in the absence of considering signals, unseen words could give wrong ideas and make reduction in all inclusive execution of caption for video [7,8]. Taking these issues, we will study the hierarchy of LSTM along with an adaptive perspective in creating captions for images and videos.…”
Section: Introduction (citation type: mentioning; confidence: 99%)