2001
DOI: 10.1023/a:1012799113953
Incremental Learning with Respect to New Incoming Input Attributes

Abstract: Neural networks are generally exposed to dynamic environments where training patterns or input attributes (features) are likely to be introduced into the current domain incrementally. This paper considers the situation where a new set of input attributes must be added to an existing neural network. The conventional method is to discard the existing network and redesign one from scratch, which wastes the old knowledge and the previous effort. In order to reduce computational time…
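The core idea in the abstract, reusing an existing network when new input attributes arrive, can be illustrated with a short sketch. The widening helper below is a hypothetical illustration, not the paper's exact ITID/ILIA procedure: it appends freshly initialized input weights for the new attributes while keeping all previously learned weights as the starting point for further training.

```python
# A minimal sketch (an assumption, not the paper's exact procedure) of
# extending a trained network's input layer when new attributes arrive:
# the first-layer weight matrix is widened with freshly initialized
# columns, so old knowledge is preserved and training can continue.
import numpy as np

rng = np.random.default_rng(0)

def widen_input_layer(W1, n_new_attrs, scale=0.01):
    """Append columns for n_new_attrs new input attributes to the
    hidden-layer weight matrix W1 (shape: n_hidden x n_old_attrs)."""
    n_hidden = W1.shape[0]
    W_new = rng.normal(0.0, scale, size=(n_hidden, n_new_attrs))
    return np.hstack([W1, W_new])  # old weights kept, new inputs added

# Example: a network trained on 8 attributes receives 2 more.
W1_old = rng.normal(size=(16, 8))   # 16 hidden units, 8 old attributes
W1 = widen_input_layer(W1_old, 2)   # now 16 x 10; retrain from here
print(W1.shape)                     # (16, 10)
```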

Cited by 64 publications (12 citation statements)
References 14 publications

“…Diabetes, Cancer, and Glass have 200 generations in each confirmation round, while Thyroid has 5000 epochs because of its large number of features. Results based on ITID and feature orderings derived by AD were compared with results obtained in previous studies, where feature orderings were derived from the original orderings [2], wrappers [4], correlation-based mRMR [9,10], and conventional approaches [4]. Here, ITID was randomly initialized with 20 different structures, and the final results were the statistical average of these 20 different initial neural networks.…”
Section: Methods
confidence: 99%
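The evaluation protocol in this excerpt, averaging results over 20 randomly initialized networks, can be sketched as follows. The run_itid callable below is a hypothetical stand-in for training and testing one ITID network, not code from the cited study.

```python
# A minimal sketch of the averaging protocol quoted above: repeat the
# experiment with 20 randomly initialized network structures and report
# the statistical average over the runs. run_itid(seed) is assumed to
# train one network from that seed and return its test error.
import numpy as np

def average_over_initializations(run_itid, n_runs=20):
    """Average the test error of n_runs randomly initialized networks."""
    errors = [run_itid(seed) for seed in range(n_runs)]
    return float(np.mean(errors)), float(np.std(errors))

# Example with a dummy stand-in for run_itid:
avg, std = average_over_initializations(lambda s: 0.1 + 0.01 * (s % 3))
print(avg, std)
```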
“…In particular, IAL can produce more accurate results than conventional approaches, where features are imported into training in a batch. For example, on UCI datasets, the classification errors for Diabetes, Thyroid, and Glass obtained by ILIA [4] and ITID [1], two neural IAL algorithms, were reduced by 8.2%, 14.6%, and 12.6%, respectively [1,2]; moreover, based on OIGA, the testing error rates obtained by IGA for Yeast, Glass, and Wine declined by 25.9%, 19.4%, and 10.8% [5] in classification. Furthermore, i + Learning and i + LRA, two kinds of IAL decision trees, were applied to 16 different UCI datasets.…”
Section: Feature Ordering in IAL
confidence: 99%
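The IAL training pattern described in this excerpt, importing features one at a time in a chosen ordering while the model continues from its previous weights, can be illustrated with a minimal, self-contained sketch. The toy logistic model, gradient-descent loop, and attribute ordering below are illustrative assumptions, not the ILIA/ITID networks or orderings used in the cited studies.

```python
# A minimal sketch of incremental attribute learning (IAL): attributes
# are imported into training one at a time in a given ordering, and the
# model is retrained from its previous weights after each new attribute,
# rather than training once on all attributes in batch.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w, epochs=200, lr=0.1):
    """Plain gradient descent on logistic loss, starting from weights w."""
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))                    # 4 attributes
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(float)  # toy labels

ordering = [0, 2, 1, 3]        # attribute ordering (e.g., best first)
used, w = [], np.zeros(0)
for attr in ordering:          # import one attribute per round
    used.append(attr)
    w = np.append(w, 0.0)      # widen: one new input weight
    w = train(X[:, used], y, w)  # continue from previously learned weights
```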
“…The first part means that the current network should be trained for at least a certain number of epochs before a new hidden unit is installed, because the error curves are turbulent in the beginning. The second part means that the current network has overfit or that training is making little progress (Guan & Li, 2001).…”
Section: Growing and Training the Neural Network
confidence: 99%
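The two-part growing criterion paraphrased in this excerpt can be sketched as a simple decision function. The min_epochs threshold, stall tolerance, and validation-based overfit test below are illustrative assumptions, not the exact criteria of Guan & Li (2001).

```python
# A minimal sketch of the two-part criterion for installing a new hidden
# unit: (1) wait out the turbulent early epochs; (2) grow only when
# training stagnates or the network starts to overfit.

def should_install_hidden_unit(epoch, train_errors, val_errors,
                               min_epochs=100, stall_tol=1e-4, window=10):
    """train_errors/val_errors: per-epoch error histories (floats)."""
    if epoch < min_epochs:                    # part 1: early curves are turbulent
        return False
    recent = train_errors[-window:]
    little_progress = (recent[0] - recent[-1]) < stall_tol
    overfitting = val_errors[-1] > min(val_errors)  # validation error rising
    return little_progress or overfitting     # part 2: stagnation or overfit
```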
“…Performance of the obtained incremental algorithms has been shown to be better than that of the original ones (Chen, 2003). In fact, the concept of incremental evolution has also been successfully applied in supervised learning, where it works in both the input and output spaces (Guan & Li, 2001; Guan & Li, 2002; Guan & Li, 2004; Guan & Liu, 2002; Guan & Liu, 2004; Guan & Zhu, 2003). However, the objective-ordering problem still remains.…”
Section: Ordered Incremental Multiobjective Problem Solving Based On ...
confidence: 99%