This paper presents an efficient approach to using the recursive least squares (RLS) learning algorithm in Takagi-Sugeno-Kang (TSK) neural fuzzy systems. When RLS is used, a reduced covariance matrix, in which the off-diagonal blocks describing the correlations between rules are set to zero, may be employed to lower the computational burden. However, as reported in the literature, the performance of such an approach is slightly worse than that obtained with the full covariance matrix. In this paper, we propose a so-called enhanced local learning concept, in which a threshold is used to stop learning for insufficiently fired rules. Our experiments show that the proposed approach can outperform the use of the full covariance matrix. Enhanced local learning is particularly effective in the structure learning phase: the method not only stops updates for insufficiently fired rules, reducing disturbances in the self-constructing neural fuzzy inference network, but also speeds up structure learning by permitting a large backpropagation learning constant.
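To make the reduced-covariance idea and the firing-strength threshold concrete, the following is a minimal sketch of a per-rule (block-diagonal) RLS update for TSK consequent parameters, with updates skipped for weakly fired rules. The function and parameter names (update_consequents, fire_threshold, forgetting) are illustrative assumptions, not taken from the paper, and the sketch omits the premise-part and structure learning steps.

```python
# Hedged sketch: block-diagonal RLS for TSK consequents with an
# "enhanced local learning" firing-strength threshold.
import numpy as np

def update_consequents(theta, P, phi, firing, y_target,
                       fire_threshold=0.05, forgetting=1.0):
    """theta: (R, d) consequent parameters, one row per rule
       P: (R, d, d) per-rule covariance blocks (off-diagonal rule blocks dropped)
       phi: (d,) consequent regressor, e.g. [1, x1, ..., xn]
       firing: (R,) normalized firing strengths of the R rules
       y_target: scalar desired output
    """
    y_hat = firing @ (theta @ phi)            # current model output
    error = y_target - y_hat
    for r in range(len(firing)):
        if firing[r] < fire_threshold:        # skip insufficiently fired rules
            continue
        x_r = firing[r] * phi                 # rule-local regressor
        Pr = P[r]
        k = Pr @ x_r / (forgetting + x_r @ Pr @ x_r)   # RLS gain for this block
        theta[r] += k * error
        P[r] = (Pr - np.outer(k, x_r @ Pr)) / forgetting
    return theta, P
```

Because each rule keeps only its own d-by-d covariance block, the per-sample cost scales with the number of rules times d squared, instead of with the square of the full parameter count.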
The self-constructing neural fuzzy inference network (SONFIN) is a neural fuzzy system which, owing to its structure learning capability, has been shown to have excellent learning performance. However, various parameters must be selected when implementing SONFIN. In this paper, the learning behavior of SONFIN is studied. First, SONFIN systems with different thresholds and variances are considered. Different selections result in different numbers of rules and different membership function widths. Our experimental results indicate that when overfitting is possible, more rules do not always lead to better performance. Secondly, two learning algorithms are considered: the backpropagation (BP) algorithm and the recursive least squares (RLS) algorithm. As expected, learning with RLS is much faster than learning with BP; however, when overfitting may occur, BP can achieve better performance in terms of testing error. Finally, the use of a reset of the covariance matrix in the RLS algorithm is investigated. From this preliminary study, it is found that, before significant overfitting sets in, a learning algorithm or parameter selection that improves training performance also tends to improve testing performance. Once learning passes that point, however, a selection that is good for training may degrade testing performance.
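As a rough illustration of the covariance-reset idea mentioned above, the sketch below periodically reinitializes the RLS covariance matrix so that the learning gain does not collapse to zero. The reset interval, the initial scale alpha, and the function name rls_with_reset are illustrative assumptions, not values or identifiers from the paper.

```python
# Hedged sketch of RLS with periodic covariance reset.
import numpy as np

def rls_with_reset(X, y, alpha=1000.0, forgetting=1.0, reset_every=200):
    """X: (N, d) regressors, y: (N,) targets; returns the parameter estimate."""
    n_samples, d = X.shape
    theta = np.zeros(d)
    P = alpha * np.eye(d)                       # large initial covariance
    for t in range(n_samples):
        if reset_every and t > 0 and t % reset_every == 0:
            P = alpha * np.eye(d)               # reset: restore the learning gain
        x = X[t]
        k = P @ x / (forgetting + x @ P @ x)    # RLS gain
        theta += k * (y[t] - x @ theta)         # parameter update
        P = (P - np.outer(k, x @ P)) / forgetting
    return theta
```

Resetting keeps the algorithm responsive to later data, which is helpful while the rule structure is still changing, but it can also re-amplify noise and so interacts with the overfitting behavior discussed above.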