Purpose
The purpose of this paper is threefold: to conduct a comprehensive review of the noteworthy contributions made in the area of the feedforward neural network (FNN) to improve its generalization performance and convergence rate (learning speed); to identify new research directions that will help researchers design new, simple and efficient algorithms and help users implement optimally designed FNNs for solving complex problems; and to explore the wide applications of the reviewed FNN algorithms in solving real-world management, engineering and health sciences problems, demonstrating the advantages of these algorithms in enhancing decision making for practical operations.
Design/methodology/approach
The FNN has gained much popularity during the last three decades. Therefore, the authors have focused on algorithms proposed during the last three decades. The selected databases were searched with popular keywords: "generalization performance," "learning rate," "overfitting" and "fixed and cascade architecture." Combinations of the keywords were also used to obtain more relevant results. Articles that were duplicated across the databases, written in languages other than English, or that matched the keywords but fell outside the scope of the review were discarded.
Findings
The authors studied a total of 80 articles and classified them into six categories according to the nature of the algorithms proposed in these articles, which aimed at improving the generalization performance and convergence rate of FNNs. Reviewing and discussing all six categories would result in the paper being too long. Therefore, the authors further divided the six categories into two parts (i.e. Part I and Part II). The current paper, Part I, investigates two categories that focus on learning algorithms (i.e. gradient learning algorithms for network training and gradient-free learning algorithms). The remaining four categories, which mainly explore optimization techniques, are reviewed in Part II (i.e. optimization algorithms for learning rate, bias and variance (underfitting and overfitting) minimization algorithms, constructive topology neural networks and metaheuristic search algorithms). For the sake of simplicity, the paper entitled "Machine learning facilitated business intelligence (Part II): Neural networks optimization techniques and applications" is referred to as Part II. This results in a division of the 80 articles into 38 for Part I and 42 for Part II. After discussing the FNN algorithms with their technical merits and limitations, along with real-world management, engineering and health sciences applications for each individual category, the authors suggest seven new future directions (three in Part I and four in Part II) which can contribute to strengthening the literature.
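To make the first category concrete, the following is a minimal sketch of a gradient learning algorithm: a single-hidden-layer FNN trained by backpropagation with a fixed learning rate on the XOR task. The architecture, learning rate and task are illustrative assumptions for exposition only, not drawn from any specific reviewed article.

```python
import numpy as np

# Illustrative sketch: single-hidden-layer FNN trained by gradient descent
# (backpropagation). All hyperparameters here are assumptions for exposition.

rng = np.random.default_rng(0)

# XOR task: a classic example that a network with no hidden layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hidden, n_out = 2, 4, 1
W1 = rng.normal(0.0, 1.0, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, n_out))
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # fixed learning rate -- a hyperparameter the reviewed methods try to tune
loss_history = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss_history.append(float(np.mean((out - y) ** 2)))
    # Backward pass: gradients of the mean squared error w.r.t. each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(f"initial loss: {loss_history[0]:.4f}, final loss: {loss_history[-1]:.4f}")
```

The fixed learning rate and trial-and-error choice of hidden units in this sketch are exactly the limitations that the learning-rate optimization and constructive topology categories in Part II aim to address.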
Research limitations/implications
The FNN contributions are numerous and cannot be covered in a single study. The authors remain focused on learning algorithms and optimization techniques, along with their application to real-world problems, that aim to improve the generalization performance and convergence rate of FNNs by computing optimal hyperparameters, connection weights and hidden units; selecting an appropriate network architecture rather than relying on trial-and-error approaches; and avoiding overfitting.
Practical implications
This study will help researchers and practitioners gain a deep understanding of the merits and limitations of existing FNN algorithms, research gaps, application areas and the evolution of research in the last three decades. Moreover, users, after gaining in-depth knowledge of how these algorithms are applied in the real world, may apply appropriate FNN algorithms to obtain optimal results in the shortest possible time, with less effort, for problems in their specific application areas.
Originality/value
The existing literature surveys are limited in scope because they compare algorithms, study the application areas of algorithms or focus on specific techniques. That is, the existing surveys concentrate on particular algorithms or their applications (e.g. pruning algorithms, constructive algorithms, etc.). In this work, the authors propose a comprehensive review of the different categories, along with their real-world applications, that may affect FNN generalization performance and convergence rate. This makes the classification scheme novel and significant.