The Nonlinear Auto-Regressive Moving Average with Exogenous Inputs (NARMAX) model is a powerful, efficient and unified representation of a variety of nonlinear models. Constructing the model involves structure selection and parameter estimation, which can be performed simultaneously using the established Orthogonal Least Squares (OLS) algorithm. However, OLS has been criticized for its tendency to select excessive or sub-optimal terms, leading to non-parsimonious models. This paper proposes the application of the Binary Particle Swarm Optimization (BPSO) algorithm for structure selection of NARMAX models. The selection process searches for the optimal structure using binary bits to accept or reject candidate terms, forming a reduced regressor matrix. The model is constructed by first estimating the NARX sub-model and then estimating the MA sub-model from the residuals produced by the NARX sub-model. One Step Ahead (OSA) prediction, Mean Squared Error (MSE) and residual histogram analysis were performed to validate the model. The proposed optimization algorithm was tested on the Flexible Robot Arm (FRA) dataset. Results show the success of BPSO structure selection for NARMAX when applied to the FRA dataset. The final NARMAX model combines the NARX and MA sub-models and shows improved predictive ability compared to the NARX model alone.
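As a rough illustration of the structure-selection idea described above, the sketch below runs a generic Binary PSO over the columns of a candidate regressor matrix, with least-squares parameter estimation and MSE as the fitness. The swarm size, inertia and acceleration coefficients, the synthetic regressor matrix Phi, and the helper names fitness and bpso_select are illustrative assumptions, not the paper's implementation or the FRA data.

```python
# Minimal BPSO structure-selection sketch (assumed setup, not the paper's code):
# each particle is a binary mask over candidate terms; fitness is the MSE of a
# least-squares fit of the reduced regressor matrix.
import numpy as np

rng = np.random.default_rng(0)

def fitness(bits, Phi, y):
    """MSE of a least-squares fit using only the columns where bits == 1."""
    sel = bits.astype(bool)
    if not sel.any():
        return np.inf                                  # reject an empty structure
    theta, *_ = np.linalg.lstsq(Phi[:, sel], y, rcond=None)
    resid = y - Phi[:, sel] @ theta
    return float(np.mean(resid ** 2))

def bpso_select(Phi, y, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    n_terms = Phi.shape[1]
    X = rng.integers(0, 2, size=(n_particles, n_terms))    # binary positions
    V = rng.normal(0.0, 1.0, size=(n_particles, n_terms))  # velocities
    pbest = X.copy()
    pbest_f = np.array([fitness(x, Phi, y) for x in X])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(V.shape), rng.random(V.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        # Sigmoid of the velocity gives the probability that a bit is set to 1.
        X = (rng.random(V.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        f = np.array([fitness(x, Phi, y) for x in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())

# Toy demonstration: only columns 0, 3 and 7 of a 10-term candidate set are active.
Phi = rng.normal(size=(200, 10))
y = 2.0 * Phi[:, 0] - 1.5 * Phi[:, 3] + 0.5 * Phi[:, 7] + 0.01 * rng.normal(size=200)
bits, mse = bpso_select(Phi, y)
print("selected terms:", np.flatnonzero(bits), "MSE:", mse)
```

In this sketch the selected binary mask plays the role of the reduced regressor matrix described above; an MA sub-model fitted to the resulting residuals would then complete a NARMAX-style construction.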
This paper presents the development and comparison of muscle models based on Functional Electrical Stimulation (FES) parameters, using the Nonlinear Auto-Regressive model with Exogenous Inputs (NARX) implemented with a Multi-Layer Perceptron (MLP) and a Cascade Forward Neural Network (CFNN). FES stimulations with varying frequency, pulse width and pulse duration were used to estimate the muscle torque. About 722 data points were used to build the muscle model. One Step Ahead (OSA) prediction, correlation tests and residual histogram analysis were performed to validate the model. The optimal MLP results were obtained with an input lag space of 1, an output lag space of 43 and 30 hidden units. A total of three terms were selected to construct the final MLP model, producing a final Mean Squared Error (MSE) of 1.1299. The optimal CFNN results were obtained with an input lag space of 1, an output lag space of 5 and 20 hidden units, with similar terms selected. The final MSE produced was 1.0320. The proposed approach approximated the behavior of the system well with unbiased residuals, with the CFNN showing an 8.66% MSE improvement over the MLP while using 33.33% fewer hidden units.
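To make the NARX formulation concrete, the following sketch builds a lagged regressor matrix from input and output sequences and fits it with scikit-learn's MLPRegressor as a stand-in for the networks described above. The synthetic stimulation/torque sequences, the lag orders nu=1 and ny=5, the 20 hidden units and the helper make_narx_matrix are illustrative assumptions; the CFNN variant is not reproduced here.

```python
# NARX-with-MLP sketch (assumed setup, not the study's data or configuration).
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_narx_matrix(u, y, nu, ny):
    """Stack lagged inputs u[t-1..t-nu] and lagged outputs y[t-1..t-ny] as regressors."""
    start = max(nu, ny)
    rows, targets = [], []
    for t in range(start, len(y)):
        row = np.concatenate([u[t - nu:t][::-1], y[t - ny:t][::-1]])
        rows.append(row)
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Synthetic stand-in for the stimulation / torque data (about 722 samples).
rng = np.random.default_rng(1)
u = rng.uniform(0.0, 1.0, 722)                    # stimulation parameter sequence
y = np.zeros(722)
for t in range(2, 722):                           # a simple nonlinear reference system
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.8 * u[t - 1] ** 2 + 0.01 * rng.normal()

X, target = make_narx_matrix(u, y, nu=1, ny=5)    # illustrative lag orders
split = int(0.8 * len(target))
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:split], target[:split])

# One Step Ahead (OSA) prediction on the held-out portion.
osa = model.predict(X[split:])
mse = np.mean((target[split:] - osa) ** 2)
print("OSA MSE:", mse)
```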
Deep Learning Neural Network (DLNN) is a new branch of machine learning with the ability to learn complex feature representations. Although it is mainly suited to image features (since it was inspired by the object recognition mechanism of the mammalian visual system), any type of data that can be translated into an image form may be suitable for a DLNN. In this paper, we show that Mel Frequency Cepstrum Coefficient (MFCC) features generated from the audio signal of an infant cry can be used as input features for a Convolutional Neural Network (CNN). The results show that the CNN can classify between normal and pathological (asphyxiated) cries with 94.3% accuracy on the training set and 92.8% accuracy on the testing set.
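A minimal sketch of the MFCC-to-CNN pipeline could look like the following, assuming librosa for MFCC extraction and Keras for the network. The sampling rate, frame count, layer sizes, training settings and the helper mfcc_image are illustrative assumptions rather than the study's actual configuration.

```python
# MFCC "image" + small CNN sketch for binary cry classification (assumed setup).
import numpy as np
import librosa
from tensorflow.keras import layers, models

def mfcc_image(path, sr=16000, n_mfcc=20, n_frames=128):
    """Load an audio file and return a fixed-size MFCC 'image' (n_mfcc x n_frames x 1)."""
    signal, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    if mfcc.shape[1] >= n_frames:
        mfcc = mfcc[:, :n_frames]                       # truncate long recordings
    else:
        mfcc = np.pad(mfcc, ((0, 0), (0, n_frames - mfcc.shape[1])))  # zero-pad short ones
    return mfcc[..., np.newaxis]                        # add a channel axis

# A small CNN for binary classification (normal vs. asphyxiated cry).
model = models.Sequential([
    layers.Input(shape=(20, 128, 1)),
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical usage: cry_file_paths and labels are placeholders for a real dataset.
# X = np.stack([mfcc_image(p) for p in cry_file_paths])
# model.fit(X, labels, epochs=20, validation_split=0.2)
```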