“…From Table (3), the triples of neuron counts in the three layers that achieve the best performance in terms of MSE mean are (1, 2, 1), (3, 1, 1), (9, 5, 2), (19, 2, 4), (17, 1, 1), (16, 2, 5), and (11, 1, 1), in that order.…”
Section: Experiments and Results (mentioning)
“…Table 2 shows the pairs of neuron counts in the two layers that achieve the minimum MSE mean. From Table (2), the pairs that achieve the best performance in terms of MSE mean are (17, 1), (11, 1), (3, 1), (19, 2), (16, 2), (9, 5), and (1, 2).…”
Section: Experiments and Results (mentioning)
“…In this paper the Wisconsin Breast Cancer Data (WBCD) set is used, which has been analyzed by various researchers working on medical diagnosis of breast cancer in the neural network literature [5], [16], [17], [18]. This data set contains 699 instances.…”
Section: Methods (mentioning)
“…The feed-forward neural network is built using the Levenberg-Marquardt training algorithm, which is widely used in the classification literature [14], [15], [16]. The network architecture is composed of nine neurons in the input layer and one neuron in the output layer. To achieve the paper's objectives, the number of hidden layers and the number of neurons per hidden layer are varied during the training and simulation of the network.…”
Abstract-Classification is one of the most frequently encountered problems in data mining. A classification problem occurs when an object needs to be assigned to one of a set of predefined classes based on a number of observed attributes of that object. Neural networks have emerged as one of the tools that can handle the classification problem. Feed-forward Neural Networks (FNNs) have been widely applied as a classification tool in many different fields. Designing an efficient FNN structure with the optimum number of hidden layers and the minimum number of neurons per layer, for a given application or dataset, is an open research problem. In this paper, experimental work is carried out to determine an efficient FNN structure, that is, a structure with the minimum number of hidden-layer neurons, for classifying the Wisconsin Breast Cancer Dataset. We achieve this by measuring the classification performance using the Mean Square Error (MSE) while controlling the number of hidden layers and the number of neurons in each layer. The experimental results show that the number of hidden layers has a significant effect on the classification performance, and that the best average classification performance is attained when the number of hidden layers is 5 and the number of neurons per hidden layer is small, typically 1 or 2.
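The experiment the abstract describes, sweeping architectures and scoring each by MSE, can be sketched as follows. This is a minimal illustration, not the paper's code: scikit-learn's built-in breast-cancer dataset stands in for the original 699-instance WBCD, the `lbfgs` solver stands in for Levenberg-Marquardt (which scikit-learn does not provide), and the three architectures shown are just a small sample of the paper's search space.

```python
# Sweep a few hidden-layer configurations and score each by test-set MSE.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)          # scale features for stable training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

results = {}
for layers in [(1,), (1, 2), (1, 2, 1)]:       # sample architectures only
    net = MLPRegressor(hidden_layer_sizes=layers, solver="lbfgs",
                       max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    results[layers] = mean_squared_error(y_test, net.predict(X_test))

best = min(results, key=results.get)
print(best, results[best])                     # architecture with the lowest MSE
```

A full replication would iterate the outer loop over every neuron count per layer and over one to five hidden layers, averaging MSE over repeated runs as the paper's tables do.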
“…17 It has not only the convergence speed of Newton's method but also the convergence capability of the steepest-descent method. 18 Although the LM algorithm converges quickly, it still cannot avoid the local-minimum problem. 19 One of the problems that occurs with the above neural network (NN) training algorithms is overfitting.…”
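The blend of Newton-like speed and steepest-descent robustness that the snippet attributes to LM comes from its damping term: a single scalar interpolates between a Gauss-Newton step and a gradient-descent step. A minimal NumPy sketch on a made-up curve-fitting problem (y = a·exp(b·x); model, data, and parameters are all hypothetical):

```python
import numpy as np

def residuals(p, x, y):
    a, b = p
    return y - a * np.exp(b * x)

def jacobian(p, x):
    a, b = p
    # Partial derivatives of the residual w.r.t. a and b.
    return np.column_stack([-np.exp(b * x), -a * x * np.exp(b * x)])

def levenberg_marquardt(p, x, y, lam=1e-2, iters=100):
    for _ in range(iters):
        r = residuals(p, x, y)
        J = jacobian(p, x)
        # Damped normal equations: (J^T J + lam*I) delta = -J^T r.
        # lam -> 0 gives Gauss-Newton; large lam approaches gradient descent.
        A = J.T @ J + lam * np.eye(len(p))
        delta = np.linalg.solve(A, -J.T @ r)
        p_new = p + delta
        if np.sum(residuals(p_new, x, y) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5   # step improved the fit: trust Gauss-Newton more
        else:
            lam *= 2.0                  # step failed: lean toward gradient descent
    return p

x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * x)               # noiseless synthetic data
p = levenberg_marquardt(np.array([1.0, 1.0]), x, y)
print(p)                                # should approach [2.0, 1.5]
```

Note that the accept/reject rule only guards against divergence; as the snippet says, nothing here prevents convergence to a local minimum of a non-convex error surface such as a neural network's.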
In this article, a neural network corrector is proposed to correct the image shift, which degrades three-dimensional image reconstruction, for each slice captured by a cone-beam computed tomography simulator. The tube module of the simulator has 3 degrees of freedom; the central point of the tube module should be aligned with the central point of the detector module to guarantee accurate image projection. However, manufacturing and assembly tolerances of the mechanism prevent this alignment from being met exactly. Here, a standard kit is made to measure the image shift in 1° steps from −10° to 10°. The measured data serve as the input training data of the proposed neural network corrector, and the corrected translation position is its output. The Levenberg-Marquardt learning algorithm adjusts the connection weights and biases of the neural network using a supervised gradient-descent method so that the defined error function is minimized. To avoid overfitting and improve the generalization ability of the neural network, Bayesian regularization is added to the Levenberg-Marquardt learning algorithm. After training the neural network corrector, different target position commands are fed into it, and the corrected data from the corrector are used as the new position commands to verify the image-correction performance. Moreover, a phantom kit is made to check the correction performance of the neural network corrector. Finally, the experimental results verify that the image shift can be reduced by the neural network corrector.
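The corrector idea reduces to learning the shift as a function of the tube angle and subtracting the prediction from the position command. A rough sketch under stated assumptions: the quadratic "measured" shift is invented for illustration, and scikit-learn's `MLPRegressor` with an L2 penalty (`alpha`) is only a crude stand-in for the paper's Bayesian-regularized Levenberg-Marquardt training, which scikit-learn does not implement.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Angles sampled in 1-degree steps from -10 to 10, as in the paper.
angles = np.arange(-10.0, 11.0, 1.0).reshape(-1, 1)
# Hypothetical measured image shift in mm at each angle (made up).
shift = 0.05 * angles.ravel() ** 2 + 0.3 * angles.ravel()

# Small network; alpha adds weight decay, loosely playing the role
# that Bayesian regularization plays in the paper.
corrector = MLPRegressor(hidden_layer_sizes=(8,), alpha=1e-2,
                         solver="lbfgs", max_iter=5000, random_state=0)
corrector.fit(angles, shift)

# For a new angle command, predict the shift and compensate for it.
target_angle = np.array([[3.5]])
predicted_shift = corrector.predict(target_angle)
corrected_command = -predicted_shift     # translation that cancels the shift
print(predicted_shift, corrected_command)
```

The paper then closes the loop in hardware: the compensated command is replayed on the simulator and the residual shift is checked against a phantom kit.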