“…Note: own elaboration. Figure 2 shows the constructed neural network, in which 40 records related to the competencies of the entry profile are classified as follows: three input axons (X1, X2, X3), whose connections generate 18 synapses or weights (W^(1)_11, …, W^(1)_21, …, W^(1)_31, …, W^(1)_36). These are combined by the aggregation functions (Z^(2)_1, Z^(2)_2, Z^(2)_3, Z^(2)_4, Z^(2)_5, Z^(2)_6), which satisfy Z^(2) = X·W^(1), and by the activation function a^(2) = f(Z^(2)), represented by the sigmoid function in the hidden layer. The output axon, represented by "y", is obtained by operating on the activation functions (a^(2)_1, a^(2)_2, a^(2)_3, a^(2)_4, a^(2)_5, a^(2)_6) and their respective weights (W^(2)_11, W^(2)_21, W^(2)_31, W^(2)_41, W^(2)_51, W^(2)_61), constructing the function Z^(3) = a^(2)·W^(2) and the activation function y = f(Z^(3)), where the sigmoid is again used. The results found were not completely reliable, since the reliability percentage was not high; therefore, the cost function was defined as the mean squared error J = ½·e², minimized with the gradient descent algorithm based on the first derivative of the function, and, to ensure higher reliability, the second derivative of the function was also used (LeCun et al., 2012).…”
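As a minimal sketch of the forward pass, mean squared error cost, and first-order gradient descent described in the quoted passage, assuming the 3-6-1 sigmoid architecture it outlines: the random input data, initial weights, and learning rate below are illustrative placeholders (not the authors' dataset or hyperparameters), and the second-derivative refinement mentioned in the passage is not reproduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

rng = np.random.default_rng(0)
X = rng.random((40, 3))           # 40 records, 3 input axons (X1, X2, X3) -- placeholder data
y_true = rng.random((40, 1))      # placeholder targets

W1 = rng.standard_normal((3, 6))  # 18 weights W^(1)_11 ... W^(1)_36
W2 = rng.standard_normal((6, 1))  # 6 weights  W^(2)_11 ... W^(2)_61
learning_rate = 0.1               # illustrative value

for epoch in range(1000):
    # Forward pass: Z^(2) = X W^(1), a^(2) = f(Z^(2)), Z^(3) = a^(2) W^(2), y = f(Z^(3))
    Z2 = X @ W1
    A2 = sigmoid(Z2)
    Z3 = A2 @ W2
    y_hat = sigmoid(Z3)

    # Cost: mean squared error J = 1/2 * e^2, with e = y_true - y_hat
    e = y_true - y_hat
    J = 0.5 * np.mean(e ** 2)

    # First-order gradient descent via backpropagation
    delta3 = -e * sigmoid_prime(Z3)               # dJ/dZ^(3)
    grad_W2 = A2.T @ delta3 / len(X)
    delta2 = (delta3 @ W2.T) * sigmoid_prime(Z2)  # dJ/dZ^(2)
    grad_W1 = X.T @ delta2 / len(X)

    W2 -= learning_rate * grad_W2
    W1 -= learning_rate * grad_W1
```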