In the present study, an extension of the idea of dynamic neurons is proposed in which the weights are replaced by a weight function applied simultaneously to all neuron inputs. A new type of artificial neuron, called an integral neuron, is modeled, in which the total signal is obtained as the integral of the weight function. The integral neuron generalizes traditional neurons by allowing the aggregated signal to be either linear or nonlinear. Training an integral neuron consists of finding the parameters of the weight function, whose values directly determine the total signal in the neuron's body. This article presents theoretical and experimental evidence for the applicability and convergence of standard training methods, such as gradient descent, Gauss–Newton, and Levenberg–Marquardt, in the search for the optimal weight function of an integral neuron. The experimental part of the study demonstrates that a single integral neuron can be trained on the logical XOR function, something that is impossible for a single classical neuron because of the linear summation performed in its body.
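To make the idea concrete, the following is a minimal illustrative sketch, not the paper's actual model: it assumes a linear weight function w(t) = a + b*t whose integral over [0, s], with s the sum of the inputs, gives the total signal S(s) = a*s + b*s²/2. Because S is quadratic in s, plain gradient descent on the two parameters a and b can fit the XOR targets, which a classical neuron's linear summation cannot. The function names and the choice of integration interval are assumptions for illustration.

```python
# Hypothetical sketch of an "integral neuron": the per-weight sum is
# replaced by the integral of a weight function w(t) = a + b*t taken
# over [0, s], where s is the sum of the inputs. This interpretation
# of the model is assumed for illustration, not quoted from the paper.
#
#   S(s) = integral_0^s (a + b*t) dt = a*s + b*s^2 / 2
#
# S is quadratic in s, so a single such neuron can separate XOR.

def total_signal(a, b, xs):
    """Closed-form integral of the linear weight function over [0, sum(xs)]."""
    s = sum(xs)
    return a * s + b * s * s / 2.0

def train_xor(epochs=5000, lr=0.1):
    """Fit (a, b) to the XOR truth table by stochastic gradient descent
    on the squared error of the total signal."""
    data = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]
    a, b = 0.0, 0.0
    for _ in range(epochs):
        for xs, y in data:
            s = sum(xs)
            err = (a * s + b * s * s / 2.0) - y
            # dS/da = s,  dS/db = s^2 / 2
            a -= lr * err * s
            b -= lr * err * s * s / 2.0
    return a, b

a, b = train_xor()
outputs = [round(total_signal(a, b, xs))
           for xs in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outputs)  # -> [0, 1, 1, 0]
```

Under these assumptions the least-squares problem is linear in (a, b) and has the exact solution a = 2, b = -2, so gradient descent converges to it; the same setup could equally be solved with Gauss–Newton or Levenberg–Marquardt, as the abstract indicates.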