Artificial neural networks (ANNs) are now ubiquitous, and research into creating new ones continues to grow. An ANN is capable of adaptive, dynamic learning: during training, the weights must be changed to improve the network's input/output behavior, so a method by which the weights can be adjusted is required. These methods are called learning rules, and they are simply formulas or algorithms, that is, mathematical procedures that guide how the weights of the network's links are modified. They incorporate an error-reduction mechanism, using the disparity between the expected output and the actual output to update the weights during training. Learning rules improve the effectiveness of the artificial neural network; usually, a learning rule is applied repeatedly to the same set of training inputs over a large number of cycles (epochs), with the error steadily decreasing as the weights are fine-tuned. The present research assesses the objectives of artificial neural networks and their learning principles. We analyze ten rules studied by leading researchers, comprising rules based on supervised learning (perceptron, memory-based, delta, error correction, correlation, outstar, supervised Hebbian) and rules based on unsupervised learning (competitive, competitive Hebbian, Hebbian); each rule defines how to adjust the weights of the nodes of a network.
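To make the error-reduction mechanism concrete, the following is a minimal sketch of one such rule, the delta rule, applied to a single linear neuron. The learning rate `eta`, the epoch count, and the toy dataset are illustrative assumptions, not values from this study; the sketch only shows the general pattern of updating weights from the disparity between expected and actual outputs, repeated over epochs.

```python
import numpy as np

# Illustrative sketch of the delta rule for one linear neuron.
# eta, the epoch count, and the toy data are assumed values chosen
# for demonstration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))        # 20 training samples, 3 input features
w_true = np.array([0.5, -1.0, 2.0])
d = X @ w_true                      # desired (expected) outputs

w = np.zeros(3)                     # initial weights
eta = 0.05                          # learning rate

for epoch in range(50):             # repeat over the same training set
    sse = 0.0
    for x, target in zip(X, d):
        y = w @ x                   # actual output of the neuron
        error = target - y          # disparity: expected minus actual
        w += eta * error * x        # delta rule update: w <- w + eta*e*x
        sse += error ** 2           # squared error shrinks over epochs

print("learned weights:", np.round(w, 3))
```

Running the sketch, the summed squared error decreases epoch by epoch and the learned weights approach the target weights, illustrating how repeated application of a learning rule fine-tunes the network.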