In this paper, we take a closer look at a generalized perceptron-like learning rule that uses dilation and translation parameters to enhance the recall performance of higher order Hopfield neural networks without significantly increasing their complexity. We study empirically how these parameters influence the perceptron learning and recall process, using a generalized version of the Hebbian learning rule for initialization. Our analysis is based on a pattern recognition problem with random patterns. We show that, for highly correlated sets of patterns, improvements in learning and recall performance can be obtained. On the other hand, we also show that the dilation and translation parameters must be chosen carefully to achieve a positive effect.
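To make the setup concrete, the following is a minimal sketch of the general approach described above: Hebbian (outer-product) initialization of a Hopfield weight matrix followed by a perceptron-like correction step in which the local field is rescaled and shifted before the stability check. The exact learning rule, the placement of the dilation and translation parameters, and all names (`dilation`, `translation`, `lr`) are illustrative assumptions, and the sketch uses a standard pairwise network rather than the higher order networks studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_init(patterns):
    """Standard Hebbian (outer-product) initialization of the weights."""
    p, n = patterns.shape
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)          # no self-connections
    return w

def perceptron_train(w, patterns, dilation=1.0, translation=0.0,
                     lr=0.05, epochs=50):
    """Perceptron-like correction of the Hebbian weights.

    `dilation` and `translation` rescale and shift the local field before
    the stability check; both are illustrative stand-ins for the paper's
    dilation and translation parameters, not its exact rule.
    """
    p, n = patterns.shape
    for _ in range(epochs):
        stable = True
        for xi in patterns:
            h = w @ xi                              # local fields
            aligned = xi * (dilation * h + translation)
            wrong = aligned <= 0                    # unstable neurons
            if wrong.any():
                stable = False
                # Hebbian-style correction only for the unstable neurons
                w[wrong, :] += lr * np.outer(xi[wrong], xi)
                np.fill_diagonal(w, 0.0)
        if stable:
            break
    return w

def recall(w, probe, steps=20):
    """Synchronous recall dynamics with sign activations."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(w @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Tiny experiment with random bipolar patterns, echoing the paper's setup.
n, p = 64, 8
patterns = rng.choice([-1, 1], size=(p, n))
w = hebbian_init(patterns)
w = perceptron_train(w, patterns, dilation=1.2, translation=0.1)

noisy = patterns[0].copy()
flip = rng.choice(n, size=6, replace=False)
noisy[flip] *= -1                                   # corrupt 6 bits
overlap = recall(w, noisy) @ patterns[0] / n
print(f"overlap with stored pattern: {overlap:.2f}")
```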