In this paper, we establish a connection between image processing, visual perception, and deep learning by introducing a mathematical model inspired by visual perception, from which neural network layers and image processing models for color correction can be derived. Our model is grounded in the geometry of visual perception and couples a geometric model of the organization of some neurons in the visual cortex with a geometric model of color perception. More precisely, the model combines a Wilson-Cowan equation, describing the activity of neurons responding to edges and textures in area V1 of the visual cortex, with a Retinex model of color vision. For certain activation functions, this yields a color correction model that simultaneously processes edges/textures, encoded into a Riemannian metric, and color contrast, encoded into a nonlocal covariant derivative. We then show that the proposed model can be assimilated to a residual layer when the activation function is nonlinear, and to a convolutional layer when it is linear. Finally, we assess the accuracy of the model for deep learning by testing it on the MNIST dataset for digit classification.
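To make the residual-layer analogy stated above concrete, the following is a minimal illustrative sketch, not the authors' exact layer: it assumes a discretized update of the form x + sigma(K * x), where the convolution K stands in for the interaction kernel and sigma is an activation function. The class name, the choice of tanh as the nonlinearity, and the hyperparameters are all hypothetical; with a linear (identity) activation the same update reduces to a purely convolutional/affine map, as the abstract describes.

```python
import torch
import torch.nn as nn


class WilsonCowanResidualBlock(nn.Module):
    """Illustrative sketch of the residual-layer reading of a discretized
    Wilson-Cowan-type update: x -> x + sigma(Kx). With a nonlinear sigma this
    is a residual block; with the identity it collapses to a convolutional map.
    This is an assumption-laden toy, not the paper's exact formulation."""

    def __init__(self, channels=3, kernel_size=3, nonlinear=True):
        super().__init__()
        # The convolution plays the role of the (nonlocal) interaction kernel K.
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, bias=False)
        # Hypothetical activation choice; the paper considers particular
        # activation functions, not necessarily tanh.
        self.act = nn.Tanh() if nonlinear else nn.Identity()

    def forward(self, x):
        # Nonlinear activation  -> residual layer:        x + sigma(Kx)
        # Linear (identity) act -> convolutional update:  x + Kx
        return x + self.act(self.conv(x))


# Usage example: one update step on a batch of RGB images.
x = torch.rand(8, 3, 28, 28)
block = WilsonCowanResidualBlock(channels=3, nonlinear=True)
y = block(x)
print(y.shape)  # torch.Size([8, 3, 28, 28])
```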