In this work, we propose a technique for generating emergent navigation behavior in autonomous agents that move through their environment using their own vision. To achieve this, we apply the Continuous Time Recurrent Neural Network (CTRNN) and the genetic encoding proposed in [1] and [2]. However, we use a new sensory description, consisting of images captured by a virtual camera, thereby evolving an artificial visual cortex. The experiments show that the agents are able to navigate the environment and find the exit in a non-programmed way, without requiring any reprogramming of the agent, using only the visual data fed to the neural network. The technique has the flexibility to be applied in various environments without exhibiting a biased, forced behavior that results from explicit behavioral modeling, as occurs in other techniques.
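Since the dynamics are not given here, the following is a minimal sketch of a standard CTRNN update step, in the style of the models cited in [1] and [2], driven by a flattened grayscale camera frame as external input. All names, network sizes, and parameter values are illustrative assumptions rather than the paper's actual configuration; in the evolutionary setting the weights, biases, and time constants would be decoded from the genome instead of sampled randomly.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, image, W_in, W_rec, theta, tau, dt=0.05):
    """One Euler-integration step of a CTRNN driven by a camera image.

    y      : current neuron states (membrane potentials), shape (N,)
    image  : grayscale camera frame, 2-D array; flattened and scaled to [0, 1]
    W_in   : input weights mapping pixels to neurons, shape (N, P)
    W_rec  : recurrent weights between neurons, shape (N, N)
    theta  : neuron biases, shape (N,)
    tau    : neuron time constants, shape (N,)
    """
    pixels = image.astype(float).ravel() / 255.0      # external input I_i from the virtual camera
    external = W_in @ pixels
    recurrent = W_rec @ sigmoid(y + theta)
    dy = (-y + recurrent + external) / tau             # tau_i * dy_i/dt = -y_i + sum_j w_ji s(y_j + theta_j) + I_i
    return y + dt * dy

# Illustrative usage (hypothetical sizes): a 16x16 frame driving 10 neurons,
# with the last two neurons read out as motor commands.
rng = np.random.default_rng(0)
N, P = 10, 16 * 16
y = np.zeros(N)
W_in = rng.normal(0, 0.1, (N, P))    # in the proposed technique these parameters
W_rec = rng.normal(0, 1.0, (N, N))   # would be decoded from the evolved genome
theta = rng.normal(0, 1.0, N)
tau = rng.uniform(0.5, 2.0, N)

frame = rng.integers(0, 256, (16, 16))   # stand-in for one virtual-camera image
for _ in range(100):
    y = ctrnn_step(y, frame, W_in, W_rec, theta, tau)
motor_left = sigmoid(y[-2] + theta[-2])
motor_right = sigmoid(y[-1] + theta[-1])
```

The sketch only illustrates the sensor-to-network pathway described above: raw pixels enter the network as external currents, and behavior emerges from the evolved recurrent dynamics rather than from any hand-coded navigation rule.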