Humans and most animals can run or fly and navigate efficiently through cluttered environments while avoiding the obstacles in their way. Replicating this skill in autonomous robotic vehicles currently requires a vast array of sensors coupled with computers that are bulky, heavy, and power-hungry. The human eye and brain have had millions of years to evolve an efficient solution to the problem of visual navigation, and we believe this biological system is the best one to reverse engineer. The brain and visual system appear to solve the visual odometry problem very differently from most computer vision approaches. We show how a neural-based architecture can extract self-motion information and depth from monocular 2-D video sequences, and we highlight how this approach differs from standard computer vision techniques. We previously demonstrated how our system works during pure translation of the camera; here, we extend the approach to the case of combined translation and rotation.
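
To make the problem concrete, the following is a minimal sketch of the standard instantaneous optic-flow model for a pinhole camera with unit focal length; the notation (translation $\mathbf{T}=(T_x,T_y,T_z)$, rotation $\boldsymbol{\omega}=(\omega_x,\omega_y,\omega_z)$, depth $Z$, image point $(x,y)$ with flow $(u,v)$) is introduced here purely for illustration and is not taken from the system described above:

\[
\begin{aligned}
u &= \frac{xT_z - T_x}{Z} + xy\,\omega_x - (1 + x^2)\,\omega_y + y\,\omega_z,\\
v &= \frac{yT_z - T_y}{Z} + (1 + y^2)\,\omega_x - xy\,\omega_y - x\,\omega_z.
\end{aligned}
\]

The rotational terms are independent of depth $Z$, so only the translational component of the flow carries depth information; under combined motion the two components must first be disentangled before either self-motion or depth can be recovered, which is the step that pure-translation treatments avoid.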