The problem of embodied visual navigation has attracted growing attention from the community due to its wide range of applications, including autonomous driving, robot vacuum cleaners, and rescue robots. Accomplishing embodied visual navigation requires a variety of intelligent skills, such as exploration, mapping, planning, and visual recognition and reasoning. Moreover, building a robot that observes, thinks, and acts like a human helps the community understand what intelligence really is. 3D simulation technology provides large-scale data that simulates real-world environments, enabling researchers to train and test embodied visual navigation models within them. The power of deep learning methods has enabled embodied navigation agents to accomplish diverse tasks. However, embodied navigation is still in its infancy, facing many challenges such as learning a robust policy from partially observed visual input, learning to explore during navigation, accomplishing natural-language-guided navigation tasks, and adapting to real-world environments. Recently, numerous works have been proposed to tackle these challenges. To suggest promising directions for future research, in this paper we present a comprehensive review of embodied navigation tasks and the recent progress of deep learning-based methods.