This paper proposes a motion-based content-adaptive algorithm that improves depth map quality and reduces artifacts in synthesized views. The proposed algorithm extracts depth cues from the motion distribution under specific camera-motion scenarios in order to align the distributions of depth and motion. In real-world scenarios, when the camera pans horizontally, the nearer an object is to the camera, the larger its apparent motion, and vice versa; depth can therefore be inferred from motion in this case. Moreover, when the camera is fixed, a depth cue can be derived from motion in a similar manner: the depth variation within a single moving object should be small, so its depth values should not change rapidly. In addition, this paper employs a bi-directional motion-compensated infinite impulse response (IIR) low-pass filter to improve the temporal consistency of depth maps. Consequently, the proposed algorithm not only aligns the depth map with motion-derived depth cues but also enhances the stability and consistency of depth maps in the spatio-temporal domain. Experimental results show that views synthesized from the enhanced depth maps outperform those synthesized from the original depth maps and from state-of-the-art depth enhancement algorithms in both objective and subjective measurements.
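To make the two ideas in the abstract concrete, the following is a minimal NumPy sketch, not the paper's implementation: it maps horizontal motion magnitude to a disparity-like depth cue (larger motion means a nearer object under horizontal panning) and applies a first-order bi-directional motion-compensated IIR low-pass filter over time. The function names (`depth_cue_from_motion`, `warp_nearest`, `bidirectional_iir`), the nearest-neighbor warping, the flow conventions, and the smoothing factor `alpha` are all assumptions for illustration; optical flow is assumed to be given.

```python
import numpy as np

def depth_cue_from_motion(flow_x, eps=1e-8):
    """Disparity-like depth cue under horizontal camera panning:
    larger apparent motion -> nearer object -> larger cue value.
    Returns values normalized to [0, 1]. (Illustrative assumption.)"""
    mag = np.abs(flow_x)
    return (mag - mag.min()) / (mag.max() - mag.min() + eps)

def warp_nearest(img, flow):
    """Backward-warp img by flow using nearest-neighbor sampling, a
    simple stand-in for motion compensation; flow[..., 0] is the x
    displacement, flow[..., 1] the y displacement."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x2 = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[y2, x2]

def bidirectional_iir(depths, flows_to_prev, flows_to_next, alpha=0.5):
    """First-order IIR low-pass along motion trajectories, run forward
    and backward in time and averaged.

    flows_to_prev[t] maps frame t's pixels to frame t-1 (forward pass);
    flows_to_next[t] maps frame t's pixels to frame t+1 (backward pass).
    alpha is a hypothetical smoothing factor, not a value from the paper.
    """
    T = len(depths)
    fwd = [depths[0].astype(float)]
    for t in range(1, T):
        pred = warp_nearest(fwd[-1], flows_to_prev[t])  # previous output, motion-compensated to frame t
        fwd.append(alpha * depths[t] + (1 - alpha) * pred)
    bwd = [None] * T
    bwd[-1] = depths[-1].astype(float)
    for t in range(T - 2, -1, -1):
        pred = warp_nearest(bwd[t + 1], flows_to_next[t])  # next output, motion-compensated to frame t
        bwd[t] = alpha * depths[t] + (1 - alpha) * pred
    return [(f + b) / 2.0 for f, b in zip(fwd, bwd)]
```

Averaging the forward and backward passes lets each filtered frame draw on both past and future depth maps along motion trajectories, which is what stabilizes depth over time without blurring across moving-object boundaries.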