We present a method for decomposing the 3D scene flow observed from a moving stereo rig into stationary scene elements and dynamic object motion. Our unsupervised learning framework jointly reasons about camera motion, optical flow, and the 3D motion of moving objects. Three cooperating networks predict stereo matching, camera motion, and residual flow, i.e., the flow component induced by object motion rather than by camera motion. Based on rigid projective geometry, the estimated stereo depth guides the camera motion estimation, and the depth and camera motion together guide the residual flow estimation. We also explicitly estimate the 3D scene flow of dynamic objects from the residual flow and scene depth. Experiments on the KITTI dataset demonstrate the effectiveness of our approach and show that our method outperforms state-of-the-art algorithms on the optical flow and visual odometry tasks.

† Part of this work was done during an internship at Microsoft Research Asia.
1 S. Lee, S. Im and I. S. Kweon are with the Robotics and Computer Vision Laboratory, KAIST, Daejeon, 34141, Republic of Korea. {snapillar, dlarl8927, iskweon77}@kaist.ac.kr
2 S. Lin is with the
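The decomposition described above (rigid flow derived from depth and camera motion, with residual flow as the remainder attributable to object motion) can be sketched as follows. This is a minimal illustration of the underlying rigid projective geometry, not the paper's implementation; the intrinsics `K`, pose `(R, t)`, and array shapes are illustrative assumptions.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Optical flow induced purely by camera motion (ego-motion),
    assuming a static scene: back-project each pixel with its stereo
    depth, apply the rigid transform (R, t), and re-project."""
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Homogeneous pixel coordinates, shape (3, N).
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    # Back-project to 3D points in the first camera's frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Transform into the second camera's frame and re-project.
    proj = K @ (R @ pts + t.reshape(3, 1))
    proj = proj[:2] / proj[2:3]
    # Rigid flow = re-projected location minus original location.
    return (proj - pix[:2]).T.reshape(H, W, 2)

def residual_flow(total_flow, depth, K, R, t):
    """Flow component due to object motion: the observed (total)
    optical flow minus the camera-induced rigid flow."""
    return total_flow - rigid_flow(depth, K, R, t)
```

For a static scene the residual flow is zero everywhere, so any non-zero residual localizes dynamic objects; the paper's networks learn depth, camera motion, and residual flow jointly under this constraint.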