Abstract-While humans can manipulate deformable objects smoothly and naturally, this remains a challenge for autonomous robots due to the complex object dynamics. The presence of rigid environment constraints and changing contact phases between the deformable object, the manipulator, and the environment makes this problem even more challenging. This paper presents a framework for deformable object manipulation that makes use of a single human demonstration of the task. The recorded trajectories are automatically segmented into a sequence of haptic control primitives involving contact with the rigid environment and vision-guided grasp primitives. The recorded motion/force trajectories serve as references for a compliant control scheme in contact situations. To cope with positioning uncertainties, a variable admittance control scheme is proposed. The proposed approach is validated in an experimental mounting task for a deformable linear object with multiple re-grasping. The task is demonstrated with a multimodal teleoperation system and transferred to a robotic platform with a pair of seven-degrees-of-freedom manipulators.