Manipulation of deformable objects often requires a robot to apply specific forces to bring the object into the desired configuration. For instance, tightening a knot requires pulling on its ends, flattening an article of clothing requires smoothing out wrinkles, and erasing a whiteboard requires applying downward pressure. We present a method for learning force-based manipulation skills from demonstrations. Our approach uses non-rigid registration to compute a warping function that transforms both the end-effector poses and the forces in each demonstration into the current scene, based on the configuration of the object. Our method then uses the variation across the demonstrations to extract a single trajectory, along with time-varying feedback gains that determine how closely to match poses versus forces. The result is a learned variable-impedance control strategy that trades off force and position errors, providing the right level of compliance to apply the necessary forces at each stage of the motion. We evaluate our approach by tying knots in rope, flattening towels, and erasing a whiteboard.
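
As an illustration only (this form is an assumption, not the exact formulation used in the paper), a variable-impedance controller of this kind can be viewed as a time-varying trade-off between pose and force errors, where $\bar{x}_t$, $\bar{f}_t$ denote the demonstration-derived reference pose and force, $x_t$, $f_t$ the measured pose and force, and $K_p(t)$, $K_f(t)$ the learned time-varying gains (all symbols here are assumed notation):
\[
    u_t = K_p(t)\,\bigl(\bar{x}_t - x_t\bigr) + K_f(t)\,\bigl(\bar{f}_t - f_t\bigr),
\]
so that a large $K_p(t)$ yields stiff, position-tracking behavior, while a large $K_f(t)$ yields compliant, force-tracking behavior, with the relative magnitudes at each time step extracted from the variation across demonstrations.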