We present a learning and control strategy that enables robots to harness physical human interventions to update their trajectory and goal during autonomous tasks. In the current state of the art, the robot typically reacts to physical interaction either by modifying a local segment of its trajectory or by recomputing the global trajectory offline, using replanning or previous demonstrations. Instead, we explore a one-shot approach: the robot updates its entire trajectory and goal in real time, without relying on multiple iterations, offline demonstrations, or replanning. Our solution is grounded in optimal control and gradient descent, and extends linear-quadratic regulator (LQR) controllers to generalize across methods that locally or globally modify the robot's underlying trajectory. In the best case, this LQR + Learning approach matches the optimal offline response to physical interactions, and, in more challenging cases, it remains robust to noisy and unexpected human corrections. We compare the proposed approach against other real-time strategies in a user study and demonstrate its efficacy in terms of both objective and subjective measures.
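To make the pipeline the abstract alludes to more concrete, the sketch below illustrates the general idea of combining a finite-horizon LQR controller with a one-shot gradient-descent update of the goal after a physical correction. This is a hypothetical illustration, not the paper's actual formulation: the double-integrator dynamics, the function names, the step size `alpha`, and the correction `u_H` are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch (not the authors' exact method): a finite-horizon
# LQR regulates the robot toward a goal state, and a single physical
# human correction u_H triggers a one-shot gradient-descent step on that
# goal, after which the entire trajectory is recomputed in real time.

def lqr_gains(A, B, Q, R, horizon):
    """Backward Riccati recursion for finite-horizon LQR feedback gains."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # ordered from t = 0 to horizon - 1

def rollout(A, B, gains, x0, goal):
    """Simulate the closed-loop trajectory regulating toward `goal`."""
    x, traj = x0.copy(), [x0.copy()]
    for K in gains:
        u = -K @ (x - goal)      # LQR feedback about the current goal
        x = A @ x + B @ u
        traj.append(x.copy())
    return np.array(traj)

# Toy double-integrator example: state = [position, velocity].
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

gains = lqr_gains(A, B, Q, R, horizon=50)
goal = np.array([1.0, 0.0])
traj = rollout(A, B, gains, np.zeros(2), goal)

# One-shot update: interpret the measured correction as (minus) the
# gradient of the human's cost with respect to the goal, and take a
# single gradient-descent step -- no iterations, demos, or replanning.
u_H = np.array([0.5, 0.0])    # assumed correction nudging the position
alpha = 0.8                   # assumed learning rate
goal = goal + alpha * u_H
traj = rollout(A, B, gains, traj[10], goal)  # updated full trajectory
```

Under these assumptions, the expensive Riccati recursion is computed once, while the response to an interaction reduces to a vector update and a cheap rollout, which is what makes a real-time, one-shot reaction plausible.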