Using appropriate landmarks in the environment is often critical to planning a robot's motion for a given task. We propose a method to automatically learn task-relevant landmarks, and we incorporate the method into an asymptotically optimal motion planner informed by a set of human-guided demonstrations. From kinesthetic demonstrations, our method learns a task model that is parameterized by the poses of virtual landmarks. The approach models a task using multivariate Gaussian distributions over a feature space that includes the robot's configurations and the relative positions of landmarks in the environment. The method automatically learns virtual landmarks as linear combinations or projections of sensed landmarks, whose poses are identified using the robot's kinematic model and vision sensors. To compute motion plans for the task in new environments, we parameterize the learned task model using the virtual landmark poses and compute paths that maximally adhere to the learned task model while avoiding obstacles. We experimentally evaluate our approach on two manipulation tasks using the Baxter robot in an environment with obstacles.

Index Terms - Motion and path planning, probability and statistical methods.
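To make the task-model idea concrete, the following is a minimal, hypothetical sketch of the core statistical step described above: fitting a multivariate Gaussian to demonstration feature vectors and scoring candidate states by their likelihood under that model. All names, dimensions, and data here are illustrative assumptions, not the paper's actual implementation (which additionally learns virtual landmark poses and plans around obstacles).

```python
import numpy as np

# Illustrative data: feature vectors concatenating a robot configuration
# (e.g., 7 joint angles) with relative landmark positions (e.g., one 3-D
# landmark), giving 10-D features. Real features would come from
# kinesthetic demonstrations; here we draw synthetic samples.
rng = np.random.default_rng(0)
demos = rng.normal(size=(20, 10))  # 20 demonstration waypoints

mu = demos.mean(axis=0)                 # learned mean feature vector
sigma = np.cov(demos, rowvar=False)     # learned covariance (10 x 10)
sigma += 1e-6 * np.eye(sigma.shape[0])  # regularize for invertibility

def log_likelihood(x, mu=mu, sigma=sigma):
    """Log-density of a candidate feature vector under the Gaussian
    task model; a planner can prefer paths that keep this high."""
    d = x - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (d @ np.linalg.solve(sigma, d)
                   + logdet + len(mu) * np.log(2 * np.pi))
```

A planner adhering to the learned task model would evaluate `log_likelihood` on the features induced by each candidate configuration; states far from the demonstrated behavior score lower than states near the learned mean.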