Teleoperated robots have been widely accepted in several fields of medical practice, enhancing human abilities and allowing remote operation. However, such technology has not yet been able to permeate areas such as primary care and physical examination. These applications rely strongly on the quality of the doctor-patient interaction and on its multimodal nature. Achieving remote physical examination therefore requires a good doctor-robot interface, but what does "good" mean? Ultimately, the goal is for the user to achieve task embodiment, making the remote task feel like its in-person counterpart. Several research groups have proposed a wide variety of interfaces, showcasing largely different methods of control and feedback, owing to the absence of design guidelines. In this work, we argue that the ideal interface for a remote task should resemble, as closely as possible, the experience provided by its in-person equivalent, while taking into account the nature of the target users. To support this claim, we analyze a range of remote interfaces and compare them with their respective in-person tasks. This analysis is not limited to the medical sector, with examples such as remote abdominal surgery, but extends to all forms of teleoperation, from nuclear waste handling to avionics.