In standard practice, the surgeon perceives multiple types of data (visual, tactile, auditory, even olfactory information), decides at each moment on the optimal strategy to pursue, and acts in accordance with this strategy. Here we call a "surgical robot" any system able to use digital data to help the surgeon work physically on the patient. Every surgical robot therefore uses sensors, its own capacity to process the information they collect, and instruments. We will use this common framework to describe current robots, propose a classification of them, and discuss their future prospects.

The sources of information used by robots in clinical practice today are, on the one hand, medical images, both pre- and intraoperative, and, on the other hand, images and measurements of position, three-dimensional (3D) shape, and force.

Historically, beginning in 1988, X-ray CT was the first imaging method used to plan robotic stereotactic neurosurgery procedures [1], which were then performed in the CT scanner room. The superior performance of magnetic resonance imaging (MRI) for brain imaging, together with the fact that an operating room is much better suited to surgery than a scanner room, led us to suggest that stereotactic neurosurgical procedures be guided by a robot able to use intraoperative MRI [2] and radiography (Fig. 1). Ultrasound subsequently proved to be a very useful source of images for guiding surgery, including procedures involving bones [3]. These image sources have since been widely used by surgical robots.

Other sensors, initially developed for computer vision applications, now play an important role in these robots. Three-dimensional position sensors or trackers (also called 3D localizers), for example, are an essential component of all the surgical navigation systems presented in the section "Action: classification of surgical robots". Their principle is to make