The challenge of creating an autonomous space robot for on-orbit satellite servicing and inspection depends greatly upon the vision understanding subsystem. Off-the-shelf vision systems do not provide the three-spatial-dimension and one-temporal-dimension modeling necessary for this complex task. Prior research has generally investigated the four-dimensional scene understanding problem at the expense of a true real-time capability. We have begun research at the Lockheed Digital Image Processing Laboratory on a space robot vision subsystem providing both real-time processing and four-dimensional object determination. This paper describes our initial approach.
Statement of Problem

Certain space robotic tasks require that the vision understanding system of the robot perceive, identify, and measure moving objects. An example of this is the Solar Maximum Mission (SMM) Main Electronics Box Orbital Replacement Unit (ORU) replacement task [1] (a task to be handled autonomously by NASA's Flight Telerobotic Servicer). These tasks require machine vision which models three-dimensional spatial structures and which perceives relative motions [2]. The most general case is to solve for twelve degrees of freedom (three translation and three rotation parameters, for both the robot and the satellite). The worst-case scenario is a satellite tumbling freely due to some mishap, with the motions oscillating at some unknown frequencies. We are concerned with a more typical scenario in which the satellite revolves about some spatial axis at a constant rate, perhaps with a small but significant precession and nutation. The robot must accurately perceive this motion in order to effectively grapple or dock with the satellite. It is clear from this that the robot vision subsystem must operate in real time, have four dimensions of perception (three spatial, one temporal), and must be closely integrated with the robot control subsystem.
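To make the "typical scenario" above concrete, the following is a minimal, illustrative sketch (not from the paper) of predicting the position of a body-fixed point on a satellite revolving at a constant rate about a fixed spatial axis, using Rodrigues' rotation formula. All function names, the spin axis, the spin rate, and the grapple-point coordinates are hypothetical; a real subsystem would also estimate translation, precession, and nutation.

```python
import numpy as np

def rotation_about_axis(axis, angle):
    """Rotation matrix for a rotation of `angle` radians about the unit
    vector `axis` (Rodrigues' rotation formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # Skew-symmetric cross-product matrix of the axis.
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def predicted_point(t, spin_axis, spin_rate, p0):
    """Position at time t of a body-fixed point p0 on a satellite
    spinning at constant `spin_rate` (rad/s) about `spin_axis`."""
    R = rotation_about_axis(spin_axis, spin_rate * t)
    return R @ np.asarray(p0, dtype=float)

# Hypothetical example: a grapple fixture 1 m from the spin axis,
# satellite spinning at 0.1 rad/s about the z-axis.  After half a
# revolution (t = pi / 0.1 s) the point is on the opposite side.
p = predicted_point(t=np.pi / 0.1, spin_axis=[0, 0, 1],
                    spin_rate=0.1, p0=[1.0, 0.0, 0.0])
```

Such a constant-rate model gives the vision subsystem a prediction to verify against each new frame, which is far cheaper than re-solving the full twelve-degree-of-freedom problem from scratch at every time step.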
Methodology

Given the requirements for perception of three spatial dimensions, most researchers have concentrated on either the use of active systems (e.g., laser radars) [3,4] or the use of stereo vision [5,6]. Both approaches have drawbacks.

Stereo is computationally complex, which precludes real-time processing given today's technology and the probable volume and power available to a space robot. Microwave radars are subject to electronic countermeasures and can often interfere with other devices. Structured light range sensors are either mechanically slow or (if a 2-D grid is superimposed) computationally complex.

The new laser radars seem ideal: they have a low signature (thus will not interfere with local devices), and provide excellent information on range and intensities. Problems to be overcome, however, include maturation of the sensors; flight-qualification; cost (currently over US$100,000); safety (for both testing and EVA astronauts); and speed (current frame rates range from one to four per second).

It is desirable to use approaches without the above drawbacks. One approach is to apply general object recognition [7] a...