Recently, problems involving space debris have become increasingly serious. According to NASA research, the volume of space debris is projected to increase even if no new satellites are launched. Therefore, debris-removal satellites must be developed immediately. A mandatory function of a debris-removal satellite is the ability to recognize and approach target debris, and visual guidance using image processing is considered an effective means of guiding debris-removal satellites toward non-cooperative targets. A small satellite is suitable for use as a debris-removal satellite; however, because of weight and size limitations, installing certain cameras in small satellites is difficult. We have therefore developed a compact camera system that can perform on-board image processing by expanding the functionality of an existing camera system, enabling it to acquire the multi-directional images required during the satellite–debris rendezvous process. Experiments were conducted using the proposed system on the H-II Transfer Vehicle (HTV) as part of an electrodynamic-tether experiment. This paper presents a brief report on the results of this HTV flight experiment.
The present paper introduces a near-future perception system called Previewed Reality. In an environment where humans and robots coexist, unexpected collisions between them must be avoided as far as possible. In many cases, the robot is controlled carefully so as not to collide with a human; however, it is almost impossible to predict human behavior perfectly in advance. On the other hand, if a user can see the motion a robot will make in advance, he or she can avoid hazardous situations and coexist safely with the robot. To allow a user to perceive future events naturally, we developed Previewed Reality, which consists of an informationally structured environment, a VR or AR display, and a dynamics simulator. Sensors embedded in the informationally structured environment measure information such as the positions of furniture, objects, humans, and robots, and this information is stored structurally in a database. Using a robot motion planner and the dynamics simulator, we can therefore forecast events that will occur in the near future and synthesize virtual images of them from the viewpoint of the user. This viewpoint, i.e., the position and orientation of the VR or AR display, is tracked either by an optical tracking system in the informationally structured environment or by SLAM on the AR display. The synthesized images are presented to the user by overlaying them on the real scene on the VR or AR display. The system thus provides human-friendly communication between a human and a robotic system: by intuitively showing the user possible hazardous situations in advance, it allows humans and robots to coexist safely.
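The forecast-then-overlay pipeline described in the abstract can be illustrated with a minimal conceptual sketch. All names below (`Pose`, `forecast_robot_poses`, `near_future_overlay`) are hypothetical stand-ins, not part of the authors' actual system: the planned robot motion is forward-simulated for a short horizon (standing in for the dynamics simulator), and each predicted pose is transformed into the tracked user viewpoint for AR/VR overlay.

```python
# Conceptual sketch of a near-future preview pipeline (hypothetical names,
# not the authors' implementation).
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    theta: float

def forecast_robot_poses(current, velocity, horizon_s, dt=0.1):
    """Forward-simulate the robot's planned motion for horizon_s seconds.

    A stand-in for the dynamics simulator: here the plan is a constant
    velocity command (vx, vy, omega).
    """
    steps = int(round(horizon_s / dt))
    x, y, theta = current.x, current.y, current.theta
    vx, vy, omega = velocity
    poses = []
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        theta += omega * dt
        poses.append(Pose(x, y, theta))
    return poses

def near_future_overlay(user_pose, robot_pose, plan, horizon_s=2.0):
    """Predict the robot's near-future poses and express them relative to
    the user's tracked viewpoint, ready to be rendered as an AR/VR overlay."""
    future = forecast_robot_poses(robot_pose, plan, horizon_s)
    # Simple translation/rotation offset into the user's frame; a real
    # system would apply the full camera transform of the HMD.
    return [Pose(p.x - user_pose.x, p.y - user_pose.y, p.theta - user_pose.theta)
            for p in future]
```

In the full system, the user pose would come from optical tracking or SLAM, and the constant-velocity plan would be replaced by trajectories from the motion planner; the structure of the loop, however, is the same.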