Controlling a team of robots is considerably more challenging than controlling an individual robot. Users want high-level commands that shield them from low-level detail yet still afford precise control. This project aims to design a method through which a team of robots can be controlled as easily and precisely as an individual robot. A simple language, expressed as a set of finger gestures, lets the user issue general motion commands to the team; the gestures are supplemented by fine controls, such as speed, supplied through tangible input gadgets. The gesture-based language has been implemented in a prototype user interface on a multi-touch screen, and a number of test applications demonstrate the validity of the design.
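The abstract does not spell out the command pipeline, but the described split between coarse gesture commands and fine tangible controls can be illustrated with a minimal sketch. Everything below is hypothetical: the gesture names, the TeamCommand fields, and the dispatch function are assumptions for illustration, not the paper's actual design.

```python
"""Minimal sketch (not from the paper) of translating a recognized finger
gesture plus a tangible speed control into one team-level motion command."""
from dataclasses import dataclass

@dataclass
class TeamCommand:
    motion: str     # high-level motion: "translate", "rotate", "spread"
    heading: float  # direction in degrees, derived from the gesture
    speed: float    # fine control supplied by a tangible input gadget

# Hypothetical mapping from recognized finger gestures to team motions.
GESTURE_TO_MOTION = {
    "one_finger_drag": "translate",
    "two_finger_twist": "rotate",
    "pinch_out": "spread",
}

def dispatch(gesture: str, heading: float, dial_speed: float) -> TeamCommand:
    """Turn one recognized gesture into a single command for the whole team."""
    motion = GESTURE_TO_MOTION.get(gesture)
    if motion is None:
        raise ValueError(f"unrecognized gesture: {gesture}")
    # The same high-level command goes to every robot; per-robot control
    # would turn it into individual motor commands.
    return TeamCommand(motion=motion, heading=heading, speed=dial_speed)

if __name__ == "__main__":
    cmd = dispatch("one_finger_drag", heading=90.0, dial_speed=0.4)
    print(cmd)  # TeamCommand(motion='translate', heading=90.0, speed=0.4)
```

The point of the sketch is the division of labor the abstract describes: the gesture selects *what* the team does, while the tangible gadget tunes *how* it does it.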
This paper introduces the Multi-modal Interface Framework (MIF), a system that allows developers to easily integrate interface devices of multiple modalities, such as voice, hand and finger gestures, and various tangible devices such as game controllers, into a single multi-modal input system. The integrated devices can then be used to control practically any computer application. The advantages offered by MIF are ease of use, flexibility, and support for collaboration. Its design has been validated by using it to integrate finger gestures, voice, a Wiimote, and an iPhone to control applications such as Google Earth and Windows Media Player.
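The abstract does not describe MIF's actual API, but the core idea, normalizing events from heterogeneous devices into one stream that drives an arbitrary application, admits a short sketch. The class and method names below are assumptions for illustration only, not MIF's real interface.

```python
"""Minimal sketch (not MIF's actual API) of fusing events from several
input modalities into one normalized stream for an application."""
from dataclasses import dataclass
from queue import Queue
from typing import Callable

@dataclass
class InputEvent:
    device: str   # e.g. "wiimote", "iphone", "voice", "touch"
    action: str   # normalized action name, e.g. "zoom_in", "pan"
    value: float  # magnitude, if any

class MultiModalHub:
    """Collects events from heterogeneous device adapters and forwards
    them to registered application handlers (hypothetical design)."""
    def __init__(self) -> None:
        self.events: Queue[InputEvent] = Queue()
        self.handlers: dict[str, Callable[[InputEvent], None]] = {}

    def emit(self, event: InputEvent) -> None:
        """Called by a device adapter whenever input arrives."""
        self.events.put(event)

    def on(self, action: str, handler: Callable[[InputEvent], None]) -> None:
        """Bind a normalized action to an application-side handler."""
        self.handlers[action] = handler

    def pump(self) -> None:
        """Drain queued events into the registered handlers."""
        while not self.events.empty():
            event = self.events.get()
            handler = self.handlers.get(event.action)
            if handler:
                handler(event)

if __name__ == "__main__":
    hub = MultiModalHub()
    hub.on("zoom_in", lambda e: print(f"{e.device} zooms by {e.value}"))
    hub.emit(InputEvent(device="wiimote", action="zoom_in", value=1.5))
    hub.pump()  # prints: wiimote zooms by 1.5
```

Because each device adapter translates its raw input into the same InputEvent shape, adding a new modality means writing one adapter rather than modifying the controlled application, which is the flexibility the abstract claims.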