Insects are capable of robust visual navigation in complex environments using efficient information extraction and processing approaches. This paper presents an implementation of insect-inspired visual navigation that uses spatial decompositions of the instantaneous optic flow to extract local proximity information. The approach is demonstrated in a corridor environment on an autonomous quadrotor micro air vehicle (MAV) where all sensing and processing, including altitude, attitude, and outer-loop control, is performed on-board. The resulting methodology has the advantages of computational speed and simplicity, and hence is consistent with the stringent size, weight, and power requirements of MAVs.
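The core idea of the abstract can be sketched in a few lines: under pure translation, the magnitude of horizontal optic flow on each side of the visual field scales with the inverse of the distance to obstacles on that side, so averaging flow over spatial regions yields local proximity cues, and balancing left against right produces the classic insect "centering" response. This is an illustrative sketch under those assumptions, not the paper's actual decomposition; the function names and the simple left/right split are hypothetical.

```python
import numpy as np

def lateral_proximity(flow_x):
    """Wide-field spatial decomposition of horizontal optic flow.

    flow_x: 1-D array of horizontal flow samples (rad/s) across azimuth,
            ordered left to right. For pure translation, |flow| on each
            side scales with 1/distance to obstacles on that side.
    Returns average flow magnitude over the (left, right) half-fields.
    """
    n = len(flow_x)
    left = np.mean(np.abs(flow_x[: n // 2]))
    right = np.mean(np.abs(flow_x[n // 2:]))
    return left, right

def centering_command(flow_x, gain=1.0):
    """Steer away from the side with larger flow (the closer wall).

    A positive output means steer left (the right wall is closer).
    """
    left, right = lateral_proximity(flow_x)
    return gain * (right - left)
```

In a corridor, a wall closer on the left produces larger left-half flow, so the command is negative (steer right), restoring a centered trajectory without any explicit range sensing.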
This paper investigates how to utilize different forms of human interaction to safely train autonomous systems in real time by learning from both human demonstrations and interventions. We implement two components of the Cycle-of-Learning for Autonomous Systems, our framework for combining multiple modalities of human interaction. The current effort employs human demonstrations to teach a desired behavior via imitation learning, then leverages intervention data to correct undesired behaviors produced by the imitation learner, safely teaching novel tasks to an autonomous agent after only minutes of training. We demonstrate this method in an autonomous perching task using a quadrotor with continuous roll, pitch, yaw, and throttle commands and imagery captured from a downward-facing camera in a high-fidelity simulated environment. Our method improves task completion performance for the same amount of human interaction when compared to learning from demonstrations alone, while also requiring on average 32% less data to achieve that performance. This provides evidence that combining multiple modes of human interaction can increase both the training speed and overall performance of policies for autonomous systems.
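The demonstrations-then-interventions pipeline described above can be sketched as: fit an imitation policy on demonstration data, then refit on the aggregate of demonstrations and intervention corrections, with interventions up-weighted since they target the learner's specific failure modes. This is a minimal illustrative sketch, not the paper's implementation; a least-squares linear policy stands in for the neural-network imitation learner, and all names and the weighting scheme are assumptions.

```python
import numpy as np

def behavior_clone(states, actions):
    """Least-squares linear policy: action = state @ W.

    Illustrative stand-in for a neural-network imitation learner
    trained on (state, action) pairs.
    """
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W

def cycle_of_learning_fit(demo_s, demo_a, interv_s, interv_a, weight=2):
    """Fit on demonstrations, then refit on demonstrations plus
    interventions, replicating intervention samples `weight` times so
    corrections to undesired behavior dominate the update."""
    _ = behavior_clone(demo_s, demo_a)  # initial imitation policy
    s = np.vstack([demo_s] + [interv_s] * int(weight))
    a = np.vstack([demo_a] + [interv_a] * int(weight))
    return behavior_clone(s, a)
```

The same aggregation pattern applies when the policy is a deep network: intervention samples are simply mixed into (or up-weighted within) the behavior-cloning batches.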
With the growing use of automated systems that engage cooperatively with humans in civilian and military contexts, the operator's level of trust in the automated system is a major factor in determining the efficacy of human-autonomy teams. Suboptimal levels of human trust in autonomy (TiA) can be detrimental to joint team performance. This mis-calibrated trust can manifest in several ways, such as distrust and complete disuse of the autonomy, or complacency, which results in an unsupervised autonomous system. This work investigates human behaviors that may reflect TiA in the context of an automated driving task, with the goal of improving team performance. Subjects performed a simulated leader-follower driving task with an automated driving assistant. The subjects could choose to engage an automated lane-keeping and active cruise control system of varying performance levels. Analysis of the experimental data was performed to identify contextual features of the simulation environment that correlated with instances of automation engagement and disengagement. Furthermore, behaviors that potentially indicate inappropriate TiA levels were identified in the subject trials using estimates of momentary risk and agent performance as functions of these contextual features. Inter-subject and intra-subject trends in automation usage and performance were also identified. This analysis indicated that for poorer-performing automation, TiA decreases with time, while higher-performing automation induces less drift toward diminishing usage and, in some cases, increases in TiA. Subject use of automation was also found to be largely influenced by course features.
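The feature-correlation analysis described above amounts to correlating each contextual feature with a binary engagement signal over time; features with strong point-biserial correlation are candidates for explaining engagement and disengagement decisions. The sketch below illustrates that computation only; the feature set, signal names, and the use of a plain Pearson/point-biserial statistic are assumptions, not the paper's actual analysis pipeline.

```python
import numpy as np

def engagement_correlations(features, engaged):
    """Correlate contextual features with automation engagement.

    features: (T, F) array of per-timestep context (e.g., road curvature,
              traffic density) -- feature names here are hypothetical.
    engaged:  (T,) 0/1 indicator of automation engagement.
    Returns one point-biserial (Pearson) correlation per feature column.
    """
    f = features - features.mean(axis=0)
    e = engaged - engaged.mean()
    num = f.T @ e
    den = np.linalg.norm(f, axis=0) * np.linalg.norm(e)
    return num / den
```

Features whose correlation magnitude is large would then be examined against momentary risk and automation performance to flag potentially mis-calibrated TiA behaviors.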