Landing is a challenging aspect of flight because, to land safely, speed must be decreased to a value close to zero at touchdown. The mechanisms by which animals achieve this remain unclear. When landing on horizontal surfaces, honey bees control their speed by holding constant the rate of front-to-back image motion (optic flow) generated by the surface as they reduce altitude. As inclination increases, however, this simple pattern of optic flow becomes increasingly complex. How do honey bees control speed when landing on surfaces that have different orientations? To answer this, we analyze the trajectories of honey bees landing on a vertical surface that produces various patterns of motion. We find that landing honey bees control their speed by holding the rate of expansion of the image constant. We then test and confirm this hypothesis rigorously by analyzing landings when the apparent rate of expansion generated by the surface is manipulated artificially. This strategy ensures that speed is reduced, gradually and automatically, as the surface is approached. We then develop a mathematical model of this strategy and show that it can effectively be used to guide smooth landings on surfaces of any orientation, including horizontal surfaces. This biological strategy for guiding landings does not require knowledge about either the distance to the surface or the speed at which it is approached. The simplicity and generality of this landing strategy suggest that it is likely to be exploited by other flying animals and makes it ideal for implementation in the guidance systems of flying robots.

Orchestrating a safe landing is one of the greatest challenges for flying animals and airborne vehicles alike.
Although some progress has been made toward unraveling the cues that flying animals might use for triggering landings (1-10), we do not yet have a good understanding of how these or other possible cues are used to control the landing process once it has been initiated. To achieve a smooth landing, it is essential to control deceleration in such a manner that the approach speed decreases to a value close to zero at the time of touchdown. An obvious way to achieve this would be to measure flight speed and distance to the target simultaneously and to use this information to reduce speed progressively, in a moment-to-moment fashion. However, this strategy is computationally demanding and unsuitable for animals such as flying insects, whose close-set, fixed-focus eyes prevent them from using stereopsis or accommodation to measure the distances to surfaces directly (11-13). When performing a grazing landing on a horizontal surface, honey bees use a technique that allows them to overcome the limitations of their relatively simple nervous systems. Instead of measuring the distance to the surface directly, they hold constant the magnitude of optic flow (the speed of image motion on the retina) that is generated by the ground beneath them (4, 5). This automatically ensures that the speed of flight is reduced as the ground is approac...
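The constant-expansion-rate strategy described above can be sketched in a few lines of simulation. The code below is illustrative, not from the paper: the parameter names (`r_set`, `d_touch`) and their values are assumptions, and a real agent would servo the visually measured expansion rate rather than compute speed from distance as done here for clarity.

```python
# Illustrative simulation of a landing controller that holds the relative
# rate of image expansion r = v / d constant (hypothetical parameters).
def simulate_landing(d0=2.0, r_set=1.5, dt=0.01, d_touch=0.01):
    """Return (times, distances, speeds) for a constant-r approach."""
    d, t = d0, 0.0
    ts, ds, vs = [], [], []
    while d > d_touch:
        v = r_set * d          # commanded speed keeps v / d = r_set
        ts.append(t); ds.append(d); vs.append(v)
        d -= v * dt            # close in on the surface
        t += dt
    return ts, ds, vs

ts, ds, vs = simulate_landing()
# Speed shrinks in proportion to distance, so the approach decelerates
# smoothly without the agent ever measuring d or v explicitly
# (a real bee would servo the *visual* expansion rate instead).
```

Because v = r·d, distance decays exponentially, d(t) = d0·e^(−rt), and speed approaches zero together with distance, matching the gradual, automatic deceleration the abstract describes.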
SUMMARY: Visual landmarks guide humans and animals, including insects, to a goal location. Insects, with their miniature brains, have evolved a simple strategy to find their nests or profitable food sources; they approach a goal by finding a close match between the current view and a memorised retinotopic representation of the landmark constellation around the goal. Recent implementations of such a matching scheme use raw panoramic images ('image matching') and show that it is well suited to work on robots and even in natural environments. However, this matching scheme works only if relevant landmarks can be detected by their contrast and texture. Therefore, we tested how honeybees perform in localising a goal if the landmarks can hardly be distinguished from the background by such cues. We recorded the honeybees' flight behaviour with high-speed cameras and compared the search behaviour with computer simulations. We show that honeybees are able to use landmarks that have the same contrast and texture as the background and suggest that the bees use relative motion cues between the landmark and the background. These cues are generated on the eyes when the bee moves in a characteristic way in the vicinity of the landmarks. This extraordinary navigation performance can be explained by a matching scheme that includes snapshots based on optic flow amplitudes ('optic flow matching'). This new matching scheme provides a robust strategy for navigation, as it depends primarily on the depth structure of the environment.

Supplementary material available online at http://jeb.biologists.org/cgi/content/full/213/17/2913/DC1

Key words: honeybee, landmark navigation, snapshot matching, vision.

[...] be unnecessary (Zeil et al., 2003; Stürzl and Zeil, 2007). Zeil et al. show that the similarities between panoramic images of natural environments decrease smoothly with spatial distance between an observer and the goal location (Zeil et al., 2003).
An animal that is sensitive to the similarity of views relative to the memorised view of the goal location could return to this location by maximising the similarities between images [modelled by simple image similarity gradient methods (Zeil et al., 2003)]. Thus, panoramic image similarities can be used for view-based homing in natural environments. Recently, the behaviour of ants and crickets in goal-finding tasks could be explained by 'image matching' (Wystrach and Beugnon, 2009; Mangan and Webb, 2009).

In our combined behavioural and modelling approach, we tested the content of the spatial memory in honeybees during complex navigational tasks. Honeybees were trained to locate an inconspicuous feeder surrounded by three cylinders, which we refer to as landmarks. By altering the spatial configuration and landmark texture and monitoring the approach flights to the feeder, we addressed the following questions: what role does the spatial configuration of the landmarks play? Does landmark texture play a role in navigational tasks? In particular, can landmarks b...
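The 'image matching' idea above can be sketched as greedy descent on an image-difference measure. Everything below is a toy stand-in, not from the cited studies: the world model, the landmark positions, and the choice of "apparent landmark size" as the view vector are all assumptions made so that view similarity falls off smoothly with distance from the goal, as Zeil et al. report for natural panoramas.

```python
# Toy sketch of view-based homing by descent on an image-difference
# measure ("image matching"). The world model and all names here are
# hypothetical: a "view" is the apparent size of three landmarks.
import math

LANDMARKS = [(2.0, 0.0), (-1.0, 2.0), (0.5, -2.0)]   # assumed positions

def view(x, y):
    """Stand-in for a panoramic image: apparent size of each landmark."""
    return [1.0 / (1.0 + math.hypot(x - lx, y - ly)) for lx, ly in LANDMARKS]

def rms_diff(a, b):
    """Root-mean-square difference between two views."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)) / len(a))

def home(start, goal, step=0.05, iters=400):
    """Greedy descent: step to whichever neighbour best matches the snapshot,
    or stay put if no neighbour improves the match."""
    snapshot = view(*goal)                  # memorised view at the goal
    x, y = start
    for _ in range(iters):
        candidates = [(x, y)] + [(x + dx, y + dy) for dx, dy in
                                 ((step, 0), (-step, 0), (0, step), (0, -step))]
        x, y = min(candidates, key=lambda p: rms_diff(view(*p), snapshot))
    return x, y

x, y = home(start=(1.0, -0.8), goal=(0.0, 0.0))
```

Because view similarity here decreases smoothly with distance, the descent ends near the goal without the agent ever representing its own position, which is the core appeal of image matching.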
Insects such as flies or bees, with their miniature brains, are able to control highly aerobatic flight maneuvers and to solve spatial vision tasks, such as avoiding collisions with obstacles, landing on objects, or even localizing a previously learnt inconspicuous goal on the basis of environmental cues. With regard to solving such spatial tasks, these insects still outperform man-made autonomous flying systems. To accomplish their extraordinary performance, flies and bees have been shown by their characteristic behavioral actions to actively shape the dynamics of the image flow on their eyes (“optic flow”). The neural processing of information about the spatial layout of the environment is greatly facilitated by segregating the rotational from the translational optic flow component through a saccadic flight and gaze strategy. This active vision strategy thus enables the nervous system to solve apparently complex spatial vision tasks in a particularly efficient and parsimonious way. The key idea of this review is that biological agents, such as flies or bees, acquire at least part of their strength as autonomous systems through active interactions with their environment and not by simply processing passively gained information about the world. These agent-environment interactions lead to adaptive behavior in surroundings of a wide range of complexity. Animals with even tiny brains, such as insects, are capable of performing extraordinarily well in their behavioral contexts by making optimal use of the closed action–perception loop. Model simulations and robotic implementations show that the smart biological mechanisms of motion computation and visually-guided flight control might be helpful to find technical solutions, for example, when designing micro air vehicles carrying a miniaturized, low-weight on-board processor.
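One way to see why segregating rotational from translational flow matters: during pure translation, the angular speed of a point at bearing θ and distance d is v·sin(θ)/d, which directly encodes nearness 1/d, whereas rotation adds a distance-independent term that masks this signal. A minimal numeric illustration (all values assumed):

```python
# Why separating rotation from translation helps: translational flow
# encodes nearness (1/d); rotational flow does not depend on distance.
# Values below are illustrative, not measurements.
import math

def translational_flow(v, theta, d):
    """Angular image speed (rad/s) of a point at bearing theta, distance d,
    for an observer translating at speed v."""
    return v * math.sin(theta) / d

def flow_with_rotation(v, theta, d, omega):
    """Same point, with an added yaw rate omega (rad/s) superimposed."""
    return translational_flow(v, theta, d) + omega

v, theta = 1.0, math.radians(90)        # 1 m/s, point abeam of the observer
near, far = 0.5, 2.0                    # metres
# Gaze stabilised (pure translation): flow ratio equals the nearness ratio.
ratio_translation = (translational_flow(v, theta, near)
                     / translational_flow(v, theta, far))
# During a turn, the distance-independent term masks the nearness signal.
ratio_rotating = (flow_with_rotation(v, theta, near, 2.0)
                  / flow_with_rotation(v, theta, far, 2.0))
```

With ω = 0 the near point's image moves four times faster than the far one, matching their 4:1 nearness ratio; with a 2 rad/s turn superimposed the ratio collapses toward 1, which is why confining rotations to brief saccades and reading out depth between them is so effective.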
SUMMARY: Blowfly flight consists of two main components, saccadic turns and intervals of mostly straight gaze direction, although, as a consequence of inertia, flight trajectories usually change direction smoothly. We investigated how flight behavior changes depending on the surroundings and how saccadic turns and intersaccadic translational movements might be controlled in arenas of different width with and without obstacles. Blowflies do not fly in straight trajectories, even when traversing straight flight arenas; rather, they fly in meandering trajectories. Flight speed and the amplitude of meanders increase with arena width. Although saccade duration is largely constant, peak angular velocity and succession into either direction are variable and depend on the visual surroundings. Saccade rate and amplitude also vary with arena layout and are correlated with the 'time-to-contact' to the arena wall. We provide evidence that both saccade and velocity control rely to a large extent on the intersaccadic optic flow generated in eye regions looking well in front of the fly, rather than in the lateral visual field, where the optic flow at least during forward flight tends to be strongest.
Nesting insects perform learning flights to establish a visual representation of the nest environment that allows them to subsequently return to the nest. It has remained unclear when insects learn what during these flights, what determines their overall structure, and, in particular, how what is learned is used to guide an insect's return. We analyzed learning flights in ground-nesting wasps (Sphecidae: Cerceris australis) using synchronized high-speed cameras to determine 3D head position and orientation. Wasps move along arcs centered on the nest entrance, whereby rapid changes in gaze assure that the nest is seen at lateral positions in the left or the right visual field. Between saccades, the wasps translate along arc segments around the nest while keeping gaze fixed. We reconstructed panoramic views along the paths of learning and homing wasps to test specific predictions about what wasps learn during their learning flights and how they use this information to guide their return. Our evidence suggests that wasps monitor changing views during learning flights and use the differences they experience relative to previously encountered views to decide when to begin a new arc. Upon encountering learned views, homing wasps move left or right, depending on the nest direction associated with that view, and in addition appear to be guided by features on the ground close to the nest. We test our predictions on how wasps use views for homing by simulating homing flights of a virtual wasp guided by views rendered in a 3D model of a natural wasp environment.