Humpback whales (Megaptera novaeangliae) exhibit a variety of foraging behaviours, but neither they nor any other baleen whale is known to produce broadband clicks in association with feeding, as many odontocetes do. We recorded the underwater behaviour of humpback whales in a northwest Atlantic feeding area using suction-cup-attached, multi-sensor acoustic tags (DTAGs). Here we describe the first recordings of click production associated with underwater lunges in baleen whales. Recordings of over 34,000 'megapclicks' from two whales indicated relatively low received levels at the tag (between 143 and 154 dB re 1 µPa peak-to-peak), most energy below 2 kHz, and interclick intervals that often decreased towards the end of click trains to form a buzz. All clicks were recorded during night-time hours. Sharp body rolls also occurred at the end of click bouts containing buzzes, suggesting feeding events. This acoustic behaviour appears to form part of a night-time feeding tactic for humpbacks and also expands the known acoustic repertoire of baleen whales in general.
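As an illustration of the interclick-interval (ICI) analysis described above, the sketch below computes ICIs from a list of click detection times and flags trains whose final intervals shorten sharply, one simple way a terminal 'buzz' could be identified. The click times, the 0.2 s gap used to split trains, and the buzz criterion are all assumptions made for illustration, not the authors' method.

```python
import numpy as np

def split_click_trains(click_times, max_gap=0.2):
    """Split a sorted array of click times (seconds) into trains
    wherever the gap between consecutive clicks exceeds max_gap."""
    click_times = np.asarray(click_times)
    breaks = np.where(np.diff(click_times) > max_gap)[0] + 1
    return np.split(click_times, breaks)

def ends_in_buzz(train, tail=5, ratio=0.5):
    """Heuristic: a train 'ends in a buzz' if the mean ICI of its last
    `tail` clicks is less than `ratio` times the mean ICI of the rest."""
    ici = np.diff(train)
    if len(ici) <= tail:
        return False
    return ici[-tail:].mean() < ratio * ici[:-tail].mean()

# Hypothetical click times: steady ~0.1 s ICIs, then a faster terminal buzz.
times = np.concatenate([np.arange(0, 2.0, 0.1),
                        2.0 + np.cumsum(np.full(8, 0.02))])
for train in split_click_trains(times):
    print(len(train), "clicks, buzz:", ends_in_buzz(train))
```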
Until recently, scientists knew little about what whales did underwater. Studying the underwater behavior of marine mammals is difficult; light doesn't travel far through water, and divers can't keep within visual range of an animal capable of sustained speeds of 5 knots. Scientists can use sonar technologies to image marine mammals underwater, but sonar records provide only occasional brief glimpses of whales.

A new collaboration between visualization experts, engineers, and marine biologists has changed this. For the first time, we can see and study the foraging behavior of humpback whales. This is important not just for purely scientific reasons. Whales are dying in ever-increasing numbers from ship collisions and entanglements with fishing gear. Understanding their behavior could lead to changes in shipping regulations or in the nature and deployment of fishing apparatus.

Our study's primary objective was furthering the science of marine mammal ethology. We also had a second objective: field testing GeoZui4D, an innovative test bed for investigating effective ways of navigating through time-varying geospatial data. The study involved two expeditions, one in 2004 and another in 2005, in which we tagged whales and recorded their movements.

Recording whale behavior

The digital recording acoustic tag (DTAG) [1], shown in Figure 1, is the key technology allowing us to see underwater whale behavior. DTAG is a recording device containing several instruments. Three-axis accelerometers provide information about the direction of the gravity vector, three-axis magnetometers measure the direction of the earth's magnetic field, a pressure sensor provides depth information, and a hydrophone continuously records sound. Except for sound, which is recorded at 64 kHz, all instruments sample at 50 Hz. DTAG merges and compresses the data records and stores them on 3 gigabytes of flash memory. DTAG is about 10 centimeters long, excluding the antenna, and is attached to a whale's back using suction cups. After some preset interval, typically 10 to 22 hours, DTAG automatically releases suction and floats to the surface. Once it's on the surface, a ship can locate it by its radio beacon, retrieve it, and download the data.

To place the tag, scientists attach it loosely to the tip of a 45-foot carbon fiber pole, mounted (as Figure 1 shows) with a gimbal on the bow of a rigid-hull inflatable boat (RHIB). Placing the tag on a whale isn't easy given that a whale usually surfaces for only a few seconds at a time. The operation requires careful timing and coordination between the boat driver and the pole handler.

Scientists analyze the DTAG data to produce a pseudotrack. This is only an approximate estimate of the animal's path because DTAG doesn't give information about speed through the water, so scientists must derive the pseudotrack using a form of dead reckoning. The cross-product of the magnetometer vector with the gravity vector (from the accelerometers) provides heading information. Together, heading and gravity pr...
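The dead-reckoned pseudotrack described above (heading from the cross product of the magnetic-field and gravity vectors, depth from the pressure sensor, and an assumed speed through the water) can be sketched roughly as follows. This is a simplified illustration rather than the authors' processing code: the constant assumed speed, the omission of pitch correction and magnetic declination, and the array names are all assumptions.

```python
import numpy as np

def pseudotrack(accel, mag, depth, speed=1.5, dt=0.02):
    """Rough dead-reckoned pseudotrack from DTAG-style sensor streams.

    accel, mag : (N, 3) arrays in the tag frame; accel approximates the
                 gravity vector when the animal is not accelerating.
    depth      : (N,) depth in metres from the pressure sensor.
    speed      : assumed constant speed through the water (m/s).
    dt         : sample interval (s); 50 Hz sampling gives 0.02 s.
    """
    g = accel / np.linalg.norm(accel, axis=1, keepdims=True)
    # Horizontal east axis: cross product of the magnetic field and gravity.
    east = np.cross(mag, g)
    east /= np.linalg.norm(east, axis=1, keepdims=True)
    north = np.cross(g, east)
    # Heading of the tag's longitudinal (x) axis relative to magnetic north.
    heading = np.arctan2(east[:, 0], north[:, 0])
    # Horizontal displacement per sample, accumulated into a track.
    x = np.cumsum(speed * dt * np.sin(heading))   # east displacement
    y = np.cumsum(speed * dt * np.cos(heading))   # north displacement
    return np.column_stack([x, y, -depth])
```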
The term eye-hand co-ordination refers to hand movements controlled with visual feedback and reinforced by hand contact with objects. A correct perspective view of a virtual environment enables normal eye-hand co-ordination skills to be applied, but is it necessary for rapid interaction with 3D objects? We report a study of rapid hand movements using an apparatus designed so that the user can touch a virtual object in the same place where he or she sees it. A Fitts tapping task is used to assess the effect of both contact with virtual objects and real-time update of the centre of perspective based on the user's actual eye position. A Polhemus tracker is used to measure the user's head position and, from this, estimate their eye position. In half of the conditions, head-tracked perspective is employed so that visual feedback is accurate, while in the other half a fixed eye position is assumed. A Phantom force feedback device makes it possible to touch the targets in selected conditions. Subjects were required to change their viewing position periodically to assess the importance of correct perspective and of touching the targets in maintaining eye-hand co-ordination. The results show that accurate perspective improves performance by an average of 9% and contact improves it by a further 12%. A more detailed analysis shows the advantages of head tracking to be greater for whole-arm movements than for movements from the elbow.
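For readers unfamiliar with the Fitts tapping paradigm used in this and the following abstract, the sketch below shows the standard Shannon formulation of the index of difficulty and a least-squares fit of mean movement time against it. The target amplitudes, widths, and timing values are invented purely for illustration and do not come from either study.

```python
import numpy as np

def index_of_difficulty(amplitude, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return np.log2(amplitude / width + 1.0)

# Hypothetical condition means: target amplitude/width (mm) and mean tap interval (s).
amplitude = np.array([80.0, 160.0, 320.0, 320.0])
width     = np.array([20.0,  20.0,  20.0,  10.0])
mt        = np.array([0.45,  0.55,  0.66,  0.78])

ID = index_of_difficulty(amplitude, width)
b, a = np.polyfit(ID, mt, 1)          # fit MT = a + b * ID
print(f"MT = {a:.3f} + {b:.3f} * ID  (throughput ~ {1.0 / b:.1f} bits/s)")
```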
It is possible to simulate a high-quality virtual environment with viewpoint-controlled perspective, high-quality stereo, and a sense of touch (obtained with the Phantom force feedback device) using existing "fish tank VR" technologies. This enables us to investigate the importance of different depth cues and touch using a higher-quality visual display than is possible with more immersive technologies. Prior work on depth perception suggests that different depth cues are important depending on the task performed. A number of studies have shown that motion parallax is more important than stereopsis in perceiving 3D patterns, but other studies suggest that stereopsis should be critically important for visually guided reaching. A Fitts' law tapping task was used to investigate the relative importance of stereo and head tracking in visually guided hand movements. It allowed us to examine the inter-tap intervals following a head movement in order to look for evidence of rapid adaptation to a misplaced head position. The results show that stereo is considerably more important than eye-coupled perspective for this task and that the benefits increase as task difficulty increases. Disabling stereo increased mean inter-tap intervals by 33%, while disabling head tracking produced only an 11% time increase. However, we failed to find the expected evidence for adaptation during the series of taps. We conclude by discussing the theoretical and practical implications of the results.
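The adaptation analysis mentioned in this abstract, inspecting inter-tap intervals immediately after a head movement, could be set up along the lines sketched below. The function and the event arrays it expects are hypothetical and assumed for illustration; they are not the authors' analysis code.

```python
import numpy as np

def post_move_slowdown(tap_times, move_times, k=3):
    """Compare the first k inter-tap intervals between taps made after each
    head movement with the overall mean interval; ratios above 1 would
    suggest a transient cost following the movement."""
    tap_times = np.asarray(tap_times)
    iti = np.diff(tap_times)
    baseline = iti.mean()
    ratios = []
    for m in move_times:
        idx = np.searchsorted(tap_times, m)   # index of first tap after the movement
        window = iti[idx:idx + k]             # intervals between subsequent taps
        if len(window):
            ratios.append(window.mean() / baseline)
    return np.array(ratios)
```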
It is difficult with most current computer interfaces to rotate a virtual object so that it matches the orientation of another virtual object. Times to perform this simple task can exceed 20 seconds, whereas the same kind of rotation can be accomplished with real objects, and with some VR interfaces, in less than two seconds. In many advanced 3D user interfaces, the hand manipulating a virtual object is not in the same place as the object being manipulated. The available evidence suggests that this is not usually a significant problem for manipulations requiring translations of virtual objects, but it is when rotations are required. We hypothesize that the problems may be caused by frame-of-reference effects: mismatches between the visual frame of reference and the haptic frame of reference. Here we report two experiments designed to study interactions between visual and haptic reference frames. In our first study we investigated the effect of rotating the frame of the controller with respect to the frame of the object being rotated. We measured a broad U-shaped relationship: subjects could tolerate quite large mismatches, but when the orientation mismatch approached 90 degrees, performance deteriorated rapidly, by up to a factor of 5. In our second experiment we manipulated both rotational and translational correspondence between visual and haptic frames of reference. We predicted that the haptic reference frame might rotate in egocentric coordinates when the input device was in a different location from the virtual object. The experimental results showed a change in the predicted direction; they are consistent with a rotation of the haptic frame of reference, although only by about half the predicted magnitude. Implications for the design of control devices are discussed.
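The "orientation mismatch" varied in the first experiment can be quantified as the angle of the single rotation that maps the haptic (controller) frame onto the visual (object) frame. A minimal sketch of that computation, using SciPy's Rotation class and made-up example orientations rather than anything from the paper, is shown below.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def mismatch_angle_deg(visual_frame: R, haptic_frame: R) -> float:
    """Angle (degrees) of the rotation taking the haptic frame of
    reference onto the visual frame of reference."""
    relative = visual_frame * haptic_frame.inv()
    return np.degrees(relative.magnitude())

# Hypothetical example: controller frame rotated 90 degrees about the
# vertical axis relative to the displayed object's frame.
visual = R.identity()
haptic = R.from_euler("z", 90, degrees=True)
print(mismatch_angle_deg(visual, haptic))   # 90.0
```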