Until recently, scientists knew little about what whales did underwater. Studying the underwater behavior of marine mammals is difficult: light doesn't travel far through water, and divers can't keep within visual range of an animal capable of sustained speeds of 5 knots. Scientists can use sonar technologies to image marine mammals underwater, but sonar records provide only occasional brief glimpses of whales.

A new collaboration between visualization experts, engineers, and marine biologists has changed this. For the first time, we can see and study the foraging behavior of humpback whales. This is important not just for purely scientific reasons. Whales are dying in ever-increasing numbers because of ship collisions and entanglements with fishing gear. Understanding their behavior could lead to changes in shipping regulations or in the nature and deployment of fishing apparatus.

Our study's primary objective was furthering the science of marine mammal ethology. We also had a second objective: field testing GeoZui4D, an innovative testbench for investigating effective ways of navigating through time-varying geospatial data. The study involved two expeditions, one in 2004 and another in 2005, in which we tagged whales and recorded their movements.

Recording whale behavior

The digital recording acoustic tag (DTAG),1 shown in Figure 1, is the key technology allowing us to see underwater whale behavior. DTAG is a recording device containing several instruments: three-axis accelerometers provide information about the gravity vector's direction, three-axis magnetometers measure the direction of the earth's magnetic field, a pressure sensor provides depth information, and a hydrophone continuously records sound. Except for sound, which is recorded at 64 kHz, all instruments record at 50 Hz. DTAG merges and compresses the data records for storage on 3 gigabytes of flash memory.
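A back-of-envelope calculation shows why compression matters at these rates. The bit depths below are assumptions for illustration (the article doesn't state them), but they make the point that the 64 kHz audio channel dominates the storage budget:

```python
# Rough DTAG storage estimate. Bit depths are ASSUMED (16-bit samples),
# not taken from the article; the point is the order of magnitude.
audio_bps = 64_000 * 16 / 8        # bytes/s of audio at 64 kHz, 16-bit
sensor_bps = 50 * 7 * 16 / 8       # 7 slow channels (3 accel + 3 mag + depth) at 50 Hz
total_bps = audio_bps + sensor_bps
flash_bytes = 3e9                  # 3 gigabytes of flash memory

hours_uncompressed = flash_bytes / total_bps / 3600
print(f"{hours_uncompressed:.1f} hours uncompressed")  # roughly 6-7 hours
```

Uncompressed, the flash would fill in well under the typical 10- to 22-hour deployment, which is why the tag compresses the merged records.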
DTAG is about 10 centimeters long, excluding the antenna, and is attached to a whale's back using suction cups. After some preset interval, typically 10 to 22 hours, DTAG automatically releases suction and floats to the surface. Once it's on the surface, a ship can locate it by its radio beacon, pick it up, and download the data.

To place the tag, scientists attach it loosely to the tip of a 45-foot carbon fiber pole, mounted (as Figure 1 shows) with a gimbal on the bow of a rigid-hulled inflatable boat (RHIB). Placing the tag on a whale isn't easy, given that a whale usually surfaces for only a few seconds at a time. The operation requires careful timing and coordination between the boat driver and the pole handler.

Scientists analyze the DTAG data to produce a pseudotrack. This is only an approximate estimate of the animal's path because DTAG doesn't give information about speed through the water; scientists must therefore derive the pseudotrack using a form of dead reckoning. The cross product of the magnetometer vector with the gravity vector (from the accelerometers) provides heading information. Together, heading and gravity pr...
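The dead-reckoning step can be sketched as follows. This is an illustration, not the authors' actual code: the constant swim speed, the tag-frame axis conventions, and the function name are all assumptions. The cross product of the magnetic and gravity vectors yields an east direction in the tag's frame, from which heading follows; integrating an assumed speed along that heading gives the pseudotrack.

```python
import numpy as np

def pseudotrack(mag, grav, depth, speed=1.5, dt=1/50):
    """Dead-reckoned pseudotrack from DTAG-style records (illustrative sketch).

    mag, grav : (N, 3) magnetometer and accelerometer (gravity) vectors
    depth     : (N,) depth from the pressure sensor, meters
    speed     : ASSUMED constant speed through the water (m/s) -- DTAG
                does not measure speed, so this must be supplied
    dt        : sample interval (s); 1/50 matches the 50 Hz sensors
    """
    mag = np.asarray(mag, float)
    grav = np.asarray(grav, float)
    # East direction in the tag's frame: magnetometer vector crossed
    # with the gravity vector, as described in the text.
    east = np.cross(mag, grav)
    east /= np.linalg.norm(east, axis=1, keepdims=True)
    # North completes the local horizontal frame.
    north = np.cross(grav, east)
    north /= np.linalg.norm(north, axis=1, keepdims=True)
    # Heading of the animal's forward (x) axis over the ground.
    heading = np.arctan2(east[:, 0], north[:, 0])
    # Pitch from gravity: how steeply the animal is diving.
    g = grav / np.linalg.norm(grav, axis=1, keepdims=True)
    pitch = np.arcsin(np.clip(-g[:, 0], -1.0, 1.0))
    # Dead reckoning: integrate assumed speed along the heading.
    step = speed * np.cos(pitch) * dt
    x = np.cumsum(step * np.sin(heading))   # east displacement (m)
    y = np.cumsum(step * np.cos(heading))   # north displacement (m)
    return np.column_stack([x, y, np.asarray(depth, float)])
```

Because the speed is assumed rather than measured, errors accumulate over a dive, which is why the result is called a pseudotrack rather than a track.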
GeoZui3D stands for Geographic Zooming User Interface. It is a visualization software system designed for interpreting multiple sources of 3D data. The system supports gridded terrain models, triangular meshes, curtain plots, and a number of other display objects. A novel center-of-workspace interaction method unifies several aspects of the interface: it creates a simple viewpoint control method, it helps link multiple views, and it is ideal for stereoscopic viewing. GeoZui3D has a number of features to support real-time input. Through a CORBA interface, external entities can influence the position and state of objects in the display. Extra windows can be attached to moving objects, allowing their position and data to be monitored. We describe the application of this system to heterogeneous data fusion, multibeam QC, and ROV/AUV monitoring.
Weather maps commonly display several variables at once, usually a subset of the following: atmospheric pressure, surface wind speed and direction, surface temperature, cloud cover, and precipitation. Most often, a single variable is mapped separately and occasionally two are shown together. But sometimes there is an attempt to show three or four variables with a result that is difficult to interpret because of visual interference between the graphical elements. As a design exercise, we set the goal of finding out if it is possible to show three variables (two 2D scalar fields and one 2D vector field) simultaneously so that values can be accurately read using keys for all variables, a reasonable level of detail is shown, and important meteorological features stand out clearly. Our solution involves employing three perceptual “channels”: a color channel, a texture channel, and a motion channel in order to perceptually separate the variables and make them independently readable. We describe a set of interactive weather displays, which enable users to view two meteorological scalar fields of various kinds and a field showing wind patterns. To evaluate the method, we implemented three alternative representations each simultaneously showing temperature, atmospheric pressure, wind speed, and direction. Both animated and static variants of our new design were compared to a conventional solution and a glyph-based solution. The evaluation tested the abilities of participants both to read values using a key and to see meteorological patterns in the data. Our new scheme was superior, especially in the representation of wind patterns using the motion channel. It also performed well enough in the representation of pressure using the texture channel to suggest it as a viable design alternative.
Frame-of-reference interaction consists of a unified set of 3D interaction techniques for exploratory navigation of large virtual spaces in non-immersive environments. It is based on a conceptual framework that considers navigation from a cognitive perspective (as a way of facilitating changes in user attention from one reference frame to another) rather than from the mechanical perspective of moving a camera between different points of interest. All of our techniques link multiple frames of reference in some meaningful way. Some techniques link multiple windows within a zooming environment, while others allow seamless changes of user focus between static objects, moving objects, and groups of moving objects. We present our techniques as they are implemented in GeoZui3D, a geographic visualization system for ocean data.
Most data visualization systems show only static data or produce "canned" movies of time-varying data. Others incorporate visualization in real-time monitoring, but these are generally customized to a particular application. The ability to interactively navigate through geospatial data is common, but interactive navigation along the time dimension is not. And yet, visualization of data from interacting dynamic systems is increasingly necessary to interpret biological processes, physical oceanographic processes, the motion of instrument platforms (such as ships, ROVs, and AUVs), and the interactions between all of these. To address this need, we have enhanced our GeoZui3D system so that it seamlessly handles multiple time-varying data sets: anything that can be represented through time-varying surfaces, curved colored lines, curved colored tubes, arrow arrays, or color-, shape-, and size-coded points. The system can be used in both real-time and replay modes, and data sets with different sampling rates can be visualized together. GeoZui3D can visualize events over a wide range of time scales, from sensor readings at the millisecond scale to glacial movements evolving over tens of thousands of years. We illustrate the system with examples from collaborative research projects, including modeled ocean and estuarine currents, tides, ship movements, changes in surface topography, AUV and ROV movements, and the movements of marine mammals.
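One common way to visualize streams with different sampling rates together (an assumption for illustration here, not necessarily GeoZui4D's internal mechanism) is to resample every stream onto a shared clock by linear interpolation:

```python
import numpy as np

def align_streams(streams, t_out):
    """Resample differently sampled streams onto a common time base.

    streams : dict mapping a name to (t, v), where t is a strictly
              increasing array of sample times (s) and v the values
    t_out   : shared output clock (array of times in s)
    """
    # Linear interpolation puts every stream on the same clock, so a
    # 50 Hz depth record and a slower temperature record can be drawn
    # or animated in lockstep.
    return {name: np.interp(t_out, t, v) for name, (t, v) in streams.items()}
```

For replay, `t_out` would advance with the playback clock; for real-time use, the same interpolation can be applied to the most recent samples of each stream.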