Numerical weather prediction ensembles are routinely used for operational weather forecasting. The members of these ensembles are individual simulations with either slightly perturbed initial conditions or different model parameterizations, or occasionally both. Multi-member ensemble output is usually large, multivariate, and challenging to interpret interactively. Forecast meteorologists are interested in understanding the uncertainties associated with numerical weather prediction, specifically the variability between ensemble members. Currently, visualization of ensemble members is mostly accomplished through spaghetti plots of a single mid-troposphere pressure-surface height contour. To explore new uncertainty visualization methods, the Weather Research and Forecasting (WRF) model was used to create a 48-hour, 18-member parameterization ensemble of the 13 March 1993 "Superstorm". A tool was designed to interactively explore the ensemble uncertainty of three important weather variables: water-vapor mixing ratio, perturbation potential temperature, and perturbation pressure. Uncertainty was quantified using the standard deviation across ensemble members, the inter-quartile range, and the width of the 95% confidence interval. Bootstrapping was employed to overcome the dependence on normality in these uncertainty metrics. A coordinated view of ribbon- and glyph-based uncertainty visualization, spaghetti plots, iso-pressure colormaps, and data-transect plots was provided to two meteorologists for expert evaluation. They found it useful for assessing uncertainty in the data, especially for finding outliers in the ensemble run and therefore avoiding the WRF parameterizations that led to those outliers. Additionally, the meteorologists could identify spatial regions where the uncertainty was significantly high, allowing identification of poorly simulated storm environments and physical interpretation of these model issues.
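To make the uncertainty metrics concrete, the sketch below shows one way the per-grid-point statistics described above could be computed with a nonparametric bootstrap over ensemble members. The paper does not publish its implementation, so the array shapes, variable names, and resample count here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci_width(ensemble, n_boot=1000, alpha=0.05):
    """Width of the bootstrapped (1 - alpha) confidence interval of the
    ensemble mean at every grid point (no normality assumption)."""
    n_members = ensemble.shape[0]
    # Resample members with replacement, then average over the member axis.
    idx = rng.integers(0, n_members, size=(n_boot, n_members))
    boot_means = ensemble[idx].mean(axis=1)  # shape: (n_boot, ny, nx)
    lo, hi = np.percentile(boot_means,
                           [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return hi - lo

# Example: an 18-member ensemble on a hypothetical 100 x 120 grid,
# a stand-in for one WRF field such as water-vapor mixing ratio.
field = rng.normal(size=(18, 100, 120))
std_dev = field.std(axis=0, ddof=1)                         # per-point member standard deviation
iqr = np.subtract(*np.percentile(field, [75, 25], axis=0))  # inter-quartile range
ci_width = bootstrap_ci_width(field)                        # bootstrapped 95% CI width
```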
Many techniques have been proposed to show uncertainty in data visualizations. However, very little is known about their effectiveness in conveying meaningful information. In this paper, we present a user study that evaluates the perception of uncertainty among four of the most commonly used techniques for visualizing uncertainty in one-dimensional and two-dimensional data. The techniques evaluated are traditional error bars, scaled glyph size, color mapping on glyphs, and color mapping of uncertainty on the data surface. The study uses generated data designed to represent both systematic and random uncertainty components. Twenty-seven users performed two types of search tasks and two types of counting tasks on 1D and 2D datasets. The search tasks involved finding data points that were least or most uncertain. The counting tasks involved counting data features or uncertainty features. A 4 × 4 full-factorial ANOVA indicated a significant interaction between the technique used and the type of task assigned for both datasets, indicating that differences in performance between the four techniques depended on the type of task performed. Several one-way ANOVAs were computed to explore the simple main effects, with a Bonferroni correction to control the family-wise error rate against alpha inflation. Although we did not find a consistent ordering of the four techniques across all tasks, the study yielded several findings that we believe are useful for uncertainty visualization design. We found a significant difference in user performance between searching for locations of high uncertainty and searching for locations of low uncertainty. Error bars consistently underperformed throughout the experiment. Scaled glyph size and color mapping of the surface performed reasonably well, though the efficiency of most of these techniques was highly dependent on the task performed. We believe these findings can inform future uncertainty visualization design. In addition, the framework developed in this user study presents a structured approach to evaluating uncertainty visualization techniques and provides a basis for future research in uncertainty visualization.
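As a rough illustration of the analysis pipeline described above, the sketch below runs a two-factor (technique × task) ANOVA followed by Bonferroni-corrected one-way ANOVAs for the simple main effects. The study's data and code are not public, so the data frame, column names, and response measure here are invented stand-ins.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
techniques = ["error_bars", "glyph_size", "glyph_color", "surface_color"]
tasks = ["search_low", "search_high", "count_data", "count_uncertainty"]

# Invented long-format results: one response time per participant per cell.
df = pd.DataFrame([
    {"technique": tech, "task": task, "rt": rng.normal(10, 2)}
    for tech in techniques
    for task in tasks
    for _ in range(27)  # 27 participants
])

# 4 x 4 full-factorial ANOVA: the C(technique):C(task) row tests whether
# differences between techniques depend on the task.
model = ols("rt ~ C(technique) * C(task)", data=df).fit()
print(anova_lm(model, typ=2))

# Simple main effects: one-way ANOVA over techniques within each task,
# with a Bonferroni-corrected alpha to control the family-wise error rate.
alpha = 0.05 / len(tasks)
for task in tasks:
    groups = [g["rt"].to_numpy()
              for _, g in df[df["task"] == task].groupby("technique")]
    f, p = stats.f_oneway(*groups)
    print(f"{task}: F={f:.2f}, p={p:.4f}, significant={p < alpha}")
```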
The United States Air Force Red Flag exercise is the premier combat flight training experience for fighter pilots. We created and evaluated a computer system for replaying Red Flag air-to-air combat training data with alternative display systems. Air combat data could be displayed either on a console display system (CDS), which mimicked existing replay displays, or in a head-mounted display (HMD). The effectiveness of replaying air combat data on these two displays was compared in a human-performance experiment with USAF fighter pilots as the subjects. Quantitative and qualitative data about display performance and preference were collected from the pilots, who used each display to review mission replays. Although there was no statistically significant difference in subject performance between the familiar CDS and the new HMD, there was a trend favoring the HMD.

Red Flag exercises held at Nellis Air Force Base, NV, provide some of the most challenging and beneficial training available for USAF fighter pilots. Several times a year, fighter squadrons from bases around the world participate in these exercises, where visiting combat-ready pilots take on the role of the Blue Force (US and allies) and fly against a resident Red Force trained to mimic enemy tactics. An instrumented range north of Nellis AFB provides detailed information about each aircraft in a Red Flag exercise. The Red Flag Measurement and Debriefing System (RFMDS) is the collective name for the aircraft tracking and recording network and its debrief system for training feedback. Recorded missions can be replayed for critique to single aircrews on a CDS or to an entire strike package's crews on large-screen displays.

As the air combat takes place over the desert range, RFMDS records the position, speed, and weapons-firing data of the participating aircraft. Up to 36 aircraft can be equipped with advanced electronics pods and take on the role of high-activity aircraft. For these aircraft, flight information is transmitted ten times per second to telemetry recording stations. Participants simulate weapons firing, with the success calculated in real time and the consequences radioed to participants. As flight information is transmitted to ground stations on the range and eventually relayed back to the main Red Flag building, it is displayed in real time to safety monitors and recorded onto magnetic tape.

After a one- to two-hour training mission, the aircraft recover back at Nellis. The aircrews gather in a large auditorium, and the mission commander conducts a debrief of all aspects of the mission. The mission commander relies on RFMDS to depict the actions taken by aircraft on both sides, and he can selectively review Blue and Red Force performance with resolution down to a single aircraft. Evaluation of the participants' actions is often extremely frank and harsh; pilots are deadly serious about their performance.