As machine learning (ML) models are increasingly employed to assist human decision makers, it becomes critical to provide these decision makers with relevant inputs that can help them decide if and how to incorporate model predictions into their decision making. For instance, communicating the uncertainty associated with model predictions could be helpful in this regard. However, there is little to no research that systematically explores if and how conveying predictive uncertainty impacts decision making. In this work, we carry out user studies to systematically assess how people respond to different types of predictive uncertainty, i.e., posterior predictive distributions with different shapes and variances, in the context of ML-assisted decision making. To the best of our knowledge, this work marks one of the first attempts at studying this question. Our results demonstrate that people are more likely to agree with a model prediction when they observe the uncertainty associated with that prediction. This finding holds regardless of the properties (shape or variance) of the predictive uncertainty (posterior predictive distribution), suggesting that uncertainty is an effective tool for persuading humans to agree with model predictions. Furthermore, we find that other factors, such as domain expertise and familiarity with ML, also play a role in determining how someone interprets and incorporates predictive uncertainty into their decision making.
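For illustration only, the minimal sketch below shows how posterior predictive distributions with different shapes and variances might be generated around a single model prediction, e.g., to render the uncertainty displays shown to study participants. The distribution families and parameters are assumptions for illustration, not the stimuli used in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical point prediction around which uncertainty is displayed.
point_prediction = 50.0

# Posterior predictive distributions differing in shape and variance
# (illustrative choices, not the paper's actual study conditions).
predictive_distributions = {
    "symmetric_low_var":  stats.norm(loc=point_prediction, scale=2.0),
    "symmetric_high_var": stats.norm(loc=point_prediction, scale=8.0),
    "right_skewed":       stats.skewnorm(a=6, loc=point_prediction, scale=5.0),
    "left_skewed":        stats.skewnorm(a=-6, loc=point_prediction, scale=5.0),
}

# Summaries an interface might show alongside the model prediction.
for name, dist in predictive_distributions.items():
    lo, hi = dist.ppf([0.05, 0.95])  # 90% predictive interval
    print(f"{name:>18}: mean={dist.mean():6.2f}  "
          f"90% interval=[{lo:6.2f}, {hi:6.2f}]")
```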
Through extensive experience developing and explaining machine learning (ML) applications for real-world domains, we have learned that ML models are only as interpretable as their features. Even simple, highly interpretable model types such as regression models can be difficult or impossible to understand if they use uninterpretable features. Different users, especially those using ML models for decision-making in their domains, may require different levels and types of feature interpretability. Furthermore, based on our experiences, we claim that the term "interpretable feature" is neither specific nor detailed enough to capture the full extent to which features impact the usefulness of ML explanations. In this paper, we motivate and discuss three key lessons: 1) more attention should be given to what we refer to as the interpretable feature space, or the state of features that are useful to domain experts taking real-world actions, 2) a formal taxonomy is needed of the feature properties that may be required by these domain experts (we propose a partial taxonomy in this paper), and 3) transforms that take data from the model-ready state to an interpretable form are just as essential as traditional ML transforms that prepare features for the model.
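As an illustration of the third lesson, the sketch below pairs a standard model-ready transform with an inverse "interpretable" transform, so explanations can be presented in the units and categories a domain expert actually uses. The feature names, scaling constants, and categories are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical model-ready transform: z-score a numeric feature and
# one-hot encode a categorical one.
AGE_MEAN, AGE_STD = 52.0, 11.0
WARD_CATEGORIES = ["ICU", "ER", "general"]

def to_model_ready(age_years, ward):
    """Map interpretable values into the feature space the model consumes."""
    age_z = (age_years - AGE_MEAN) / AGE_STD
    ward_onehot = [1.0 if ward == c else 0.0 for c in WARD_CATEGORIES]
    return np.array([age_z, *ward_onehot])

def to_interpretable(model_ready_vector):
    """Inverse transform: map model-ready features back to a form a domain
    expert can read, e.g., when presenting feature-importance explanations."""
    age_years = model_ready_vector[0] * AGE_STD + AGE_MEAN
    ward = WARD_CATEGORIES[int(np.argmax(model_ready_vector[1:]))]
    return {"age (years)": round(float(age_years), 1), "ward": ward}

x = to_model_ready(67, "ICU")
print(x)                    # model-ready feature vector
print(to_interpretable(x))  # {'age (years)': 67.0, 'ward': 'ICU'}
```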
Loss-of-thrust emergencies (e.g., induced by bird/drone strikes or fuel exhaustion) create the need for dynamic, data-driven flight trajectory planning to advise pilots or control UAVs. While total loss of thrust (gliding) trajectories to nearby airports can be pre-computed for all initial points in a 3D flight plan, dynamic aspects such as partial power, wind, and airplane surface damage must be considered for accuracy. In this paper, we propose a new Dynamic Data-Driven Avionics Software (DDDAS) approach that, during flight, updates a damaged-aircraft performance model, which is used in turn to generate plausible flight trajectories to a safe landing site. Our damaged-aircraft model is parameterized by a baseline glide ratio for a clean aircraft configuration, assuming best-glide airspeed in straight flight. The model predicts purely geometric criteria for flight trajectory generation, namely glide ratio and turn radius for different bank angles and drag configurations. Given actual aircraft flight performance data, we dynamically infer the baseline glide ratio to update the damaged-aircraft model. Our new flight trajectory generation algorithm can thus significantly improve upon prior Dubins-based trajectory generation work by incorporating these data-driven geometric criteria. We further introduce a trajectory utility function to rank trajectories for safety, in particular to prevent steep turns close to the ground and to remain as close to the airport or landing zone as possible. As a use case, we consider the Hudson River ditching of US Airways 1549 in January 2009, using a flight simulator to evaluate our trajectories and to obtain sensor data (airspeed, GPS location, barometric altitude). In this example, a baseline glide ratio of 17.25:1 enabled us to generate trajectories up to 28 seconds after the bird strike, whereas a 19:1 baseline glide ratio enabled trajectory generation up to 36 seconds after the bird strike. DDDAS can significantly improve the accuracy of generated flight trajectories, thereby enabling better decision support systems for pilots in total and partial loss-of-thrust emergency conditions.
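The geometric criteria described above can be illustrated with a small sketch: given a baseline glide ratio and airspeed, the coordinated-turn formula R = V²/(g·tan(bank)) gives a turn radius, and a simple cos(bank) penalty gives an effective glide ratio in a banked turn. The penalty model and the numerical values are assumptions for illustration, not the paper's calibrated damaged-aircraft model.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def turn_radius_m(airspeed_mps, bank_angle_deg):
    """Coordinated-turn radius: R = V^2 / (g * tan(bank))."""
    return airspeed_mps ** 2 / (G * math.tan(math.radians(bank_angle_deg)))

def effective_glide_ratio(baseline_glide_ratio, bank_angle_deg):
    """Illustrative assumption: banking reduces the glide ratio roughly by
    cos(bank); the paper infers its model from flight data instead."""
    return baseline_glide_ratio * math.cos(math.radians(bank_angle_deg))

# Example: assumed best-glide airspeed of 110 m/s; baseline glide ratios
# taken from the use case in the abstract (17.25:1 and 19:1).
for gr in (17.25, 19.0):
    for bank in (0, 20, 45):
        r = turn_radius_m(110.0, bank) if bank > 0 else float("inf")
        print(f"GR {gr:5.2f}, bank {bank:2d} deg: "
              f"effective GR {effective_glide_ratio(gr, bank):5.2f}, "
              f"turn radius {r:8.1f} m")
```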