Abstract-Applications are generally written assuming a predictable, well-behaved OS. In practice, they experience unpredictable misbehavior at the OS level and across OSes: different OSes can handle network events differently, APIs can behave differently across OSes, and an OS may be compromised or buggy. This unpredictability is challenging because its sources typically manifest only during deployment and are hard to reproduce. This paper introduces Bear, a framework for statistical analysis of application sensitivity to OS unpredictability that helps developers build more resilient software, discover challenging bugs, and identify the scenarios that most need validation. Bear analyzes a program under a set of perturbation strategies applied to commonly used system calls to discover which system calls are most sensitive for each application, which strategies are most impactful, and how they predict abnormal program outcomes. We evaluated Bear on 113 CPU- and I/O-bound programs. Our results show that null memory dereferencing and erroneous buffer operations are the most impactful strategies for predicting abnormal program execution, and that their impact increases tenfold as the workload grows (e.g., from 10 to 1,000 network requests). Generic system calls are more sensitive than specialized ones: for example, write and sendto can both send data through a socket, yet the sensitivity of write is twice that of sendto. System calls with an array parameter (e.g., read) are more sensitive to perturbations than those with a struct parameter containing a buffer (e.g., readv). Moreover, the fewer parameters a system call has, the more sensitive it is.
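To make the idea of a perturbation strategy concrete, the following is a minimal sketch (not Bear's actual implementation, whose interposition mechanism is not described in the abstract) of one strategy the abstract names, an erroneous buffer operation, applied to write. It wraps os.write in Python rather than intercepting the real system call; the function names perturb_buffer and perturbed_write are hypothetical.

```python
import os
import random

def perturb_buffer(buf):
    """Erroneous-buffer strategy (illustrative): flip one byte of the
    outgoing buffer before it reaches the kernel."""
    if not buf:
        return buf
    i = random.randrange(len(buf))
    # XOR with 0xFF guarantees the chosen byte changes.
    return buf[:i] + bytes([buf[i] ^ 0xFF]) + buf[i + 1:]

def perturbed_write(fd, buf, rate=1.0):
    """Stand-in for an instrumented write system call: with probability
    `rate`, corrupt the buffer, then delegate to the real os.write."""
    if random.random() < rate:
        buf = perturb_buffer(buf)
    return os.write(fd, buf)
```

A harness in this style would run the target program with such wrappers active, vary the perturbation rate and workload, and record whether the program's outcome (exit status, output, crashes) deviates from an unperturbed baseline.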