Artificial Intelligence (AI) systems make errors and will continue to do so. These errors are usually unexpected and can lead to dramatic consequences. The intensive development of AI and its practical applications makes the problem of errors increasingly important. Complete re-engineering of a system can introduce new errors and is not always feasible given the resources involved, so an important challenge is to develop fast methods for correcting errors without damaging existing skills. We formulate the technical requirements for 'ideal' correctors. Such correctors include binary classifiers that separate situations with a high risk of error from situations where the AI system works properly. Surprisingly, for essentially high-dimensional data such methods are possible: a simple linear Fisher discriminant can separate the situations with errors from correctly solved tasks, even for exponentially large samples. The paper presents the probabilistic basis for fast non-destructive correction of AI systems. A series of new stochastic separation theorems is proven. These theorems provide new instruments for fast non-iterative correction of errors of legacy AI systems. The new approaches become efficient in high dimensions, for the correction of high-dimensional systems in a high-dimensional world (i.e., for the processing of essentially high-dimensional data by large systems). We prove that this separability property holds for a wide class of distributions, including log-concave distributions and distributions with a special 'SMeared Absolute Continuity' (SmAC) property defined through relations between the volume and probability of sets of vanishing volume. These classes are much wider than Gaussian distributions. The requirement of independence and identical distribution of the data is significantly relaxed. The results are supported by computational analysis of empirical data sets.
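As a concrete illustration of the corrector idea, the sketch below builds a one-shot (non-iterative) Fisher linear discriminant that flags high-risk inputs on synthetic high-dimensional data. This is a minimal sketch, not the authors' implementation; the function name fisher_corrector, the coordinate shift of 0.6, and all other parameters are illustrative assumptions.

```python
import numpy as np

def fisher_corrector(X_correct, X_error, reg=1e-6):
    """One-shot Fisher linear discriminant separating error situations
    from correctly processed ones. Flag x as high-risk when w @ x > t."""
    mu_c, mu_e = X_correct.mean(axis=0), X_error.mean(axis=0)
    # Pooled within-class covariance, regularized for numerical stability.
    Sw = np.cov(X_correct, rowvar=False) + np.cov(X_error, rowvar=False)
    Sw += reg * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mu_e - mu_c)        # Fisher direction
    t = w @ ((mu_c + mu_e) / 2.0)               # midpoint decision rule
    return w, t

# Synthetic demo: in high dimension, a small set of 'error' situations
# is typically linearly separable from a very large correct sample.
rng = np.random.default_rng(0)
d = 500
X_ok = rng.standard_normal((10_000, d))         # correctly solved tasks
X_err = rng.standard_normal((50, d)) + 0.6      # situations with errors
w, t = fisher_corrector(X_ok, X_err)
print("errors flagged:", (X_err @ w > t).all())
print("false-positive rate:", ((X_ok @ w) > t).mean())
```

With these settings the two projected classes are several standard deviations apart, so the errors are flagged with essentially no false positives, consistent with the separability the abstract describes.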
An approach to the Shannon and Rényi entropy maximization problems with constraints on the mean and a law-invariant deviation measure of a random variable has been developed. The approach is based on the representation of law-invariant deviation measures through corresponding convex compact sets of nonnegative concave functions. A solution to the problem has been shown to have an alpha-concave distribution (log-concave in the case of Shannon entropy), for which, in the case of comonotone deviation measures, an explicit formula has been obtained. As an illustration, the problem has been solved for several deviation measures, including mean absolute deviation (MAD), conditional value-at-risk (CVaR) deviation, and mixed CVaR-deviation. It has also been shown that the maximum entropy principle establishes a one-to-one correspondence between the class of alpha-concave distributions and the class of comonotone deviation measures. This fact has been used to solve the inverse problem of finding the comonotone deviation measure corresponding to a given alpha-concave distribution.
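To make the optimization concrete, the formulation below states the Shannon case in standard notation (not taken from the paper) together with the well-known MAD example, whose maximizer is the log-concave Laplace density:

```latex
% Shannon entropy maximization under mean and deviation constraints,
% where D is a law-invariant deviation measure (standard notation).
\max_{f}\; H(f) = -\int_{\mathbb{R}} f(x)\,\ln f(x)\,dx
\quad\text{s.t.}\quad \mathbb{E}[X]=\mu, \qquad D(X)=d.

% For D = MAD, i.e. D(X) = \mathbb{E}|X-\mu|, the maximizer is the
% Laplace density with scale d, which is indeed log-concave:
f^{*}(x) = \frac{1}{2d}\,\exp\!\left(-\frac{|x-\mu|}{d}\right),
\qquad \mathbb{E}|X-\mu| = d.
```

For the Rényi case, H(f) is replaced by the Rényi entropy, and, as the abstract states, the maximizers fall in the broader alpha-concave class.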
Mean-deviation analysis, along with the existing theories of coherent risk measures and dual utility, is examined in the context of the theory of choice under uncertainty, which studies rational preference relations for random outcomes based on different sets of axioms such as transitivity, monotonicity, and continuity. An axiomatic foundation of the theory of coherent risk measures is obtained as a relaxation of the axioms of dual utility theory, and a further relaxation of the axioms is shown to lead to mean-deviation analysis. Paradoxes arising from the sets of axioms corresponding to these theories, and their possible resolutions, are discussed, and the application of mean-deviation analysis to optimal risk sharing and portfolio selection in the context of rational choice is considered.
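As a toy illustration of mean-deviation portfolio selection, the sketch below minimizes an empirical CVaR deviation subject to a target mean return. The sample estimator (mean minus the average of the worst alpha-fraction of outcomes, one common convention), the hypothetical asset parameters, and the grid search are illustrative assumptions, not the paper's method.

```python
import numpy as np

def cvar_deviation(returns, alpha=0.05):
    """Empirical CVaR deviation: sample mean minus the average of the
    worst alpha-fraction of outcomes (conventions vary in the literature)."""
    r = np.sort(returns)
    k = max(1, int(np.ceil(alpha * len(r))))
    return r.mean() - r[:k].mean()

# Two hypothetical assets; sweep the weight grid and keep the
# lowest-deviation mix that meets the mean-return target.
rng = np.random.default_rng(1)
a = rng.normal(0.08, 0.20, 50_000)
b = rng.normal(0.04, 0.10, 50_000)
target = 0.05
best = min(
    (w for w in np.linspace(0, 1, 101)
     if (w * a + (1 - w) * b).mean() >= target),
    key=lambda w: cvar_deviation(w * a + (1 - w) * b),
)
print(f"weight on asset a: {best:.2f}")
```

Here the return constraint binds, so the search settles near the smallest weight on the riskier asset that still meets the target, which is the typical shape of a mean-deviation trade-off.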