In this paper, we propose a novel outlier detection approach that learns accurate hyperspheres for the normal data through a top-down procedure. Conventional one-class support vector machine (SVM) based approaches aim to find a nonlinear global solution for all the normal data, with the benefit of the kernel trick. However, those methods become intractable on large-scale data and inaccurate when the data follow complex distributions. It has been observed that high-dimensional data, e.g., text or image features, are often sparse, and that linear classifiers usually perform well on them; moreover, a specific class of data seldom lies in a single subspace. We therefore propose to learn multiple discriminative hyperspheres locally according to the data distribution, and to fit them globally to form a more discriminative boundary for the normal data. Although the neural mechanisms the human brain uses for outlier detection remain unknown, the top-down strategy proposed in this paper may inspire understanding of them. The benefits of our model are twofold. First, the distribution of each local cluster is much simpler than that of the data viewed globally, which makes fitting each individual cluster easier and insensitive to the choice of kernel; in particular, we adopt low-rank constraints to find multiple clusters automatically. Second, the proposed approach trains the model with linear classifiers, which tackles the large-scale problem and substantially reduces training time and memory usage. Extensive experimental results on three image databases demonstrate that our approach outperforms several related methods.
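To make the local-hypersphere idea concrete, the following minimal Python sketch illustrates it under simplifying assumptions: k-means stands in for the paper's low-rank clustering, each cluster's hypersphere is approximated by its centroid plus a percentile radius rather than a learned discriminative boundary, and the names fit_local_hyperspheres and outlier_score are hypothetical helpers introduced only for illustration.

import numpy as np
from sklearn.cluster import KMeans

def fit_local_hyperspheres(X_normal, n_clusters=3, coverage=0.95, seed=0):
    """Partition the normal data into local clusters (k-means here, as a
    simple stand-in for the low-rank clustering), then fit one hypersphere
    per cluster: its center is the cluster centroid and its radius is the
    `coverage` quantile of that cluster's point-to-center distances."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_normal)
    spheres = []
    for k in range(n_clusters):
        pts = X_normal[km.labels_ == k]
        center = pts.mean(axis=0)
        radii = np.linalg.norm(pts - center, axis=1)
        spheres.append((center, np.quantile(radii, coverage)))
    return spheres

def outlier_score(x, spheres):
    """Signed distance to the nearest hypersphere boundary; a positive
    score means the point falls outside every local hypersphere."""
    return min(np.linalg.norm(x - c) - r for c, r in spheres)

# Toy usage: two well-separated normal clusters; a point midway between
# them should fall outside both local hyperspheres and score positive.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(8, 1, (200, 2))])
spheres = fit_local_hyperspheres(X, n_clusters=2)
print(outlier_score(np.array([4.0, 4.0]), spheres) > 0)

Note that a single global hypersphere fitted to the same toy data would have to enclose the gap between the two clusters, so the midpoint would wrongly be scored as normal; this is the intuition behind fitting multiple local hyperspheres.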