The merit of ensemble learning lies in obtaining different outputs from individual models for a single input, i.e., in the diversity of the base models. High diversity can be achieved when each model is specialized to a different subset of the whole dataset. Moreover, when each model explicitly knows which subset it is specialized to, further opportunities arise to improve diversity. In this paper, we propose an advanced ensemble method, called Auxiliary class based Multiple Choice Learning (AMCL), to fully specialize each model under the framework of multiple choice learning (MCL). The advancement of AMCL originates from three novel techniques, which control the framework from different directions: 1) the concept of an auxiliary class to provide more distinct information through the labels, 2) a strategy, named memory-based assignment, to determine the association between inputs and models, and 3) a feature fusion module to obtain generalized features. To demonstrate the performance of our method relative to all variants of MCL, we conduct extensive experiments on image classification and segmentation tasks. Overall, AMCL outperforms all the others on most of the public datasets, with various networks trained as members of the ensembles.

Inheriting the concept of the oracle loss, [6] applied MCL to ensembles of deep neural networks with a trainable algorithm, named stochastic multiple choice learning (sMCL). Since each specialized model takes the loss only from a certain set of examples, overconfident predictions on unassigned examples naturally occur. To address this overconfidence issue, a series of progressive methods has been introduced. [7] added a regularization term to the oracle loss in order to coerce the predictive
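For context, the oracle loss that sMCL minimizes is commonly written as below; the symbols used here ($M$ ensemble members $f_m$, a per-example loss $\ell$, and a dataset $\{(x_i, y_i)\}_{i=1}^{N}$) are our own shorthand rather than notation taken from this paper:

$$\mathcal{L}_{\text{oracle}} = \sum_{i=1}^{N} \min_{m \in \{1, \dots, M\}} \ell\big(f_m(x_i), y_i\big)$$

Because the $\min$ routes each example's gradient to only one member, the remaining members receive no training signal for that example, which is the source of the overconfidence described above.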