This paper considers a "one among many" detection problem, in which one must discriminate between pure noise and one among a finite set of alternatives that are known up to an amplitude factor. Two issues linked to high dimensionality arise in this framework. First, the computational complexity associated with the Generalized Likelihood Ratio (GLR) under the sparsity-one constraint grows linearly with the number of alternatives, which can be an obstacle when multiple data sets have to be tested. Second, standard dimension-reduction procedures based on dictionary learning may suffer severe power losses for some alternatives, which suggests a worst-case scenario strategy. In the case where the learned dictionary has a single column, we show that the exact solution of the resulting detection problem, which can be formulated as a minimax problem, can be obtained by Quadratic Programming. Because it allows a better sampling of the diversity of the alternatives, a dictionary with several columns is expected to improve detection performance over the single-column case. The worst-case analysis of this setting, which is more involved, leads us to propose two "minimax learning algorithms". Numerical results show that these algorithms indeed improve performance over the single-column case and are in fact comparable to the GLR using the full set of alternatives, while being computationally simpler.
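To fix ideas, a minimal sketch of one plausible form of the single-column minimax problem and its Quadratic Programming reformulation follows; the symbols $s_i$, $d$, and $N$ are introduced here for illustration and are assumptions, not necessarily the notation used in the body of the paper. Writing $s_1,\dots,s_N$ for the normalized alternatives, the worst-case strategy may be read as seeking the unit-norm dictionary column $d$ maximizing the smallest correlation with the alternatives:
\[
\hat{d} \;=\; \arg\max_{\|d\|_2 = 1}\; \min_{1 \le i \le N}\; s_i^\top d .
\]
Assuming the optimal value is positive, this minimax problem is equivalent to the Quadratic Program
\[
x^\star \;=\; \arg\min_{x}\; \tfrac{1}{2}\|x\|_2^2
\quad \text{s.t.} \quad s_i^\top x \ge 1, \;\; i = 1,\dots,N,
\]
followed by the normalization $\hat{d} = x^\star / \|x^\star\|_2$: any feasible $x$ satisfies $\min_i s_i^\top (x/\|x\|_2) \ge 1/\|x\|_2$, so minimizing $\|x\|_2$ maximizes the worst-case correlation.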