This paper focuses on the problem of identity-based face retrieval [2], a problem that depends heavily on the quality of the similarity function used to compare images. Instead of using standard or hand-crafted similarity functions, one of the most popular ways to address this problem is to learn adapted metrics from sets of similar and dissimilar example pairs. This is generally equivalent to projecting the face signatures into an adapted (possibly low-dimensional) space in which similarity can be measured by the Euclidean distance. For large-scale applications, the dimension of this subspace should be as small as possible to limit storage requirements, and the projections should be fast to compute. Since the Euclidean distance fulfills the second requirement, producing face representations adapted to the Euclidean metric is appealing. However, such representations usually have very large sizes.

Several methods have been proposed to learn projections capable of reducing the size of the signatures while preserving their performance. Most of these approaches are based on metric learning algorithms [1] used to learn Mahalanobis-like distances:

$$d_W(x_i, x_j) = (x_i - x_j)^\top W (x_i - x_j),$$

with $W$ a positive semi-definite matrix. To guarantee this property and to reduce the size of the signatures, these methods use the factorization $W = LL^\top$ with $L \in \mathcal{M}_{D \times d}$ as projection matrix: $y_i = L^\top x_i$. It is important to control the rank of $W$ so that the dimension of the reduced signature is as small as possible.

In this paper, we focus on a particular metric learning algorithm called MLBoost [3], a supervised method based on boosting. MLBoost learns the metric incrementally by aggregating several weak metrics:

$$W = \sum_t \alpha^{(t)} z^{(t)} {z^{(t)}}^\top,$$

with $\alpha^{(t)}$ the weights of the weak metrics and $z^{(t)}$ the projector vectors of these weak metrics.

Here, we propose two improvements over MLBoost [3]: (i) a new method for computing weak metrics at a lower computational cost;
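The two identities above can be checked numerically. The following sketch (illustrative dimensions and random data, not the paper's actual signatures or learned metrics) verifies that the Mahalanobis-like distance with $W = LL^\top$ equals a squared Euclidean distance between projected signatures, and that a weighted sum of rank-one weak metrics, as in MLBoost, yields the same kind of distance:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 16, 4                   # original and reduced signature dimensions (illustrative)
L = rng.normal(size=(D, d))    # projection matrix L in M_{D x d}
W = L @ L.T                    # positive semi-definite by construction

x_i = rng.normal(size=D)       # two random "face signatures"
x_j = rng.normal(size=D)
diff = x_i - x_j

# Mahalanobis-like distance computed directly with W
d_W = diff @ W @ diff

# Same value as the squared Euclidean distance in the projected space y = L^T x
y_i, y_j = L.T @ x_i, L.T @ x_j
d_euclidean = np.sum((y_i - y_j) ** 2)
assert np.isclose(d_W, d_euclidean)

# Boosted form: W as a weighted sum of rank-one weak metrics
# W = sum_t alpha^(t) z^(t) z^(t)^T, with non-negative weights alpha^(t)
alphas = rng.uniform(size=d)          # weak-metric weights (illustrative)
Z = rng.normal(size=(d, D))           # one projector vector z^(t) per weak metric
W_boost = sum(a * np.outer(z, z) for a, z in zip(alphas, Z))

# The aggregated distance decomposes as sum_t alpha^(t) * (z^(t) . diff)^2
d_boost = diff @ W_boost @ diff
assert np.isclose(d_boost, np.sum(alphas * (Z @ diff) ** 2))
```

The decomposition in the last assertion is what makes the boosted metric cheap to evaluate: each weak metric contributes a single dot product with the difference vector.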