The existence of a Dimension Reduction (DR) subspace is a common assumption in regression analysis when dealing with high-dimensional predictors. The estimation of such a DR subspace has received considerable attention in the past few years, the most popular method being undoubtedly Sliced Inverse Regression. We propose in this paper a new estimation procedure for the DR subspace by assuming that the joint distribution of the predictor and the response variables is a finite mixture of distributions. The new method is compared through a simulation study to some classical methods.

Regression analysis concerns inference on the conditional distribution of a response variable Y ∈ R^q given the value X = x of a vector of predictors X ∈ R^p. For instance, a classical problem is the nonparametric estimation of the conditional mean function E(Y|X), for which a popular estimator, when the dimension p is not too large, was proposed by Nadaraya [22] and Watson [30].

When the dimension p becomes large, the so-called "curse of dimensionality" arises and inference on the conditional distribution of Y given X = x becomes difficult. A common procedure when dealing with a high-dimensional predictor X is to determine a subspace S ⊂ R^p, with dim(S) = d ≤ p, that carries all the information that X has about Y. Such a subspace S is called a Dimension Reduction (DR) subspace. It is spanned by the columns of a full-rank matrix Γ ∈ R^{p×d} such that X and Y are conditionally independent given Γ^t X. A DR subspace always exists, since the trivial choice Γ = I_p is possible, but this choice does not produce any reduction of dimension. Under mild conditions (see Cook [6]), the intersection of two DR subspaces is still a DR subspace, and the intersection of all DR subspaces is called the central subspace. As seen in Li [19], a regression model admitting a central subspace is given by Y = g(Γ^t X, ε), where ε is a random variable independent of X and g : R^{d+1} → R^q is an arbitrary function.

One of the earliest methods to estimate the central subspace (i.e. a matrix Γ) is the Sliced Inverse Regression (SIR) procedure introduced by Li [19]. This method is based on the estimation of Var(E(X|Y)) using a set {S_h, h = 1, ..., H} of non-overlapping slices that cover the range of Y. The asymptotic properties of SIR and related methods are derived, for instance, by Saracco [24, 25]. The SIR central subspace estimator is motivated in Li [19] by a geometric property of the covariance matrix Var(E(X|Y)). Another way to understand the SIR method is proposed in Cook [8], where Γ is interpreted as a parameter of an inverse regression model. This model is equivalent to assuming that, for all h = 1, ..., H, the conditional distribution of X given Y ∈ S_h is a multivariate Gaussian distribution. Considering n independent replications of the random vector (X, Y), Szretter and Yohai [28] show that the maximum likelihood estimator of Γ corresponds to the SIR estimator of the central subspace. This inverse regression model is also used by to propose a ...
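To make the slicing scheme described above concrete, the following minimal Python sketch (using only numpy) generates data from a single-index model Y = g(Γ^t X, ε) and recovers the one-dimensional central subspace from the top eigenvector of a slice-based estimate of Var(E(X|Y)). It is an illustration under simplifying assumptions (Gaussian X, scalar Y, d = 1, equal-count slices); the function name sir and all parameter choices are hypothetical, and this is basic SIR, not the mixture-based estimator proposed in this paper.

import numpy as np

def sir(X, y, H=10, d=1):
    # Basic Sliced Inverse Regression: estimate a basis of the
    # d-dimensional central subspace from n replications of (X, Y).
    n, p = X.shape
    # Standardize the predictor: Z = Sigma^{-1/2} (X - mean(X)).
    mu = X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ Sigma_inv_sqrt
    # H non-overlapping slices covering the range of Y
    # (here, equal-count slices based on the order statistics of Y).
    slices = np.array_split(np.argsort(y), H)
    # M estimates Var(E(Z|Y)) by a weighted sum of outer products
    # of the within-slice means, i.e. E(Z | Y in S_h).
    M = np.zeros((p, p))
    for idx in slices:
        m_h = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m_h, m_h)
    # The d leading eigenvectors of M span the estimated central
    # subspace; map them back to the original scale of X.
    _, v = np.linalg.eigh(M)
    B = Sigma_inv_sqrt @ v[:, ::-1][:, :d]
    return B / np.linalg.norm(B, axis=0)

# Toy model Y = g(Gamma^t X, eps) with p = 10 and d = 1.
rng = np.random.default_rng(0)
n, p = 2000, 10
gamma = np.zeros(p)
gamma[0] = 1.0                       # true direction: first axis
X = rng.standard_normal((n, p))
y = (X @ gamma) ** 3 + 0.1 * rng.standard_normal(n)
print(sir(X, y).ravel().round(2))    # close to +/- e_1

Note that the direction Γ is identified only up to sign and scale, hence the final normalization, and that SIR is known to be fairly insensitive to the number of slices H in practice.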