Consider a channel Y = X + N, where X is an n-dimensional random vector and N is a multivariate Gaussian vector with a full-rank covariance matrix K_N. The object under consideration in this paper is the conditional mean of X given Y = y, that is, y → E[X|Y = y]. Several identities in the literature connect E[X|Y = y] to other quantities such as the conditional variance, score functions, and higher-order conditional moments. The objective of this paper is to provide a unifying view of these identities.

In the first part of the paper, a general derivative identity for the conditional mean estimator is derived. Specifically, for the Markov chain U ↔ X ↔ Y, it is shown that the Jacobian matrix of E[U|Y = y] is given by K_N^{-1} Cov(X, U|Y = y), where Cov(X, U|Y = y) is the conditional covariance.

In the second part of the paper, via various choices of the random vector U, the new identity is used to recover and generalize many of the known identities and to derive some new ones. First, a simple proof of the Hatsell–Nolte identity for the conditional variance is given. Second, a simple proof of the recursive identity due to Jaffer is provided. The Jaffer identity is then further explored, and several equivalent statements are derived, such as an identity for the higher-order conditional expectations (i.e., E[X^k|Y]) in terms of the derivatives of the conditional expectation. Third, a new fundamental connection between the conditional cumulants and the conditional expectation is demonstrated. In particular, in the univariate case, it is shown that the k-th derivative of the conditional expectation is proportional to the (k+1)-th conditional cumulant. A similar expression is derived in the multivariate case.

The third part of the paper considers various applications of the derived identities (mostly in the scalar case).
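Since the applications are mostly in the scalar case, the scalar form of the derivative identity is worth recording: for Y = X + N with N ~ N(0, σ_n²), taking U = X reduces the Jacobian identity to d/dy E[X|Y = y] = Var(X|Y = y)/σ_n², which is the Hatsell–Nolte identity. A minimal closed-form sanity check in the jointly Gaussian case (the variance values below are illustrative choices, not from the paper):

```python
# Closed-form check of the scalar Hatsell-Nolte identity
#   d/dy E[X|Y=y] = Var(X|Y=y) / sigma_n2
# in the jointly Gaussian case, where both sides are available explicitly.
sigma_x2, sigma_n2 = 2.0, 0.5  # illustrative prior and noise variances

# For X ~ N(0, sigma_x2) and Y = X + N with N ~ N(0, sigma_n2):
#   E[X|Y=y]   = (sigma_x2 / (sigma_x2 + sigma_n2)) * y      (linear in y)
#   Var(X|Y=y) = sigma_x2 * sigma_n2 / (sigma_x2 + sigma_n2) (constant in y)
slope = sigma_x2 / (sigma_x2 + sigma_n2)                # d/dy E[X|Y=y]
cond_var = sigma_x2 * sigma_n2 / (sigma_x2 + sigma_n2)  # Var(X|Y=y)

assert abs(slope - cond_var / sigma_n2) < 1e-12  # both sides equal 0.8
```

In this linear-Gaussian case both sides are constant in y; for non-Gaussian priors the identity holds pointwise in y, with a y-dependent conditional variance.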
In a first application, using the new identity for the higher-order derivatives of the conditional expectation, a power series representation of the conditional expectation is derived. The power series representation, together with the Lagrange inversion theorem, is then used to find an expression for the compositional inverse of y → E[X|Y = y]. In a second application, the conditional expectation is viewed as a random variable, and the probability distributions of E[X|Y] and of the estimation error X − E[X|Y] are derived. In a third application, the new identities are used to show that the higher-order conditional expectations and the conditional cumulants depend on the joint distribution of (X, Y) only through the marginal distribution of Y. This observation is then used to construct consistent estimators (known as empirical Bayes estimators) of the higher-order conditional expectations and the conditional cumulants from an independent and identically distributed sequence Y_1, ..., Y_n.
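The empirical Bayes idea can be sketched for the first conditional moment in the scalar Gaussian-noise case via Tweedie's formula, E[X|Y = y] = y + σ_n² (log p_Y)'(y), which depends on the joint law of (X, Y) only through p_Y and so can be estimated from Y-samples alone. The following is a minimal sketch, not the paper's construction: it plugs a Gaussian kernel density estimate of p_Y into the score, and the bandwidth h and sample size are illustrative choices.

```python
import numpy as np

# Empirical-Bayes sketch: estimate E[X|Y=t] from Y-samples only, via
# Tweedie's formula E[X|Y=y] = y + sigma_n^2 * (log p_Y)'(y) with a
# kernel density estimate of p_Y. Bandwidth/sample size are illustrative.
rng = np.random.default_rng(0)
sigma_n = 1.0
x = rng.normal(0.0, 1.0, 200_000)          # prior used only to generate data
y = x + rng.normal(0.0, sigma_n, x.size)   # observed channel outputs

def kde_score(t, samples, h=0.2):
    """Estimate the score (log p_Y)'(t) with a Gaussian kernel density estimate."""
    d = t[:, None] - samples[None, :]
    w = np.exp(-0.5 * (d / h) ** 2)              # unnormalized Gaussian kernels
    return (w * (-d / h**2)).sum(axis=1) / w.sum(axis=1)

t = np.array([-1.0, 0.0, 1.5])
est = t + sigma_n**2 * kde_score(t, y)           # estimate of E[X|Y=t]
# For this Gaussian setup the exact conditional mean is t/2, so `est`
# should be close to [-0.5, 0.0, 0.75] up to KDE bias and sampling noise.
```

The same recipe extends to higher-order conditional moments and cumulants once those are expressed through derivatives of log p_Y, which is the form in which the paper's consistency results are stated.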