Abstract
Recently, mutual interdependence analysis (MIA) has been successfully used to extract representations, or "mutual features", accounting for the samples in a class. For example, a mutual feature is a face signature under varying illumination conditions or a speaker signature under varying channel conditions. A mutual feature is a linear regression that is equally correlated with all samples of the input class. Previous work discussed two equivalent definitions of this problem and a generalization of its solution called generalized MIA (GMIA). Moreover, it showed how mutual features can be computed and employed. This paper uses a parametrized version, GMIA(λ), to pursue a deeper understanding of what GMIA features really represent. It defines a generative signal model that is used to interpret GMIA(λ) and visualize its differences from MIA, principal component analysis, and independent component analysis. Finally, we analyze the effect of λ on the feature extraction performance of GMIA(λ) in two standard pattern recognition problems: illumination-independent face recognition and text-independent speaker verification.
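The abstract characterizes a mutual feature as a regression vector that is equally correlated with every sample of a class, with GMIA(λ) introducing a regularization parameter λ. The following minimal sketch is only an illustration of that idea under assumptions: the function name extract_mutual_feature and the ridge-style closed form (X Xᵀ + λI)⁻¹ X 1 are stand-ins chosen for illustration, not the paper's exact GMIA(λ) estimator.

```python
import numpy as np


def extract_mutual_feature(X: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Illustrative sketch: find a direction w that is (approximately)
    equally correlated with every column of X.

    X   : d x n matrix, one sample per column (e.g. face images of one
          person under different illumination conditions).
    lam : ridge-style regularization weight, loosely playing the role
          of the lambda parameter in GMIA(lambda); lam > 0 is needed
          when d > n so the system below is well conditioned.

    Solves  min_w ||X.T @ w - 1||^2 + lam * ||w||^2, i.e. a least-squares
    fit that pushes w's correlation with each sample toward a common
    value. This is an assumed stand-in, not the paper's GMIA(lambda).
    """
    d, n = X.shape
    ones = np.ones(n)
    # Closed-form ridge solution: w = (X X^T + lam I)^{-1} X 1
    A = X @ X.T + lam * np.eye(d)
    w = np.linalg.solve(A, X @ ones)
    # Normalize to unit length so only the direction matters.
    return w / np.linalg.norm(w)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy class: a shared "signature" plus per-sample variation.
    signature = rng.standard_normal(50)
    X = signature[:, None] + 0.3 * rng.standard_normal((50, 20))
    w = extract_mutual_feature(X, lam=1.0)
    # The correlations of w with each sample should be nearly equal,
    # so their relative spread should be small.
    corrs = X.T @ w
    print(np.std(corrs) / np.mean(corrs))
```

The demo only checks the defining property stated in the abstract, namely that the extracted direction correlates almost equally with every sample of the class; varying lam gives a rough sense of the trade-off the paper studies for GMIA(λ).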