Information fusion is a challenging problem in biometrics, where data comes from multiple biometric modalities or from multiple feature spaces extracted from the same modality. Learning from heterogeneous data sources is, in general, termed multi-view learning, where a view is an encompassing term referring to a set of observations with distinct statistical properties. Most existing approaches to learning from multiple views assume that the views are either independent or fully dependent. In real scenarios, however, these assumptions are almost never truly satisfied. In this work, we relax these assumptions. We propose a feature fusion method called Discriminative Factorized Subspaces (DFS) that learns a factorized subspace consisting of a single shared subspace, which captures the information common to all views, and view-specific subspaces, which capture the information specific to each view. DFS learns these subspaces jointly by posing the optimization problem as a constrained Rayleigh quotient formulation, whose solution is obtained efficiently via generalized eigenvalue decomposition. Our method does not require large amounts of training data, and we show that it is well suited to domains characterized by limited training data and high intra-class variability. As an application, we tackle the challenging problem of touchscreen biometrics, which is based on the study of users' interactions with their touchscreens. Through extensive experimentation and thorough evaluation, we demonstrate that DFS learns a better discriminatory boundary and outperforms state-of-the-art methods for touchscreen biometric verification.

INDEX TERMS Touchscreen biometrics, multi-modal biometrics, multi-modal data, feature fusion
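As a minimal illustrative sketch (not the authors' actual DFS formulation), the computational core named in the abstract can be shown in a few lines: maximizing a Rayleigh quotient w^T A w / w^T B w reduces to a generalized eigenvalue problem A w = lambda B w. The matrices A, B and dimensions below are hypothetical placeholders; in DFS they would be built from the multi-view training features.

```python
# Sketch: solving a Rayleigh-quotient maximization via generalized eigendecomposition.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Hypothetical placeholder matrices (in DFS these would come from the data,
# e.g., discriminative scatter-like matrices over the shared/view-specific parts).
d = 10
X = rng.standard_normal((d, d))
A = X @ X.T                      # symmetric "numerator" matrix
Y = rng.standard_normal((d, d))
B = Y @ Y.T + d * np.eye(d)      # symmetric positive-definite "denominator" matrix

# Generalized eigenproblem A w = lambda B w; eigh returns eigenvalues in ascending order.
eigvals, eigvecs = eigh(A, B)

# Directions maximizing the quotient correspond to the largest eigenvalues.
k = 3
W = eigvecs[:, ::-1][:, :k]      # top-k generalized eigenvectors as a subspace basis

# Check: the Rayleigh quotient of the leading direction equals the largest eigenvalue.
w = W[:, 0]
print(np.isclose((w @ A @ w) / (w @ B @ w), eigvals[-1]))
```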