Combining multiple human trait features is a proven and effective strategy for biometric-based personal identification. In this study, the authors investigate the fusion of two biometric modalities, i.e. ear and palmprint, at the feature level. Ear and palmprint patterns are characterised by a rich and stable structure, which provides a large amount of information for discriminating individuals. Local texture descriptors, namely local binary patterns, the Weber local descriptor, and binarised statistical image features, were used to extract discriminant features for robust human identification. The authors' extensive experimental analysis on the benchmark IIT Delhi-2 ear and IIT Delhi palmprint databases confirmed that the proposed multimodal biometric system increases recognition rates compared with those produced by single-modal biometrics, attaining a recognition rate of 100%.
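
To illustrate the kind of pipeline the abstract describes, the sketch below shows feature-level fusion of ear and palmprint images using one of the named descriptors, local binary patterns. The LBP parameters, the fusion by simple histogram concatenation, and the chi-square nearest-neighbour matcher are illustrative assumptions, not the authors' exact method or parameters.

```python
# Minimal sketch: feature-level fusion of ear and palmprint LBP histograms.
# All parameters and the matcher are assumptions for illustration only.
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1       # assumed LBP neighbourhood: 8 sampling points, radius 1
N_BINS = P + 2    # 'uniform' LBP produces P + 2 distinct codes


def lbp_histogram(image: np.ndarray) -> np.ndarray:
    """Extract a normalised uniform-LBP histogram from a grayscale image."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist


def fused_feature(ear_img: np.ndarray, palm_img: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate the two modality histograms."""
    return np.concatenate([lbp_histogram(ear_img), lbp_histogram(palm_img)])


def identify(probe: np.ndarray, gallery: dict) -> str:
    """Return the gallery identity with the smallest chi-square distance."""
    def chi2(a, b):
        return 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-10))
    return min(gallery, key=lambda subject: chi2(probe, gallery[subject]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random stand-ins for real ear and palmprint images (e.g. IIT Delhi data).
    gallery = {
        f"subject_{i}": fused_feature(rng.integers(0, 256, (64, 64)),
                                      rng.integers(0, 256, (64, 64)))
        for i in range(3)
    }
    probe = fused_feature(rng.integers(0, 256, (64, 64)),
                          rng.integers(0, 256, (64, 64)))
    print("Closest match:", identify(probe, gallery))
```

In the same spirit, the Weber local descriptor or binarised statistical image features could replace `lbp_histogram` as the per-modality extractor; the concatenation step that realises feature-level fusion would remain unchanged.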