Brain-inspired facial expression recognition (FER) is a promising but challenging research direction with significant potential for applications in human-computer interaction and intelligent image analysis. However, current mainstream FER models, which are single-pathway and based on global features, still fall well short of the human brain's performance on FER. To narrow this gap, this paper proposes a novel multi-net fused FER framework that incorporates specific FER mechanisms of the human brain into deep learning. Unlike most FER model designers, we shift our main focus from obtaining high-quality global features through data augmentation and network optimization to drawing on the cognitive processes involved in recognizing different facial expression classes, classification preferences, and other factors that contribute to the human brain's FER mechanisms. The framework comprises data preprocessing, activeness estimation, multi-net classification, and fusion recognition. We validate and test the proposed framework on general datasets, and the results show that it performs well on FER tasks. The work in this paper provides exploratory ideas and a basis for future research in this direction.

INDEX TERMS Facial expression recognition, brain-inspired, multi-pathway, local activeness.