Many deep learning (DL) models have been proposed for IMU (inertial measurement unit)-based HAR (human activity recognition). However, combinations of manually designed time series features (TSFs) and traditional machine learning (ML) often remain competitive, and combinations of TSFs and DL not infrequently outperform DL-only approaches. These facts suggest that TSFs have the potential to outperform features generated automatically by deep neural networks (DNNs). TSFs have a weakness, however: they perform well only when appropriate 3D bases are selected. Fortunately, DL's strengths include capturing the characteristics of input data and adaptively deriving parameters automatically. Thus, as a new DNN model for IMU-based HAR, this paper proposes rTsfNet, a DNN model with Multi-head 3D Rotation and Time Series Feature Extraction. rTsfNet automatically selects multiple 3D bases from which features should be derived by extracting 3D rotation parameters within the DNN; TSFs are then derived in each basis, and HAR results are obtained using multilayer perceptrons (MLPs). With this combination, rTsfNet achieved higher performance than existing models under well-managed benchmark conditions on multiple datasets targeting different activities: UCI HAR, PAMAP2, Daphnet, and OPPORTUNITY.
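The rotate-then-extract idea can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: fixed Euler angles stand in for the rotation parameters that rTsfNet learns inside the DNN, and the three statistics below are stand-ins for whatever TSF set the model actually uses.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    # Z-Y-X Euler-angle rotation matrix (a common convention; the paper's
    # exact rotation parameterization is an assumption here).
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

def multi_head_tsf(x, head_angles):
    # x: (T, 3) window of one 3-axis IMU channel (e.g., accelerometer).
    # head_angles: one (yaw, pitch, roll) triple per head; in rTsfNet these
    # would be predicted by the network, not supplied by hand.
    feats = []
    for angles in head_angles:
        R = rotation_matrix(*angles)
        xr = x @ R.T                      # express the signal in this head's 3D basis
        feats.append(np.concatenate([
            xr.mean(axis=0),              # per-axis mean (illustrative TSFs)
            xr.std(axis=0),               # per-axis standard deviation
            np.abs(xr).max(axis=0),       # per-axis peak magnitude
        ]))
    # Concatenated per-head TSFs; an MLP classifier would consume this vector.
    return np.concatenate(feats)
```

With four heads and three statistics per axis, a (T, 3) window yields a 4 × 9 = 36-dimensional feature vector, independent of the window length T.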