The advent of 3D medical imaging has been a turning point in the diagnosis of various diseases, as voxel information from adjacent slices helps radiologists better understand complex anatomical relationships. However, the interpretation of medical images varies with radiologists' levels of expertise and is time‐consuming. In recent decades, artificial intelligence‐based computer‐aided systems have provided faster and more reliable diagnostic insights, with great potential for various clinical purposes. This paper proposes a deep learning‐based method for 3D medical image classification. The method is evaluated on MedMNIST3D, which consists of six 3D biomedical datasets acquired from CT, MRA, and electron microscopy modalities. The proposed method concatenates 3D image features extracted from three independent networks: a 3D CNN and two time‐distributed ResNet‐BLSTM structures. The most discriminative features are selected via the minimum redundancy maximum relevance (mRMR) feature selection method and then classified by a neural network model. Experiments follow the official splits and evaluation metrics of the MedMNIST3D datasets. The results show that the proposed approach outperforms comparable studies in terms of accuracy and AUC.
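
The fusion‐and‐selection pipeline summarized above can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's implementation: the extractor outputs, feature dimensions, and the simple correlation‐based greedy mRMR variant are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the three extractors' outputs (shapes assumed):
# feats_cnn from the 3D CNN, feats_blstm1/feats_blstm2 from the two
# time-distributed ResNet-BLSTM branches.
n = 200
feats_cnn = rng.normal(size=(n, 8))
feats_blstm1 = rng.normal(size=(n, 8))
feats_blstm2 = rng.normal(size=(n, 8))

# Step 1: concatenate the three feature vectors per sample.
X = np.concatenate([feats_cnn, feats_blstm1, feats_blstm2], axis=1)

# Synthetic labels driven mainly by features 0 and 9, for demonstration.
y = (X[:, 0] + 0.5 * X[:, 9] + rng.normal(scale=0.1, size=n) > 0).astype(int)

def mrmr_select(X, y, k):
    """Greedy mRMR (correlation-based variant): at each step pick the
    feature maximizing relevance to y minus mean redundancy with the
    features already selected."""
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Step 2: mRMR keeps the most discriminative, least redundant features;
# a neural network classifier would then be trained on X[:, idx].
idx = mrmr_select(X, y, k=5)
print(idx)
```

The selected index set would feed the final neural network classifier; here the informative feature 0 is picked first because it has the highest relevance to the synthetic labels.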