This study proposes a new method for detecting facial expressions of pain using a 3D profiler that combines a multiple-input multiple-output (MIMO) radar system with a machine learning (ML) model (ML-MIMO radar profiler). The approach offers a promising, non-invasive, non-intrusive, and cost-effective way to detect pain from facial expressions. The ML-MIMO radar profiler employs six radars behind a lens to monitor changes in six facial regions and builds a 3D facial profile carrying real-time facial activity information. A dielectric lens ensures an optimal beam size so that each facial region is effectively illuminated. Signal processing uses dynamic time warping (DTW) to determine the longitudinal distance and a discrete stationary wavelet transform to denoise the signal and improve accuracy. The information from the 3D profiler was compared with the Facial Action Coding System (FACS) to determine the actual facial expressions. A machine learning algorithm was trained to learn action units from the FACS and compare them with the output of the ML-MIMO radar profiler, thereby classifying facial expressions. In this study, we analyzed four facial expressions: happiness, sadness, anger, and pain. Identification and classification were performed using a machine learning model based on multilayer perceptrons. The system achieved 92% accuracy for the pain expression, while happiness, sadness, and anger were detected with 88%, 86%, and 87% accuracy, respectively.
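The signal-processing chain named above combines stationary-wavelet denoising with a DTW-based distance measure. The sketch below illustrates one way such a pipeline could look, assuming PyWavelets for the stationary wavelet transform and a hand-rolled DTW; the function names (`denoise_swt`, `dtw_distance`), the db4 wavelet, the decomposition level, and the thresholding rule are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): SWT denoising of a radar
# displacement trace followed by a DTW distance to a reference template.
import numpy as np
import pywt


def denoise_swt(signal: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    """Soft-threshold the detail coefficients of a stationary wavelet transform."""
    # pywt.swt requires the length to be divisible by 2**level; pad if needed.
    pad = (-len(signal)) % (2 ** level)
    padded = np.pad(signal, (0, pad), mode="edge")
    coeffs = pywt.swt(padded, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(padded)))
    coeffs = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]
    return pywt.iswt(coeffs, wavelet)[: len(signal)]


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])
```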
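For the classification stage, the abstract names a multilayer perceptron trained on FACS action units. The following sketch shows how such a classifier could be assembled with scikit-learn; the feature layout (one feature per radar-monitored region), the layer sizes, the class labels, and the placeholder data are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's model): an MLP over per-region radar
# features for the four expression classes.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

CLASSES = ["happiness", "sadness", "anger", "pain"]

# X: one row per observation window, e.g. DTW distances of the six radar
# channels to action-unit templates; y: expression labels (placeholders here).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))      # placeholder features, 6 radar regions
y = rng.choice(CLASSES, size=400)  # placeholder labels

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
clf.fit(X, y)
print(clf.predict(X[:5]))
```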