This article presents the design of "virtual sensors," which collect low-dimensional sensor data from 3D digital human motion and enable real-world applications through interactive simulation. This approach reduces dependency on real-world data collection and offers greater flexibility for the corresponding human activity-related applications.
Using sensors and machine learning (ML) to recognize human motion and build activity-related applications has successfully assisted people's daily lives, for example, through gesture interaction, motion tracking, and exergames. 1,2 As shown by Zhu et al., 2 multimodal sensors have been used in various types of activity recognition systems. However, such systems share a bottleneck: real sensor datasets are always required as a prerequisite to train classifiers. As a result, the developed applications are largely constrained by the dataset's characteristics, including its data modalities, sensor characteristics, and data types.