Human activity recognition (HAR) has become a challenging research problem in recent years. In this paper, we propose a novel approach to recognizing hard-to-distinguish activities based on wearable sensors. Vision-based solutions generally struggle with low-illumination environments and partial occlusion; in contrast, wearable inertial sensors sidestep these problems and avoid compromising personal privacy. We address the issue by building a multistage deep neural network framework that interprets accelerometer, gyroscope, and magnetometer data, which provide useful information about human activities. First, a variational autoencoder (VAE) stage extracts the crucial information from raw inertial measurement unit (IMU) data. Second, a generative adversarial network (GAN) stage generates realistic synthetic human-activity data. Finally, transfer learning is applied to enhance performance on the target domain, yielding a robust and effective model for recognizing human activities.
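To make the three-stage pipeline concrete, the sketch below outlines one possible structure in PyTorch. It is a minimal illustration under stated assumptions, not the paper's actual implementation: the window length, channel count, latent size, number of classes, and all layer widths are hypothetical, and the GAN is shown operating on VAE latent features, which is only one plausible reading of the abstract.

```python
# Minimal sketch of the multistage framework: VAE feature extraction,
# GAN-based sample generation, and a classifier for transfer learning.
# All dimensions below are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

WINDOW = 128      # assumed samples per sliding window
CHANNELS = 9      # accelerometer + gyroscope + magnetometer (3 axes each)
LATENT = 32       # assumed VAE latent dimension
N_CLASSES = 6     # assumed number of activity classes

class VAE(nn.Module):
    """Stage 1: compress raw IMU windows into a latent feature vector."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(WINDOW * CHANNELS, 256), nn.ReLU(),
        )
        self.mu = nn.Linear(256, LATENT)
        self.logvar = nn.Linear(256, LATENT)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(),
            nn.Linear(256, WINDOW * CHANNELS),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(z).view_as(x)
        return recon, mu, logvar

class Generator(nn.Module):
    """Stage 2: map noise to synthetic latent features (GAN generator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 128), nn.ReLU(),
            nn.Linear(128, LATENT),
        )

    def forward(self, noise):
        return self.net(noise)

class Discriminator(nn.Module):
    """Stage 2: distinguish real VAE features from generated ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )

    def forward(self, z):
        return self.net(z)

class Classifier(nn.Module):
    """Stage 3: activity classifier, pretrained on a source domain and
    fine-tuned on the target domain (transfer learning)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 64), nn.ReLU(),
            nn.Linear(64, N_CLASSES),
        )

    def forward(self, z):
        return self.net(z)

# Example forward pass over a dummy batch of IMU windows.
x = torch.randn(8, WINDOW, CHANNELS)          # batch of raw sensor windows
vae = VAE()
recon, mu, logvar = vae(x)                    # Stage 1: feature extraction
fake_z = Generator()(torch.randn(8, LATENT))  # Stage 2: synthetic features
logits = Classifier()(mu)                     # Stage 3: activity prediction
print(recon.shape, fake_z.shape, logits.shape)
```

In this reading, the GAN augments the latent feature space rather than the raw signal space, and the classifier head is what gets fine-tuned on target-domain data; either choice could differ in the actual system described by the paper.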