Recent developments in mmWave technology allow the detection and classification of dynamic arm gestures. However, achieving high accuracy and generalization requires a large number of samples for training a machine learning model. Furthermore, capturing the variability within each gesture class requires the participation of many subjects and many repetitions of each gesture at different arm speeds. For macro-gestures, the position of the subject must also vary within the field of view of the device. This demands a significant amount of time and effort, which must be repeated whenever the sensor hardware or the modulation parameters are modified. To reduce the required manual effort, we developed a synthetic data generator that simulates seven arm gestures using Blender, an open-source 3D creation suite. We used it to generate 600 artificial samples with varying speed of execution and relative position of the simulated subject, and used them to train a machine learning model. We tested the model on a real dataset recorded from ten subjects using an experimental sensor. The test set yielded 84.2% accuracy, indicating that synthetic data generation can contribute significantly to the pre-training of a model.
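The train-on-synthetic, test-on-real protocol summarized above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensionality, the network size, and the random arrays standing in for the 600 Blender-generated samples and the real ten-subject recordings are all assumptions.

```python
# Hedged sketch: pre-train a classifier on synthetic gesture features,
# then evaluate it on (stand-in) real recordings.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_classes, n_features = 7, 32            # seven simulated arm gestures

# Stand-in for features extracted from the 600 Blender-simulated samples.
X_synth = rng.normal(size=(600, n_features))
y_synth = rng.integers(0, n_classes, size=600)

# Stand-in for features from the real ten-subject dataset.
X_real = rng.normal(size=(200, n_features))
y_real = rng.integers(0, n_classes, size=200)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300,
                    random_state=0)
clf.fit(X_synth, y_synth)                # train on synthetic data only

acc = accuracy_score(y_real, clf.predict(X_real))
print(f"real-data accuracy: {acc:.3f}")
```

With random stand-in data the accuracy is near chance; the point is only the evaluation protocol, where the paper reports 84.2% with its actual feature pipeline.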
Human Machine Interaction based on air gestures finds an increasing number of applications in consumer electronics. The availability of mmWave technology, combined with machine learning, allows the detection and classification of gestures, avoiding high-resolution LIDAR or video sensors. Nevertheless, in most existing studies, the processing takes place offline, takes into account only the velocity and distance of the moving arm, and can handle only gestures conducted very close to the sensor device, which limits the range of possible applications. Here, we use an experimental multi-channel mmWave-based system that can detect small targets, such as a moving arm, up to a few meters away from the sensor. As our pipeline can estimate and take into account the angle of arrival in both azimuth and elevation, it can classify a greater variety of dynamic gestures. Furthermore, the digital signal processing chain we present here runs in real time and incorporates an event detector. Whenever an event is detected, a novel empirical feature extraction takes place and a Multi-Layer Perceptron is deployed to infer the type of the gesture. To evaluate our setup and signal processing pipeline, a dataset was recorded with ten subjects performing nine gestures. Our method yielded 94.3% accuracy on the test set, indicating a successful combination of our proposed sensor and signal processing pipeline for real-time applications.
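The event-gated structure of the pipeline described above can be sketched as follows. This is an assumption-laden illustration, not the authors' method: a simple energy threshold stands in for their event detector, and per-bin mean/std statistics stand in for their empirical feature extraction; the threshold value, frame sizes, and window length are invented for the example.

```python
# Hedged sketch of an event-gated gesture pipeline: detect an event on a
# per-frame energy threshold, then extract a fixed-length feature vector
# from the event window for an MLP classifier.
import numpy as np

THRESHOLD = 5.0  # assumed detection threshold on per-frame energy

def detect_event(frame: np.ndarray) -> bool:
    """Flag a frame whose total energy exceeds the threshold."""
    return float(np.sum(np.abs(frame) ** 2)) > THRESHOLD

def extract_features(frames: np.ndarray) -> np.ndarray:
    """Placeholder empirical features: per-bin mean and std over time."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

rng = np.random.default_rng(1)
quiet = rng.normal(scale=0.1, size=(16,))    # background-only frame
gesture = rng.normal(scale=2.0, size=(16,))  # frame with a moving arm

print(detect_event(quiet), detect_event(gesture))

feats = extract_features(rng.normal(size=(30, 16)))  # 30-frame event window
print(feats.shape)  # fixed-length vector fed to the MLP
```

Gating the classifier on a detector keeps the MLP from running on every frame, which is what makes real-time operation plausible on modest hardware.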