With the wide adoption of smart devices and mobile computing, the smart home has become a major focus of the household appliance industry. The control and interaction approach plays a key role in user experience and has become one of the most important selling points for profit growth. Considering robustness and privacy protection, wearable devices equipped with MEMS sensors, e.g., smartphones, smartwatches, or smart wristbands, are regarded as one of the most feasible commercial solutions for interaction. However, low-cost built-in MEMS sensors do not perform well in capturing fine-grained human activity directly. In this paper, we propose a method that leverages arm constraints and the historical information recorded by MEMS sensors to estimate the maximum-likelihood action in a two-phase model. First, in the arm posture estimation phase, we use a kinematic model to estimate the maximum-likelihood position of the user's arm. Second, in the trajectory recognition phase, we use a gesture estimation model to identify key actions and output instructions to devices via an SVM. Extensive experiments show that the proposed solution can recognize eight postures defined for man-machine interaction in the smart home scenario, that it enables efficient and effective interaction using low-cost smartwatches, and that the interaction accuracy exceeds 87%. The experiments also show that the proposed algorithm can be readily applied to the perceptual control of smart household appliances and has high practical value for the design of perceptual interaction functions in household appliances.
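The abstract does not give implementation details, but the trajectory recognition phase can be illustrated with a minimal sketch: windowed accelerometer/gyroscope features fed to a multi-class SVM. The feature set, window length, and gesture labels below are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of an SVM-based trajectory/gesture recognition phase.
# Assumptions (not from the paper): windows of 6-axis smartwatch IMU data,
# simple statistical features, and eight hypothetical gesture labels.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

GESTURES = ["swipe_left", "swipe_right", "raise", "lower",
            "push", "pull", "circle_cw", "circle_ccw"]  # hypothetical set

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (T, 6) array of [ax, ay, az, gx, gy, gz] samples."""
    feats = [window.mean(axis=0), window.std(axis=0),
             window.min(axis=0), window.max(axis=0),
             np.abs(np.diff(window, axis=0)).mean(axis=0)]  # jerk-like term
    return np.concatenate(feats)

def train_classifier(windows, labels):
    """windows: list of (T, 6) arrays; labels: gesture indices into GESTURES."""
    X = np.stack([extract_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X, labels)
    return clf

# Usage: gesture = GESTURES[clf.predict([extract_features(new_window)])[0]]
```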
Action recognition is essential in security monitoring, home care, and behavior analysis. Traditional solutions usually rely on dedicated devices such as smartwatches and infrared/visible-light cameras. These methods may narrow the range of applications due to the risk of privacy leakage, high equipment cost, and over- or under-exposure. Using wireless signals for motion recognition can effectively avoid these problems. However, motion recognition based on Wi-Fi signals currently suffers from defects such as low resolution caused by the narrow signal bandwidth and poor environmental adaptability caused by the multipath effect, which make it difficult to commercialize. To address these problems, we propose and implement a position-adaptive motion recognition method based on Wi-Fi feature enhancement, composed of an enhanced Wi-Fi feature module and an enhanced convolutional Transformer network. Meanwhile, we improve the generalization ability in the signal processing stage, which avoids building an extremely complex model and reduces the hardware requirements of the system. To verify the generalization of the method, we conduct real-world experiments using 9300 network cards and the PicoScenes software platform for data acquisition and processing. Compared with a baseline using raw channel state information (CSI) data, the average accuracy of our algorithm is improved by 14% across different positions and by more than 16% across different orientations. Our method also achieves the best performance, with an accuracy of 90.33%, compared with existing models on the public datasets WiAR and WiDAR.
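The abstract names an enhanced convolutional Transformer network but gives no architectural details; the following is a minimal sketch of that general pattern (a Conv1d front end over CSI sequences, a Transformer encoder, and a linear classifier). The layer sizes, subcarrier count, and class count are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of a convolution + Transformer classifier for CSI sequences.
# Assumptions (not from the paper): input of shape (batch, time, subcarriers)
# with 30 subcarrier amplitudes per packet and 7 activity classes.
import torch
import torch.nn as nn

class ConvTransformerCSI(nn.Module):
    def __init__(self, n_subcarriers: int = 30, n_classes: int = 7,
                 d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # Conv1d over the time axis lifts raw CSI features to d_model channels.
        self.conv = nn.Sequential(
            nn.Conv1d(n_subcarriers, d_model, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        # csi: (batch, time, subcarriers); Conv1d expects (batch, channels, time)
        x = self.conv(csi.transpose(1, 2)).transpose(1, 2)
        x = self.encoder(x)              # (batch, time, d_model)
        return self.head(x.mean(dim=1))  # average-pool over time, then classify

# Usage: logits = ConvTransformerCSI()(torch.randn(8, 200, 30))
```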