Scientists have explored the human body for hundreds of years, yet many relationships between behavior and health remain to be discovered. With advances in data mining, artificial intelligence, and human posture detection, it has become far more feasible to determine how behaviors and movements influence people's health and daily life, and how to balance work and rest, which is urgently needed for people living at today's fast pace. Using smart technology and everyday behavior to monitor or predict people's health is a key component of a smart city. In a smart city, such applications serve large populations with high-frequency use, so the system must offer low energy consumption, portability, and low cost for long-term detection. To meet these requirements, this paper proposes a multisensor posture recognition method and uses LoRa technology to build a long-term posture detection system. LoRaWAN technology has the advantages of low cost and long transmission distance. Combining the LoRa transmitter module with the sensors, this paper designs wearable clothing that remains comfortable in any posture. To accommodate LoRa's low transmission frequency and small payload size, this paper proposes a multistage processing method, including data denoising, data enlarging based on sliding windows, feature extraction, and feature selection using Random Forest, which compresses 125 samples from 9 sensor axes into 4 values while retaining the most information. The results show an accuracy of 99.38% with the extracted features and 95.06% with the selected features when training on 3239 groups of data. To verify the performance of the proposed algorithm, three testers created 500 additional groups of data, and the results again showed good performance.
Hence, given LoRa's energy efficiency and the system's recognition accuracy, the proposed multisensor, LoRa-based posture recognition works well for long-term detection, and LoRa's long transmission distance makes it a good fit for smart city deployments.
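The pipeline described above (sliding windows over multi-axis sensor streams, feature extraction, and Random Forest feature selection down to 4 values) can be sketched as follows. This is an illustrative sketch, not the paper's exact implementation: the per-axis statistics used as candidate features, the window step of 25 samples, and the synthetic random signal standing in for real sensor recordings are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def extract_features(window):
    # window: (125, 9) array, one sliding window over 9 sensor axes.
    # Simple per-axis statistics as candidate features (illustrative choice).
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

# Synthetic 9-axis recording standing in for real wearable-sensor data.
signal = rng.normal(size=(2000, 9))

# Data enlarging via sliding windows: overlapping 125-sample windows.
step = 25
windows = [signal[i:i + 125] for i in range(0, len(signal) - 125 + 1, step)]
X = np.stack([extract_features(w) for w in windows])   # (n_windows, 36)
y = rng.integers(0, 4, size=len(X))                    # placeholder posture labels

# Feature selection with Random Forest importances: keep the 4 features
# that retain the most information about each 125x9 window.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top4 = np.argsort(rf.feature_importances_)[-4:]
X_small = X[:, top4]   # 4 selected values per window, ready for LoRa payloads
```

Shrinking each window to 4 values before transmission is what makes the scheme compatible with LoRa's small payload size; the classifier itself can then run on the receiving side.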
With the wide application of Light Detection and Ranging (LiDAR) in the collection of high-precision environmental point cloud information, three-dimensional (3D) object classification from point clouds has become an important research topic. However, the characteristics of LiDAR point clouds, such as unstructured distribution, disordered arrangement, and large data volumes, typically result in high computational complexity and make 3D object classification very difficult. Thus, this paper proposes a Convolutional Neural Network (CNN)-based 3D object classification method using the Hough space of LiDAR point clouds to overcome these problems. First, object point clouds are transformed into Hough space using a Hough transform algorithm, and the Hough space is then rasterized into a series of uniformly sized grids. The accumulator count in each grid is computed and input to a CNN model to classify 3D objects. In addition, a semi-automatic 3D object labeling tool is developed to build a LiDAR point cloud object labeling library for four types of objects (wall, bush, pedestrian, and tree). After initializing the CNN model, we apply a dataset from the above object labeling library to train the neural network model offline through a large number of iterations. Experimental results demonstrate that the proposed method achieves an average object classification accuracy of 93.3%.
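The core preprocessing step, mapping points into a rasterized Hough accumulator grid of fixed size, can be sketched in 2D as follows. This is an illustrative sketch under assumed grid sizes (32x32) and a standard rho-theta line parameterization; the paper's exact rasterization and 3D handling may differ.

```python
import numpy as np

def hough_accumulator(points, n_theta=32, n_rho=32):
    """Map 2D points into a rasterized (theta, rho) Hough space.

    Each point votes for every line that could pass through it:
    rho = x*cos(theta) + y*sin(theta). The binned vote counts form a
    fixed-size grid of accumulator counts suitable as CNN input.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(points).sum(axis=1).max() + 1e-9  # bound on |rho|
    acc = np.zeros((n_theta, n_rho))
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = ((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), bins] += 1
    return acc

# Points sampled along a straight, wall-like segment: in Hough space
# they concentrate their votes into a single sharp accumulator peak.
pts = np.stack([np.linspace(0, 1, 50), np.linspace(0, 1, 50)], axis=1)
grid = hough_accumulator(pts)
```

The sharp peak for linear structures versus the diffuse vote pattern for scattered points is what gives the CNN a discriminative, fixed-size representation of objects such as walls versus bushes.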
Point clouds have been widely used in three-dimensional (3D) object classification tasks, e.g., pedestrian recognition for unmanned ground vehicles. However, the irregular data format of point clouds and the large number of parameters in deep learning networks limit the performance of object classification. This paper develops a 3D object classification system using a broad learning system (BLS) with a feature extractor called VB-Net. First, raw point clouds are voxelized; through this step, irregular point clouds are converted into regular voxels that are easily processed by the feature extractor. Then, a pre-trained VoxNet is employed as a feature extractor to extract features from the voxels. Finally, those features are used for object classification by the BLS. The proposed system is tested on the ModelNet40 and ModelNet10 datasets, achieving average recognition accuracies of 83.99% and 90.08%, respectively. Compared to deep learning networks, the time consumption of the proposed system is significantly decreased.
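The voxelization step, turning an irregular point cloud into a regular occupancy grid that a VoxNet-style extractor can consume, can be sketched as follows. The 32x32x32 grid size and binary occupancy encoding are assumptions for illustration; the paper's preprocessing may use different resolutions or density values.

```python
import numpy as np

def voxelize(points, grid=32):
    """Convert an irregular (N, 3) point cloud into a regular occupancy grid.

    Points are scaled into a grid^3 cube; each cell containing at least one
    point is set to 1, yielding the fixed-size tensor that a voxel-based
    feature extractor expects regardless of how many points came in.
    """
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins + 1e-9          # avoid divide-by-zero
    idx = ((points - mins) / spans * (grid - 1)).astype(int)
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox

# A synthetic cloud of 1000 points becomes one fixed-size 32^3 tensor.
cloud = np.random.default_rng(0).uniform(-1, 1, size=(1000, 3))
v = voxelize(cloud)
```

Because every cloud maps to the same tensor shape, the downstream extractor and BLS never have to deal with the variable point counts and orderings of raw LiDAR data.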
To address the high power consumption, short transmission distance, and low recognition accuracy of traditional wearable gesture monitoring systems, this paper designs a remote gesture monitoring system based on LoRa. For data transmission, LoRa Internet of Things technology is used, which offers low power consumption, high speed, and long-distance transmission, and can meet the needs of long-term use by many users. The recognition module runs on a remote server and can be used directly without configuration. Based on the multi-sensor data, this paper also designs a deep learning model for human gesture recognition that distinguishes 7 kinds of gestures, with results meeting expectations.
Terrain synthesis has been a hot topic in the fields of computer graphics and image processing. However, synthesis results remain difficult to control and insufficiently realistic. To address these problems, this paper proposes an interactive terrain elevation map generation method based on the synthesis of a single sample terrain elevation map. First, we propose a method to extract the skeleton from a terrain elevation map and a user sketch. Second, we construct a skeleton sample feature map based on the terrain elevation map and the user sketch. Finally, we propose a matching cost function to match image patches of the terrain sample against the user sketch. The proposed method obtains a synthesis result containing the features of both the terrain sample and the user sketch, and then generates a synthetic terrain elevation map. The experimental results demonstrate the effectiveness of the proposed method, where the synthesized results meet the needs of users.
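The patch-matching step can be sketched as follows. The cost function here, a weighted sum of squared height and gradient differences with an exhaustive scan over candidate positions, is a hypothetical stand-in for the paper's matching cost function, and the weights and 8x8 patch size are assumptions.

```python
import numpy as np

def matching_cost(patch_a, patch_b, w_height=1.0, w_grad=0.5):
    """Cost between two elevation-map patches: weighted sum of squared
    height differences and squared gradient differences (illustrative)."""
    dh = np.sum((patch_a - patch_b) ** 2)
    ga, gb = np.gradient(patch_a), np.gradient(patch_b)
    dg = sum(np.sum((a - b) ** 2) for a, b in zip(ga, gb))
    return w_height * dh + w_grad * dg

def best_match(sample, target_patch, size=8):
    """Exhaustively scan the sample elevation map for the patch that
    minimizes the matching cost against a user-sketch patch."""
    h, w = sample.shape
    best, best_ij = np.inf, (0, 0)
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            c = matching_cost(sample[i:i + size, j:j + size], target_patch)
            if c < best:
                best, best_ij = c, (i, j)
    return best_ij, best

rng = np.random.default_rng(1)
terrain = rng.uniform(0, 100, size=(32, 32))   # synthetic elevation sample
target = terrain[10:18, 5:13]                  # a patch copied from the sample
loc, cost = best_match(terrain, target)
```

Stitching the best-matching sample patches along the sketched skeleton is what lets the synthesized elevation map carry the sample terrain's texture while following the user's layout.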