With a massive influx of multimodal data, the role of data analytics in health informatics has grown rapidly in the last decade. This has also prompted increasing interest in the generation of analytical, data-driven models based on machine learning in health informatics. Deep learning, a technique with its foundation in artificial neural networks, has emerged in recent years as a powerful tool for machine learning, promising to reshape the future of artificial intelligence. Rapid improvements in computational power, fast data storage, and parallelization have also contributed to the rapid uptake of the technology, in addition to its predictive power and its ability to generate automatically optimized high-level features and semantic interpretation from the input data. This article presents a comprehensive, up-to-date review of research employing deep learning in health informatics, providing a critical analysis of the relative merits and potential pitfalls of the technique as well as its future outlook. The paper mainly focuses on key applications of deep learning in the fields of translational bioinformatics, medical imaging, pervasive sensing, medical informatics, and public health.
The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis, and deep learning is a promising technique for such large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology that combines features learned from inertial sensor data with complementary information from a set of shallow features to enable accurate and real-time activity classification. This combined method is designed to overcome some of the limitations of a typical deep learning framework when on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral-domain preprocessing is applied before the data are passed to the deep learning framework. The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real-world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.
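As a rough illustration of this kind of pipeline, the following sketch (in Python, assuming NumPy and PyTorch) combines an FFT-magnitude spectral preprocessing step and a small 1-D convolutional network with a handful of statistical shallow features; the window length, shallow feature set, and network shape are illustrative assumptions rather than the configuration reported in the paper.

import numpy as np
import torch
import torch.nn as nn

def spectral_preprocess(window):
    # Per-axis magnitude spectrum of a tri-axial accelerometer window (N x 3).
    return np.abs(np.fft.rfft(window, axis=0)).astype(np.float32)   # (N//2 + 1, 3)

def shallow_features(window):
    # Simple statistical features that complement the learned representation.
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)]).astype(np.float32)

class CombinedClassifier(nn.Module):
    # Small 1-D CNN over the spectrum, concatenated with the shallow features.
    def __init__(self, n_shallow, n_classes):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(3, 8, kernel_size=5), nn.ReLU(),
                                  nn.AdaptiveAvgPool1d(4), nn.Flatten())   # -> 32 values
        self.head = nn.Linear(32 + n_shallow, n_classes)

    def forward(self, spec, shallow):
        return self.head(torch.cat([self.conv(spec), shallow], dim=1))

# Example on one random 128-sample window (e.g. roughly 2.5 s of 50 Hz tri-axial data).
window = np.random.randn(128, 3).astype(np.float32)
spec = torch.from_numpy(spectral_preprocess(window).T).unsqueeze(0)   # (1, 3, 65)
shal = torch.from_numpy(shallow_features(window)).unsqueeze(0)        # (1, 9)
model = CombinedClassifier(n_shallow=9, n_classes=6)
scores = model(spec, shal)                                            # (1, 6) class scores

In a sketch of this form, the spectral preprocessing keeps the network input short and fixed-length, which is what makes real-time inference on resource-constrained hardware plausible.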
Human activity recognition provides valuable contextual information for wellbeing, healthcare, and sport applications. Over the past decades, many machine learning approaches have been proposed to identify activities from inertial sensor data for specific applications. Most methods, however, are designed for offline processing rather than processing on the sensor node. In this paper, a human activity recognition technique based on a deep learning methodology is designed to enable accurate and real-time classification for low-power wearable devices. To obtain invariance against changes in sensor orientation, sensor placement, and sensor acquisition rate, we design a feature generation process that is applied to the spectral domain of the inertial data. Specifically, the proposed method uses sums of temporal convolutions of the transformed input. The accuracy of the proposed approach is evaluated against current state-of-the-art methods using both laboratory and real-world activity datasets. A systematic analysis of the feature generation parameters and a comparison of activity recognition computation times on mobile devices and sensor nodes are also presented.
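A minimal sketch of spectral-domain feature generation of this kind follows, assuming the orientation-invariant acceleration magnitude as input and randomly initialized kernels standing in for learned ones; the transform, kernel size, and kernel count are illustrative assumptions, not the parameters used in the paper.

import numpy as np

def orientation_invariant_spectrum(acc_window):
    # The magnitude of the tri-axial signal does not depend on sensor
    # orientation; its magnitude spectrum summarizes the periodic content.
    magnitude = np.linalg.norm(acc_window, axis=1)        # (N,)
    return np.abs(np.fft.rfft(magnitude))                 # (N//2 + 1,)

def summed_temporal_convolutions(spectrum, kernels):
    # Convolve the spectral representation with each kernel and sum the
    # response, giving one scalar feature per kernel.
    return np.array([np.convolve(spectrum, k, mode='valid').sum() for k in kernels])

acc_window = np.random.randn(128, 3)                      # one window of inertial data
spectrum = orientation_invariant_spectrum(acc_window)
kernels = [np.random.randn(8) for _ in range(16)]         # stand-ins for learned kernels
features = summed_temporal_convolutions(spectrum, kernels)   # (16,) feature vector

Because the magnitude signal is unaffected by how the sensor is oriented and the spectrum summarizes the periodic content of the window, features of this form are far less sensitive to sensor orientation and placement, which is the kind of invariance the abstract refers to.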
This paper presents a new approach to gait analysis and parameter estimation from a single miniaturized ear-worn sensor embedded with a triaxial accelerometer. Singular spectrum analysis combined with the longest common subsequence algorithm is used as the basis for gait parameter estimation. The method incorporates information from all axes of the accelerometer to estimate parameters including swing, stance, and stride times. Rather than relying only on local features of the raw signals, the periodicity of the signals is also taken into account. This study addresses two questions: 1) how accurate is the ear-worn sensor for gait parameter extraction compared with an instrumented treadmill, and 2) does the ear-worn sensor provide a feasible option for the assessment and quantification of gait pattern changes. Key gait events for normal subjects, such as heel contact and toe off, are validated with a high-speed camera as well as a force-plate instrumented treadmill. Ten healthy adults walked for 20 min on a treadmill with an increasing incline of 2% every 2 min. The upper and lower limits of the absolute errors, using 95% confidence intervals, for swing, stance, and stride times were 35.5 ± 3.99 ms, 36.9 ± 3.84 ms, and 17.9 ± 2.29 ms, respectively.
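The singular spectrum analysis step can be sketched as follows (a minimal illustration, with the embedding window length and number of retained components chosen arbitrarily); the longest common subsequence matching across axes that the paper combines with it is not shown.

import numpy as np

def ssa_reconstruct(signal, window=40, n_components=2):
    # Embed the signal in a trajectory (Hankel) matrix of lagged copies.
    n = len(signal)
    k = n - window + 1
    traj = np.column_stack([signal[i:i + window] for i in range(k)])
    # SVD and a low-rank approximation keep the dominant periodic structure.
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # Diagonal averaging maps the approximated matrix back to a 1-D series.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for col in range(k):
        recon[col:col + window] += approx[:, col]
        counts[col:col + window] += 1
    return recon / counts

# A noisy periodic signal standing in for one accelerometer channel.
acc = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.3 * np.random.randn(1000)
periodic = ssa_reconstruct(acc)   # smoothed cyclic component on which gait
                                  # events such as heel contact can be located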
Understanding the solid biomechanics of the human body is important to the study of the structure and function of the body, with a range of applications in healthcare, sport, wellbeing, and workflow analysis. Conventional laboratory-based biomechanical analysis systems and observation-based tests are only designed to capture brief snapshots of the mechanics of movement. With recent developments in wearable sensing technologies, biomechanical analysis can be conducted in less constrained environments, allowing continuous monitoring and analysis beyond laboratory settings. In this paper, we review current research in wearable sensing technologies for biomechanical analysis, focusing on sensing and analytics that enable continuous, long-term monitoring of kinematics and kinetics in a free-living environment. The main technical challenges that can affect the accuracy and robustness of existing methods, including measurement drift, external interference, nonlinear sensor properties, sensor placement, and muscle variation, are described, along with methods for reducing the impact of these sources of error. Recent developments in motion estimation for kinematics, mobile force sensing for kinetics, and sensor reduction for electromyography, as well as future directions of sensing for biomechanics, are also discussed.