Deep learning algorithms are employed in many applications, especially in medical fields such as gait analysis and human pose detection for rehabilitation. However, building the desired model with deep learning algorithms incurs high memory and computing costs, which is problematic because such models must often run on low-power devices such as edge computing equipment. To address these problems, feature reduction methods can lower memory and energy costs. This paper presents an empirical analysis of deep learning with feature reduction. The method classifies foot images for knee rehabilitation using convolutional and dense autoencoders, and the results are compared with those of conventional methods (histograms of oriented gradients and local binary patterns). The features were classified and compared using support vector machine, k-nearest neighbor, and multilayer perceptron methods. The experimental results demonstrate that the conventional method uses fewer features than the deep learning method while achieving higher accuracy, because its algorithm projects pixels onto a histogram. In addition, using fewer features in the deep learning layers maintains high accuracy, which is beneficial for edge computing implementations.
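The sketch below illustrates the kind of pipeline this abstract describes: a dense autoencoder compresses images into a small feature vector, which is then classified with a support vector machine. The image size, bottleneck width, and synthetic data are assumptions for illustration, not the paper's actual configuration.

```python
# Minimal sketch of dense-autoencoder feature reduction followed by SVM
# classification, loosely following the pipeline described in the abstract.
# Image size, bottleneck width, and the synthetic data are assumptions.
import numpy as np
from tensorflow import keras
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

n_samples, n_pixels, n_bottleneck = 500, 64 * 64, 32       # assumed sizes
X = np.random.rand(n_samples, n_pixels).astype("float32")  # placeholder foot images
y = np.random.randint(0, 2, n_samples)                     # placeholder labels

# Dense autoencoder: the encoder compresses each image to a short feature vector.
inputs = keras.Input(shape=(n_pixels,))
encoded = keras.layers.Dense(256, activation="relu")(inputs)
encoded = keras.layers.Dense(n_bottleneck, activation="relu")(encoded)
decoded = keras.layers.Dense(256, activation="relu")(encoded)
decoded = keras.layers.Dense(n_pixels, activation="sigmoid")(decoded)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

# Classify the reduced features with an SVM, as in the comparison study.
features = encoder.predict(X, verbose=0)
X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Swapping the SVC for a k-nearest neighbor or multilayer perceptron classifier reproduces the other two comparisons mentioned in the abstract.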
In this paper, a real-time knee extension monitoring and rehabilitation system for users such as patients, the elderly, and athletes is developed and tested. The proposed system has three major functions. The first is two-channel surface electromyography (EMG) signal measurement and processing for the vastus lateralis (VL) and vastus medialis (VM) muscles using a purpose-built EMG device set. The second is knee extension range of motion (ROM) measurement using an angle sensor device set (i.e., an accelerometer). Both functions are connected and processed in parallel by the NI-myRIO embedded device. The third is a graphical user interface (GUI) built in LabVIEW, in which the knee rehabilitation program can be defined and flexibly adjusted as recommended by physical therapists and physicians. Experimental results from six healthy subjects demonstrate that the proposed system works efficiently with real-time response. It supports multiple rehabilitation users with data collection, where EMG signals with mean absolute value (MAV) and root mean square (RMS) results and knee extension ROM data are automatically measured and recorded according to the defined rehabilitation program. Furthermore, the proposed system was deployed in a hospital for validation and evaluation, where biofeedback EMG and ROM data were obtained from six patients with (a) knee osteoarthritis, (b) a herniated disc, (c) a knee ligament injury, (d) ischemic stroke, (e) hemorrhagic stroke, and (f) Parkinson's disease. These data were collected over one month for tracking, evaluation, and treatment. The results indicate that rehabilitation users can practice on their own and follow their progress throughout the testing period. The system can also provide a preliminary evaluation of whether the therapy training is successful, while experts can simultaneously review the progress and set the optimal treatment program for each rehabilitation user. This technology can also be integrated into Internet of Things (IoT) and smart healthcare systems.
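The two EMG features named in the abstract, MAV and RMS, are simple windowed statistics. A minimal sketch of how they might be computed is shown below; the sampling rate, window length, and synthetic signal are assumptions, not the paper's settings.

```python
# Minimal sketch of the two EMG features named in the abstract:
# mean absolute value (MAV) and root mean square (RMS), computed over
# a sliding window. Window length and the synthetic signal are assumptions.
import numpy as np

def mav(window: np.ndarray) -> float:
    """Mean absolute value: MAV = (1/N) * sum(|x_i|)."""
    return float(np.mean(np.abs(window)))

def rms(window: np.ndarray) -> float:
    """Root mean square: RMS = sqrt((1/N) * sum(x_i^2))."""
    return float(np.sqrt(np.mean(np.square(window))))

fs = 1000                               # assumed sampling rate (Hz)
emg = np.random.randn(5 * fs) * 0.1     # placeholder surface-EMG signal
win = fs // 4                           # assumed 250 ms analysis window

for start in range(0, len(emg) - win + 1, win):
    w = emg[start:start + win]
    print(f"t={start / fs:4.2f}s  MAV={mav(w):.4f}  RMS={rms(w):.4f}")
```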
Lower-body detection can be useful in many applications, such as detecting falls and injuries during exercise. However, detecting the lower body is challenging, especially under varying lighting and occlusion conditions. This paper presents a novel lower-body detection framework based on proposed anthropometric ratios and compares the performance of deep learning methods (convolutional neural networks and OpenPose) with traditional detection methods. The results show that the proposed framework successfully detects accurate lower-body boundaries under various illumination and occlusion conditions for lower-limb monitoring. The proposed framework of anthropometric ratios combined with convolutional neural networks (A-CNNs) achieves high accuracy (90.14%), while the combination of anthropometric ratios and traditional techniques (A-Traditional) shows satisfactory performance, with an average accuracy of 74.81%. Although OpenPose attains higher accuracy (95.82%) than the A-CNNs for lower-body detection, the A-CNNs have lower complexity than OpenPose, which is advantageous for lower-body detection and implementation in monitoring systems.
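One way to read "anthropometric ratios combined with a detector" is to crop a lower-body region from a detected full-body bounding box using a fixed body-proportion ratio. The sketch below illustrates that idea; the 0.53 hip-height ratio, the box format, and the helper names are hypothetical and not taken from the paper.

```python
# Minimal sketch of using an anthropometric ratio to crop a lower-body
# region from a detected full-body bounding box. The 0.53 hip-height
# ratio and the box format are assumptions, not the paper's exact values.
from dataclasses import dataclass

@dataclass
class Box:
    x: int  # left
    y: int  # top
    w: int  # width
    h: int  # height

HIP_RATIO = 0.53  # assumed fraction of body height from head to hip line

def lower_body_box(person: Box, hip_ratio: float = HIP_RATIO) -> Box:
    """Return the sub-box below the estimated hip line of a person box."""
    hip_offset = int(person.h * hip_ratio)
    return Box(person.x, person.y + hip_offset, person.w, person.h - hip_offset)

# Example: a person box that a CNN or traditional detector might output.
person = Box(x=120, y=40, w=180, h=420)
print(lower_body_box(person))  # Box(x=120, y=262, w=180, h=198)
```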
Objectives: It can be challenging in some situations to distinguish primary central nervous system lymphoma (PCNSL) from glioblastoma (GBM) on magnetic resonance imaging (MRI), especially for tumors involving the corpus callosum. The objective of this study was to assess the diagnostic performance of deep learning (DL) models in distinguishing PCNSLs from GBMs in corpus callosal tumors. Materials and Methods: The axial T1-weighted gadolinium-enhanced MRI scans of 274 individuals with pathologically confirmed PCNSL (n = 94) and GBM (n = 180) were examined. After image pooling, the pre-operative MRI scans were randomly split 80/20 into a training dataset (n = 709) and a testing dataset (n = 177) for DL model development. The DL model was then deployed as a web application and validated on unseen images (n = 114); the area under the receiver operating characteristic curve (AUC) and other outcomes were calculated to assess discrimination performance. Results: The first, baseline DL model had an AUC of 0.77 for PCNSL when evaluated on the unseen images. The second model, with ridge regression regularization, and the third model, with dropout regularization, increased the AUC to 0.83 and 0.84, respectively. The last model, with data augmentation, yielded an AUC of 0.57. Conclusion: DL with regularization may provide useful diagnostic information to help doctors distinguish PCNSL from GBM.
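A minimal sketch of a binary MRI classifier that combines the two regularizers compared in the abstract, ridge (L2) weight penalties and dropout, and reports an AUC is shown below. The image size, network depth, regularization strength, and synthetic data are assumptions, not the study's architecture.

```python
# Minimal sketch of a binary MRI classifier (PCNSL vs. GBM) combining
# ridge (L2) regularization and dropout, evaluated with AUC.
# Image size, network depth, and the synthetic data are assumptions.
import numpy as np
from tensorflow import keras
from sklearn.metrics import roc_auc_score

img_shape = (128, 128, 1)                               # assumed input size
X = np.random.rand(200, *img_shape).astype("float32")   # placeholder MRI slices
y = np.random.randint(0, 2, 200)                        # 1 = PCNSL, 0 = GBM (placeholder)

ridge = keras.regularizers.l2(1e-4)                     # assumed ridge strength
model = keras.Sequential([
    keras.Input(shape=img_shape),
    keras.layers.Conv2D(16, 3, activation="relu", kernel_regularizer=ridge),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu", kernel_regularizer=ridge),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dropout(0.5),                          # dropout regularization
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=3, batch_size=16, verbose=0)

# Discrimination performance on held-out images would be summarized by AUC.
probs = model.predict(X, verbose=0).ravel()
print("AUC (on training data, illustration only):", roc_auc_score(y, probs))
```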