Video prediction, which maps a sequence of past video frames to realistic future frames, is a challenging task because it is difficult both to generate realistic frames and to model the coherent relationship between consecutive frames. In this paper, we propose a hierarchical sequence-to-sequence prediction approach to address this challenge. We present an end-to-end trainable architecture in which the frame generator automatically encodes input frames into multiple levels of latent Convolutional Neural Network (CNN) features and then recursively generates future frames conditioned on the estimated hierarchical CNN features and the previous prediction. This design is intended to automatically learn hierarchical representations of video and their temporal dynamics. Convolutional Long Short-Term Memory (ConvLSTM) is used in combination with skip connections to separately capture the sequential structure at each level of the feature hierarchy. We adopt scheduled sampling to train the recurrent network, which facilitates convergence and yields high-quality sequence predictions. We evaluate our method on the Bouncing Balls, Moving MNIST, and KTH human action datasets, and report favorable results compared to existing methods.
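As a rough illustration of the ingredients the abstract names, the following is a minimal PyTorch sketch of a ConvLSTM frame predictor trained with scheduled sampling. It collapses the paper's multi-level feature hierarchy and skip connections into a single encoder/decoder level; all class and parameter names (`FramePredictor`, `teacher_prob`, `hid`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: all four gates come from one convolution
    over the concatenated input and hidden state."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.conv(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class FramePredictor(nn.Module):
    """Encode each frame, update the ConvLSTM state, decode the next frame,
    and recursively feed predictions back in (single-level simplification)."""
    def __init__(self, ch=1, hid=32):
        super().__init__()
        self.hid = hid
        self.enc = nn.Conv2d(ch, hid, 3, padding=1)   # frame encoder
        self.cell = ConvLSTMCell(hid, hid)
        self.dec = nn.Conv2d(hid, ch, 3, padding=1)   # frame decoder

    def forward(self, frames, n_future, teacher_prob=1.0):
        b, t, _, hgt, wid = frames.shape
        h = frames.new_zeros(b, self.hid, hgt, wid)
        c = frames.new_zeros(b, self.hid, hgt, wid)
        x, preds = frames[:, 0], []
        for step in range(1, t + n_future):
            h, c = self.cell(self.enc(x), h, c)
            pred = torch.sigmoid(self.dec(h))
            preds.append(pred)
            # Scheduled sampling: while ground truth is available, feed it
            # with probability teacher_prob (decayed over training);
            # otherwise feed back the model's own prediction.
            if step < t and torch.rand(()).item() < teacher_prob:
                x = frames[:, step]
            else:
                x = pred
        return torch.stack(preds, dim=1)

model = FramePredictor()
video = torch.rand(4, 10, 1, 64, 64)        # batch of 10-frame sequences
out = model(video, n_future=10, teacher_prob=0.5)
print(out.shape)                             # torch.Size([4, 19, 1, 64, 64])
```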
Background: In focal neuropathy, the muscle fibers innervated by the affected nerves are replaced with fat or fibrous tissue due to denervation, which results in increased echo intensity (EI) on ultrasonography. EI can be analyzed quantitatively using grayscale analysis, in which the EI is defined as the mean pixel brightness of the muscle image. However, the accuracy achieved by this parameter alone in differentiating normal from abnormal muscles is limited. Recently, attempts have been made to improve accuracy by applying artificial intelligence (AI) to the analysis of muscle ultrasound images. Carpal tunnel syndrome (CTS) is the most common focal neuropathy. In this study, we aimed to verify the utility of AI-assisted quantitative analysis of muscle ultrasound in CTS.

Methods: This retrospective study used data from adults who underwent ultrasonographic examination of the hand muscles. Patients with CTS confirmed by electromyography and subjects without CTS were included. Ultrasound images of the unaffected hands of patients, or of subjects without CTS, served as controls. Ultrasonography was performed by a single physician using the same sonographic settings. Both conventional quantitative grayscale analysis and machine learning (ML) analysis were performed for comparison.

Results: A total of 47 hands with CTS and 27 control hands were analyzed. On conventional quantitative analysis, the mean EI ratio (mean thenar EI / mean hypothenar EI) was significantly higher in the patient group than in the control group, with an AUC of 0.76 on ROC analysis. Among the ML models, the linear support vector classifier achieved the highest AUC (0.86), which improved to 0.89 when recursive feature elimination was applied.

Conclusion: This study showed a significant increase in diagnostic accuracy when AI was used for quantitative analysis of muscle ultrasonography. If an ML-based analysis protocol can be established and integrated into an ultrasound machine, a noninvasive and time-efficient muscle ultrasound examination could serve as an ancillary diagnostic tool.
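To make the two analysis steps concrete, here is a minimal scikit-learn sketch: the conventional EI ratio computed as a mean grayscale brightness ratio, and a linear support vector classifier wrapped in recursive feature elimination, the best-performing combination the abstract reports. The toy data, feature count, and `n_features_to_select` are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def echo_intensity(roi: np.ndarray) -> float:
    """Conventional EI: mean grayscale pixel value inside a muscle ROI."""
    return float(roi.mean())

# Conventional parameter from the abstract: thenar/hypothenar EI ratio.
thenar = np.random.default_rng(1).integers(0, 256, (50, 50))      # toy ROI
hypothenar = np.random.default_rng(2).integers(0, 256, (50, 50))  # toy ROI
ei_ratio = echo_intensity(thenar) / echo_intensity(hypothenar)

# ML analysis: 47 CTS hands vs. 27 controls, each described by a
# hypothetical vector of grayscale/texture features (random here).
rng = np.random.default_rng(0)
X = rng.normal(size=(74, 10))
y = np.array([1] * 47 + [0] * 27)            # 1 = CTS, 0 = control
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Linear SVC with recursive feature elimination (RFE); RFE prunes the
# least informative features using the classifier's coefficients.
clf = RFE(LinearSVC(dual=False), n_features_to_select=5)
clf.fit(X_tr, y_tr)
print("toy AUC:", roc_auc_score(y_te, clf.decision_function(X_te)))
```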