Despite the widespread availability of ultrasound and the need for personalised muscle diagnosis (neck/back pain and injury, work-related disorders, myopathies, neuropathies), robust online segmentation of muscles within complex groups remains unsolved by existing methods. For example, Cervical Dystonia (CD) is a prevalent neurological condition causing painful spasticity in one or more muscles of the cervical muscle system. Clinicians currently have no method for targeting or monitoring treatment of deep muscles. Automated muscle segmentation would enable clinicians to study, target, and monitor the deep cervical muscles via ultrasound. We have developed a method for segmenting five bilateral cervical muscles and the spine via ultrasound alone, in real time. Magnetic Resonance Imaging (MRI) and ultrasound data were collected from 22 participants (age: 29.0±6.6, male: 12). To acquire ultrasound muscle segment labels, a novel multimodal registration method was developed, involving MRI image annotation and shape registration to MRI-matched ultrasound images via approximation of the tissue deformation. We then applied polynomial regression to transform our annotations and textures into a mean space, before using shape statistics to generate a texture-to-shape dictionary. For segmentation, test images were compared to dictionary textures to give an initial segmentation, which a customised Active Shape Model then refined. Using ultrasound alone, on unseen participants, our technique currently segments a single image in [Formula: see text] to over 86% accuracy (Jaccard index). We propose that this approach is generally applicable to segmenting, extrapolating, and visualising deep muscle structure, and to analysing statistical features online.
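The accuracy figure above (and the comparison in the following abstract) is reported as a Jaccard index, i.e. intersection over union of the predicted and ground-truth masks. A minimal sketch of that metric for binary segmentation masks (the function name and toy masks are illustrative, not from the papers):

```python
import numpy as np

def jaccard_index(pred, truth):
    """Jaccard index (intersection over union) of two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Toy 4x4 masks: 2 overlapping pixels out of a 4-pixel union -> 0.5
a = np.zeros((4, 4), dtype=int); a[1, 1:4] = 1
b = np.zeros((4, 4), dtype=int); b[1, 0:3] = 1
```

For multi-class segmentation (as in the 14-class problem below), the index is typically computed per class and averaged.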
Objectives: To automate online segmentation of cervical muscles from transverse ultrasound (US) images of the human neck during functional head movement. To extend ground-truth labelling methodology beyond dependence on MRI imaging of static head positions, as required for application to participants with involuntary movement disorders. Method: We collected sustained sequences (> 3 minutes) of US images of human posterior cervical neck muscles at 25 fps from 28 healthy adults performing visually guided pitch and yaw head motions. We sampled 1,100 frames (approx. 40 per participant) spanning the experimental range of head motion. We manually labelled all 1,100 US images and trained deconvolutional neural networks (DCNN) with a spatial SoftMax regression layer to classify every pixel in the full-resolution (525×491) US images as one of 14 classes (10 muscles, ligamentum nuchae, vertebra, skin, background). We investigated 'MaxOut' and Exponential Linear Unit (ELU) transfer functions and compared with our previous benchmark (analytical shape modelling). Results: These DCNNs showed a higher Jaccard index (53.2%) and lower Hausdorff distance (5.7 mm) than the previous benchmark (40.5%, 6.2 mm). SoftMax confidence corresponded with correct classification. 'MaxOut' outperformed ELU marginally. Conclusion: The DCNN architecture accommodates challenging images and imperfect manual labels. The SoftMax layer gives the user feedback on likely correct classification. The 'MaxOut' transfer function benefits from near-linear operation, compatibility with deconvolution operations, and the dropout regulariser. Significance: This methodology for labelling ground truth and training automated labelling networks is applicable to dynamic segmentation of moving muscles and to participants with involuntary movement disorders who cannot remain still.
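The 'MaxOut' transfer function compared here takes the maximum over k affine pieces per output unit, giving a learned piecewise-linear (hence near-linear) activation. A minimal numpy sketch of one MaxOut layer for a single input vector (shapes and names are illustrative, not the papers' implementation):

```python
import numpy as np

def maxout(x, W, b):
    """MaxOut transfer function: max over k affine pieces per output unit.

    x: input vector, shape (n_in,)
    W: weights, shape (n_in, n_out, k)  -- k linear pieces per output
    b: biases, shape (n_out, k)
    Returns activations, shape (n_out,).
    """
    z = np.einsum('i,ijk->jk', x, W) + b  # (n_out, k) affine pre-activations
    return z.max(axis=1)                  # piecewise-linear max over pieces

# With k=2 pieces of weight +1 and -1 and zero bias, a single MaxOut
# unit computes the absolute value |x|.
W_abs = np.array([[[1.0, -1.0]]])  # (n_in=1, n_out=1, k=2)
b_abs = np.zeros((1, 2))
```

Because the output is always one of the affine pieces, gradients pass through undiminished, which is what makes MaxOut combine well with deconvolution layers and dropout, as noted in the conclusion above.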
This paper presents an investigation into the feasibility of using deep learning methods to develop arbitrary full-spatial-resolution regression analysis of B-mode ultrasound images of human skeletal muscle. In this study, we focus on full spatial analysis of muscle fibre orientation, since there is an existing body of work with which to compare results. Previous attempts to automatically estimate fibre orientation from ultrasound are not adequate: they often require manual region selection and feature engineering, provide low-resolution estimates (one angle per muscle), and rarely attempt deep muscles. We build upon our previous work, in which automatic segmentation was used with plain convolutional neural network (CNN) and deep residual convolutional network (ResNet) architectures to predict a low-resolution map of fibre orientation in extracted muscle regions. Here, we use deconvolutions and max-unpooling (DCNN) to regularise and improve predicted fibre orientation maps for the entire image, including deep muscles, removing the need for automatic segmentation, and we compare our results with the CNN and ResNet, as well as a previously established feature-engineering method, on the same task. Dynamic ultrasound image sequences of the calf muscles were acquired (25 Hz) from 8 healthy volunteers (4 male, ages: 25-36, median 30). A combination of expert annotation and interpolation/extrapolation provided labels of regional fibre orientation for each image. Neural networks (CNN, ResNet, DCNN) were then trained both with and without dropout using leave-one-out cross-validation. Our results demonstrated robust estimation of full spatial fibre orientation to within approximately 6° error, which was an improvement on previous methods.
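Reporting fibre-orientation error in degrees requires care, because orientation is axial: an angle θ and θ+180° describe the same fibre direction, so naive subtraction overstates the error near the wrap-around. A minimal sketch of a wrapped mean-absolute-error metric of the kind such a comparison needs (the function name and convention are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def orientation_error_deg(pred_deg, true_deg):
    """Mean absolute fibre-orientation error in degrees.

    Orientations are axial (theta and theta + 180 are the same fibre
    direction), so differences are wrapped into [0, 90] degrees.
    """
    d = np.abs(np.asarray(pred_deg, float) - np.asarray(true_deg, float)) % 180.0
    return np.minimum(d, 180.0 - d).mean()
```

For example, a prediction of 179° against ground truth of 1° is only 2° wrong under this metric, not 178°.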
Objective: To test automated in vivo estimation of active and passive skeletal muscle states using ultrasonic imaging. Background: Current technology (electromyography, dynamometry, shear wave imaging) provides no general, non-invasive method for online estimation of skeletal intramuscular states. Ultrasound (US) allows non-invasive imaging of muscle, yet current computational approaches have achieved neither simultaneous extraction nor generalisation of independently varying active and passive states. We use deep learning to investigate the generalisable content of 2D US muscle images. Method: US data synchronised with electromyography of the calf muscles, together with measures of joint moment/angle, were recorded from 32 healthy participants (7 female, ages: 27.5, range 19-65). We extracted a region of interest of medial gastrocnemius and soleus using our previously developed segmentation algorithm. From the segmented images, a deep convolutional neural network was trained to predict three absolute, drift-free components of the neurobiomechanical state (activity, joint angle, joint moment) during experimentally designed, simultaneous, independent variation of passive (joint angle) and active (electromyography) inputs. Results: For all 32 held-out participants (16-fold cross-validation), the ankle joint angle, electromyography, and joint moment were estimated to accuracies of 55±8%, 57±11%, and 46±9% respectively. Significance: With 2D US imaging, deep neural networks can encode, in generalisable form, the activity-length-tension state relationship of muscle. Observation-only, low-power 2D US imaging can provide a new category of technology for non-invasive estimation of neural output, length, and tension in skeletal muscle. This proof of principle has value for personalised muscle diagnosis in pain, injury, neurological conditions, neuropathies, myopathies, and ageing.
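The generalisation claim above rests on participant-grouped cross-validation: with 32 participants and 16 folds, each fold holds out 2 participants whose images the network never sees during training. A minimal sketch of such a grouping (the fold-assignment scheme is an illustrative assumption; the paper does not specify how participants were assigned to folds):

```python
import numpy as np

def participant_folds(participant_ids, n_folds=16):
    """Split unique participant IDs into n_folds disjoint held-out groups.

    Grouping by participant (rather than by image) ensures that no images
    from a held-out participant leak into that fold's training set.
    """
    ids = np.array(sorted(set(participant_ids)))
    return [ids[i::n_folds].tolist() for i in range(n_folds)]

# 32 participants, 16 folds -> 2 held-out participants per fold
folds = participant_folds(range(32))
```

Each fold then trains on the remaining 30 participants and evaluates on its 2 held-out participants, so every participant is held out exactly once.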