Aim
The aim of this study is to show that facial surface electromyography (sEMG) conveys sufficient information to predict 3D lip shapes. High sEMG predictive accuracy would imply that a neural control model for activating biomechanical models can be trained by simultaneously recording sEMG signals and the associated motions.

Materials and methods
With a stereo camera set-up, we recorded 3D lip shapes while simultaneously measuring sEMG of the facial muscles. We applied principal component analysis (PCA) and a modified general regression neural network (GRNN) to link the sEMG measurements to the 3D lip shapes. To test reproducibility, we conducted the experiment on five volunteers, evaluating several sEMG features and window lengths in unipolar and bipolar configurations in search of the optimal settings for facial sEMG.

Conclusions
The errors of the two methods were comparable: we predicted 3D lip shapes with a mean accuracy of 2.76 mm using the PCA method and 2.78 mm using the modified GRNN. Whereas performance improved with shorter window lengths, feature type and electrode configuration had little influence.
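To make the pipeline concrete, below is a minimal sketch (not taken from the paper) of one plausible reading of the method: windowed sEMG features predict PCA scores of the 3D lip shape through a GRNN, i.e. Gaussian kernel regression. All array sizes, the synthetic data, and the bandwidth `sigma` are hypothetical stand-ins for the real recordings.

```python
import numpy as np

def grnn_predict(X_train, Y_train, X_query, sigma=1.0):
    """General regression neural network (Nadaraya-Watson kernel
    regression): each prediction is a Gaussian-weighted average of
    the training targets; sigma is the kernel bandwidth."""
    # Pairwise squared Euclidean distances, shape (n_query, n_train).
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)   # normalise weights per query
    return w @ Y_train                  # weighted average of targets

# Hypothetical stand-in data: windowed sEMG features -> 3D lip markers.
rng = np.random.default_rng(0)
n_frames, n_emg, n_markers = 500, 16, 30          # 8 muscles x 2 sides
emg = rng.standard_normal((n_frames, n_emg))      # e.g. RMS per window
lips = rng.standard_normal((n_frames, n_markers * 3))  # flattened x,y,z

# PCA on the lip shapes: keep the first k principal components.
lips_mean = lips.mean(axis=0)
_, _, Vt = np.linalg.svd(lips - lips_mean, full_matrices=False)
k = 5
scores = (lips - lips_mean) @ Vt[:k].T            # low-dimensional shape

# Predict PC scores from sEMG, then reconstruct the 3D lip shape.
pred = grnn_predict(emg[:400], scores[:400], emg[400:], sigma=2.0)
pred_lips = pred @ Vt[:k] + lips_mean

# Mean 3D error per marker (the abstract reports accuracy in mm this way).
err = np.linalg.norm(
    (pred_lips - lips[400:]).reshape(-1, n_markers, 3), axis=2).mean()
print(f"mean 3D marker error: {err:.2f} (arbitrary units here)")
```

The abstract's two routes differ in what is regressed (PC scores versus the shapes themselves); the sketch shows only the PCA-based variant, on random data, as a smoke test of the plumbing rather than a reproduction of the reported accuracy.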
In oral cancer, loss of function due to surgery can be unacceptable, rendering the tumour functionally inoperable; other curative treatments must then be considered. Current predictions of these functional consequences are subjective and unreliable. We aim to create patient-specific models to improve and objectify such predictions. As a first step, we controlled a 3D lip model with volunteer-specific sEMG activity. We focus on the lips first because they are essential for speech, oral food transport, and facial expression, and because they are more accessible to measurement than intraoral organs. 3D lip movement and the corresponding sEMG activity of eight facial muscles (bilaterally) were measured in five healthy volunteers, each of whom repeatedly performed 19 instructions. A quantitative lip model was created by relating the sEMG activities (input) to the corresponding 3D lip displacements (output), and this relationship was captured in a state-space model. A good fit between sEMG activity and 3D lip movement was obtained, with an average root mean square error of 2.43 mm for the first-order system and 2.46 mm for the second-order system. This information can be incorporated into biomechanical models to further personalise the assessment of functional outcome after treatment.
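As a rough illustration of the state-space formulation, the sketch below fits a first-order linear model y[t+1] = A y[t] + B u[t] by least squares, using the lip displacement itself as the state. This is a minimal reading of a "first-order system", not the authors' implementation; `fit_first_order`, `simulate`, and the synthetic signals are assumptions.

```python
import numpy as np

def fit_first_order(u, y):
    """Least-squares fit of y[t+1] = A y[t] + B u[t], with the lip
    displacements themselves used as the state. Returns (A, B)."""
    # Regressor: current output and current input, stacked column-wise.
    Phi = np.hstack([y[:-1], u[:-1]])                  # (T-1, ny+nu)
    Theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    ny = y.shape[1]
    return Theta[:ny].T, Theta[ny:].T                  # A (ny,ny), B (ny,nu)

def simulate(A, B, u, y0):
    """Free-run simulation of the fitted model from initial state y0."""
    y = [y0]
    for t in range(len(u) - 1):
        y.append(A @ y[-1] + B @ u[t])
    return np.array(y)

# Hypothetical stand-in data: 16 sEMG channels -> 3D displacement of
# one lip marker, in mm.
rng = np.random.default_rng(1)
T = 1000
u = rng.standard_normal((T, 16))                       # sEMG envelopes
y = np.cumsum(rng.standard_normal((T, 3)), axis=0) * 0.01  # 3D path

A, B = fit_first_order(u, y)
y_hat = simulate(A, B, u, y[0])
rmse = np.sqrt(((y_hat - y) ** 2).sum(axis=1).mean())  # mean 3D error
print(f"free-run RMSE: {rmse:.3f} (synthetic units)")
```

A second-order system would add one more lag of the state to the regressor; on the synthetic random walk used here the RMSE is meaningless, it only shows that the fitting and simulation steps run.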
This is the first study to quantitatively measure 3D tongue motion after in vivo intraoperative neurostimulation of the hypoglossal nerve and its branches during a neck dissection procedure. First, we assessed whether this set-up can activate different muscles or muscle groups with an identifiable corresponding motion pattern, by stimulating the main stem and the visible branches and comparing the captured 3D trajectories within each patient. Second, an inter-patient comparison was performed to analyse whether similar branches lead to comparable movements of the tongue tip. Our results show that the measurement set-up works and that we can capture distinguishable trajectories for different branches. The inter-patient comparison showed a poor match in trajectories for similar branches, which may indicate anatomical variation between individuals.
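The abstract does not specify how trajectories were compared, so the following is purely illustrative: one simple way to compare two 3D tongue-tip trajectories is to resample each by arc length, align the start points, and take the mean point-wise distance. `resample`, `trajectory_distance`, and the synthetic trajectories are hypothetical.

```python
import numpy as np

def resample(traj, n=100):
    """Resample a 3D trajectory to n points, uniform in arc length."""
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative length
    s_new = np.linspace(0.0, s[-1], n)
    return np.column_stack(
        [np.interp(s_new, s, traj[:, i]) for i in range(3)])

def trajectory_distance(a, b, n=100):
    """Mean point-wise Euclidean distance between two trajectories
    after arc-length resampling and alignment of the start points."""
    ra, rb = resample(a, n), resample(b, n)
    ra = ra - ra[0]                    # translate both to a common origin
    rb = rb - rb[0]
    return np.linalg.norm(ra - rb, axis=1).mean()

# Hypothetical tongue-tip trajectories from two patients, same branch.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 80)
base = np.column_stack([np.sin(t), t, 0.5 * t ** 2])
traj1 = base + 0.01 * rng.standard_normal((80, 3))
traj2 = base + 0.01 * rng.standard_normal((80, 3))
print(f"mean trajectory distance: {trajectory_distance(traj1, traj2):.4f}")
```

A small distance would indicate comparable movements for the same branch across patients; elastic alignment methods such as dynamic time warping would be a natural alternative when stimulation timing differs.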