The idea that the brain controls movement using a neural representation of limb dynamics has been a dominant hypothesis in motor control research for well over a decade. Speech movements offer an unusual opportunity to test this proposal by examining the transfer of learning between utterances that are matched on kinematics to varying degrees. If speech learning results in a generalizable dynamics representation, then, at a minimum, learning should transfer when similar movements are embedded in phonetically distinct utterances. We tested this idea using three different pairs of training and transfer utterances that substantially overlap kinematically. We find that, with these stimuli, speech learning is highly context-sensitive and fails to transfer even to utterances that involve very similar movements. Speech learning thus appears to be extremely local, and this specificity is incompatible with the idea that speech control involves a generalized dynamics representation.
Humans routinely make movements to targets that have different accuracy requirements in different directions. Examples range from everyday actions, such as grasping the handle of a coffee cup, to the more refined case of a surgeon positioning a scalpel. Accuracy in such situations may depend on the nervous system's capacity to regulate the limb's resistance to displacement, or impedance. To test this idea, subjects made movements from random starting locations to targets whose accuracy requirements depended on target shape. We used a robotic device to assess both limb impedance and patterns of movement variability just as the subject reached the target. We show that impedance increases in directions where the required accuracy is high. Independent of target shape, patterns of limb stiffness predict spatial patterns of movement variability. The nervous system thus modulates limb impedance, even in fully predictable environments, to help achieve reaching accuracy.
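As an illustration, limb impedance in experiments of this kind is often summarized by a stiffness matrix relating small imposed displacements to restoring forces. Below is a minimal sketch of that estimation step, assuming static displacement/force pairs related by F = −K·d; the numbers and variable names are illustrative, not data from the study.

```python
import numpy as np

# Hypothetical perturbation data: each row is a small 2-D displacement (m)
# imposed by the robot near the target, with the measured restoring force (N).
displacements = np.array([[0.004, 0.0], [0.0, 0.004],
                          [-0.004, 0.0], [0.0, -0.004],
                          [0.003, 0.003], [-0.003, 0.003]])
forces = np.array([[-0.8, -0.1], [-0.15, -1.6],
                   [0.8, 0.1], [0.15, 1.6],
                   [-0.7, -1.3], [0.5, -1.1]])

# Fit the 2x2 stiffness matrix K in F = -K @ d by least squares:
# each row satisfies displacements @ K.T = -forces, so solve for K.T.
K_T, *_ = np.linalg.lstsq(displacements, -forces, rcond=None)
K = K_T.T

# Directional stiffness along a unit vector u is u' K u; comparing it across
# directions shows whether stiffness is elevated where accuracy demands are high.
u = np.array([0.0, 1.0])
print("stiffness along u (N/m):", u @ K @ u)
```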
Recent studies of human arm movement have suggested that the control of stiffness may be important both for maintaining stability and for achieving differences in movement accuracy. In the present study, we examined the voluntary control of postural stiffness of the human jaw in three dimensions. The goal is to address the possible role of stiffness control both in stabilizing the jaw and in meeting the differential precision requirements of speech sounds. We previously showed that patterns of kinematic variability in speech are systematically related to the stiffness of the jaw. If the nervous system uses stiffness control to regulate kinematic variation in speech, it should also be possible to show that subjects can voluntarily modify jaw stiffness. Using a robotic device, we applied a series of force pulses to the jaw to elicit stiffness changes that resist displacement. Three orthogonal directions and three force magnitudes were tested. In all conditions, subjects increased jaw stiffness to resist the effects of the applied forces. Except in the horizontal direction, larger applied forces produced greater increases in stiffness. Moreover, subjects differentially increased jaw stiffness along the vertical axis to counteract disturbances in that direction. The observed direction-dependent changes in stiffness magnitude suggest an ability to control the pattern of stiffness of the jaw. We interpret these results as evidence that jaw stiffness can be adjusted voluntarily and may therefore play a role in stabilizing the jaw and in controlling movement variation in the orofacial system.
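One hedged sketch of how such per-direction stiffness changes could be summarized, assuming each trial records the applied pulse magnitude and the resulting peak jaw displacement; the trial values, direction labels, and grouping below are purely illustrative.

```python
from statistics import mean

# Illustrative trials only -- not data from the study. Each entry:
# (direction of applied pulse, force magnitude in N, peak displacement in mm).
trials = [
    ("vertical",   1.0, 0.8), ("vertical",   2.0, 1.3), ("vertical",   3.0, 1.6),
    ("horizontal", 1.0, 1.0), ("horizontal", 2.0, 2.0), ("horizontal", 3.0, 3.1),
    ("protrusion", 1.0, 0.9), ("protrusion", 2.0, 1.6), ("protrusion", 3.0, 2.2),
]

# Scalar stiffness estimate per condition: applied force divided by mean peak
# displacement (N/mm). An estimate that rises with force magnitude indicates
# increased resistance to larger loads, as reported for the non-horizontal axes.
for direction in ("vertical", "horizontal", "protrusion"):
    for force in (1.0, 2.0, 3.0):
        disps = [d for dir_, f, d in trials if dir_ == direction and f == force]
        print(f"{direction:>10}  {force:.0f} N  k ~ {force / mean(disps):.2f} N/mm")
```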
In three speakers, we observed compensation in formant trajectories in response to jaw perturbations during utterances of the general form /siyCVd/, as in "see red." Custom dental prostheses were used to help immobilize the head (via the upper jaw) and to couple a computer-controlled robotic device to the lower jaw. A 3-N perturbation force was applied to the jaw on one out of every five repetitions, selected at random, with half of the perturbations directed downward and half upward. Perturbations were triggered when jaw opening (for the CV) exceeded a threshold relative to clench position. Audio (at 10 kHz) and jaw position (at 1 kHz) were recorded concurrently, and individual tokens were extracted using the perturbation threshold for alignment. Formants computed over these intervals show an initial deviation from control trajectories followed by compensation beginning 60–90 ms after the perturbation. Since jaw position does not recover its unperturbed trajectory, the compensation is presumably effected through modified tongue movements. The observed behavior is compatible with the DIVA model of speech motor planning, in which corrective motor commands are computed from errors between anticipated and produced sensory (auditory and somatosensory) consequences. [Research supported by NIDCD.]
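As a rough illustration of the analysis pipeline described above, formants are commonly estimated from short audio frames by linear prediction. The sketch below assumes the 10-kHz audio and threshold-based alignment mentioned in the abstract; the function names, window lengths, and LPC order are assumptions for illustration, not details from the study.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def extract_token(audio, trigger_idx, sr=10_000, pre_ms=100, post_ms=300):
    """Slice one token around the perturbation trigger sample, mirroring the
    threshold-based alignment in the abstract (the spans here are assumed)."""
    start = max(trigger_idx - int(pre_ms * sr / 1000), 0)
    return audio[start:trigger_idx + int(post_ms * sr / 1000)]

def lpc_coeffs(frame, order=10):
    """Autocorrelation-method LPC: solve the Toeplitz normal equations R a = r."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])
    return np.concatenate(([1.0], -a))  # A(z) = 1 - sum_k a_k z^{-k}

def formants(frame, sr=10_000, order=10):
    """Formant frequencies (Hz) from the angles of the LPC polynomial roots."""
    roots = np.roots(lpc_coeffs(frame, order))
    freqs = np.angle(roots[np.imag(roots) > 0]) * sr / (2 * np.pi)
    return sorted(f for f in freqs if f > 90)  # discard near-DC roots
```

Applying formants() to successive frames of each aligned token would yield trajectories of the kind compared against the control utterances.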