The nature of the difference in skill between the preferred and non-preferred hands was investigated using a peg-board task. The first experiment examined the effects of varying movement amplitude and target tolerance on performance. The difference between hands was found to be related to tolerance rather than to movement amplitude. The second study analysed a film record of well-practised subjects, confirming the hypothesis that most of the difference between hands is due to the relative slowness of the non-preferred hand in the positioning phase, which involves small corrective movements. Analysis of the type and number of errors further suggested that this result is due not to differences in the duration of these movements but to their increased frequency, implying greater accuracy of aiming with the preferred hand. Thus, whilst the initial gross analysis implicated feedback processing in skill differences, the more detailed analysis suggests that the motor output of the non-preferred hand is simply more variable.
Motor imagery has been studied using subjective, behavioural and physiological methods, and this paper reviews theoretical and practical issues from all three viewpoints. Attempts to measure motor imagery on a subjective scale have met with limited success, but alternative methods are proposed. Research on mental practice suggests that a number of different processes may be needed to explain the variety and variability of the effects obtained. Recent studies of spatial and motor working memory signify the importance of a primarily visuo-spatial component, in which actions are consciously represented, together with a more properly motoric component, which must be activated to generate either images or overt actions. Finally, the question of whether motor imagery is primarily perceptual or motoric in character does not have a simple neurophysiological answer, owing to the highly distributed nature of motor control. Nevertheless, some of the key mechanisms serving both spatial and motoric components have been provisionally identified.
Subjective rating scales are widely used in almost every aspect of ergonomics research and practice for the assessment of workload, fatigue, usability, annoyance and comfort, and of lesser-known qualities such as urgency and presence, but are they truly scientific? This paper raises some of the key issues as a basis for debate. First, it is argued that all empirical observations, including those conventionally labelled as 'objective', are unavoidably subjective. Shared meaning between observers, or intersubjectivity, is the key criterion of scientific probity. The practical steps that can be taken to increase intersubjective agreement are discussed, and the well-known sources of error and bias in human judgement are reviewed. The role of conscious experience as a mechanism for appraising the environment and guiding behaviour has important implications for the interpretation of subjective reports. The view that psychometric measures do not conform to the requirements of truly 'scientific' measurement is discussed. Human judgement of subjective attributes is essentially ordinal and, unlike physical measures, can be matched to interval scales only with difficulty; nonetheless, ordinal measures can be used successfully both to develop and to test substantive theories using multivariate statistical techniques. Constructs such as fatigue are best understood as latent or inferred variables defined by a set of manifest, directly observed indicator variables. Both construct validity and predictive validity are viewed from this perspective, which helps to clarify several problems: the dissociation between measures of different aspects of a given construct; the question of whether physical (e.g. physiological) measures should be preferred to subjective measures; and whether a single measure is desirable for constructs that are essentially multidimensional, having both subjective and physical components.
Finally, the fitness of subjective ratings for different purposes within the broad field of ergonomics research is discussed. For testing competing hypotheses concerning the mechanisms underlying human performance, precise quantitative predictions are rarely needed. The same is frequently true of the comparative evaluation of competing designs. In setting design standards, however, something approaching the level of measurement needed for precise quantitative prediction is required, and this is difficult to achieve in practice. Although it may be possible to establish standards within restricted contexts, general standards for broadly conceived constructs such as workload are impractical, owing to the requirement for representative sampling of tasks, work environments and personnel.