In nursing research, many concepts are measured with questionnaires in which respondents rate a set of related statements or questions. In unidimensional scaling, these statements or questions are indicants of the same concept. Scaling means assigning numbers to respondents according to their position on the continuum underlying the concept. The summative Likert scaling procedure is very common: the sum score of the responses to the items estimates the respondent's position on the continuum. The rationale behind this procedure is classical test theory, whose main assumption is that all items are parallel instruments. The Rasch model offers an alternative scaling procedure in which both respondents and items are scaled on the same continuum. Whereas in Likert scaling all items carry the same weight in the summation, in the Rasch model items are differentiated from each other by 'difficulty': the probability of a positive response to an item depends on the difference between the person's value on the latent trait and the item's difficulty. The rationale behind this procedure is item response theory. In this paper both scaling procedures and their rationales are discussed.
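For reference, the dichotomous Rasch model described above can be written explicitly; the symbols θ_n (person location) and β_i (item difficulty) are conventional item response theory notation, not taken from the abstract itself:

\[
P(X_{ni} = 1 \mid \theta_n, \beta_i) = \frac{\exp(\theta_n - \beta_i)}{1 + \exp(\theta_n - \beta_i)}
\]

The probability of a positive response thus rises with the difference θ_n − β_i, exactly as stated above. The Likert procedure, by contrast, weights every item equally: the respondent's estimated position is simply a monotone function of the sum score \( \sum_i x_{ni} \).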
Many studies show discrepancies between patients' and professionals' ratings of the patient's health on the same questionnaire. A relevant question is whether these differences reflect real differences between patients and professionals or whether they are caused by characteristics of the instrument. In this study, we address the latter option by examining the effects of three item characteristics (item wording, observability, and clarity) on the degree of patient/nurse discrepancy in ratings of the items of the Appraisal of Self-care Agency (ASA) scale. Secondary analysis of 252 patient/nurse ratings showed that item wording (positively vs. negatively formulated items) and the observability of the items have a significant effect on the mean absolute difference score; no effect was found for clarity. These results were generally confirmed by subgroup analyses.
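As a minimal sketch of the discrepancy metric named above (not the study's actual code), the mean absolute difference score can be read as the average of |patient rating − nurse rating| per item across all rating pairs. The data layout below is an assumption for illustration: 252 pairs as in the abstract, 24 items (the usual ASA scale length), ratings on a 5-point scale.

```python
import numpy as np

# Simulated paired ratings; array names, shapes, and the 5-point
# scale are illustrative assumptions, not data from the study.
rng = np.random.default_rng(0)
n_pairs, n_items = 252, 24
patient = rng.integers(1, 6, size=(n_pairs, n_items))  # ratings 1..5
nurse = rng.integers(1, 6, size=(n_pairs, n_items))

# Mean absolute difference score per item: average of |patient - nurse|
# over all pairs. Item characteristics (wording, observability, clarity)
# would then be tested as predictors of this per-item score.
mad_per_item = np.abs(patient - nurse).mean(axis=0)
print(mad_per_item.round(2))
```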
Use of the standard instruction when completing the FES-I/Hips can lead to underreporting of fear of falling (FoF). Adapting certain items may improve content validity. Further psychometric studies are recommended to determine whether the proposed adjustments are appropriate.