Successful clinical practice demands good judgment in choosing optimal treatments for the conditions in question, yet the criteria on which we base our choices are often implicit. In this article, it is argued that practicing clinicians can enhance the quality of these judgments by attending more explicitly to the strength and integrity of treatment, as well as to specific standards of treatment effectiveness. Numerous examples from the psychological and evaluation literature illustrate the practical considerations and problems inherent in this approach. By monitoring strength, integrity, and effectiveness for the duration of treatment, we can make appropriate mid-course corrections and determine the relationship among these three critical dimensions of treatment.

Many of the issues confronted by research scientists are also likely to be quite relevant to professionals who evaluate and treat personal problems. We have recently written several papers that deal with issues of special interest to evaluation researchers in the fields of health and psychology. The concepts in these papers are not field-specific and appear to us to be quite pertinent to practicing clinicians as well. Our major purpose in the present article, then, is to suggest alternative perspectives on and procedures for clinical research and practice that may be particularly useful to clinicians. Testing these procedures in the proving grounds of clinical practice may in turn generate other new concepts of interest to research scientists.

Issues Regarding the Choice and Maintenance of Successful Treatments

There is a seemingly infinite number of aspects of the patient-problem nexus that will potentially influence the choice of treatments for clinical problems. However, our emphasis will instead be directed to impor-
The aim of this study was to assess the relationship of breastfeeding and of postponing the introduction of solid food (SF) to children's obesity and healthy weight status (WS) at 2 and 4 years. Drawing upon a nationally representative sample of children from the Early Childhood Longitudinal Study-Birth Cohort, we estimated the magnitude of the relationship between children's WS and early feeding practices. Contingency tables and multinomial logistic regression were used to analyze obese and healthy WS for breastfed and never-breastfed children and to examine three timing categories for SF introduction. In both the percentage and the odds analyses, breastfeeding and delaying the introduction of SF until 4 months were associated with lower obesity rates and higher healthy-WS rates (typically 5-10%). Analyses of combinations of feeding practices revealed that when children were not breastfed, obesity odds decreased when SF introduction was postponed until 4 months. Obesity odds were further reduced when SF delay was combined with breastfeeding. Consistent increases in healthy WS were also observed. Benefits were stable across both follow-up periods. Breastfeeding and delaying complementary foods yielded consistently and substantially lower likelihood of obesity and greater probability of healthy WS. Health policies targeting early feeding practices represent promising interventions for decreasing preschool obesity and promoting healthy WS.
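To make the analytic approach concrete, the following Python sketch shows the kind of multinomial logistic regression described above: weight status regressed on breastfeeding and solid-food timing, with odds read off the exponentiated coefficients. It is a minimal illustration on simulated data; the variable names, categories, and coefficients are assumptions for the example, not the ECLS-B variables or the authors' code.

```python
# Minimal sketch of a multinomial logistic regression of weight status (WS)
# on early feeding practices. Simulated data; names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Hypothetical predictors: ever breastfed (0/1) and whether solid-food (SF)
# introduction was delayed to at least 4 months (0/1).
breastfed = rng.integers(0, 2, n)
sf_delayed = rng.integers(0, 2, n)

# Hypothetical outcome: WS category (0 = healthy, 1 = overweight, 2 = obese),
# with lower obesity risk when children are breastfed and SF is delayed.
logit_obese = -1.0 - 0.4 * breastfed - 0.3 * sf_delayed
p_obese = 1 / (1 + np.exp(-logit_obese))
ws = np.where(rng.random(n) < p_obese, 2, rng.integers(0, 2, n))

X = sm.add_constant(pd.DataFrame({"breastfed": breastfed,
                                  "sf_delayed": sf_delayed}))
model = sm.MNLogit(ws, X).fit(disp=False)

# Exponentiated coefficients approximate odds ratios relative to the
# healthy-WS reference category; values below 1 indicate reduced odds.
print(np.exp(model.params))
```

Contingency-table percentages can be cross-checked against these odds ratios by tabulating WS within each feeding-practice combination.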
Utilizing the regression-discontinuity research design, this article explores the effectiveness of a developmental English program in a large, multicampus community college. Routinely collected data were extracted from existing records of a cohort of first-time college students followed for approximately 6 years (N = 1,473). Results are consistent with a conclusion that students' participation in the program increases English academic achievement to levels similar to those of students not needing developmental coursework. The findings are also consistent with a conclusion that the students in greatest need of developmental English benefit the most from the program. This study provides an inexpensive, inferentially rigorous program evaluation strategy that can be applied with little additional effort to assess existing programs and to guide policy decisions.
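As an illustration of the design, the sketch below estimates a sharp regression-discontinuity effect: students scoring below a placement cutoff receive the developmental program, and the program effect is read off the jump in later achievement at the cutoff. The cutoff value, outcome model, and simulated data are assumptions for the example, not the study's records or analysis code.

```python
# Minimal sharp regression-discontinuity sketch on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1473                      # cohort size mentioned in the abstract
cutoff = 50                   # hypothetical placement-test cutoff

placement = rng.uniform(0, 100, n)           # placement-test score
treated = (placement < cutoff).astype(int)   # below cutoff -> developmental English

# Hypothetical outcome: later English achievement rises with placement score,
# with a program boost that is larger for students furthest below the cutoff.
outcome = (0.5 * placement
           + treated * (8 + 0.1 * (cutoff - placement))
           + rng.normal(0, 5, n))

# Regress the outcome on treatment, the centered assignment score, and their
# interaction; the treatment coefficient estimates the effect at the cutoff.
centered = placement - cutoff
X = sm.add_constant(np.column_stack([treated, centered, treated * centered]))
fit = sm.OLS(outcome, X).fit()
print(fit.params)   # params[1] is the estimated discontinuity (program effect)
```

The interaction term is what lets the slope differ on each side of the cutoff, which is how a design like this can show larger benefits for students with the lowest placement scores.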
Twenty-four elementary school children in grades K-3 participated in a study to teach six street-crossing skills: (1) wait at curb, (2) look all ways, (3) watch vehicle distance, (4) walk, (5) continue to look, and (6) use crosswalk. The effects of an instructional package implemented on the street corner were evaluated using a multiple-baseline design across two groups of six children at each of two schools. Rapid acquisition of pedestrian skills was evident at both schools. Average skill levels improved from 44% during baseline to 97% after training at School A and from 21% to 86% at School B. Data taken at a second street at each school were used to assess setting generality of the safety behaviors. A one-year follow-up of 14 children indicated that pedestrian safety skills either were maintained at high levels or could be quickly recovered from intermediate levels after remedial training. This research represents a first step in the solution of just one of the many community problems involving safety-deficient settings.
How the magnitude of an experimental effect may be measured has been a matter of concern for at least two decades. The phenomenon of effect size is still not well understood, and it cannot be inferred from statistical significance. In recent years various ways of assessing the amount of variance accounted for have been proposed as measures of magnitude of effect. Other writers have proposed rules for standardizing effect size, with the interpretations of the measures depending largely on intuitions buttressed by some further general empirical norms. All the methods of assessing effect size have serious flaws that limit their usefulness. The various statistical procedures for estimating variance accounted for are based on different statistical models and can produce rather sharply differing results, depending on the model employed. All the methods suffer from the limitation that they reflect to too great an extent the particular characteristics of the study being reported and hence have limited generalizability.

A problem of which researching psychologists have been aware for years (e.g., Bolles and Messick, 1958; Savage, 1957), but that has had increasing attention over the past decade or so, is how to determine how large an effect is achieved by an experimental intervention. Especially for psychologists working in applied areas, it is important to know more than that a treatment produces a statistically significant main effect. However, even for theoretical problems it is at least enlightening, and often sobering, to find out how much of an effect is at stake in the intricacies of a theoretical controversy.
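The contrast between the two families of measures discussed above can be made concrete with a short worked example: a standardized mean difference (Cohen's d) and the proportion of variance accounted for (eta-squared) computed from the same two-group data. The simulated scores and group sizes are assumptions made for the illustration, not values from any study cited here.

```python
# Two effect-size measures computed from the same simulated two-group data.
import numpy as np

rng = np.random.default_rng(2)
treatment = rng.normal(0.5, 1.0, 50)   # hypothetical treated group
control = rng.normal(0.0, 1.0, 50)     # hypothetical control group

# Standardized effect size: mean difference over the pooled standard deviation.
pooled_sd = np.sqrt(((len(treatment) - 1) * treatment.var(ddof=1)
                     + (len(control) - 1) * control.var(ddof=1))
                    / (len(treatment) + len(control) - 2))
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

# Variance accounted for: between-group sum of squares over total sum of squares.
all_scores = np.concatenate([treatment, control])
grand_mean = all_scores.mean()
ss_between = (len(treatment) * (treatment.mean() - grand_mean) ** 2
              + len(control) * (control.mean() - grand_mean) ** 2)
ss_total = ((all_scores - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

# A d of about 0.5 corresponds to only roughly 6% of variance explained,
# one reason the two kinds of measures invite quite different interpretations.
print(f"Cohen's d = {cohens_d:.2f}")
print(f"eta^2     = {eta_squared:.2f}")
```

The same data thus yield a "medium" standardized difference and a small-looking share of explained variance, which illustrates why the choice of measure, and of the statistical model behind it, matters for how large an effect appears to be.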