Propensity score analysis is a relatively recent statistical innovation that is useful in the analysis of data from quasi-experiments. The goal of propensity score analysis is to balance two nonequivalent groups on observed covariates in order to obtain more accurate estimates of the effects of a treatment on which the two groups differ. This article presents a general introduction to propensity score analysis, provides an example that compares results from a quasi-experiment with those from a benchmark randomized experiment, offers practical advice about how to conduct such analyses, and discusses some limitations of the approach. It also presents the first detailed instructions to appear in the literature on how to use classification tree analysis and bagging for classification trees in the construction of propensity scores. The latter two examples serve as an introduction for researchers interested in computing propensity scores using more complex classification algorithms known as ensemble methods.
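The core idea in the abstract, balancing nonequivalent groups on observed covariates, can be illustrated with a minimal sketch. The sketch below uses simulated data and a logistic-regression propensity model fit by Newton-Raphson; the article itself also demonstrates classification trees and bagging, which are not shown here. All variable names, covariates, and data-generating parameters are illustrative assumptions, not drawn from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated quasi-experiment: treatment uptake depends on two observed
# covariates, so the raw difference in group means is confounded.
n = 2000
x1 = rng.normal(size=n)  # hypothetical covariate, e.g. baseline severity
x2 = rng.normal(size=n)  # hypothetical covariate, e.g. motivation
p_treat = 1 / (1 + np.exp(-(0.8 * x1 - 0.5 * x2)))
t = rng.binomial(1, p_treat)
y = 2.0 * t + 1.5 * x1 + 1.0 * x2 + rng.normal(size=n)  # true effect = 2.0

# Fit a logistic-regression propensity model by Newton-Raphson.
X = np.column_stack([np.ones(n), x1, x2])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (t - p)                     # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
    beta += np.linalg.solve(hess, grad)

e = 1 / (1 + np.exp(-X @ beta))  # estimated propensity scores

# Inverse-propensity weighting removes the covariate imbalance that
# biases the naive comparison of treated and untreated means.
naive = y[t == 1].mean() - y[t == 0].mean()
ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print(f"naive difference: {naive:.2f}")
print(f"IPW estimate:     {ipw:.2f}  (true effect = 2.0)")
```

With these simulated data the naive difference overstates the effect, while the propensity-weighted estimate lands near the true value of 2.0. In practice the propensity model would be fit to real covariates, and weighting is only one of several ways (matching, stratification) to use the estimated scores.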
A hallmark of Lee Sechrest's career has been his emphasis on practical methods for field research. Sechrest once drove this point home in a comment on the first author's presidential address to the 1997 Annual Convention of the American Evaluation Association (Shadish, 1998). The title of that address was "Evaluation Theory Is Who We Are," and the address included a 10-item test on evaluation theory given to the entire audience. Those who failed the test were informed that their credentials as evaluators were in serious question, a rhetorical point in many respects, but one that was clearly controversial. After the address, Sechrest took Shadish aside and said, "You really need to get out and do more evaluations." We hope it is belatedly responsive to this suggestion that, in this chapter, we focus on a very practical recent development in field research, one that Sechrest and his colleagues have both used and criticized: propensity score analysis of quasi-experimental data.

Quasi-experiments share many of the characteristics of randomized experiments, except that they never use random assignment. Quasi-experiments are widely viewed as more practical than randomized experiments, especially when random assignment is not feasible or ethical. The latter might occur, for example, if the researcher is asked to design a study after a treatment has been implemented, or if practitioners judge that treatment cannot be denied to needy clients. In such cases, it is common for participants in a quasi-experiment to select which treatment they want to have, or to have their treatment selected for them on a nonrandom basis by, say, administrators or treatment providers. Quasi-experiments have other desirable features as well. For example, the participants, treatments, outcome measures, and settings in quasi-experiments may be more representative of real-world conditions than those in randomized experiments.
Often, for example, randomized experiments can include only participants who agree to be randomly assigned or settings that agree to have