The medical director of a child guidance center is starting a new treatment program. Following intense media coverage of adverse events associated with selective serotonin reuptake inhibitors, depressed teens and their families are refusing to accept pharmacotherapy. The director has paid for three social work therapists to attend a cognitive-behavioral therapy (CBT) workshop, and these clinicians will begin seeing depressed teens next week. The director, however, is worried. From what she has heard and read, CBT has a good track record in treating depression, although it may work best in combination with the medications that her patients are refusing to take. In addition, she wonders how well CBT will work when delivered by her social work therapists to the population of poor, Spanish-speaking teens served by the clinic.

In this example, the medical director struggles with how to bring the principles of evidence-based practice (EBP) to bear on the problem of program evaluation. She is asking a critical, pragmatic question: Does therapy work, here, now, and with my patients? This column discusses a method for addressing this query that uses the results of published psychotherapy clinical trials as a gold-standard benchmark against which the outcomes of practice can be measured (McFall, 1996). This methodology views research evidence on the effects of psychotherapy as, literally, a base: solid ground that researchers and practitioners can use as a foundation when trying to solve problems, whether theoretical or applied.