In this issue of JAMA Internal Medicine, Apathy et al 1 present results of a difference-in-differences (DiD) analysis of electronic health record (EHR) metadata from Epic, examining changes in EHR use and visit volume after successful, voluntary adoption of team-based documentation support (eg, scribes). They found a decrease in documentation time and an increase in visit volume, with larger effects for more intensive users.

The study offers an opportunity to consider the merits and assumptions underlying DiD, a popular observational study design in which researchers estimate the average treatment effect on the treated (ATT) by comparing pre-post differences between groups that were and were not exposed to a new treatment. DiD rests on a counterfactual parallel trends assumption: that absent the intervention, the treatment and comparison groups would have had parallel outcome trajectories on average. 2,3 Although widely used and seemingly straightforward, DiD can be complex to apply and interpret, and in the past several years, many methodological studies have aimed to improve its reliability and transparency. 2,3

The study by Apathy et al 1 incorporates several recent recommendations to strengthen DiD. For example, the authors present event study plots to provide evidence about the plausibility of the parallel trends assumption (Figure 2 in the study by Apathy et al 1). Because the parallel trends assumption describes what would have happened to treated groups had there been no intervention, it cannot be directly tested. However, event study plots provide information about preintervention trends: if preintervention trends were parallel, this increases the plausibility that trends would have remained parallel absent the intervention.

In Figure 2, 1 each point on the event study plot shows a DiD estimate for the outcome at a given time (shown on the x-axis) relative to the last week prior to the intervention (x-axis equal to −1). For example, Figure 2A shows the difference in total weekly visits for treatment vs comparison groups at each time compared with the last preintervention week. Preintervention points (those to the left of the dotted vertical line at 0) can be thought of as placebo effect estimates. The DiD design is most reliable when these preintervention points are close to 0, have narrow error bars, and lack a discernible trend over time. Postintervention points on the event study plot show the distribution of treatment effects over time following treatment; effects that begin abruptly after the intervention can lend credibility to a causal link between the treatment and the effect.

The authors also incorporated estimators designed to account for staggered treatment rollout. In this study, physicians adopted documentation support at different calendar times. In such cases, traditional DiD estimators can produce misleading or difficult-to-interpret estimates if treatment effects differ across adoption cohorts or are growing or shrinking over time. 2,4,5 To address this, the authors conducted sensitivity analyses using the Callaway and Sant'Anna estimator.
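To make the counterfactual logic of DiD concrete, it may help to see the canonical 2-group, 2-period comparison in notation. The display below is a generic textbook formulation, not the authors' specification: D indicates adoption of documentation support, Y(0) denotes the counterfactual outcome without adoption, and pre and post index the periods before and after adoption.

\[
\widehat{\mathrm{ATT}} = \left(\bar{Y}^{\,\mathrm{post}}_{D=1} - \bar{Y}^{\,\mathrm{pre}}_{D=1}\right) - \left(\bar{Y}^{\,\mathrm{post}}_{D=0} - \bar{Y}^{\,\mathrm{pre}}_{D=0}\right)
\]

This double difference identifies the ATT only under parallel trends:

\[
E\!\left[Y(0)^{\mathrm{post}} - Y(0)^{\mathrm{pre}} \mid D=1\right] = E\!\left[Y(0)^{\mathrm{post}} - Y(0)^{\mathrm{pre}} \mid D=0\right],
\]

that is, adopters' outcomes would have changed by the same amount, on average, as nonadopters' outcomes had they not adopted. Because the left-hand side involves an unobserved counterfactual, the assumption cannot be tested directly, which is why the event study evidence matters.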
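The event study plots described above correspond to a dynamic version of this design. A generic specification (again an illustrative sketch, not necessarily the exact model the authors fit) is

\[
Y_{it} = \alpha_i + \lambda_t + \sum_{k \neq -1} \beta_k \,\mathbf{1}\{t - T_i = k\} + \varepsilon_{it},
\]

where \(Y_{it}\) is the outcome for physician \(i\) in week \(t\), \(\alpha_i\) and \(\lambda_t\) are physician and week fixed effects, and \(T_i\) is physician \(i\)'s adoption week. Each estimated \(\beta_k\) is one plotted point: the \(k = -1\) indicator is omitted so that all estimates are relative to the last preintervention week, the coefficients for \(k \leq -2\) are the preintervention placebo estimates, and the coefficients for \(k \geq 0\) trace the treatment effect over time following adoption.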
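For readers who want to experiment with the mechanics, the following is a minimal, self-contained simulation of the basic 2-period DiD calculation in Python. All names and magnitudes (weekly visit counts, a true effect of 3 visits) are hypothetical choices for illustration, not values from the study.

```python
# Minimal sketch of a 2-period, 2-group DiD estimate on simulated data.
# Illustrative only: variable names (visits, adopter) and effect sizes
# are hypothetical, not values from Apathy et al.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200  # physicians; half adopt documentation support between periods

adopter = rng.permutation(np.repeat([0, 1], n // 2))
baseline = rng.normal(50, 5, n)   # physician-specific weekly visit level
common_trend = 2.0                # secular change shared by both groups
true_att = 3.0                    # hypothetical effect on adopters

pre = baseline + rng.normal(0, 2, n)
post = baseline + common_trend + true_att * adopter + rng.normal(0, 2, n)
df = pd.DataFrame({"adopter": adopter, "change": post - pre})

# DiD: mean change among adopters minus mean change among nonadopters.
did = (df.loc[df.adopter == 1, "change"].mean()
       - df.loc[df.adopter == 0, "change"].mean())
print(f"DiD estimate of the ATT: {did:.2f} (true value {true_att})")
```

Because both groups share the same secular trend by construction, the double difference recovers the true effect; violations of parallel trends can be explored by giving the two groups different trend values.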
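The staggered rollout problem can also be seen in simulation: when adoption cohorts differ in timing and treatment effects grow with time since adoption, a single static two-way fixed-effects (TWFE) coefficient can diverge from the average effect among treated observations, which is the issue motivating the authors' sensitivity analyses. The sketch below uses hypothetical cohorts and effect sizes.

```python
# Illustrative simulation of staggered adoption with growing treatment
# effects; cohort timings and magnitudes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
units, periods = 100, 10
g = np.where(np.arange(units) < units // 2, 3, 7)  # adoption period by cohort

rows = []
for i in range(units):
    alpha = rng.normal(0, 1)  # unit fixed effect
    for t in range(1, periods + 1):
        treated = int(t >= g[i])
        effect = (t - g[i] + 1) * treated  # grows with time since adoption
        y = alpha + 0.5 * t + effect + rng.normal(0, 0.5)
        rows.append((i, t, treated, effect, y))
df = pd.DataFrame(rows, columns=["unit", "t", "treated", "true_effect", "y"])

# Static TWFE DiD regression with unit and period fixed effects.
fit = smf.ols("y ~ treated + C(unit) + C(t)", data=df).fit()
print("TWFE estimate:", round(fit.params["treated"], 2))
print("True average effect among treated observations:",
      round(df.loc[df.treated == 1, "true_effect"].mean(), 2))
# With dynamic effects and staggered timing, the two numbers can differ
# substantially because early adopters serve as controls for later ones.
```

Estimators designed for staggered adoption, such as the one used in the authors' sensitivity analyses, avoid this problem by estimating cohort- and time-specific effects using only not-yet-treated or never-treated comparators and then aggregating them.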