Drift-diffusion models (DDMs) are becoming a standard in the field of computational neuroscience. They extend models from signal detection theory by proposing a simple mechanistic explanation for the observed relationship between decision outcomes and reaction times (RT). In brief, they assume that a decision is triggered once the evidence accumulated in favor of a particular alternative option has reached a predefined threshold. Fitting a DDM to empirical data then allows one to interpret observed group or condition differences in terms of changes in the underlying model parameters. However, current approaches do not provide reliable parameter estimates when, e.g., evidence strength varies over trials. In this note, we propose a fast and efficient approach based on fitting a self-consistency equation that the DDM fulfills. Using numerical simulations, we show that this approach enables one to extract relevant information from trial-by-trial variations in RT data that would typically be buried in the empirical distribution. Finally, we demonstrate the added value of the approach when applied to a recent value-based decision-making experiment.
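
The accumulation-to-threshold mechanism described above can be illustrated with a minimal simulation of a single DDM trial. This is only a sketch, not the fitting procedure proposed in the note; the parameter values (drift rate, threshold, noise level) are illustrative assumptions:

```python
import numpy as np

def simulate_ddm_trial(drift=0.3, threshold=1.0, noise_sd=1.0,
                       dt=1e-3, max_t=5.0, rng=None):
    """Simulate one DDM trial (illustrative parameters, Euler-Maruyama scheme).

    Noisy evidence x accumulates at rate `drift`; the trial ends when x
    hits either the upper (+threshold) or lower (-threshold) bound, or
    when `max_t` seconds have elapsed. Returns (choice, reaction_time).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        # stochastic evidence increment: deterministic drift plus Gaussian noise
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
        t += dt
    choice = 1 if x >= threshold else 0  # 1 = upper bound, 0 = lower bound / timeout
    return choice, t

choice, rt = simulate_ddm_trial()
```

With a positive drift rate, repeated simulations produce upper-bound choices more often than lower-bound ones, and the spread of the resulting RTs illustrates the trial-by-trial variability that the note's self-consistency approach aims to exploit.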