Tracer-based hydrograph separation has been widely applied to identify streamflow components, often indicating that pre-event water comprises a large proportion of stream water. Previous numerical modeling work suggests that hydrodynamic mixing in the subsurface inflates the pre-event water contribution to streamflow estimated by tracer-based hydrograph separation. This study compares the effects of hydrodynamic dispersion, both within the subsurface and at the surface-subsurface boundary, on the tracer-based estimate of the pre-event water contribution to streamflow. Using a fully integrated surface-subsurface code, we simulate two hypothetical 2-D hillslopes in which surface-subsurface solute exchange is represented by different solute transport conceptualizations (i.e., advective and dispersive conditions). Results show that when surface-subsurface solute transport occurs via advection only, the pre-event water contribution obtained from the tracer-based separation agrees well with the hydraulically determined value from the numerical model, despite dispersion occurring within the subsurface. In this case, subsurface dispersion parameters have little impact on the tracer-based separation results. However, when dispersion at the surface-subsurface boundary is included, the tracer-based separation yields a larger pre-event water contribution. This work demonstrates that dispersion within the subsurface may not always be a significant factor in apparently large pre-event water fluxes over a single rainfall event; instead, dispersion at the surface-subsurface boundary may increase estimates of the pre-event water contribution. The results also show that solute transport in numerical models is highly sensitive to the representation of the surface-subsurface interface. Hence, models of catchment-scale solute dynamics require careful treatment and sensitivity testing of the surface-subsurface interface to avoid misinterpretation of real-world physical processes.
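For orientation, tracer-based hydrograph separation of the kind referred to above typically rests on a two-component mass balance between event and pre-event end-members. The sketch below illustrates that standard calculation only; the function and variable names (e.g., pre_event_fraction, c_stream) are illustrative assumptions and do not represent this study's implementation.

```python
import numpy as np

def pre_event_fraction(c_stream, c_event, c_pre_event):
    """
    Two-component tracer mass-balance separation (illustrative sketch).

    From Q * c_stream = Q_pre * c_pre_event + Q_event * c_event and
    Q = Q_pre + Q_event, the pre-event fraction f = Q_pre / Q is

        f = (c_stream - c_event) / (c_pre_event - c_event)

    All inputs are tracer concentrations in the same units; c_stream may be
    a single value or a time series sampled over the event.
    """
    c_stream = np.asarray(c_stream, dtype=float)
    denom = c_pre_event - c_event
    if np.isclose(denom, 0.0):
        raise ValueError("End-member concentrations must differ for separation.")
    return (c_stream - c_event) / denom

# Hypothetical concentrations: stream samples, event (rain) water, pre-event (groundwater)
print(pre_event_fraction(c_stream=[40.0, 55.0, 62.0], c_event=10.0, c_pre_event=70.0))
```

In such a separation, the computed fraction attributes all deviation of the stream concentration from the event end-member to pre-event water, which is why mixing or dispersion processes can shift the apparent pre-event contribution.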