Our 2019 editorial opened a dialogue about what is needed to foster an impactful field of learning analytics (Knight, Wise, & Ochoa, 2019). As we head toward the close of a tumultuous year that has raised profound questions about the structure and processes of formal education and its role in society, this conversation is more relevant than ever. That editorial, and a recent online community event, focused on one component of this impact: standards for scientific rigour and the criteria by which knowledge claims in an interdisciplinary, multi-methodology field should be judged. These initial conversations revealed important commonalities across statistical, computational, and qualitative approaches: each requires greater explanation and justification of the choice of appropriate data, models, and other methodological approaches, as well as of the many micro-decisions made in applying specific methodologies to specific studies. The conversations also emphasized the need to perform different checks (for overfitting, for bias, for replicability, for the contextual bounds of applicability, for disconfirming cases) and the importance of learning analytics research remaining relevant: situating itself within a set of educational values, making tighter connections to theory, and considering its practical mobilization to affect learning. These ideas will serve as the starting point for a series of detailed follow-up conversations across the community, with the goal of generating updated standards and guidance for JLA articles.