Why We are Introducing Registered Reports

Last February, the BISE Editorial Board met for its annual Editorial Board Meeting in Siegen with a number of new and interesting ideas on the agenda. One topic that was discussed and settled was the introduction of a new submission format, Registered Reports, for the recently established "Human-Computer Interaction and Social Computing" department. Experimental research is the dominant paradigm in this department, which makes it a natural candidate for this innovative approach to evaluating research results. Registered Reports are a promising format to encourage and support high-quality (and also risky and innovative) experimental research and to ensure rigorous scientific practice. As experimental research is gaining importance in our journal through our new department, but also in other areas such as Economics of IS, adopting Registered Reports is also part of our strategy to maintain and develop our status as a high-quality community journal and to make it even more attractive for potential authors in the respective areas.

Within the last decade, the scientific landscape was shaken by a series of reports showing that many scientific studies are difficult or impossible to replicate or reproduce in a subsequent investigation, either by independent researchers or by the original researchers themselves. This problem is known as the "Replication Crisis". Although the focus has been on psychology, biology, and medicine, our domain, Information Systems, and related fields are no exception (Coiera et al. 2018; Head et al. 2015; Hutson 2018).

Several questionable scientific practices, unfortunately so common that they are familiar to almost everyone, are more or less related to this problem (Chambers 2015; Chambers et al. 2014). One is a set of methods called "p-hacking". A common metaphorical paraphrase of p-hacking, somewhat funny and sad at the same time, is "torturing the data until they confess" (e.g., Probst and Hagger 2015). P-hacking means, e.g., introducing or removing control variables or switching statistical tests in order to obtain significant p-values. This is often combined with an underpowered study design whose number of observations is iteratively increased until the results match the expectations (very likely only due to chance variation); the simulation sketch at the end of this section illustrates how such optional stopping inflates the false positive rate. HARKing (hypothesizing after the results are known) is another questionable method, which consists of adapting hypotheses to the data after a study has been performed. Simpson's paradox is the well-known phenomenon that a trend may appear in several different groups of data but disappear, or even reverse, when these groups are combined. It illustrates that seemingly conflicting conclusions are possible for a given data set (a small worked example also follows at the end of this section). These phenomena lead to the situation that a considerable share of published findings are, in fact, false positives. Finally, a lack of data sharing makes many results unverifiable.
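To make the optional-stopping variant of p-hacking concrete, the following Python sketch (not part of the editorial itself; the group sizes, step size, significance level, and number of runs are arbitrary choices for illustration) draws two samples from the same null distribution and keeps adding observations, re-testing after each addition, until the test turns "significant" or a cap is reached. Even though there is no true effect, the observed false positive rate ends up well above the nominal 5% level.

    # Illustrative sketch: "optional stopping" inflates false positives.
    # All parameter values below are hypothetical choices for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def optional_stopping_significant(start_n=10, max_n=100, step=5, alpha=0.05):
        """Sample two groups from the SAME distribution (no true effect),
        then keep adding observations and re-testing until p < alpha
        or the maximum sample size is reached."""
        a = list(rng.normal(0, 1, start_n))
        b = list(rng.normal(0, 1, start_n))
        while len(a) <= max_n:
            p = stats.ttest_ind(a, b).pvalue
            if p < alpha:
                return True   # "significant" result despite a null effect
            a.extend(rng.normal(0, 1, step))
            b.extend(rng.normal(0, 1, step))
        return False

    runs = 2000
    false_positives = sum(optional_stopping_significant() for _ in range(runs))
    print(f"False positive rate with optional stopping: {false_positives / runs:.2%}")
    # Typically well above the nominal 5% level of a single, pre-planned test.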
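As a minimal worked illustration of Simpson's paradox (again a sketch; the recovery counts are invented for illustration and do not come from any study cited above), consider two treatments compared within two severity groups and then in aggregate: the treatment that wins in every subgroup can lose overall, because the groups differ in size and baseline difficulty.

    # Illustrative sketch with invented numbers: a treatment that is better
    # within each subgroup but worse when the subgroups are pooled.
    counts = {  # (group, treatment): (recovered, total)
        ("mild",   "A"): (81, 87),   ("mild",   "B"): (234, 270),
        ("severe", "A"): (192, 263), ("severe", "B"): (55, 80),
    }

    def rate(recovered, total):
        return recovered / total

    for group in ("mild", "severe"):
        ra = rate(*counts[(group, "A")])
        rb = rate(*counts[(group, "B")])
        print(f"{group}: A={ra:.1%}  B={rb:.1%}  -> A better: {ra > rb}")

    # Pooling both groups reverses the direction of the comparison.
    total_a = [sum(x) for x in zip(counts[("mild", "A")], counts[("severe", "A")])]
    total_b = [sum(x) for x in zip(counts[("mild", "B")], counts[("severe", "B")])]
    print(f"overall: A={rate(*total_a):.1%}  B={rate(*total_b):.1%}  "
          f"-> A better: {rate(*total_a) > rate(*total_b)}")

With these numbers, treatment A has the higher recovery rate in both the mild and the severe group, yet treatment B has the higher recovery rate overall, which is exactly the kind of seemingly conflicting conclusion described above.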