The world received a confidence boost in the power of the scientific method, having witnessed and participated in the recent development of successful vaccines against SARS-CoV-2. The world also got a peek into scientific controversies and the clamour for more transparency and data sharing, besides the requirements of rigorous testing, adequate sample sizes, false positives, false negatives, risk probabilities, and population variation. For an interested lay person, or even for a practising scientist, this was the equivalent of a crash course, on the world stage, in how science is done, warts and all, but where science triumphed in the end.

Behind the scenes, science is in a maelstrom, confronting many demons, and the Better Angels of our Nature are struggling. What is the truth, and how does one find it? Are there many truths? Scientists need to solve the problems associated with the practice of science, and the sooner the better.

Two major issues facing science today are the reproducibility and replicability of results. Vigilance on both fronts has led to a spate of retractions of papers, e.g. a total of 508 retractions from laboratories in India in the biomedical sciences (Elango 2021). The US National Academies of Sciences, Engineering, and Medicine report (2019) defines reproducibility in the very narrow context of computational science: all code and data must be presented in a paper with enough transparency and explanation that anyone running the code would obtain the same results. This narrow view of reproducibility apparently springs from the work of the seismologist and computer scientist Jon Claerbout (Goodman et al. 2016). A broader definition of reproducibility entails using the same data and methods and producing the same results (Hillary and Rajtmajer 2021).
The US Academies justified the narrow-sense definition of reproducibility by pointing to the current era of big data and burgeoning computer programmes, in which calls for greater data sharing, transparency in data analyses, and open-source code are warranted.

Replicability, as defined in the US Academies document, is the probability of obtaining the same results using a different data set while maintaining the same published protocols, such as confirming the efficacy of an anticancer drug in a different laboratory using the same human cell lines, ordered from the same source or requested from the authors. There is also the question of generalisability: whether the same drug will deliver the same results on different cell lines.

A report published by the Netherlands Academy of Sciences (KNAW 2018) defines reproducibility differently: as the extent to which a replication study's results agree with those of the earlier study. There is therefore confusion in the definitions of the terms themselves, although everyone believes they know what is being said. The Netherlands Academy has gone so far as to set aside €3 million for the first set of projects on replication studies, and has declared that replication must...