It is known that statistically significant results are more likely to be published than results that are not statistically significant. However, it is unclear whether negative results are disappearing from papers, and whether there exists a 'hierarchy of sciences' with the social sciences publishing more positive results than the physical sciences. Using Scopus, we conducted a search in the abstracts of papers published between 1990 and 2014, and calculated the percentage of papers reporting marginally positive results (i.e., p-values between 0.040 and 0.049) versus the percentage of papers reporting marginally negative results (i.e., p-values between 0.051 and 0.060). The results indicate that negative results are not disappearing, but have actually become 4.3 times more prevalent since 1990. Positive results, on the other hand, have become 13.9 times more prevalent since 1990. We found no consistent support for a 'hierarchy of sciences'. However, we did find large differences in reporting practices between disciplines, with the reporting of p-values being 60.6 times more frequent in the biological sciences than in the physical sciences. We argue that the observed longitudinal trends may be caused by negative factors, such as an increase in questionable research practices, but also by positive factors, such as an increasingly quantitative research focus.
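As a rough illustration of the counting described above, the sketch below classifies p-values found in abstract text into the marginally positive bin (0.040 to 0.049) and the marginally negative bin (0.051 to 0.060), and computes a prevalence ratio between two publication years. The regular expression, the toy corpora, and the helper functions are hypothetical placeholders, not the authors' actual Scopus query or analysis pipeline.

```python
import re
from typing import Optional

# Match reported p-values such as "p = 0.045" or "p < .05" (hypothetical pattern).
P_PATTERN = re.compile(r"p\s*[=<]\s*(0?\.\d+)", re.IGNORECASE)

def classify(abstract: str) -> Optional[str]:
    """Label an abstract by its first reported p-value, if it falls in a marginal bin."""
    match = P_PATTERN.search(abstract)
    if not match:
        return None
    p = float(match.group(1))
    if 0.040 <= p <= 0.049:
        return "positive"   # marginally significant
    if 0.051 <= p <= 0.060:
        return "negative"   # marginally non-significant
    return None

def prevalence(abstracts: list[str], label: str) -> float:
    """Percentage of abstracts in a corpus that report the requested result type."""
    hits = sum(1 for a in abstracts if classify(a) == label)
    return 100.0 * hits / len(abstracts)

# Illustrative, made-up corpora for two publication years.
papers_1990 = ["... p = 0.045 ...", "... no p-value reported ..."]
papers_2014 = ["... p = 0.043 ...", "... p = 0.055 ...", "... p = 0.041 ..."]

ratio = prevalence(papers_2014, "positive") / prevalence(papers_1990, "positive")
print(f"Marginally positive results are {ratio:.1f} times more prevalent in 2014 than in 1990.")
```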
Introduction

In the last decade, many methodologists have raised concerns about the skewed nature of the scientific record. Ioannidis' (2005) highly cited article claimed that over 50% of the results that are declared statistically significant are false, meaning that they actually reflect a negative (i.e., null) effect. Similar voices are heard in a variety of research fields, including biology and ecology (Csada et al., 1996; Jennions & Møller, 2002), medicine and pharmaceutics (Atkin, 2002; Colom & Vieta, 2011; Dwan et al., 2008; Hopewell et al., 2009; Kyzas et al., 2007), economics (Ioannidis & Doucouliagos, 2013), cognitive sciences (Ioannidis et al., 2014), genetics (Ioannidis, 2003), neurosciences (Jennings & Van Horn, 2012), and psychology (Ferguson & Heene, 2012; Francis, 2013; Laws, 2013).

The abundance of positive results has been attributed to questionable research practices such as selective publication (Dwan et al., 2008; Hopewell et al., 2009; Rothstein et al., 2006), undisclosed exploratory analyses and selective reporting (Dwan et al., 2008; Kirkham et al., 2010; Simmons et al., 2011), as well as data fabrication (Fanelli, 2009; Moore et al., 2010). These mechanisms are fuelled by an emphasis on productivity (De Rond & Miller, 2005), high rejection rates of journals (Young et al., 2008), and competitive schemes for funding and promotion (Joober et al., 2012). Not just researchers, but also journal editors (Sterling et al., 1995; Thornton & Lee, 2000) and sponsoring/funding parties (Djulbegovic et al., 2000; Lexchin et al., 2003; Sismondo, 2008) have been criticised for favouring positive results over negative ones. It has been argued that certain fields ...