Background: Choosing or altering the planned statistical analysis approach after examination of trial data (often referred to as 'p-hacking') can bias the results of randomised trials. However, the extent of this issue in practice is currently unclear. We conducted a review of published randomised trials to evaluate how often a pre-specified analysis approach is publicly available, and how often the planned analysis is changed.

Methods: A review of randomised trials published between January and April 2018 in six leading general medical journals. For each trial, we established whether a pre-specified analysis approach was publicly available in a protocol or statistical analysis plan and compared this with the statistical analysis reported in the trial publication.

Results: Overall, 89 of 101 eligible trials (88%) had a publicly available pre-specified analysis approach. Only 22 of these 89 trials (25%) had no unexplained discrepancies between the pre-specified and conducted analyses. Fifty-four trials (61%) had one or more unexplained discrepancies, and in 13 trials (15%) it was impossible to ascertain whether any unexplained discrepancies had occurred because the statistical methods were incompletely reported. Unexplained discrepancies were most common for the analysis model (n = 31, 35%) and analysis population (n = 28, 31%), followed by the use of covariates (n = 23, 26%) and the approach for handling missing data (n = 16, 18%). Many protocols or statistical analysis plans were dated after the trial had begun, so earlier discrepancies may have been missed.

Conclusions: Unexplained discrepancies in the statistical methods of randomised trials are common. Increased transparency is required to allow proper evaluation of trial results.