Summary

Clinical trials represent the best source of evidence on which to base treatment decisions. For such evidence to be used meaningfully, however, results must be interpreted correctly. This requires a good understanding of the strengths and weaknesses of the adopted design, the clinical relevance of the outcome measures, and the many factors that can affect those outcomes. As a general rule, uncontrolled studies tend to provide misleading evidence because of confounders such as regression to the mean, patient-related bias, and observer bias. Although randomized controlled trials (RCTs) are qualitatively superior, aspects of their execution may still reduce their validity. Bias and reduced validity in RCTs may arise by chance alone (for example, treatment groups may not be balanced for important variables despite randomization) or from specific features of the trial design. In industry-driven studies, bias often influences the outcome in favor of the sponsor's product. Factors that need to be carefully scrutinized include (1) the purpose for which the trial is conducted; (2) potential bias due to unblinding or lack of blinding; (3) the appropriateness of the control group; (4) the power of the study to detect clinically relevant differences; (5) the extent to which eligibility criteria could affect outcomes and be representative of routine clinical practice; (6) whether the treatments being compared are used optimally in terms of dosing, duration of treatment, and other variables; (7) the appropriateness of the statistical comparisons; (8) the clinical relevance of the outcome measures and whether all key outcome information is reported (for example, responder rates in completers); and (9) potential bias in the way results are presented and discussed.
This article discusses each of these aspects and illustrates the discussion with examples taken from published antiepileptic drug trials.
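Regression to the mean, one of the confounders noted above, can be illustrated with a short simulation. This is a hypothetical sketch, not taken from the article: patient numbers, seizure rates, and the eligibility threshold are invented for illustration. Patients are enrolled only if their baseline seizure count is high; because observed counts fluctuate around each patient's stable true rate, the follow-up mean falls even though no treatment is given.

```python
import random

random.seed(0)

# Hypothetical illustration of regression to the mean in an uncontrolled
# study. Each "patient" has a stable true monthly seizure rate; observed
# counts fluctuate randomly around it.
N = 10_000
true_rates = [random.gauss(10, 2) for _ in range(N)]
baseline = [r + random.gauss(0, 3) for r in true_rates]   # pre-enrollment count
followup = [r + random.gauss(0, 3) for r in true_rates]   # count after "treatment"

# Enroll only patients with a high baseline count (an invented
# eligibility threshold of >= 13 seizures per month).
enrolled = [i for i in range(N) if baseline[i] >= 13]

mean_base = sum(baseline[i] for i in enrolled) / len(enrolled)
mean_follow = sum(followup[i] for i in enrolled) / len(enrolled)

print(f"baseline mean:  {mean_base:.1f}")
print(f"follow-up mean: {mean_follow:.1f}")  # lower, despite no treatment effect
```

The apparent "improvement" arises purely from selecting patients at a random high point; a randomized control group exposes it, because controls show the same decline.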