SUMMARY

Methods have been proposed to re-design a clinical trial at an interim stage in order to increase power. This may be in response to external factors which indicate power should be sought at a smaller effect size, or it could be a reaction to data observed in the study itself. In order to preserve the type I error rate, methods for unplanned design change have to be defined in terms of non-sufficient statistics, and this calls into question their efficiency and the credibility of conclusions reached. We evaluate methods for adaptive re-design, extending the theoretical arguments for use of sufficient statistics of Tsiatis & Mehta (2003) and assessing the possible benefits of pre-planned adaptive designs by numerical computation of optimal tests; these optimal adaptive designs are concrete examples of the optimal sequentially planned sequential tests proposed by Schmitz (1993). We conclude that the flexibility of unplanned adaptive designs comes at a price, and we recommend that the appropriate power for a study should be determined as thoroughly as possible at the outset. Then, standard error spending tests, possibly with unevenly spaced analyses, provide efficient designs, but it is still possible to fall back on flexible methods for re-design should study objectives change unexpectedly once the trial is under way.
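
For concreteness, one familiar device for preserving the type I error rate after an unplanned design change (a minimal sketch, assuming a two-stage design with normally distributed responses, not a description of any particular method evaluated in the paper) is the weighted inverse-normal combination test. With weights $w_1, w_2$ fixed in advance so that $w_1^2 + w_2^2 = 1$, let $Z_1$ denote the z-statistic from the first-stage data and $\tilde{Z}_2$ the z-statistic computed from the second-stage data alone, and reject $H_0$ if
\[
Z^{*} \;=\; w_1 Z_1 + w_2 \tilde{Z}_2 \;>\; z_{\alpha}.
\]
Since $Z_1$ and $\tilde{Z}_2$ are independent standard normal variables under $H_0$, whatever second-stage sample size is chosen at the interim analysis, $Z^{*} \sim N(0,1)$ and the type I error rate is preserved. However, once the realised sample sizes depart from those implied by the pre-specified weights, $Z^{*}$ is no longer a function of the sufficient statistic for the complete data set, which is the source of the efficiency concerns discussed above.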