In randomized trials, comparability of the treatment groups is ensured by allocating treatments through a mechanism that involves a random element, thereby controlling for confounding of the treatment effect. Completely random allocation ensures comparability between the treatment groups, in expectation, for all known and unknown prognostic factors. In any specific trial, however, chance imbalances in prognostic factors among the treatment groups may occur. Although the resulting accidental bias can be avoided by stratifying the analysis for the affected factors, most trialists, regulatory agencies, and other stakeholders prefer a balanced distribution of prognostic factors across the treatment groups. Some allocation procedures attempt to achieve balance in baseline covariates, either by stratifying the allocation on these covariates or by dynamically adapting the allocation using covariate information accrued during the trial (covariate-adaptive procedures). In this Tutorial, the performance of minimization, a popular covariate-adaptive procedure, is compared with that of two other commonly used procedures: completely random allocation and stratified blocked designs. Using individual patient data from two clinical trials (one in advanced ovarian cancer and one in age-related macular degeneration), the procedures are compared in terms of operating characteristics (using asymptotic and randomization tests), predictability of treatment allocation, and achieved balance. Fifty actual trials of various sizes that applied minimization for treatment allocation are used to investigate the balance achieved in practice. Practical implementation issues of minimization are also described. Minimization procedures are useful in all trials, but especially when (1) many major prognostic factors are known, (2) many centers of different sizes accrue patients, or (3) the trial sample size is moderate.
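As a concrete illustration (not taken from the Tutorial itself), the sketch below shows minimization in its standard Pocock-Simon form for a two-arm trial: each incoming patient is assigned, with a biased-coin probability, to the arm that would minimize the total marginal imbalance summed over the balancing factors. The function name, the range-based imbalance measure, and the coin probability p_best = 0.8 are illustrative assumptions, since the abstract does not fix a specific variant.

```python
import random

def minimize_assign(patient, history, factors, arms=("A", "B"),
                    p_best=0.8, rng=random):
    """Pocock-Simon minimization for one incoming patient (illustrative sketch).

    patient : dict mapping factor name -> level, e.g. {"stage": "III"}
    history : list of (patient_dict, assigned_arm) for previously enrolled patients
    factors : names of the prognostic factors used for balancing
    p_best  : biased-coin probability of choosing the less imbalanced arm
              (p_best = 1.0 gives the deterministic rule, which is more predictable)
    """
    scores = {}
    for arm in arms:
        total = 0
        for f in factors:
            # Count prior patients who share this patient's level of factor f, per arm.
            counts = {a: 0 for a in arms}
            for prev, prev_arm in history:
                if prev[f] == patient[f]:
                    counts[prev_arm] += 1
            counts[arm] += 1  # hypothetically assign the new patient to this arm
            # Range of the per-arm counts as the marginal imbalance measure;
            # other measures (e.g., variance) are also used in practice.
            total += max(counts.values()) - min(counts.values())
        scores[arm] = total

    if len(set(scores.values())) == 1:
        return rng.choice(arms)          # all arms tied: allocate completely at random
    best = min(scores, key=scores.get)   # arm yielding the least total imbalance
    others = [a for a in arms if a != best]
    return best if rng.random() < p_best else rng.choice(others)

# Example: allocate three patients sequentially, balancing on a single factor.
history = []
for pt in [{"stage": "III"}, {"stage": "IV"}, {"stage": "III"}]:
    arm = minimize_assign(pt, history, factors=["stage"])
    history.append((pt, arm))
```

Setting p_best below 1 introduces the random element discussed above: it trades a little balance for reduced predictability of the next allocation, one of the criteria on which the procedures are compared.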