Early laryngeal cancer, especially glottic cancer, is a good candidate for radiotherapy because obvious early symptoms (e.g., hoarseness) allow early treatment with highly successful local control. This type of cancer is also a good model for exploring the basic principles of radiation oncology, and key lessons regarding dose, fractionation, field size, patient fixation, and overall treatment time have been learned from it. For example, unexpectedly poor outcomes were reported during the transition from cobalt-60 units to linear accelerators in the 1960s, when the use of higher-energy photons produced unfavorable dose distributions. In addition, shell fixation made precise dose delivery possible but simultaneously increased toxicity when a larger treatment field was necessary. Of particular interest to the radiation therapy community was altered fractionation as a way to improve local tumor control and survival. Unfortunately, this interest waned with advances in chemotherapeutic agents, because altered fractionation could not improve outcomes in chemoradiotherapy settings. At present, no form of accelerated fractionation can fully compensate for the omission of concurrent chemotherapy. In addition, the substantial workload associated with this technique made it difficult to deliver extra fractions routinely in busy hospitals. Hypofractionation, on the other hand, uses a larger dose per fraction (2–3 Gy), making it a reasonable and attractive option for T1–T2 early glottic cancer because it can improve local control without additional workload. Recently, the Japan Clinical Oncology Group study 0701 revisited hypofractionation in early T1–T2 glottic cancer and demonstrated that this strategy could be an optional standard therapy. Herein, we review the history of radiotherapy from cobalt-60 to the modern linear accelerator, with special focus on the role of altered fractionation.
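As context for why fraction size and overall treatment time matter, a minimal worked example can be sketched under the standard linear-quadratic model; the schedules, the tumor $\alpha/\beta$ of 10 Gy, the repopulation factor $\gamma \approx 0.6$ Gy/day, and the repopulation kick-off time $T_k \approx 28$ days below are illustrative textbook assumptions, not values taken from the trials discussed here. The time-corrected biologically effective dose (BED) for $n$ fractions of dose $d$ delivered over $T$ days is

$$\mathrm{BED} = nd\left(1 + \frac{d}{\alpha/\beta}\right) - \gamma\,(T - T_k).$$

For a conventional schedule of 66 Gy in 33 fractions of 2 Gy over about 45 days, $\mathrm{BED} = 66 \times 1.2 - 0.6 \times (45 - 28) \approx 69.0$ Gy, whereas a hypofractionated schedule of 63 Gy in 28 fractions of 2.25 Gy over about 38 days gives $\mathrm{BED} = 63 \times 1.225 - 0.6 \times (38 - 28) \approx 71.2$ Gy. Under these assumptions, the larger fraction size and the shorter overall treatment time together yield a higher biological dose to the tumor without adding fractions, which is the radiobiological rationale for hypofractionation in early glottic cancer.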