Can we accelerate convergence of gradient descent without changing the algorithm, just by carefully choosing stepsizes? Surprisingly, we show that the answer is yes. Our proposed Silver Stepsize Schedule optimizes strongly convex functions in κ^{log_ρ 2} ≈ κ^{0.7864} iterations, where ρ = 1 + √2 is the silver ratio and κ is the condition number. This is intermediate between the textbook unaccelerated rate κ and the accelerated rate κ^{1/2} due to Nesterov in 1983. The non-strongly convex setting is conceptually identical, and standard black-box reductions imply an analogous partially accelerated rate ε^{-log_ρ 2} ≈ ε^{-0.7864}. We conjecture, and provide partial evidence, that these rates are optimal among all stepsize schedules.

The Silver Stepsize Schedule is constructed recursively in a fully explicit way. It is non-monotonic, fractal-like, and approximately periodic with period κ^{log_ρ 2}. This leads to a phase transition in the convergence rate: initially super-exponential (acceleration regime), then exponential (saturation regime).

The core algorithmic intuition is hedging between two individually suboptimal strategies, short steps and long steps, since bad cases for the former are good cases for the latter, and vice versa. Properly combining these stepsizes yields faster convergence due to the misalignment of the worst-case functions. The key challenge in proving this speedup is enforcing long-range consistency conditions along the algorithm's trajectory. We do this by developing a technique that recursively glues constraints from different portions of the trajectory, thus removing a key stumbling block in previous analyses of optimization algorithms.
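To make the recursive construction concrete, here is a minimal sketch of the schedule for the smooth (non-strongly) convex case, assuming the commonly stated closed form h_t = 1 + ρ^{ν(t+1)−1}, where ν is the 2-adic valuation; the strongly convex schedule additionally depends on κ and is not reproduced here.

```python
import math

RHO = 1 + math.sqrt(2)  # the silver ratio

def nu(n: int) -> int:
    """2-adic valuation: the largest k such that 2^k divides n."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def silver_stepsizes(n: int) -> list:
    """First n silver stepsizes h_t = 1 + RHO**(nu(t+1) - 1),
    to be scaled as h_t / L for an L-smooth convex function."""
    return [1 + RHO ** (nu(t + 1) - 1) for t in range(n)]

print([round(h, 4) for h in silver_stepsizes(8)])
```

Note the fractal, non-monotonic pattern (√2, 2, √2, 1+ρ, √2, 2, √2, 1+ρ², …): each prefix of length 2^k − 1 repeats, with progressively longer steps (exceeding the classical 2/L stability threshold) interleaved among short ones, mirroring the hedging intuition above.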
More broadly, we believe that the concepts of hedging and multi-step descent have the potential to be powerful algorithmic paradigms in a variety of contexts in optimization and beyond.

This series of papers publishes and extends the first author's 2018 Master's Thesis (advised by the second author), which established for the first time that carefully choosing stepsizes can enable acceleration in convex optimization. Prior to this thesis, the only such result was for the special case of quadratic optimization, due to Young in 1953.