Many of the recent successes of deep learning-based approaches have been enabled by a framework of flexible, composable computational blocks whose parameters are adjusted through automatic differentiation to implement various data processing tasks. In this work, we explore how the same philosophy can be applied to existing “classical” (i.e., non-learning) algorithms, focusing on computed tomography (CT) as the application field. We apply four key design principles of this approach to CT workflow design: end-to-end optimization, explicit quality criteria, declarative algorithm construction by building the forward model, and use of existing classical algorithms as computational blocks. Through four case studies, we demonstrate that automatic differentiation is remarkably effective beyond the boundaries of neural-network training, extending to CT workflows containing varied combinations of classical and machine learning algorithms.
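To make the four design principles concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) in JAX: a toy matrix stands in for the CT forward model, a classical SIRT reconstruction serves as the computational block, mean squared error against a known phantom acts as the explicit quality criterion, and the block's relaxation hyperparameter is tuned end-to-end by differentiating through the whole pipeline. All names, sizes, and parameter values are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# Toy stand-in for a CT system matrix (declarative forward model).
key = jax.random.PRNGKey(0)
n_pix, n_rays = 64, 96
A = jax.random.normal(key, (n_rays, n_pix)) / jnp.sqrt(n_pix)

# Ground-truth phantom and simulated sinogram (noise-free for brevity).
x_true = jax.random.uniform(jax.random.PRNGKey(1), (n_pix,))
y = A @ x_true

def sirt(y, relaxation, n_iter=50):
    """Classical SIRT reconstruction, written so autodiff can flow through it."""
    col_sum = jnp.sum(jnp.abs(A), axis=0) + 1e-8
    row_sum = jnp.sum(jnp.abs(A), axis=1) + 1e-8
    x = jnp.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (y - A @ x) / row_sum
        x = x + relaxation * (A.T @ residual) / col_sum
    return x

def quality_loss(relaxation):
    """Explicit quality criterion: MSE against the known phantom."""
    return jnp.mean((sirt(y, relaxation) - x_true) ** 2)

# End-to-end optimization of the classical block's hyperparameter via autodiff.
relaxation = 0.5
grad_fn = jax.grad(quality_loss)
for step in range(100):
    relaxation = relaxation - 0.1 * grad_fn(relaxation)

print("tuned relaxation:", relaxation, "loss:", quality_loss(relaxation))
```

In a real CT workflow the random matrix would be replaced by a differentiable projector and the scalar hyperparameter by whatever parameters the workflow exposes; the pattern of wrapping a classical algorithm in a differentiable quality criterion stays the same.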