“…As was further argued in [54], moving towards mortal computation would also likely entail challenging another important separation made in deep learning: the separation between inference and credit assignment. Specifically, deep neural networks, including the more recent neural transformers [11,16,5] that drive large language models (typically pre-trained on gigantic text databases), are fit to training datasets in such a way that learning, carried out via the backpropagation of errors (backprop) algorithm [64], is treated as a computation distinct from the mechanisms by which information is propagated through the network itself. In contrast, an adaptive system that could take advantage of mortal computing would most likely need to engage in intertwined inference-and-learning [60,58,55,53]: the framing that neurobiological learning and inference in the brain are not two completely distinct, separate processes but rather complementary ones that depend on one another, and whose formulations are motivated by, and integrated with, the properties of the underlying neural circuitry (and the hardware that instantiates it).…”
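As a rough illustration of this contrast (not the formulation of [54], nor the specific intertwined schemes cited in [60,58,55,53]), the sketch below compares a conventional two-phase backprop update, in which inference and credit assignment are separate computations, with a single forward sweep in which each layer adjusts its weights from locally available signals as activity propagates. The tiny two-layer network, the Hebbian-style hidden-layer rule, and the delta-rule output update are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy problem: 8 samples, 4 inputs, 2 targets (all assumed).
x = rng.normal(size=(8, 4))
y = rng.normal(size=(8, 2))
W1 = rng.normal(scale=0.1, size=(4, 3))
W2 = rng.normal(scale=0.1, size=(3, 2))
lr = 0.01

def backprop_step(x, y, W1, W2):
    """Two separate phases: inference (forward pass), then credit assignment
    (a distinct backward pass that propagates errors through every layer)."""
    h = np.tanh(x @ W1)                  # inference only; no weights change here
    out = h @ W2
    err = out - y                        # gradient of 0.5*||out - y||^2 w.r.t. out
    dW2 = h.T @ err
    dh = (err @ W2.T) * (1.0 - h ** 2)   # error carried backwards through tanh
    dW1 = x.T @ dh
    return W1 - lr * dW1, W2 - lr * dW2

def intertwined_step(x, y, W1, W2):
    """Sketch of intertwined inference-and-learning: each layer updates its
    weights from locally available signals while activity propagates forward,
    with no separate global backward computation.  (A crude Hebbian rule with
    decay stands in for the local/predictive schemes referenced in the text.)"""
    h = np.tanh(x @ W1)
    W1 = W1 + lr * (x.T @ h) - lr * 0.1 * W1   # local pre*post update + decay
    out = h @ W2
    W2 = W2 - lr * h.T @ (out - y)             # local delta rule at the output
    return W1, W2

for _ in range(50):
    W1, W2 = backprop_step(x, y, W1, W2)       # or intertwined_step(...)
print(float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2)))
```

The intertwined variant is only meant to show the structural difference (no backward phase, purely local update signals); it does not reproduce the particular algorithms the passage refers to.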