<p>The general reinforcement learning agent AIXI may be the only mathematical formalism of artificial general intelligence (AGI) supported by a proof that its performance, measured according to a description of intelligence, is optimal. Unfortunately, AIXI is incomputable and its performance is subjective. This paper proposes an alternative, also supported by proof, which overcomes both problems. Integrating research from cognitive science (enactivism), philosophy of language (intension and extension), machine learning, and planning (satplan), the notion of an arbitrary task is given mathematical rigour. This serves as an enactive, unified model of learning and reasoning, within which a description of intelligence is formalised, along with a computable universal prior that we prove grants optimal performance according to that description. This mathematical proof is then further supported with experimental results.</p>
<p>The foremost limitation is that intelligence is computationally complex, and must be coupled with inductive biases specific to a domain (such as the domain of all tasks an average human might expect to undertake) to make real-world tasks of practical significance tractable. Finally, by unifying concepts from AI, cognitive science, and philosophy in one formalism, we have defined a shared language to enable collaborative bridges within and beyond these fields.</p>