<p>The general reinforcement learning agent AIXI may be the only mathematical formalism of artificial general intelligence (AGI) supported by proof that its performance is optimal. This is achieved by using compression as a proxy for intelligence. Unfortunately, AIXI is incomputable, and its claims of optimality were later shown to be subjective. This paper proposes an alternative, supported by proof, that overcomes both problems. Integrating research from cognitive science (enactivism), philosophy (intension and extension), and machine learning and planning (satplan), we give an arbitrary task a mathematically rigorous formulation. This serves as an enactive model of learning and reasoning within which a description of intelligence is formalised. Instead of compression we use weakness as our proxy, and the result is both computable and objective. We formally prove that maximising weakness maximises intelligence. This proof is further supported by experimental results comparing weakness with description length (the closest analogue to compression possible under the enactive model without reintroducing the problem of subjective performance). Our results show that weakness outperforms description length and is a better proxy for intelligence. The foremost limitation is that intelligence as we define it is computationally complex; however, it may be coupled with domain-specific inductive biases to make real-world domains of practical significance tractable (e.g. the domain of all tasks a human would undertake). Like AIXI, this is not intended as a panacea but as a demonstration of useful principles, which may be integrated with existing tools such as neural networks to improve performance.</p>
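<p>To make the contrast between the two proxies concrete, the following is a minimal sketch, not the paper's implementation. It assumes each candidate hypothesis is represented extensionally as a finite set of situations in which it holds, so that weakness is the cardinality of the extension, and that description length is measured over some hypothetical encoding; all names and encodings here are illustrative assumptions.</p>
<pre><code class="language-python"># Toy sketch: hypothesis selection by maximal weakness versus
# minimal description length. Hypotheses are finite extensions
# (frozensets of situation labels); encodings are arbitrary strings.

def consistent(hypothesis: frozenset, observed: frozenset) -> bool:
    """A hypothesis is consistent if it accounts for every observed situation."""
    return observed &lt;= hypothesis

def weakest(candidates, observed):
    """Pick the consistent hypothesis with the largest extension (maximise weakness)."""
    viable = [h for h in candidates if consistent(h, observed)]
    return max(viable, key=len) if viable else None

def shortest(candidates, encodings, observed):
    """Pick the consistent hypothesis with the shortest encoding (minimise description length)."""
    viable = [h for h in candidates if consistent(h, observed)]
    return min(viable, key=lambda h: len(encodings[h])) if viable else None

# Two hypotheses fit the observed data, but the weaker one also
# holds in unseen situations, so it generalises further.
observed = frozenset({"s1", "s2"})
h_narrow = frozenset({"s1", "s2"})            # fits only what was seen
h_weak = frozenset({"s1", "s2", "s3", "s4"})  # fits the data and more
encodings = {h_narrow: "ab", h_weak: "abcd"}  # hypothetical encodings

assert weakest([h_narrow, h_weak], observed) == h_weak
assert shortest([h_narrow, h_weak], encodings, observed) == h_narrow
</code></pre>
<p>Under these assumptions the two criteria disagree: minimising description length selects the narrower hypothesis, while maximising weakness selects the one that holds in the most situations, which is the behaviour the paper's proof and experiments favour.</p>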