Awarded “Best Student Paper” and published in the Proceedings of
the 16th International Conference on Artificial General Intelligence,
Stockholm, 2023.
To make accurate inferences in an interactive setting, an agent must not
confuse passive observation of events with having intervened to cause
them. The do operator formalises interventions so that we may reason
about their effects. Yet there exist Pareto-optimal mathematical
formalisms of general intelligence in an interactive setting which,
presupposing no explicit representation of intervention, make maximally
accurate inferences. We examine one such formalism. We show that in the
absence of a do operator, an intervention can be represented by a
variable. We then argue that variables are abstractions, and that the need
to explicitly represent interventions in advance arises only because we
presuppose these sorts of abstractions. The aforementioned formalism
avoids this and so, initial conditions permitting, representations of
relevant causal interventions will emerge through induction. These
emergent abstractions function as representations of one’s self and of
any other object, inasmuch as the interventions of those objects impact
the satisfaction of goals. We argue that this explains how one might
reason about one’s own identity and intent, those of others, one’s
own as perceived by others, and so on. In a narrow sense, this describes
what it is to be aware, and is a mechanistic explanation of aspects of
consciousness.
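
The claim that, absent a do operator, an intervention can be represented by a variable has a well-known counterpart in Pearl's framework, which may help situate it. A minimal sketch in Pearl's notation (the symbols $F_X$, $\mathrm{pa}(X)$ and the value $\mathrm{idle}$ are his, not part of the formalism examined here): augment the model with an intervention variable $F_X$, a new parent of $X$ taking values in $\{\mathrm{idle}\} \cup \{\mathrm{do}(x') : x' \in \mathrm{dom}(X)\}$, governed by

\[
P\bigl(x \mid \mathrm{pa}(X), F_X\bigr) =
\begin{cases}
P\bigl(x \mid \mathrm{pa}(X)\bigr) & \text{if } F_X = \mathrm{idle},\\
1 & \text{if } F_X = \mathrm{do}(x') \text{ and } x = x',\\
0 & \text{otherwise.}
\end{cases}
\]

Intervention then reduces to ordinary conditioning in the augmented model:

\[
P\bigl(y \mid \mathrm{do}(x)\bigr) = P\bigl(y \mid F_X = \mathrm{do}(x)\bigr),
\]

so no primitive do operator is required once the intervention is itself a variable.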