This paper discusses the specific characteristics of any hypothetical cognitive space that would need to be modelled in order to automate (or partially automate) the kind of mental health clinical reasoning (clinical or psychological case formulation) used by mental health professionals. It argues that work on the use of generative artificial intelligence (AI) in the field of mental health needs to consider three components of this kind of clinical reasoning. Firstly, heterotopy. When mental health clinical reasoning statements are made, parsing them does not yield the same representation even when the same words are used, because mental health ontologies contain multiple meanings for the same words. Secondly, orthogonality. Variables relevant to mental health may not causally intersect yet may both be relevant to clinical case formulation and treatment determination. Thirdly, veridicality. The truth of a clinical case formulation may not be determinable by any testable observation; even treatment response may not allow a determination of truth. The truth status of a clinical case formulation may hinge principally on the degree to which it confers meaning or understanding of a mental state on the person who is experiencing that mental state, and that truth may be different from the truth judgements of a mental healthcare clinician. Automated clinical case formulation models need to accommodate these features of the cognitive space of mental health clinical case formulation.