Reverse-engineering the brain involves adopting and testing a hierarchy of working hypotheses regarding the computational problems that it solves, the representations and algorithms that it employs and the manner in which these are implemented. Because problem-level assumptions set the course for the entire research programme, it is particularly important to be open to the possibility that we have them wrong, but tacit algorithm- and implementation-level hypotheses can also benefit from occasional scrutiny. This paper focuses on the extent to which our computational understanding of how the brain works is shaped by three such rarely discussed assumptions, which span the levels of Marr's hierarchy: (i) that animal behaviour amounts to a series of stimulus/response bouts, (ii) that learning can be adequately modelled as being driven by the optimisation of a fixed objective function and (iii) that massively parallel, uniformly connected layered or recurrent network architectures suffice to support learning and behaviour. By contrast, a more realistic approach acknowledges that animal behaviour in the wild is characterised by dynamically branching serial order and is often agentic rather than reactive. Arguably, such behaviour calls for open-ended learning of world structure and may require a neural architecture that includes precisely wired circuits reflecting the serial and branching structure of behavioural tasks.