Researchers across the cognitive, neuro- and computer sciences increasingly reference ‘human-like’ artificial intelligence and ‘neuroAI’, yet the scope and use of these terms are often inconsistent. Contributed research ranges widely, from mimicking behaviour, to testing machine learning methods as neurally plausible hypotheses at the cellular or functional level, to solving engineering problems. However, progress on one of these three goals cannot be assumed or expected to translate automatically into progress on the others. Here, a simple rubric is proposed to clarify the scope of individual contributions, grounded in their commitments to human-like behaviour, neural plausibility or benchmark/engineering/computer science goals. The rubric is illustrated with examples of weak and strong neuroAI and human-like agents, and by a discussion of the generative, corroborative and corrective ways in which the three dimensions interact with one another. The author maintains that future progress in artificial intelligence will need strong interactions across the disciplines, with iterative feedback loops and meticulous validity tests, leading to both known and yet-unknown advances that may unfold over decades to come.
This article is part of a discussion meeting issue ‘New approaches to 3D vision’.