Strong AI (artificial intelligence that is in all respects at least as intelligent as humans) is still out of reach. Current AI lacks common sense: it cannot infer, understand, or explain the hidden processes, forces, and causes behind data. Mainstream machine learning research on deep artificial neural networks (ANNs) may even be characterized as behavioristic. In contrast, various sources of evidence from cognitive science suggest that human brains actively develop compositional generative predictive models (CGPMs) from their self-generated sensorimotor experiences. Guided by evolutionarily shaped inductive learning and information-processing biases, they tend to organize the gathered experiences into event-predictive encodings. Meanwhile, they infer and optimize behavior and attention by means of both epistemic- and homeostasis-oriented drives. I argue that AI research should set a stronger focus on learning CGPMs of the hidden causes that lead to the registered observations. Endowed with suitable information-processing biases, such AI may develop the ability to explain the reality it is confronted with, to reason about it, and to find adaptive solutions, making it Strong AI. Seeing that such Strong AI could be equipped with mental capacities and computational resources that exceed those of humans, the resulting systems may have the potential to guide our knowledge, technology, and policies in sustainable directions. Clearly, though, Strong AI may also be used to manipulate us even more effectively. Thus, it will be up to us to instill good, far-reaching, long-term, homeostasis-oriented purposes into these machines.
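To make the notion of inferring hidden causes concrete, the following is a minimal, purely illustrative Python sketch, not drawn from any cited work: the toy generative mapping g, the single latent variable, and all parameter values are assumptions chosen for brevity. It contrasts the generative-predictive stance with a behavioristic input-output mapping: rather than learning to associate stimuli with responses, the model adjusts its belief about a latent cause until that cause best predicts, and thereby explains, the observation.

```python
# Illustrative sketch only: a toy generative predictive model with one
# hidden cause. The mapping g() and all constants are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def g(z):
    """Toy generative mapping: hidden cause z -> predicted observation."""
    return np.array([np.sin(z), np.cos(z), z ** 2])

# A hidden cause in the world produces a noisy observation.
true_z = 1.3
observation = g(true_z) + rng.normal(scale=0.01, size=3)

# Inference: adjust the believed cause z_hat to minimize prediction error,
# i.e., find the explanation that best accounts for the observation.
z_hat, lr, eps = 0.0, 0.05, 1e-4
for _ in range(500):
    loss_plus = np.sum((g(z_hat + eps) - observation) ** 2)
    loss_minus = np.sum((g(z_hat - eps) - observation) ** 2)
    grad = (loss_plus - loss_minus) / (2 * eps)  # numerical gradient of the error
    z_hat -= lr * grad

print(f"true hidden cause: {true_z:.3f}, inferred cause: {z_hat:.3f}")
```

In this reading, the same prediction-error-minimization principle, scaled up with learned and compositional generative mappings rather than a fixed toy function, is what would separate a system that explains its observations from one that merely maps inputs to outputs.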