The most striking feature of any cognitive system is its ability to learn cumulatively and to exploit its experiences prospectively in order to anticipate, reason, and produce goal-oriented behavior. At the core of this intriguing capability lies memory. Every touch, sound, sight, and taste engages our memory, enabling us to remember past experiences and flexibly connect them with the present and with possible futures. While emerging trends in neuroscience are rapidly enriching our understanding of the functional organization of memory in the brain, a prospective memory architecture is also a central design requirement if robots are to become commonplace assistants across application domains (domestic, industrial, and others) and in the complexity of the environments we inhabit and create, where neither everything can be known in advance nor everything experienced. Hence, with the central premise that cognition is the constructive manipulation of memory, this thesis proposes a novel brain-guided perspective on the design of cognitive architectures for cumulatively developing systems. Set in the context of several experiments from animal and infant cognition reenacted on the iCub humanoid, a principled framework is proposed for the cumulative learning of actions, skills, affordances, and cause-effect relations through multiple streams (imitation, exploration, and linguistic input). Several tasks demonstrate how the robot's diverse past experiences are recalled on the basis of the present context, combined with exploratory actions to learn something new, and creatively connected to generate novel behaviors in pursuit of sought goals. The proposed integrated machinery provides a shared computational basis for recall of the past and simulation of the future, in line with emerging findings from neuroscience (such as the brain's default mode network).
A further notable transition is the porting of the framework, developed on the iCub humanoid, to real-world industrial settings in tasks such as assembly and manufacturing, facilitating runtime reasoning and rapid switchover to novel assembly tasks. In this sense, the proposed architecture brings together the fields of robotics, developmental psychology, and neuroscience to craft cognitive and adaptable end-user applications, while at the same time offering insights into the functional organization of the brain. In parallel, it lays the foundation for a new generation of cognitive architectures that are domain agnostic (i.e., they support open-ended learning), partially embodiment agnostic (i.e., they are configurable to new robotic platforms), partially self-driven (i.e., they can generate their own internal goals), and brain guided.