Object grasping is a typical human skill that has been widely studied from both a biological and an engineering point of view. This paper presents an approach to grasp synthesis inspired by the human neurophysiology of action-oriented vision. Our grasp synthesis method is built upon an architecture that adapts brain models to the peculiarities of robotic setups, taking into account the differences between robotic and biological systems. The modularity of the architecture allows for scalability and the integration of complex robotic tasks. Grasp synthesis is designed to be integrated with the extraction of a 3D object description, so that the visual analysis of the object is actively driven by the needs of the grasp synthesis: visual reconstruction is performed incrementally and selectively on the regions of the object considered most interesting for grasping.
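
To make the incremental, grasp-driven reconstruction loop concrete, the following is a minimal conceptual sketch in Python. It is an illustrative assumption, not the paper's actual pipeline: the names Region, reconstruct, synthesize_grasp, and the interest/quality scores are all hypothetical placeholders standing in for the attention, 3D reconstruction, and grasp evaluation modules described above.

```python
# Hypothetical sketch of a grasp-driven, selective reconstruction loop.
# All identifiers are illustrative assumptions, not the paper's API.
from dataclasses import dataclass
import random

@dataclass
class Region:
    """A candidate surface patch on the object."""
    center: tuple           # 3D position (placeholder)
    interest: float         # expected usefulness for grasping
    reconstructed: bool = False
    grasp_quality: float = 0.0

def reconstruct(region: Region) -> None:
    """Stand-in for detailed 3D reconstruction of one patch."""
    region.reconstructed = True
    # Pretend the refined 3D data sharpens the grasp-quality estimate.
    region.grasp_quality = region.interest * random.uniform(0.5, 1.0)

def synthesize_grasp(regions, quality_threshold=0.8, max_steps=10):
    """Incrementally reconstruct only the most promising regions until
    a grasp of sufficient quality is found or the budget runs out."""
    for _ in range(max_steps):
        # Attention step: pick the most interesting unreconstructed region.
        pending = [r for r in regions if not r.reconstructed]
        if not pending:
            break
        target = max(pending, key=lambda r: r.interest)
        reconstruct(target)                  # selective 3D analysis
        if target.grasp_quality >= quality_threshold:
            return target                    # good enough: stop early
    done = [r for r in regions if r.reconstructed]
    return max(done, key=lambda r: r.grasp_quality) if done else None

if __name__ == "__main__":
    random.seed(0)
    regions = [Region(center=(i, 0, 0), interest=random.random())
               for i in range(8)]
    best = synthesize_grasp(regions)
    print(f"selected region {best.center}, quality {best.grasp_quality:.2f}")
```

The key design point the sketch illustrates is that reconstruction is never performed over the whole object up front: each cycle spends effort only on the single region currently judged most relevant to grasping, and the loop terminates as soon as a sufficiently good grasp hypothesis emerges.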