Conceptual knowledge reflects our multi-modal ‘semantic database’. As such, it brings meaning to all verbal and non-verbal stimuli, is the foundation for verbal and non-verbal expression and provides the basis for computing appropriate semantic generalizations. Multiple disciplines (e.g. philosophy, cognitive science, cognitive neuroscience and behavioural neurology) have striven to answer the questions of how concepts are formed, how they are represented in the brain and how they break down differentially in various neurological patient groups. A long-standing and prominent hypothesis is that concepts are distilled from our multi-modal verbal and non-verbal experience such that sensation in one modality (e.g. the smell of an apple) not only activates the corresponding intramodality long-term knowledge, but also reactivates the relevant intermodality information about that item (i.e. all the things you know about and can do with an apple). This multi-modal view of conceptualization fits with contemporary functional neuroimaging studies that observe systematic variation of activation across different modality-specific association regions depending on the conceptual category or type of information. A second vein of interdisciplinary work argues, however, that even a smorgasbord of multi-modal features is insufficient to build coherent, generalizable concepts. Instead, an additional process or intermediate representation is required. Recent multidisciplinary work, which combines neuropsychology, neuroscience and computational models, offers evidence that conceptualization follows from a combination of modality-specific sources of information plus a transmodal ‘hub’ representational system that is supported primarily by regions within the anterior temporal lobe, bilaterally.
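The hub-plus-spokes idea referred to above (modality-specific sources of information coupled through a shared transmodal hub) can be illustrated with a deliberately simplified connectionist sketch. The Python/NumPy example below is not the computational model reported in this work; it assumes invented toy binary feature vectors for three hypothetical modalities and a single hidden ‘hub’ layer trained by plain backpropagation, and merely shows how a cue presented in one modality can come to reactivate an item's features in the other modalities when all cross-modal mapping is forced through the shared hub.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, hypothetical setup: three invented "modalities", each contributing a
# small binary feature vector per concept (purely illustrative values).
n_items = 5
dims = {"visual": 6, "smell": 5, "verbal": 5}
total = sum(dims.values())
patterns = (rng.random((n_items, total)) > 0.5).astype(float)

# Slices locating each modality's features inside the concatenated vector.
slices, start = {}, 0
for name, d in dims.items():
    slices[name] = slice(start, start + d)
    start += d

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hub-and-spoke sketch: every modality-specific "spoke" projects into a shared
# transmodal hub layer, which in turn regenerates all modalities' features.
# There are no direct spoke-to-spoke connections.
hub_size, lr = 12, 0.5
W_in = rng.normal(0.0, 0.3, (total, hub_size))   # spokes -> hub
W_out = rng.normal(0.0, 0.3, (hub_size, total))  # hub -> spokes

for epoch in range(3000):
    for item in patterns:
        # Present a single modality as the cue and train the network to
        # reactivate the full multimodal pattern via the hub.
        cue = rng.choice(list(slices))
        x = np.zeros(total)
        x[slices[cue]] = item[slices[cue]]

        hub = sigmoid(x @ W_in)      # transmodal hub representation
        out = sigmoid(hub @ W_out)   # reconstructed multimodal features

        # Plain gradient descent on squared reconstruction error.
        d_out = (out - item) * out * (1.0 - out)
        d_hub = (d_out @ W_out.T) * hub * (1.0 - hub)
        W_out -= lr * np.outer(hub, d_out)
        W_in -= lr * np.outer(x, d_hub)

# After training, a "smell" cue alone should (approximately) reactivate the
# item's "visual" and "verbal" features as well.
target = patterns[0]
x = np.zeros(total)
x[slices["smell"]] = target[slices["smell"]]
recalled = sigmoid(sigmoid(x @ W_in) @ W_out)
print("target  :", target.astype(int))
print("recalled:", recalled.round().astype(int))
```

The design choice mirrored here is the one emphasized in the text: the spokes never communicate directly, so any generalizable, cross-modal structure must be captured in the shared hub representation, which in the neurocognitive account is attributed primarily to the bilateral anterior temporal lobes.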