Design generally entails multiple kinds, or modalities, of representation and reasoning. For example, designers reason with different kinds of representations, both imagistic (e.g., drawings, sketches, and diagrams) and propositional (e.g., of function, behavior, causality, and structure). This multimodal nature of design representation and reasoning raises several issues for artificial intelligence (AI) research on design. For example, what types of knowledge are captured by various modalities of representation? What kinds of inferences are enabled and constrained by different representation modalities? How might we couple a representation in one modality with a representation in another, or transform a representation in one modality into another? AI researchers have long been interested in these issues, although not necessarily in the context of design.

AI research on multimodal representations and reasoning relevant to design has generally followed several important threads. In one thread, AI research has sought to understand the various modalities in terms of the types of knowledge they capture and the inferences they enable. For example, Davis (1984) describes an early effort to declaratively represent and then reason about the structure and behavior of physical systems, and Sembugamoorthy and Chandrasekaran (1986) describe an early attempt to declaratively represent the functions of physical systems and relate them to their structure via their behaviors. Both efforts focused on diagnostic problem solving. In contrast, Glasgow and Papadias (1992) present an analysis of imagistic representations and use symbolic arrays to represent spatial knowledge.

Another thread of AI research on multimodal representations and reasoning pertains to interpreting imagistic representations of a system by reasoning about its structure and behavior. For example, Stahovich, Davis, and Shrobe (1998) describe an attempt at abstracting the behaviors of a physical system from its schematic sketch. A third research thread is concerned with coupling reasoning across different representation modalities. For example, Funt (1980) describes an early effort in which a diagrammatic reasoner answered questions posed by a propositional problem solver, and Chandrasekaran (2006) presents a recent attempt at a multimodal cognitive architecture in which propositional and diagrammatic components cooperate to solve problems.

AI research on design per se has pursued similar threads. For example, Gero (1996) has analyzed the role of imagistic representations in creative design and has described cognitive studies of imagistic representation and reasoning in design (Gero, 1999). Gebhardt et al. (1997) describe a computer-aided design system that uses both diagrammatic design cases and propositional design rules. Yaner and Goel (2006) describe an organizational schema for combining functional, causal, spatial, and diagrammatic knowledge about design cases.

The five papers selected for this Special Issue push the envelope of research on multimodal design...